Quality in higher education: MOOCs highlight the symptoms, are not the cure

The following blog post was written by Dr. Vanessa Dennen, Associate Professor in the College’s Educational Psychology and Learning Systems department. Her blog can be found at http://vanessadennen.com/. The theme of this post coincides with the 7th annual Dean’s Symposium, which will be held on Monday, October 7, at the Turnbull Florida State Conference Center from 8:30 am to 3:30 pm. For more information on the Dean’s Symposium or to RSVP for the event, please visit the 7th annual Dean’s Symposium website.

Public higher education faces a variety of challenges. Funding has been cut in many states, and tuition has increased. Budgets are tight and, many would argue, insufficient. Quality of education is a focal point for many, but how to achieve and measure that quality has been debated. Time to degree and attrition both remain concerns, as does access to and affordability of degree programs. These issues are highly interrelated (e.g., tuition increases prevent students from accessing a public education; budget cuts harm quality through larger class sizes, reduced services and resources, faculty brain drain, and the inability to hire new faculty) and are nothing new.

Massive Open Online Courses (MOOCs) – essentially, extremely large online classes in which anyone might enroll – have been offered as one part of the solution to these problems. In the last year, some universities have begun exploring how they might partner with MOOC platforms such as Coursera and Udacity to allow their students to take MOOCs for course credit, with mixed results (see this story about MOOC completion and pass rates when San Jose State University took this approach, as well as this follow-up story).

Still, the interest in MOOCs for college credit remains. The advantages of MOOCs in this context are clear and are related to economies of scale. To accommodate the potentially “massive” course enrollment, the courses are designed to require minimal student-instructor interaction. Similarly, the production values and technology used to support these courses may be more sophisticated than what would be used in a smaller, closed course. Automated assessments can provide instant feedback with no instructor labor. Indeed, once designed, some of these courses may need only administrators and technical support, not instructional teams.

In this way, MOOCs highlight how technology can be used to deliver course materials at a large scale, and how computer-based tools, including simulations in the case of select MOOCs, can assess learning and provide students with interactions and feedback. These ideas are not new – corporations have been delivering large-scale Web-based training in a similar manner for years – but these features (content delivered via packaged materials, automated assessments) become desirable, if not necessary, when institutions of higher education seek to increase class size.

As these highly designed courses become available to higher education, with potentially limitless capacity for student enrollment, it’s not unreasonable to ask how many “Introduction to Whatever” courses need to be designed. After all, most instructors select from among a relatively small pool of textbooks when designing their courses. Why not simply apply the same concept at the whole-course level, with either students or universities choosing from among a pool of designed and approved MOOCs? Yet this approach neglects a key element of higher education: human interaction.

Online students, regardless of course size, may feel isolated and unsupported in their learning process without human interaction. The larger the class, the less feasible it is for individual students to interact with the instructor. Peer interaction may meet some of the learning interaction needs – but learners don’t always trust their peers, peers may not be able to diagnose their fellow learners’ problems as ably as an instructor, and peer networks often need instructor support and encouragement in order to develop.

Although the instructor-student connection is costly to support and does not scale easily, its importance should not be minimized. Learning is not just about content delivery, and assessment is not just about test scores. Many students struggle in large courses because they lack motivation, metacognitive skills, note-taking skills, or time management skills. Often these students need to feel that an instructor is their partner in the learning process, available to help as needed and monitoring their achievements in the course. True, most of us have achieved learning outcomes in courses where we had little or no instructor contact. However, if you ask someone to recall their best course experiences or the classes in which they learned the most, odds are there was a highly engaged instructor at the helm.

As someone who holds four degrees from three traditional universities, I have experienced some of the best and worst of classroom-based instruction. And as a researcher of online learning for the last fifteen years, I have observed some of the best and worst in that realm. Regardless of modality, the best classes consistently have involved solid pedagogy and instructional design, expert instructors, and a high degree of interaction and engagement among the members of the class community. The worst have suffered due to a lack of one or more of these elements.

Based on these experiences, I can understand why well-designed MOOCs look like an attractive solution to some of higher education’s problems. Frankly, I cannot argue that an impersonal course held in a large lecture hall with a “live” professor and multiple-choice tests is any better than a MOOC. In fact, the MOOC’s recorded lectures, if well done, may be of greater pedagogical value than the live lecture since students can start, stop, and replay them at will. However, given the choice between the MOOC and a well-designed campus-based or online course with an accessible and knowledgeable professor who interacts with students, I’d pick the latter every time. I doubt I’m unique in that regard.

In short, I believe that MOOCs could have a transformative effect on quality higher education, but not in the ways that many of the MOOC evangelists claim. It is my hope that we will use MOOCs to help us reflect on what quality higher education should be, to develop a greater appreciation for the interaction that occurs between students and instructors, and to strive for excellence in pedagogy and instructional design. Although it will not solve their financial woes, if higher education institutions can increase the quality of instruction where needed and better articulate and demonstrate that quality to their constituents, they will be taking a step in the right direction.

Note: The types of MOOCs to which I refer in this post are xMOOCs, and reflect the typical MOOC offered via the major platforms. cMOOCs, which support connectivist learning, are a bit different. For an explanation of the difference, I recommend this chapter by George Siemens.

Travels to Korea and Singapore

Earlier this summer, a team of us traveled to Korea (and some on to Singapore as well). The idea for the trip grew initially out of an interest in better understanding why students in countries such as Finland, Korea, Singapore, and Japan are performing so well on international achievement tests, in most cases far above students in the United States. Related to this question is how these countries prepare teachers and what we might learn from their experience.

A couple of years ago, some of the same group traveled to Finland and visited both schools and universities there. We saw firsthand Finland’s strong commitment to and investment in education, primarily in early intervention and special education to ensure that all students meet the high standards that are set. We saw examples of both vocational high schools and traditional high schools and how the schools articulate with universities and technical institutions for advanced education. We learned from university teacher education faculty about highly selective, research-based programs for preparing teachers. We talked with researchers in an institute comparable to the Learning Systems Institute about opportunities for collaborating on international comparative studies.

With the College’s interest in increasing our international initiatives and raising our international profile, Korea seemed a logical choice as a country to visit. We have a longstanding connection with Korea through the work of the late Robert M. Morgan. Morgan, the founder of the Instructional Systems program and the Learning Systems Institute, worked with the South Korean Ministry of Education to help create a new public education system that produced a 25% increase in student achievement. We were interested in learning what has happened to the education system in Korea since then and whether there are similarities with Finland. As we did in Finland, we visited universities and schools, including a vocational high school for girls that emphasizes preparation for e-commerce.

In addition, visiting Korea offered an opportunity to engage the many alumni that FSU has there and lay the foundation for establishing a Korean alumni association. So we hosted a reception one evening for all FSU alumni in Korea. Alumni from 8 different colleges attended, including not only Koreans but also Americans who are living and working in Korea. It was a wonderful opportunity for alumni to make connections with one another and for us to share what is going on at FSU and the College of Education.

The visit to Singapore paralleled that in Korea but with an additional component – exploring the possibility of offering our Instructional Systems degree program in Singapore to aid in their workforce development efforts.

So what’s next? How do we leverage what we learned and what we accomplished to further advance the goals of the College and University? Each member of the travel team has set specific follow-up actions for the coming year that will build on the foundation we established. With interest blossoming across the University in international initiatives, it’s exciting to see the leadership our College is demonstrating in this arena.

Response to Broad and NCTQ Report

Eli Broad, founder of the Broad Foundation, begins his critique of university-based teacher preparation programs (Tallahassee Democrat, July 11, 2013) by comparing education schools to medical schools. The comparison, although common, is inaccurate and misleading. Upon graduation with their bachelor’s degrees, would-be doctors face four years of medical school followed by 3-7 years of residency, for a total of up to 11 years of postgraduate training. By contrast, would-be teachers can attain certification in most states, including Florida, with their bachelor’s degrees alone. The notion that new teachers should perform with the same proficiency as new doctors – and that it’s the education school’s fault if they do not – is ludicrous.

Broad’s claim that universities are failing to prepare teachers adequately for the classroom is based on a report recently issued by the National Council on Teacher Quality (NCTQ). The “Teacher Prep Review” relies on course syllabi, admission requirements, and student teaching handbooks to rate teacher preparation programs in elementary education, secondary education, and special education. The report issues “Consumer Alerts” to warn prospective teachers away from programs that do not earn stars on its rating system.

Although the review focused on some aspects of teacher preparation that may help in evaluating the quality of programs, it completely ignored others, such as the quality of instruction, actual candidate qualifications, employers’ assessments of candidates’ readiness, and graduates’ performance in classrooms. Moreover, the degree of inaccuracy in the report is shocking. Reactions from institutions across the country suggest that NCTQ made mistakes on virtually every school of education it examined (for further information, see the American Association of Colleges for Teacher Education, http://aacte.org/resources/nctq-usnwr-review/).

For example, NCTQ issued Florida State College at Jacksonville a Consumer Alert for a program that FSCJ has never offered. At Florida State University, NCTQ failed to rate or even acknowledge our Special Education program, even though all the information requested on that program was submitted.

It is unfortunate that NCTQ didn’t consider any of the exciting and innovative teacher preparation programs that are making a difference in educating great teachers. For example, FSU offers an interdisciplinary math and science teaching program, called FSU-Teach, in which candidates graduate with a major in education and a major in math or a science. The program, a collaborative effort between the Colleges of Education and Arts and Sciences, is designed to ensure that graduates acquire deep knowledge of their subject matter and of the evidence-based pedagogy for teaching it. Candidates are recruited as freshmen, and intensive clinical training begins immediately, with students working under the guidance and supervision of highly skilled master teachers. This guidance extends throughout the candidates’ four years of college and into their first years of teaching. FSU-Teach is modeled after UTeach, a program launched at the University of Texas at Austin some 15 years ago that has successfully placed highly effective teachers in high-need schools.

This seems to be precisely the kind of solution that Broad advocates, and education schools are doing it.

Much Ado About Ratings

U.S. News & World Report (USNWR) recently published its 2014 edition of Best Grad Schools, which contains ratings of graduate schools of education. This is the publication that most of us face with some trepidation, because it gives us bragging rights if our overall ranking goes up but hurts our reputation if our ranking goes down. I am pleased to report that we went up 9 spots to 44th overall, which is in the top 20% of education schools. That counts only the 235 schools, of the 278 surveyed, that provided sufficient data to be ranked.

So what makes up the scores that determine our ranking, and what can we do to influence the results? There are four categories for which we provide a variety of data: Quality Assessment, Student Selectivity, Faculty Resources, and Research Activity.

Quality assessment accounts for 40% of the total score, and it is measured by two surveys: one sent to the education deans of the schools participating in the review (25% of the total score) and one to a national sample of school superintendents (15% of the total score). Almost half the deans responded (43%), including yours truly. The deans’ survey consists of the list of 278 schools, which we are asked to rate on a scale of 1 (marginal) to 5 (outstanding); there is also an option to indicate that we are not familiar with a particular school. It’s not clear how the sample of school superintendents is drawn, and only 11% of those surveyed responded. Florida has only 67 superintendents (one per county), far fewer than most states. Indeed, some large metropolitan areas may have almost as many superintendents as we have in the entire state. This means the likelihood is low that many Florida superintendents will be surveyed, and even lower that responses will come from people who actually know something about our programs. Nonetheless, we have substantially increased the news and publications that we produce about our College in the belief that the more people – especially the deans – know about the good work we’re doing, the higher our quality assessment rating will be.

Student selectivity accounts for 18% of the total score. It is based on the mean verbal and quantitative GRE scores of doctoral students from the prior year, along with the acceptance rate for doctoral students from the prior fall term; each accounts for one-third of the measure. For this metric, higher GRE scores are obviously better, but so is a lower, more selective acceptance rate.

The category of faculty resources (12% of the total score) includes the prior year’s ratio of full-time-equivalent doctoral students to full-time faculty, the percentage of full-time faculty holding awards or editorships at selected education journals during the prior year, and the ratio of doctoral degrees awarded to full-time faculty. The degrees granted per faculty member counts the most in this metric (41.7%).

Finally, research activity, which accounts for 30% of the total score, is measured by the total research expenditures and average expenditures per faculty member for externally funded research. An average is taken for the three prior fiscal years, which helps to smooth out the peaks and valleys that can occur with funded research projects. After reputation, research activity makes the most difference in a college’s overall score. We are steadily improving on this metric, so I would expect to see us continue to rise in the rankings.
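
To make the weighting concrete, below is a minimal sketch in Python of how the published category weights would roll up into an overall score. It is an illustration only: the weights and sub-weights come from the methodology described above, but the standardization of each category to a common 0-100 scale, the helper function, and the sample subscores are my own hypothetical assumptions, not USNWR’s actual calculation.

```python
# Hypothetical sketch of the USNWR weighting scheme; only the weights in
# the comments below come from the published methodology.
WEIGHTS = {
    "quality_assessment": 0.40,   # deans' survey 25% + superintendents' survey 15%
    "student_selectivity": 0.18,  # GRE verbal, GRE quantitative, acceptance rate: 1/3 each
    "faculty_resources": 0.12,    # degrees granted per faculty member counts 41.7% of this
    "research_activity": 0.30,    # 3-year average of total and per-faculty expenditures
}

def overall_score(subscores: dict) -> float:
    """Weighted composite of the four category subscores on a common 0-100 scale."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# A made-up school that is strong in research and middling elsewhere:
print(overall_score({
    "quality_assessment": 60.0,
    "student_selectivity": 55.0,
    "faculty_resources": 50.0,
    "research_activity": 75.0,
}))  # 0.40*60 + 0.18*55 + 0.12*50 + 0.30*75 = 62.4
```

Note that reputation alone moves 40% of the composite, which is why the deans’ and superintendents’ surveys matter so much.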

It is interesting to look at the rankings across multiple years. The top 6 schools have been the same every year since I’ve been paying attention, although they may shift a rank or two from year to year: Vanderbilt, Johns Hopkins, Harvard, UT-Austin, Stanford, and Teachers College, Columbia University. The next 15 or 20 also tend to be the same institutions across years, but with more variability from year to year. From ranks 25-50, however, schools may shift dramatically, gaining or losing as many as 17 spots in a single year. For example, the University of Nebraska-Lincoln fell to 51st from last year’s rank of 34th, whereas the College of William and Mary went up from 43rd to 32nd.

I am taking a close look at how we compare to our peer and aspirational institutions on each of the metrics to understand what it might take to keep us moving up. I’ve been asked whether it is really important for us to pay attention to rankings like these. It’s a good question, and there was an article in the Chronicle recently that criticized university presidents for paying too much attention to rankings. But it’s hard to deny that some of these metrics are in fact related to the quality of programs, and there is evidence that prospective students – and their parents – pay attention and make decisions based on the rankings.

To MOOC or not to MOOC…

I don’t really know what to make of MOOCs (massive open online courses). On one hand, MOOCs appear to be all the rage. Some very elite universities (Stanford, Harvard, UVa) have gotten on the MOOC bandwagon and begun offering these free online courses. Noted writers such as Thomas Friedman of The New York Times have led the media hype about MOOCs, declaring a revolution in the university. Friedman quoted M.I.T. President L. Rafael Reif as saying of universities, “There’s a whole new world unfolding. Everyone will have to adapt.”

As an administrator who has to deal with fiscal realities and think about strategic investments, I have to wonder what the budget model is for MOOCs. How does the university that invests in MOOCs recoup its investment when students pay no tuition to take the courses? A recent article in the Chronicle of Higher Education reported results of the “largest-ever survey of professors who have taught MOOCs” (March 18, 2013). Not surprisingly, these professors reported that they spent a lot of time developing and teaching a MOOC, with 55% indicating that teaching a MOOC caused them to divert time from other assigned duties. Most enjoyed the experience and felt that their regular classroom duties should be reduced to allow them sufficient time to teach MOOCs.

For public institutions at least, therein lies the problem. Our budgets are predicated on the student credit hours that we generate. Since MOOCs don’t generate any, allowing professors to teach them means paying the faculty to do something that will not result in any revenue for the university. Bear in mind that faculty members do other things that do not result in direct revenue to the university, such as unfunded research and service to the university or profession. But these things presumably yield other kinds of benefits, such as enhancing the reputation of the university and contributing to the betterment of society by solving problems through research.

Kevin Carey (CHE, March 25, 2013) argued that MOOCs offer a brand exploitation strategy, allowing elite colleges to enhance their brand through technology. “Elite colleges are willing to run [MOOCs] at a loss forever, because of the good will – and thus status – they create. Free online courses…could ultimately become as important to institutional status as the traditional markers of exclusivity and scholarly prestige.” So a question for us could be: will investing faculty resources in teaching MOOCs enhance our brand as a technology-savvy College of Education? Will it return dividends, such as attracting degree-seeking students and cementing our reputation as a technology leader on campus?

AEFP Conference Presentations on Teacher Effectiveness

Because the Florida legislature seems intent on using student achievement data to evaluate how effectively colleges of education are preparing teachers, I was especially interested in attending sessions on this topic at the AEFP conference. There were many presentations on teacher effectiveness (see http://www.aefpweb.org/annualconference/download for the range of topics and papers presented at the conference), and a few examined the impact of teacher preparation programs (TPPs) on student achievement. Their results consistently indicated that there were more differences within TPPs than between them, calling into question any attempt to rank them on the basis of effectiveness.

One study, for example, examined TPPs in Texas where, as in Florida, student test scores are beginning to be linked to the performance of recent TPP graduates. The study revealed that over 90% of the apparent variation between TPPs was due to statistical noise in the data, not true differences between programs. In fact, when the researchers completely randomized the data, they obtained the same results as when student test data were accurately paired with the TPP graduates who taught them. They concluded, “TPP rankings have little validity and run the risk of encouraging inappropriate policy actions where none is needed.”
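
To see why the randomization check is so telling, here is a hedged sketch (hypothetical data and my own simplification, not the study’s code or data). When teacher value-added scores are pure noise, with no true program effect built in, the spread among program means is essentially the same whether teachers keep their real program labels or are shuffled across programs at random, which is exactly the pattern the researchers observed in the actual data:

```python
import random
import statistics

random.seed(1)

N_PROGRAMS, TEACHERS_PER = 20, 30

# Hypothetical value-added scores drawn from one common distribution:
# by construction there is no true program effect at all.
scores = [random.gauss(0, 1) for _ in range(N_PROGRAMS * TEACHERS_PER)]
labels = [p for p in range(N_PROGRAMS) for _ in range(TEACHERS_PER)]

def spread_of_program_means(scores, labels):
    """Standard deviation of the per-program mean scores."""
    means = [statistics.mean([s for s, l in zip(scores, labels) if l == p])
             for p in range(N_PROGRAMS)]
    return statistics.pstdev(means)

observed = spread_of_program_means(scores, labels)
random.shuffle(labels)            # break any link between teacher and program
shuffled = spread_of_program_means(scores, labels)

print(f"spread with real labels:     {observed:.3f}")
print(f"spread with shuffled labels: {shuffled:.3f}")  # about the same
```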

In a second study, similar comparisons were made involving TPPs in Missouri. Consistent with the first study, results showed that differences in effectiveness between teachers from different TPPs were extremely small. Instead, virtually all of the variation in teacher effectiveness came from differences within programs, not differences between programs. That is, all programs graduated some highly effective teachers as well as some not-so-effective teachers. It is therefore impossible to say that one teacher preparation program is truly better than another. Instead, this suggests to me that we should look carefully within our programs to see which practices are producing the best results in preparing teachers.
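
The within-versus-between decomposition reported in the Missouri study can be illustrated the same way. In this sketch (again hypothetical data and my own simplification, not the study’s analysis), total variation in teacher effectiveness splits into the variance of the program means (between) plus the average variance inside each program (within); when true program differences are tiny and individual differences are large, the within-program share dominates:

```python
import random
import statistics

random.seed(2)

# Hypothetical teacher-effectiveness scores for 15 programs of 50 teachers:
# tiny true differences between programs, large differences within them.
programs = {p: [random.gauss(0.1 * (p % 3), 1.0) for _ in range(50)]
            for p in range(15)}

grand_mean = statistics.mean(s for scores in programs.values() for s in scores)

# Between-program component: variance of the program means around the grand mean.
between = statistics.mean(
    (statistics.mean(scores) - grand_mean) ** 2 for scores in programs.values())

# Within-program component: average variance inside each program.
within = statistics.mean(
    statistics.pvariance(scores) for scores in programs.values())

total = between + within
print(f"between-program share: {between / total:.1%}")  # small
print(f"within-program share:  {within / total:.1%}")   # dominant
```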

There was another result in the Missouri study that interested me. The researchers took a look at ACT scores, which are often used as an indicator of how selective programs are. That is, are TPPs recruiting the best and brightest undergraduate students to become teachers? Critics of university-based TPPs often point to programs such as Teach for America (TFA), which is highly selective, and suggest that our programs should be equally selective. Most research on the effectiveness of TFA teachers has revealed little difference between them and their university-prepared counterparts. However, one thing is clear: TFA teachers generally do not stay in the classroom. When their two-year commitment to teaching is up, they move on to other careers. So selectivity by teacher preparation providers may not be a good predictor of teacher effectiveness or, certainly, of teacher retention.

Interestingly, the Missouri study revealed some support for the latter conclusion. Researchers found that the ACT scores of teacher education graduates were, on average, lower than those of all university graduates at the same institution, suggesting that TPPs were less selective than other majors. However, they also found that those who remained in teaching after a period of time had lower ACT scores than teacher education graduates as a group. So what does this mean?

With only the information we have at the moment, not much. We don’t know whether the evidence from Missouri holds up in other states. If it does, we still don’t know why higher ACT scorers are leaving the classroom or, when they stay, whether they are any more effective than lower ACT scorers. If ACT scores bear little relation to how good teachers are in the long run, then we probably shouldn’t put much stock in them as selection criteria. Rather, we should look for indicators that do a better job of predicting who will make a highly effective teacher.

A Quick Update on the Legislative Front

As a quick update on the legislative front, HB 863 and its companion bill 1664 are wending their way through committees. The SUS Education Deans and the Florida Association of Colleges of Teacher Education (FACTE) collaborated on proposed changes to some of the language in the bill, and I learned this morning that the House Committee has passed three amendments that included the changes we recommended. It’s nice to know that our voices have been heard in the legislative process.

When I began my adventures in blogging more than a week ago, I was attending the 38th annual meeting of the Association for Education Finance and Policy (AEFP) in New Orleans. This meeting ran back-to-back with the annual meeting of the Comparative and International Education Society (CIES), and many of our faculty and doctoral students attended one or the other.

I attended AEFP for the first time last year and found it to be an interesting mix of economists and education policy folks. The conference format encourages conversation and debate among participants, which facilitates digging more deeply into topics than is generally possible at scholarly meetings.

One of the most interesting sessions I attended was entitled “Reforming a State Education System,” about efforts in Nevada to institute statewide education reform. James Guthrie (who held faculty positions in education policy at UC-Berkeley, Vanderbilt, and SMU before assuming the position of Superintendent of Public Instruction in the Nevada Department of Education) discussed a variety of factors that seem to be common to successful education reform. By “successful” I mean sustained: changes to the education system that lasted over time. Guthrie reported on federal reform efforts as well as state reforms in four states: Florida, Texas, Wisconsin, and Indiana.

He indicated that for reforms at the federal level to be successful, the President of the US must be a champion for the reform, and resources must be available to go to the states. Race to the Top at least partially fulfills the latter criterion, with a lot of money going to a few states (including Florida), but Guthrie questioned whether President Obama is champion enough for change to be sustained once the money goes away.

For reform initiated at a state level, money seems to be less important. Having it is good, but significant reform can be accomplished in its absence. There must still be a publicly visible champion for change (as former Governor Bush was here in Florida), and business partnerships appear to be crucial. I found this to be particularly interesting since I doubt that educators would naturally reach out to business leaders as allies. Perhaps I am wrong about that.

In Nevada, much as in Florida, there is an effort to measure the impact of teachers on student learning and, like FL HB 863, the impact of teacher education programs on student learning. As Guthrie put it, the point of these efforts is to focus education schools “on producing dramatically better teachers.” In the discussion that followed, I asked about this. I wondered what role education schools were playing in the reform efforts in Nevada and what role, in Guthrie’s opinion, they should play. His answer surprised me. With only two major universities in the state, 10% or fewer of Nevada’s teachers are prepared in Nevada’s universities; most come from surrounding states. Guthrie never answered the second part of my question, but I am reminded of the admonition with which Jane West (legislative liaison for AACTE) always ends her updates: “If you’re not at the table, you’re probably on the menu!”