To say the least, Richard Arum and Josipa Roksa have made waves in higher education with their book Academically Adrift: Limited Learning on College Campuses (Chicago: U of Chicago P, 2010), a national, longitudinal, quantitative study of student learning in the first two years of college in the U.S. The question driving the study is: “How much are students actually learning in contemporary higher education?” The overarching finding is unsettling. “The answer for many undergraduates,” Arum and Roksa conclude, “is not much” (p. 34).
In what is certainly the most provocative snippet of the book, Arum and Roksa write:
With a large sample of more than 2,300 students, we observe no statistically significant gains in critical thinking, complex reasoning, and writing skills for at least 45 percent of the students in our study. . . . While they may be acquiring subject-specific knowledge or greater self-awareness on their journeys through college, many students are not improving their skills in critical thinking, complex reasoning, and writing. (p. 36)
These findings are made all the more “disconcerting” when Arum and Roksa go on to point out that “at least one study has indicated that most of the gains in general skills occur in the first two years of college” and that another study “reported that students’ academic motivation and interest in academic subject matter declined during their first year in college, leaving little hope that they would notably improve their academic skills in subsequent years” (p. 36).
Not only are students not learning, they don’t even seem to be trying to learn. An analysis of student time-use surveys over recent decades found that “average time studying fell from twenty-five hours per week in 1961 to twenty hours per week in 1981 and thirteen hours per week in 2003” (p. 3). Even worse, in the present study, “37 percent of students reported spending less than five hours per week preparing for their courses” (p. 69). So what do students do with their time? On average, as the following chart indicates, they sleep a little, work a little, study a little, and socialize a lot (p. 97).
What makes these numbers so important is that, when it comes to student learning, “how much time and effort students invest in their classes is paramount: Studying is crucial for strong academic performance as ‘nothing substitutes for time on task’” (p. 131).
If students aren’t trying to learn, that may not be entirely their fault. Arum and Roksa present another finding that suggests that many students are not challenged to work on those very areas in which they are not improving. Specifically, they found that “[f]ifty percent of students in our sample had not taken a single course during the prior semester that required more than twenty pages of writing, and one-third had not taken one that required even forty pages of reading per week” (p. 71). While a much more precise inquiry into student reading and writing would certainly be beneficial, these findings seem to indicate that students are not being asked to do enough reading and writing. On that point, Arum and Roksa comment:
If students are not being asked by their professors to read and write on a regular basis in their coursework, it is hard to imagine how they will improve their capacity to master performance tasks . . . that involve critical thinking, complex reasoning, and writing. (p. 71)
Sadly but predictably, socioeconomic variables, especially race and ethnicity, played a dramatic role in which groups of students, on average, learned the least. Arum and Roksa report that African-American, Asian, and Hispanic students “not only entered higher education with lower [skills] than their white peers, they also gained less” over time, on average (p. 39). In the typical understatement of social scientists, Arum and Roksa surmise that “[t]his pattern suggests that higher education in general reproduces social inequality” (p. 40).
While “growth is quite limited” overall, Arum and Roksa write that “some students” do in fact “demonstrate notable gains” (p. 56). More specifically, while almost half of students seemed to learn nothing and while most students seemed to learn very little, one out of ten students seems to learn quite a bit when it comes to writing, reasoning, and critical thinking. Arum and Roksa consider this “the most important finding missing from the popular discussion of the book” (“Questions” [PDF]).
To put it in numbers: 45 percent of students did not improve in a “statistically significant” way on the instrument Arum and Roksa used to measure learning (p. 36). The average improvement, only slightly better, was 7 percentile points—which means, for example, that those entering college at “the 50th percentile of an incoming class would reach a level equivalent to the 57th percentile of an incoming class” by the end of the second year (p. 35). In stark contrast, the top ten percent of students improved by 43 percentile points—which means, again, that those entering college at “the 50th percentile of an incoming class . . . would reach a level equivalent to the 93rd percentile” of an incoming class by the end of the second year (p. 56).
Why do some students learn when so many others don’t? As noted, socioeconomics factor in hugely. Much more positively, Arum and Roksa found that good teaching can also make a difference. They write that “when faculty have high expectations and expect students to read and write reasonable amounts,” students not only spend “more time studying” but also “learn more” (p. 119). More specifically, Arum and Roksa found evidence that more learning happens:
- when students perceived that their teachers had high expectations (p. 93);
- when teachers assigned at least 40 pages of reading per week and at least 20 pages of writing over the course of the semester—presumably communicating and enacting those high expectations (p. 94); and
- when students spent at least 12 hours per week studying alone—presumably responding to those expectations and working on all that reading and writing (p. 97).
These findings provide additional empirical support to the idea that better teaching in general can lead to more in-depth learning. They also underscore the importance of the specific teaching practices of communicating high expectations and assigning substantial reading and writing.
It is important to note, as Doug Lederman and David Bills aptly explain, that Arum and Roksa’s findings have faced substantial controversy. To begin with, some critics argue that the instrument on which Arum and Roksa base their most important findings, the Collegiate Learning Assessment (CLA), does not meaningfully measure students’ writing and critical thinking skills. This is certainly a fair criticism, given that standardized tests, virtually by definition, measure writing and critical thinking reductively, translating the qualitative into the quantitative.
Also, others question certain points of Arum and Roksa’s statistical analysis, particularly regarding their striking “45-percent conclusion,” a figure that, Alexander W. Astin notes, “is well on its way to becoming part of the folklore about American higher education.” The claim that 45 percent of students did not improve in a “statistically significant” way, Astin explains, depends entirely on an “utterly arbitrary” decision as to what counts as “statistically significant.” With a more or less stringent cutoff, the 45 percent figure could have been much higher or lower, using the very same data.
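Astin’s objection can be illustrated with a toy simulation. The numbers below are entirely synthetic—they are not the CLA data, and the assumed mean gain, noise level, and cutoffs are invented for illustration—but they show how the share of students classified as making “no statistically significant gain” moves with the arbitrary significance threshold:

```python
import random

# Toy illustration (synthetic data, not the CLA results): how the share of
# students showing "no statistically significant gain" depends on the cutoff.
random.seed(42)

# Simulate per-student gain estimates: a modest true average gain (7 points,
# echoing the book's average) plus substantial measurement noise.
NOISE_SD = 15  # assumed standard error of an individual gain estimate
students = [random.gauss(7, NOISE_SD) for _ in range(2300)]

def share_not_significant(z_cutoff):
    """Fraction of students whose gain falls below z_cutoff standard errors."""
    return sum(g / NOISE_SD < z_cutoff for g in students) / len(students)

for z in (1.0, 1.5, 2.0):
    print(f"z cutoff {z}: {share_not_significant(z):.0%} show 'no significant gain'")
```

With the same simulated data, a stricter cutoff classifies far more students as “not improving”—which is precisely Astin’s point about the 45 percent figure.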
These and other criticisms qualify the study in important ways, and put it in context. But they do not invalidate it. At the very least, the book presents some substantial evidence about the state of learning. If anything, the discussion and debate the book has generated make it all the more important as a reference point in talking about learning and the lack of learning in higher education. Even if the findings of the study are far from perfect, they warrant serious consideration.