Stuck on Student Learning

By Peter T. Ewell

In 2000, the first edition of Measuring Up gave every state an “Incomplete” in Learning to highlight the fact that the United States lacks consistent measures of student learning in higher education. Over the past decade, the National Center for Public Policy and Higher Education has consistently reported on progress toward developing such measures. Measuring Up 2004 reported learning results for five states that participated in a national demonstration project. Measuring Up 2006 recognized an additional six states that participated fully in the National Assessment of Adult Literacy (NAAL). These efforts in the states signified modest progress compared with a decade ago.

Other activities also brought attention to the importance of assessing student learning at the college level. The Collegiate Learning Assessment (CLA) and the National Survey of Student Engagement (NSSE) have both been launched since we began our effort. In addition, the National Commission on the Future of Higher Education, convened by U.S. Secretary of Education Margaret Spellings, helped accreditors and institutions of higher education become more interested in assessing learning.

Despite this limited progress, however, an important dimension of assessing learning has been lost: the need for states and the nation to understand more about the “educational capital” of their population. The educational capital of a state is the level of collective knowledge and skills possessed by state residents. Assessing educational capital can be accomplished through state participation in national surveys of adult literacy, assessments of the abilities of college graduates, and other measures.

In its deliberations, the Spellings Commission recommended that more states take leadership in measuring educational capital through the approach pioneered by the National Center’s five-state demonstration project. The Commission also recommended increasing state participation in the National Assessment of Adult Literacy, as well as administering it more frequently. The nation and the states need these measures in order to guide investment in higher education and align public policy with the needs of state residents.

As a nation, however, we appear to be regressing in this area. Only six states signed up for the oversample of the National Assessment of Adult Literacy in 2003, down from twelve in 1992. A repeat administration of this assessment is nowhere in sight. Almost five years after the assessment was administered, the National Center for Education Statistics has yet to produce 50-state estimates of citizen performance on prose literacy. Meanwhile, the Organisation for Economic Co-operation and Development (OECD) is moving forward with an international feasibility study on collegiate learning without having a commitment from the United States to participate.

Attention to these issues at the state level is also uneven. A few states continue to assess students using established examinations for which national benchmarks are available. Among them is South Dakota, which requires all students attending public universities, as a condition of graduation, to meet a specified standard on the ACT’s Collegiate Assessment of Academic Proficiency (CAAP). Kentucky will replicate a variant of the Learning Model developed by the National Center’s five-state demonstration project. Public universities in West Virginia will administer the Collegiate Learning Assessment on a statewide basis next year. And Oregon is experimenting with portfolio measures in collaboration with the Association of American Colleges and Universities (AAC&U).
On the other hand, Arkansas abandoned its longstanding program of statewide testing centered on the Collegiate Assessment of Academic Proficiency last year. A recent survey by the State Higher Education Executive Officers (SHEEO) found that the engagement of state agencies in assessment at the college level is at an all-time low. Further, where states are showing interest in assessing college learning, their focus is at the campus level, to demonstrate institutional accountability. They are not measuring learning through a statewide approach, which can inform and improve state policy by identifying gaps in what college-educated residents know and can do.

A growing number of institutions are holding themselves accountable through such initiatives as the Voluntary System of Accountability (VSA) developed by the National Association of State Universities and Land-Grant Colleges (NASULGC) and the American Association of State Colleges and Universities (AASCU). However admirable these efforts may be, they provide little real information for state policy. They are being undertaken largely for political reasons—to blunt attempts by the U.S. Department of Education to impose new reporting requirements about student learning through accreditation—rather than as part of a broader effort to systematically improve instruction.

In short, events in the wake of the Spellings Commission served to politicize public debate about information on student learning at precisely the point at which such information should be collectively owned and generated. Nowhere has this condition been more apparent than in the development of longitudinal databases. At a time when more than two-thirds of students earning bachelor’s degrees have attended several institutions, we as a nation lack the capacity to track student progress because of political opposition that masquerades as a concern about privacy. As 42 states have demonstrated, higher education agencies using today’s information technology are perfectly capable of creating powerful student unit databases that do not compromise security.

With America’s competitive edge in producing college graduates eroding steadily, states need benchmarked information about student learning more than ever. In the past decade, some states have developed the technical capacity to generate such information and the policy wisdom to use it effectively. But across the nation, we are no further along in producing such capacity in 2008 than we were in 2000 when Measuring Up first awarded every state an “Incomplete” in Learning.
