Assessing Assessment

Technology can play a role in assessing learning, and more than a labor-saving role at that.

Does technology play a role in the assessment of student learning? The answer at first blush is “yes.” After all, many instructors use the testing tools available in course management systems, or more specialized assessment software, to develop and save test banks and then to automate the generation, administration, and grading of quizzes and exams from those banks. Such tools are certainly labor-saving for instructors, but a look at the politics of learning assessment suggests a potentially deeper role for assessment technologies in encouraging and evaluating student learning.
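To make that test-bank pattern concrete, here is a minimal sketch in Python of random quiz generation and keyed auto-grading. The TEST_BANK structure and the generate_quiz and grade_quiz functions are illustrative assumptions, not the API of any particular course management system.

```python
import random

# Illustrative test bank; real banks typically tag items by topic and difficulty.
TEST_BANK = [
    {"id": 1, "prompt": "2 + 2 = ?", "choices": ["3", "4", "5"], "answer": "4"},
    {"id": 2, "prompt": "Capital of France?", "choices": ["Paris", "Rome"], "answer": "Paris"},
]

def generate_quiz(bank, n_items, seed=None):
    """Draw a random selection of items so each student (or each sitting)
    can receive a different quiz generated from the same bank."""
    rng = random.Random(seed)
    return rng.sample(bank, n_items)

def grade_quiz(quiz, responses):
    """Auto-grade objective items against the keyed answer; returns the
    fraction answered correctly."""
    correct = sum(1 for item in quiz if responses.get(item["id"]) == item["answer"])
    return correct / len(quiz)

quiz = generate_quiz(TEST_BANK, n_items=2, seed=42)
score = grade_quiz(quiz, {1: "4", 2: "Rome"})
print(f"Score: {score:.0%}")
```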

Assessment: The Political Angle

Oversight and policy-making bodies from the institutional to the federal level believe, reasonably, that learning outcomes should be a key indicator of institutional performance in higher education. Accordingly, they expect nonprofit higher education to assess and report student learning. In fact, some policy makers are threatening to regulate funding subsidies or tuition levels in an effort to compel accountability in assessing and reporting learning outcomes (and costs) for informational or comparative purposes. Frankly, this accountability expectation may be more reasonable than most higher education leaders will concede, but avoiding its intrusive implications may require a proactive response from higher education. Never fear; some proactive responses are in the making.

Getting Proactive

In its 2004 edition of Measuring Up, the National Center for Public Policy and Higher Education (www.highereducation.org), for the first time, gives a “Plus” in learning to five states: Illinois, Kentucky, Nevada, Oklahoma, and South Carolina. These states have developed comparable learning measures by participating in a national demonstration project conducted by the National Forum on College-Level Learning, with funding from The Pew Charitable Trusts (curry.edschool.virginia.edu/centers/collegelevellearning). In contrast, the 2000 and 2002 editions of Measuring Up gave all 50 states an “Incomplete” in learning because there were no comparable data that would allow for meaningful state-by-state comparisons in this category. At the heart of this issue is a basic learning-assessment practice in the academic enterprise.

Tenure, a keystone in nonprofit higher education’s governance model, is broadly interpreted to grant any tenure-track instructor the academic freedom to determine course content coverage within the scope of a course’s catalog description, to develop and administer course examinations, and to assign course grades. So, at many institutions, the assessment of course learning goes no further than the individual course instance and the grades assigned by its instructor; the institutional assessment of learning stops there.

Yet many instructors are now creating an engaging mastery-learning model of study and feedback by shifting their assessment strategies toward a nearly continuous pattern of quizzing, practice testing, and testing that takes advantage of labor-saving testing software. I called attention to this strategy in my November column, in the context of the “common course redesign strategy” pioneered by the National Center for Academic Transformation (NCAT) and 30 partner institutions. They have demonstrated that it is possible to assess a traditional multi-section, high-enrollment course against its redesigned counterpart, (a) improving learning outcomes through diagnostic adjustments, and then (b) continuing the assessment on an institutional, longitudinal basis, provided that testing is standardized across the redesigned course as a whole. Clearly, though, moving beyond some degree of internal institutional standardization (in the interest of internal longitudinal comparative assessment and improvement strategies) will require bolder departures from the prevailing instructor-centric model of assessment.
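To illustrate what “assessing the course as a whole” might look like in practice, here is a small sketch that pools standardized-exam scores across all sections each term and tracks the pooled mean longitudinally. The data, field names, and whole_course_mean helper are invented for illustration.

```python
from statistics import mean

# term -> section -> scores on the common (standardized) exam; all invented
results = {
    "Fall 2003": {"sec01": [62, 71, 58], "sec02": [66, 70]},
    "Fall 2004": {"sec01": [74, 69, 80], "sec02": [77, 72]},  # post-redesign
}

def whole_course_mean(sections):
    """Pool every section's scores before averaging, so the unit of
    analysis is the course as a whole, not each instructor's section."""
    pooled = [s for scores in sections.values() for s in scores]
    return mean(pooled)

for term, sections in results.items():
    print(term, f"course-wide mean: {whole_course_mean(sections):.1f}")
```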

Bolder Moves

The common course redesign strategy applies particularly well to the college-prep (developmental) and college-level basic skills courses, and to a few of the highest-enrollment introductory courses on any campus. These courses are taught at almost all institutions and cover essentially the same material. A consortium of institutions (a community college district or a state higher education system, for example) could adopt the longitudinal assessment strategy described above across the consortium by standardizing testing in selected common courses. An even bolder move would be to decouple teaching from testing in selected common courses. Assessment organizations such as ACT (www.act.org), College Board (www.collegeboard.com), ETS (www.ets.org), and others offer standardized tests (and course materials) in a number of skills- and intro-level common courses, and most of these are, or soon will be, available in an Internet-delivery format, sometimes with sophisticated assessment methodologies beyond the true/false and multiple-choice formats. What’s more, the reductions in direct instructional expenses resulting from common course redesign (an average of 40 percent for the 30 NCAT partners) could more than offset the expense of externally provided assessment in cases where that expense cannot be passed along to students.
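A quick back-of-the-envelope check of that offset claim, using invented per-student figures (only the 40 percent average savings comes from the NCAT results):

```python
# All dollar amounts below are hypothetical; only the 40% savings rate
# comes from the column's NCAT figure.
direct_cost_per_student = 250.0           # hypothetical pre-redesign cost
savings = 0.40 * direct_cost_per_student  # NCAT partners averaged 40% savings
external_assessment_fee = 35.0            # hypothetical per-student test fee

net = savings - external_assessment_fee
print(f"Savings ${savings:.2f} - fee ${external_assessment_fee:.2f} "
      f"= net ${net:.2f} per student")
```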

Stumbling Blocks and Solutions

The problem is that many institutions are undoubtedly hesitant to publish learning-outcome data that can be compared with the same data from other institutions, which is precisely the growing expectation of policy makers. Perhaps some recent developments in another aspect of institutional learning assessment can inform this comparative issue:

Retention and graduation rates are the most common externally reported measures of learning. Yet they reflect many factors other than learning, and they account for learning only by assuming that retained and graduated students must have learned something to justify their credit-recorded success. There is no comparative, standardized basis for reporting these rates, and they are not universally provided. But Alexander Astin, writing in the Oct. 22, 2004 issue of the Chronicle Review (“To Use Graduation Rates to Measure Excellence, You Have to Do Your Homework”), reported some hope for more meaningful reporting of graduation rates. Astin made a case for reporting actual versus “expected” graduation rates, where the expected rate takes into account an institution’s type and the socio-educational profile of its first-year class. Possibly, this work will find application at the level of the set of courses most closely linked to retention and graduation: again, the college-prep and college-level basic skills courses, along with a handful of college intro courses, which together account for a high percentage of all enrollments and could be externally assessed as discussed above.
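One plausible reading of the actual-versus-expected idea is a simple regression: predict each institution’s graduation rate from its entering-class profile, then report the residual. The single-predictor model and all numbers below are invented for illustration; Astin’s analysis uses richer institutional and socio-educational covariates.

```python
from statistics import linear_regression  # Python 3.10+

# Invented data: a single entering-class preparation index per institution
# and the corresponding six-year graduation rate.
profile_index = [0.35, 0.52, 0.61, 0.78, 0.90]
grad_rate     = [0.41, 0.50, 0.63, 0.72, 0.85]

slope, intercept = linear_regression(profile_index, grad_rate)

# Residual = actual minus expected; a positive residual suggests the
# institution graduates more students than its entering profile predicts.
for x, actual in zip(profile_index, grad_rate):
    expected = intercept + slope * x
    print(f"expected {expected:.2f}  actual {actual:.2f}  "
          f"residual {actual - expected:+.2f}")
```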

Technology to the Rescue

Yes, technology can play a role in assessing learning, and more than a labor-saving role at that. Today’s course management systems and assessment technologies make it possible to:

  • Deploy a continuous feedback loop of study-assessment-mastery in a variety of courses (see the sketch after this list).
  • Develop a common (standardized) institutional, longitudinal assessment strategy for high-enrollment courses taught by multiple instructors—a strategy that assesses the course as a whole, not each of its instances taught by different instructors.
  • Participate in a consortium to extend the (preceding) whole-course, common-course longitudinal strategy across institutions.
  • Decouple teaching from testing in basic skills areas and selected intro-level disciplinary and professional courses by using nationally “normed” assessments (and materials) available from trusted external assessment providers.
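As promised in the first bullet above, here is a minimal sketch of such a study-assessment-mastery loop. The mastery threshold, attempt cap, and callback names are illustrative assumptions, not any particular product’s workflow.

```python
MASTERY_THRESHOLD = 0.85  # illustrative mastery cutoff
MAX_ATTEMPTS = 3          # illustrative attempt cap

def run_mastery_loop(module, take_quiz, assign_review):
    """Quiz repeatedly, routing the student to targeted review between
    attempts, until mastery is demonstrated or attempts run out."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        score, missed_topics = take_quiz(module)
        if score >= MASTERY_THRESHOLD:
            return True, attempt
        assign_review(module, missed_topics)  # the feedback step of the loop
    return False, MAX_ATTEMPTS

# Demo stubs simulating a student whose score improves after each review.
attempts_seen = []

def demo_quiz(module):
    attempts_seen.append(module)
    score = 0.6 + 0.15 * len(attempts_seen)
    return score, ["topic-3"] if score < MASTERY_THRESHOLD else []

def demo_review(module, topics):
    print(f"Review assigned for {module}: {topics}")

mastered, n = run_mastery_loop("Unit 1", demo_quiz, demo_review)
print(f"Mastered: {mastered} after {n} attempt(s)")
```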
