Assess or Guess?

10 ways you might be fooling yourself about assessment

David Singer is a visiting scholar at MIT, in the Department of Brain and Cognitive Sciences. He is a specialist in assessment related to the application of education technology and brain science. Prior to his visiting post at MIT, he was the director of education for the former American Nobel Committee. Singer received his doctorate in education from Boston University (MA). Here, the scholar and assessment authority takes a hard look at how we could be fooling ourselves, even with our best attempts at assessment.

EDITOR’S NOTE: At the Campus Technology 2006 conference in Boston (July 31-August 3), Singer will moderate a panel from MIT on the power of assessment and its proper implementation.

Want to be considered for Campus Technology’s Top 10? Send your countdown and a brief background/bio summary to [email protected]

10

Surveys do not accurately describe students’ real behaviors, attitudes, or what they have learned.

  • The validity, and especially the reliability, of student surveys is often low.
9

Only about 10 percent of research on the effects of education technologies focuses on student learning or performance.

  • 90 percent of such research bases its conclusions on student opinion surveys.
8

Assessment usually deals only with the mean change in student attitude or performance.

  • Assessments often assume that the shift in the distribution is uniform for all students when, in fact, students who perform below or above the mean may be affected differently by a new educational strategy (see the sketch below).
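
As a minimal sketch of that point, consider the short Python example below. The numbers are invented purely for illustration (they come from no study); it simply shows how a positive change in the overall mean can coexist with a decline for students who started below the mean.

    # Illustrative only: made-up scores, not data from any study.
    import statistics

    # Hypothetical "before" and "after" exam scores for two groups of students.
    below_before = [55, 58, 60, 62]
    below_after = [50, 54, 57, 59]    # below-mean students slip slightly
    above_before = [75, 80, 85, 90]
    above_after = [85, 90, 95, 99]    # above-mean students gain a lot

    overall_gain = statistics.mean(below_after + above_after) - \
                   statistics.mean(below_before + above_before)
    below_gain = statistics.mean(below_after) - statistics.mean(below_before)
    above_gain = statistics.mean(above_after) - statistics.mean(above_before)

    print(f"Overall mean change: {overall_gain:+.2f}")   # +3.00, looks like a success
    print(f"Below-mean students: {below_gain:+.2f}")     # -3.75, actually did worse
    print(f"Above-mean students: {above_gain:+.2f}")     # +9.75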
7

Even once the methodologies and technologies are in place, applying them consistently becomes harder over time.

  • Instructors commonly fall back on old habits.
6

Even if the number of students in a research effort is large enough, the results cannot necessarily be generalized to all populations of students.

  • A study of 1,000 students in Biloxi, Mississippi, may have little relevance to 1,000 students in Bangor, Maine, or Beverly Hills, California.
5

Student motivation to try a new educational methodology can decrease the validity of the findings.

  • The so-called “novelty effect” increases student motivation but does not address the real research questions about the pedagogy of the educational methods.
4

Instructors’ attentiveness to, and interest in, education research often detract from the validity of results.

  • An instructor’s enthusiasm to prove the benefits of a new approach to teaching can, in itself, improve students’ motivation, interest, and therefore performance.
3

There is no generally accepted definition of what constitutes learning in education.

  • The basis for what a student must do to receive a given grade varies dramatically across schools and campuses.
2

Collaborative, interactive, and hands-on learning methods are not effective for all students and may be counterproductive for some.

  • Although these are popular and very effective methods for many students and some courses of study, some students may not benefit due to different learning and personality styles. Certain students may, in fact, do worse because of them.
1

Improvements in learning often cannot be attributed to the methodologies or technologies being investigated.

  • Research in education has historically been very poor, whether because of cost or difficulty, or because investigators do not know how to set methodological controls.
