Assess or Guess?

10 ways you might be fooling yourself about assessment

David Singer is a visiting scholar at MIT in the Department of Brain and Cognitive Sciences. He is a specialist in assessment related to the application of education technology and brain science. Prior to his visiting post at MIT, he was the director of Education of the former American Nobel Committee. Singer received his doctorate in education from Boston University (MA). Here, the scholar and assessment authority takes a hard look at how we could be fooling ourselves, even with our best attempts at assessment.

EDITOR’S NOTE: At the Campus Technology 2006 conference in Boston (July 31-August 3), Singer will moderate a panel from MIT about the power of assessment and its proper implementation.

Want to be considered for Campus Technology’s Top 10? Send your countdown and a brief background/bio summary to [email protected]

10

Surveys do not accurately describe students’ real behaviors, attitudes, or learning.

  • The validity, and especially the reliability, of student surveys is often low; a quick illustration follows this item.
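
One way to see what low reliability looks like is to compute Cronbach's alpha, a standard internal-consistency statistic, on survey responses. The sketch below is purely illustrative: the five-item Likert survey, the 200 respondents, and the noise level are all invented, and the cronbach_alpha helper is written here for the example, not taken from any survey package. When responses are driven mostly by noise rather than a shared underlying trait, alpha falls well below the conventional 0.7 rule of thumb.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
        total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
        return (k / (k - 1)) * (1 - item_var / total_var)

    rng = np.random.default_rng(0)

    # Hypothetical five-item Likert survey: responses dominated by noise
    # (mood, question wording) rather than the trait we hope to measure.
    trait = rng.normal(0, 1, size=(200, 1))
    noise = rng.normal(0, 3, size=(200, 5))
    responses = np.clip(np.round(3 + trait + noise), 1, 5)

    print(f"alpha = {cronbach_alpha(responses):.2f}")  # lands well below 0.7

Nothing in the computation is exotic; the point is simply that a plausible-looking survey can measure mostly noise.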
9

Only approximately 10 percent of research on the effects of education technologies focuses on student learning or performance.

  • The other 90 percent bases its conclusions on student opinion surveys.
8

Assessment usually deals only with the mean change in student attitude or performance.

  • Assessments often assume that the shift in the distribution is uniform for all students when, in fact, students performing below or above the mean may be affected quite differently by a new educational strategy; the sketch below makes this concrete.
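
A small simulation makes the distributional point concrete. All numbers here are hypothetical: 1,000 students, a pre-test centered at 70, and an assumed strategy that helps above-average students by about 8 points while costing below-average students about 3.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical pre-test scores for 1,000 students.
    pre = rng.normal(70, 10, 1000)

    # Assumed (invented) effect of the new strategy: above-average students
    # gain about 8 points, below-average students lose about 3.
    below = pre < pre.mean()
    post = pre + np.where(below, -3, 8) + rng.normal(0, 2, 1000)

    print(f"mean change:       {(post - pre).mean():+.1f}")         # looks like a win
    print(f"below-mean change: {(post - pre)[below].mean():+.1f}")  # these students got worse
    print(f"above-mean change: {(post - pre)[~below].mean():+.1f}")

On a typical run the class mean rises by roughly 2.5 points, a result that reads as a success even though half the class lost ground.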
7

Even once the methodologies and technologies are in place, applying them consistently is difficult.

  • Instructors commonly fall back on old habits.
6

Even when the number of students in a research effort is large, the results cannot necessarily be generalized to all student populations.

  • A study of 1,000 students in Biloxi, Mississippi, may have little relevance to 1,000 students in Bangor, Maine, or Beverly Hills, California.
5

Student motivation to try a new educational methodology can decrease the validity of the findings.

  • The so-called “novelty effect” raises student motivation temporarily and does not address the real research questions about the pedagogy of the educational methods.
4

Instructors’ attentiveness and interest in educational research often detract from the validity of results.

  • An instructor’s enthusiasm to prove the benefits of a new approach to teaching can, in itself, improve students’ motivation, interest, and therefore performance.
3

There is not a generally well-accepted definition of what constitutes learning in education.

  • The basis for what a student must do to receive a given grade varies dramatically across schools and campuses.
2

Collaborative, interactive, and hands-on learning methods are not effective for all students and may be counterproductive for some.

  • Although these are popular and highly effective methods for many students and some courses of study, others may not benefit because of differences in learning and personality styles. Certain students may, in fact, do worse because of them.
1

Improvements in learning often cannot be attributed to the methodologies or technologies being investigated.

  • Research in education has historically been weak, whether because of cost and difficulty or because investigators did not know how to set methodological controls; the sketch below shows how a missing control can manufacture an apparent effect.
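
A toy simulation shows how one classic missing control, self-selection, can fool an investigator. The numbers below are invented, and the “new method” is given zero true effect; yet because more-motivated students opt into it, the uncontrolled comparison shows a sizable gain that random assignment makes vanish.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical study: motivation drives scores; the method itself does nothing.
    motivation = rng.normal(0, 1, 1000)
    score = 70 + 5 * motivation + rng.normal(0, 5, 1000)  # zero true method effect

    # Uncontrolled design: motivated students tend to choose the new method.
    chose_new = motivation + rng.normal(0, 1, 1000) > 0
    naive = score[chose_new].mean() - score[~chose_new].mean()
    print(f"self-selected 'effect': {naive:+.1f} points")  # spurious, roughly +5

    # Controlled design: random assignment breaks the link with motivation.
    assigned = rng.random(1000) < 0.5
    rct = score[assigned].mean() - score[~assigned].mean()
    print(f"randomized 'effect':    {rct:+.1f} points")    # close to zero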
