Cornell Researchers Use AI to Understand Students' Math Struggles

Researchers at Cornell University are developing software to help math teachers understand the thinking that led their students to incorrect answers.

Erik Andersen, assistant professor of computer science at Cornell, said that teachers spend a lot of time grading math homework because grading is more complicated than just marking an answer as right or wrong.

"What the teachers are spending a lot of time doing is assigning partial credit and working individually to figure out what students are doing wrong," Andersen said in a prepared statement. "We envision a future in which educators spend less time trying to reconstruct what their students are thinking and more time working directly with their students."

To help teachers get through their grading and understand where students need more help, Andersen and his team have been building an algorithm that reverse engineers the way students arrived at their answers.

They began with a dataset of addition and subtraction problems, solved correctly or incorrectly by about 300 students, and tried to infer what each student had done right or wrong.

"This was technically challenging, and the solution interesting," Andersen said in a news release. "We worked to come up with an efficient data structure and algorithm that would help the system sort through an enormous space of possible things students could be thinking."

The team found that 13 percent of the students made clear, systematic procedural mistakes, and the algorithm learned to accurately replicate 53 percent of those mistakes.

"The key is that we are not giving the right answer to the computer — we are asking the computer to infer what the student might be doing wrong," Andersen said. "This tool can actually show a teacher what the student is misunderstanding, and it can demonstrate procedural misconceptions to an educator as successfully as a human expert."
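The kind of inference described here — searching a space of candidate procedures for one that reproduces a student's answers — can be sketched roughly as follows. This is a hypothetical illustration with made-up "buggy" arithmetic rules and labels, not the researchers' actual data structure or algorithm:

```python
# Hypothetical illustration, not the Cornell team's actual code: guess
# which systematic procedural bug best explains a student's answers by
# replaying candidate "buggy" procedures over their worked problems.

def add_drop_carry(a, b):
    # Bug: adds each column of digits but never carries.
    result, place = 0, 1
    while a > 0 or b > 0:
        result += ((a % 10 + b % 10) % 10) * place
        a, b, place = a // 10, b // 10, place * 10
    return result

def sub_smaller_from_larger(a, b):
    # Bug: in each column, subtracts the smaller digit from the
    # larger one instead of borrowing.
    result, place = 0, 1
    while a > 0 or b > 0:
        result += abs(a % 10 - b % 10) * place
        a, b, place = a // 10, b // 10, place * 10
    return result

# Candidate procedures per operation (invented for this sketch; a real
# system would search a much larger space far more efficiently).
CANDIDATES = {
    "+": {"correct": lambda a, b: a + b,
          "drops carries": add_drop_carry},
    "-": {"correct": lambda a, b: a - b,
          "subtracts smaller from larger": sub_smaller_from_larger},
}

def infer_bug(op, worked):
    """Return (label, match_count) for the candidate procedure that
    reproduces the most of the student's answers.

    worked: list of (a, b, student_answer) tuples for operation `op`.
    """
    scores = {name: sum(proc(a, b) == ans for a, b, ans in worked)
              for name, proc in CANDIDATES[op].items()}
    return max(scores.items(), key=lambda kv: kv[1])
```

For example, a student who answers 52 - 38 = 26, 41 - 19 = 38, and 60 - 27 = 47 is matched by the "subtracts smaller from larger" rule on all three problems, while correct subtraction matches none of them — so the system would flag a borrowing misconception rather than simply marking three answers wrong.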

Eventually the researchers hope to develop a program that can offer teachers reports on learning outcomes to improve instruction and differentiation. For now the tool works only with addition and subtraction problems, but the team plans to expand it to algebra and more complicated equations.

For more information, go to cs.cornell.edu.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
