
More and Different Data Needed to Track Quality of STEM Undergrad Education

Improving STEM undergraduate education will require tracking student demographics, instructor use of evidence-based teaching practices, student transfer patterns and other yet-unmeasured dimensions of science, technology, engineering and math education. That's the main conclusion from a new report published by the National Academies of Sciences, Engineering and Medicine.

"Indicators for Monitoring Undergraduate STEM Education," requested by the National Science Foundation and produced by the National Academies, offers three broad goals for engaging, motivating and retaining students in STEM — especially those underrepresented in STEM fields: increasing student mastery of STEM concepts and skills by getting them involved in evidence-based practices and programs; providing opportunities for access and success to amplify the diversity of STEM students and instructors; and boosting the number of STEM professionals by raising the number of students who complete their STEM studies.

As the report acknowledged, multiple federal, state and local initiatives are underway to apply new approaches, but no one knows how well those initiatives are actually improving undergraduate STEM education.

Currently, employment outcomes, such as earnings and the ability to find jobs in the field of study, are the measures most commonly used to gauge the success of the education delivered by a specific institution or program. That's a problem, the report noted, because these metrics tend to "vary widely, depending on the type and selectivity of the institution and the characteristics of incoming students." Market demand also fluctuates by time and place. And STEM majors may take positions that aren't formally considered part of the STEM workforce but that still draw on their STEM expertise and influence their earnings.

As an alternative, the report offered a new framework for a "national indicator system" — 21 indicators to measure institutional quality in the three goal areas. Among them:

  • Use of evidence-based STEM educational practices in course development and delivery;
  • Extent of instructors' involvement in professional development;
  • Equitable student participation in evidence-based STEM educational practices;
  • Diversity of students who transfer from two- to four-year STEM programs in comparison with diversity of students in two-year STEM programs;
  • Diversity of STEM instructors in comparison with diversity of STEM graduate degree holders;
  • Completion of foundational courses, including developmental education courses, to ensure STEM program readiness; and
  • Retention in STEM programs, course to course and year to year.

For each indicator, the committee behind the report also identified potential data sources and described how they might need to be revised to support a given indicator or, in many cases, developed anew. For example, an indicator to monitor the use of evidence-based STEM educational practices in course development and delivery will require data that doesn't yet exist. "Measuring teaching is difficult," the report's authors acknowledged. Attempts to use self-reporting "can be threatened by faking, cheating and motivation," such as the instructor's desire to be viewed in a "favorable light." Thus, the report suggested, a first step for establishing that indicator would be "to develop common definitions of evidence-based course development and delivery and to identify its specific characteristics and components." From there, existing surveys could be revised to incorporate appropriate questions in their next administration.

The report concluded with three options for generating the data needed for the proposed indicator system. The first: development of a national student unit record data system — an idea that has yet to generate sufficient support in Congress, which would need to add it as a mandate in the Higher Education Act, currently undergoing rewrites in the Senate. The committee also suggested expanding National Center for Education Statistics (NCES) data collections and combining existing data from nonfederal sources.

The committee also warned that while the indicator system could help monitor "change over time" and progress toward the three goals, it shouldn't be used to compare institutions, because students' preparedness for STEM studies varies so widely among different schools.

The report is openly available in pre-publication form on the National Academies website.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
