Survey of Tech in Education Finds Mixed Results
- By Dian Schaffhauser
- 09/14/17
If you're trying to hunt down research to justify the use of technology in the classroom or to argue against it, a working paper may provide the information you need. "Education Technology: An Evidence-Based Review," published by the National Bureau of Economic Research (NBER), takes a global view in examining how technology can be used to support K–12 and post-secondary education. The goal? To figure out what the literature says overall about causal effects, or the lack thereof, in four areas:
- Access to technology, such as computers or the internet;
- Computer-assisted learning provided by specific software;
- Technology-enabled behavioral interventions in education, such as parental engagement during early childhood and text-based alerts; and
- Online learning, including traditional online courses and less conventional programs, such as MOOCs.
The researchers performed a literature survey that focused on studies using one of two research designs: randomized controlled trials (RCTs) and regression discontinuity designs (RDDs).
RCTs are experiments in which the researcher randomly assigns some participants to one or more "treatment groups" that use the intervention or program, while other participants go into a control group without the program. Any difference in outcomes that surfaces between the treatment and control groups "reflects the impact of the treatment," according to the paper.
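To see why random assignment makes the comparison meaningful, here is a minimal sketch using hypothetical simulated data (the student counts, scores and the 3-point effect are invented for illustration, not drawn from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1,000 students randomly assigned to a treatment group
# (uses the program) or a control group (does not).
n = 1000
treated = rng.integers(0, 2, size=n).astype(bool)

# Hypothetical outcome: test scores with an assumed true effect of 3 points.
scores = rng.normal(70, 10, size=n) + 3 * treated

# Because assignment was random, the two groups are comparable on average,
# so the difference in group means estimates the causal effect.
effect = scores[treated].mean() - scores[~treated].mean()
print(f"Estimated treatment effect: {effect:.2f} points")
```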
RDDs are "quasi-experiments" that pinpoint a well-defined cutoff threshold for eligibility or program status — such as the minimum test score required for a student to be eligible for financial aid. Participants just above and just below that threshold are effectively comparable, so the cutoff plays the same role that random assignment does in sorting people into treatment and control groups. As the paper explained, "The jump in an outcome between those just above and those just below the threshold can be interpreted as the causal effect of the intervention in question for those near the threshold."
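The same idea can be sketched in code. This simplified local-means comparison uses invented numbers (the cutoff of 60, the 5-point aid effect and the 2-point window are assumptions for illustration; real RDD studies typically fit local regressions on either side of the cutoff):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: students become eligible for aid at a test-score cutoff
# of 60, and aid is assumed to raise a later outcome by 5 points.
n = 5000
test_score = rng.uniform(0, 100, size=n)
cutoff = 60
eligible = test_score >= cutoff
outcome = 0.5 * test_score + 5 * eligible + rng.normal(0, 4, size=n)

# Students within a narrow window of the cutoff are nearly identical except
# for eligibility, so the jump in average outcomes at the threshold
# approximates the causal effect for students near it.
window = 2.0
just_below = outcome[(test_score >= cutoff - window) & (test_score < cutoff)]
just_above = outcome[(test_score >= cutoff) & (test_score < cutoff + window)]
effect = just_above.mean() - just_below.mean()
print(f"Estimated effect near the threshold: {effect:.2f} points")
```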
These two research designs were chosen for the project because the "literature on ed-tech is flooded with observational research," the report stated, "and could benefit from a synthesis of evidence from the designs most likely to produce unbiased estimates of causal effects." The papers the researchers identified for each of the four categories are referenced in the document, along with details about the specific intervention, the data source and a summary of the findings.
Results were mixed. At the K–12 level, the researchers found that simply providing students with access to technology had limited impact on learning outcomes, though it did improve their computer proficiency. At the college level it was a different story: there, "bright spots" surfaced that warranted "further study."
Likewise, online learning needs more research. Students in online-only classes didn't do as well as those in face-to-face courses. However, studies looking at hybrid courses found that their impact was "generally on par" with fully in-person versions.
The two areas of focus that showed "considerable promise" were computer-assisted learning and behavioral interventions. "Especially when equipped with a feature of personalization, computer-assisted learning can be quite effective in helping students learn, particularly with math," the report noted. Also helpful: technology-enabled "nudges" to students and notifications from the school to the parent. These hold "great promise as a cost-effective approach in education," the paper suggested. "Moving forward, researchers should prioritize understanding when technology-based behavioral nudges are most impactful."
For more details, access the working paper through the NBER website.
About the Author
Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.