Study Uncovers How Ed Tech Decision-Making Works
- By Dian Schaffhauser
- 11/29/17
People in higher education most often turn to each other when making decisions about education technology. And it's not uncommon for them to start with a particular technology and then find a problem for it to solve, rather than identifying a pedagogical need first and then looking for the tech tools that would address it.
Those findings and others came out of a research project undertaken by the EdTech Efficacy Research Academic Symposium, a consortium of 150 educators, researchers, business people, administrators and philanthropists who want to figure out which technologies deserve to be developed, funded, piloted, procured and implemented in education. The symposium is housed at the Curry School of Education at the University of Virginia. At the recent OLC Accelerate conference, presenters Kristin Palmer and Whitney Kilgore shared the findings from a survey of ed tech decision-makers at 43 United States colleges and universities. Palmer is the director of online learning programs at the University of Virginia; Kilgore is the chief academic officer and co-founder of iDesign, a company that works with institutions to develop and grow their online programs.
The study covered three areas: the sources of information that higher ed uses to make ed tech buying decisions; how research is used in the decision-making process; and whether institutions do their own research on the efficacy of the ed tech products they're currently using.
The primary uses of ed tech were to support teaching and learning (mentioned by 41 percent of respondents), to gain operational efficiencies and decrease costs (36 percent), to increase the capacity to serve students online (30 percent) and to improve the user experience (27 percent).
However, acknowledged several people, there's a continuous balancing act between "identifying needs and knowing what solutions are potentially available." For help on that, 96 percent of respondents said they turn to colleagues at their own or other schools; 80 percent also use vendor guidance; 67 percent tap professional associations or consortia; and 53 percent use consultants.
A majority of survey participants said the most common forum for getting information on ed tech products was networking events such as conferences or consortium meetings, specified by 93 percent of respondents. Publications were also prominent, cited by 91 percent. Among those, 62 percent reported that newspapers and newsletters proved useful; 56 percent referenced partially or non-peer-reviewed journals or papers; and 44 percent listed trade magazines and practitioner publications.
A large number — 89 percent — said they turn to social media and online communications (blogs, websites, Twitter, e-mail, etc.) to get their information, though no particular source was referenced by more than 40 percent of respondents.
All of the survey participants said they conduct research after implementing ed tech:
- 38 percent review student outcomes;
- 29 percent ask other institutions for feedback about products;
- 24 percent run their own student/faculty/staff surveys; and
- 22 percent run pilots.
Interestingly, noted the presenters, the results of pilot studies are "rarely shared" outside of the specific school that runs the tests.
While 78 percent of survey respondents reported that they do their own research, few projects end up being published in peer-reviewed journals, and most aren't shared publicly. The results tend to be used internally for continuous improvement of instruction and for deciding whether to scale up a pilot or even continue it.
The evaluation of ed tech in higher ed undergoes cost assessment at nearly three-quarters (73 percent) of respondent institutions. About two-thirds of the survey participants (64 percent) said their decisions are influenced by vendor demos. A similar number (59 percent) reported that IT is involved in approving the decision. Almost half (47 percent) use a formal request for proposal, information or quote. Just four in 10 (39 percent) go through formal piloting.
The report offered a number of recommendations for ed tech decision-makers, such as talking to people outside of higher education and making sure decisions follow identification of pedagogical needs and not the other way around. Other guidance:
- Bring on students, faculty and staff early in the process "to build buy-in and avoid bumpy rollouts";
- Stay on top of the vendor's technology "roadmap" and make sure it meshes with the needs of the institution;
- Streamline and standardize the buying process across the campus;
- Incorporate change management as part of the adoption process;
- Enhance the rigor of pilot testing by comparing results among students using the new tech against those who aren't, or by running a pilot with different user groups or in different contexts; and go beyond assessment of pass rates, retention or completion as the only measures of learning;
- Team up with other institutions to run "multi-site pilots";
- Encourage faculty and project managers to share the results of their pilots with other institutions; and
- Provide incentives for pilot participation and "make sure the culture accommodates error as well as trial."
A report on the findings is available on the symposium's website.
About the Author
Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.