
Viewpoint

Can We Gauge Technology's Impact on Learning Outcomes?

The ongoing debate over the effectiveness of technology for student learning outcomes still seems to have no clear answers. Recently, some universities have decided to end their laptop programs for students because of the economic challenges facing those institutions. But there is no consistent account of the effect on students: some say the technology has been highly effective, while others say it has had no significant impact on how students learn.

What is interesting is that there is also no real agreement about what should be measured, or even whether it can be measured, in order to quantify success in this regard. Institutions--whether K-12 or higher education--that have adopted technology for instruction often have little or no systematic methodology in place for instructional technology use or for how its success can or should be measured. Instead, technology use has typically depended on individual teachers and faculty who have given up their own time to learn and use new tools and who, chronically underfunded, are unable to extend that use to other programs and instructors or to sustain ongoing research.

So technology use remains caught between the generalized rollout of hardware and software and the individualized adoption for instruction. What, then, can be done to truly assess the benefits of technology use for learning?

Misconceptions of Technology Use for Instruction (Tools, not Teachers)
Paper, pens, pencils, blackboards, overheads, and so on served as tools to mediate and support instruction in times past (and are often still used today for the same purposes); the tools have simply changed. The misunderstanding persists, however, that technology causes problems, dilutes the rigor of academic content, and distributes the learning process to such a degree that there is little authenticity of authorship and expertise. While it is true that the ubiquitous presence of technology has changed some of the supporting processes and transferable skills, why should this immediately threaten academic rigor?

My sense is that a lack of overall understanding of technology and how it actually works causes more anxiety than anything else and, by default, creates suspicion and questioning that simply would not exist otherwise. For example, do we really ask how rigorous face-to-face classes are, or do we simply rely on test scores and grades? Given the current debate in higher education regarding grade inflation, those measures may themselves be inadequate. The main reason for this misunderstanding is that, while we may be skilled technology users ourselves or may know a handful of teachers who use technology well, we know many more who persistently avoid the issue or merely feign use. In discussion with some instructors, for example, it quickly becomes evident that there is little to no understanding of the relationships between servers and local drives, the Internet and an intranet, document locations and security, or something like Web portal technology. For those who use technology regularly, these questions were asked and answered long ago. For others, however, there is a fear of owning up to a lack of understanding, and frankly there are few resources available to answer these kinds of direct questions at most institutions, where technology support is about system operations, not user knowledge. Of course, knowledge banks exist and are continually expanded by IT personnel based on frequently asked questions; if you do not know they exist or where to find the information, however, they remain ineffective.

In other words, much of the reason we have not discovered any meaningful way of measuring whether technology truly improves the learning experience for students, or helps them attain learning outcomes more efficiently, is that knowledge of the technology itself is scattered and that little of it is consistently taught to those who are currently teaching. This is heightened by the move away from separate instructional or educational technology departments toward all-inclusive IT departments that, owing to budget constraints, house only one or two instructional staff. No self-respecting teaching "expert" will approach an IT help desk with a question like, "Could you please explain to me what a server is and what happens to my documents when I hit the 'save' button?"

Additionally, misconceptions exist around the direct role of technology in the learning process: the technology is often regarded as the teacher rather than as a tool used by teachers and students to support the dynamic process of learning. As such, teachers sometimes distance themselves from these tools in the hope that their jobs will remain secure. (If everything you do is being painfully scrutinized, and you teach the way you do simply because that is how you learned and were taught, then it is very probable that you will feel threatened and insecure.) Why do you grade the way you do? Why do you create assignments the way you do? How do you know that the learning outcomes of the course you are currently teaching truly reflect student needs and global application requirements?

Most of this, in my opinion, results from education itself having become the commodity it now is, with teachers required, as a result, to be business-minded, currently marketable, and technology-savvy. The swing by some teachers away from any or all of these is, then, more about defensive practice than about preserving rigor.

The Variables of Teaching and Learning
Then there are the variables in any process of teaching and learning. In general, most educational programs of study rightly focus on curriculum, teaching, and learning as the triad that addresses the entire scope of variables in the process. That is, no educational process is complete without attention to curriculum details, teaching and learning methods, and student needs. Using new technology to mediate instruction, to provide direct communication, or to support collaboration and direct authorship, however, means that these existing variables become even more pronounced. So, while it was already almost impossible to quantify educational research in the scientific ways demanded by those outside the field, it now becomes even more of a challenge and could be regarded as a non-issue.

The main benefits of technology use are to support each individual student in his or her own learning process, to provide direct access to all the learning supports he or she might need (as well as the ability to create new ones when needed), and to enable collaboration within various learning communities and project teams. Yet we still look for percentage gains that can be calculated mathematically only when "generalities" are applied. As a result, the actual strength of the learning is ignored and devalued in favor of antiquated schemes of measurement that somehow have value outside of education but that we know, as educators, mean nothing in educational terms.

Misplaced Standards for Success (Tests, not Learning)
This raises the question: "What are the standards of success we should be measuring?"

A short answer: The more we remain fixated on standardizing standards, the less likely we are to truly measure the kind of learning currently taking place in technology-supported learning environments.

At the same time, teachers who continue to use older standards of measurement because they do not use new technology are still working with students who do use technology elsewhere or whose thinking and perceptions have been shaped by it. As a result, those students may seem to be failing entire courses of study while actually demonstrating many skills that a grade percentage does not value. One example would be a student who has developed strong research and team-building skills but cannot select the correct answer on a multiple-choice test. Clearly, it is important for that student to know the content of a course; however, if the teacher offers an opportunity to demonstrate that content knowledge in a context that draws on highly developed research and team-building skills, it is more likely that the student will be able to show what he or she knows. In cases like these, what we are measuring is the student's ability to complete a specific type of test rather than what has been learned. K-12 presents a major challenge in this regard, as so much of the system relies heavily on test scores.

Instructional Design Approaches
While content remains important, we know that it is always expanding; therefore, we must work with students on how to think within particular disciplines rather than simply reproduce rote information. How do historians think? How do biologists think? We usually wait until students have declared a major, and then they are suddenly pushed from a generalist mindset into a mindset of expertise, usually without prior exposure to what that means. As a result, students often do poorly until they begin to understand how to think differently. Certainly, new technology does not halt the thinking process; it can actually go a long way toward facilitating contexts of use that require critical thinking and the direct application of thought and ideas within real-life or simulated environments.

While we still require teachers to know their disciplines and to be knowledgeable in their specific content areas, we do not require them to be content-driven but rather (teaching and learning) process-driven. The following are three characteristics of process-based instructional design:

  • Focus on how rather than what. Teachers who encourage students to focus on “how” something works or happens are more likely to develop students who can think beyond what is currently happening to what might happen more efficiently and effectively.
  • Focus on why rather than when. Additionally, teachers who encourage students to ask “why” questions are more likely to develop thinking skills that help students move beyond understanding and completing simple tasks toward the more complex skills of problem solving.
  • Focus on future trends rather than current practices. The ultimate result of these kinds of approaches to learning is students who can move toward future trends, progressive organizations, and new methods, and who can embrace change rather than stagnate within existing practices that may or may not meet the demands or needs of clients and participants.

For us to be able to evaluate whether technology use improves student learning outcomes, my sense is that we as educators must redefine those outcomes and the methods by which they will be measured. Additionally, if teachers were better equipped to use technology, not only would students have a more consistent experience from class to class, but we could actually begin to see what effects the technology is having on student learning across more disciplines and over longer periods of time. Certainly, if learning outcomes were to include more of the kinds of skills likely to be developed through technology use, we could move beyond the obvious and really begin to see how the actual learning is being changed, if at all.
