
Supporting Interactive Instruction With Quality Feedback

Whether online instruction is lecture-based, totally flipped or something in between, providing high-quality instructional feedback can be a challenge. Here are the goals and considerations behind doing feedback right.

If you want to design an airplane, two good wings are highly recommended. No amount of strength or beauty in the right wing can compensate for fatal flaws in the left. Instructional design projects also need two equally strong elements to get off the ground: content and delivery. I like to think of them as the payload and the rocket. Most articles on interactive learning focus on the rocket — the technologies and techniques for delivering instruction — as well they should. But this piece will focus on a vitally important but often neglected element of the payload: truly useful instructional feedback. In the last few years, educators have been coming around to the view that providing individualized feedback is the core activity of teaching. The omnipresent "flipped classroom" is really nothing but an implementation of this shift, and not a moment too soon.

In traditional college classes, students sit in a room where they are essentially being read to from the professor's notes. The word "lecture" comes from the Latin legere, "to read." Then the students do some type of homework designed to verify that they have paid some degree of attention to the lecture. The bulk of the instructor's energy is expected to go into preparing that lecture. Providing feedback on student homework is considered a secondary nuisance task for the instructor or a graduate assistant.

This lack of concern for the quality of feedback is baked into the national university system and is largely responsible for its failings. Any graduate school professor will tell you that many incoming graduate students could not write an essay in comprehensible English if their lives depended on it. The typical institutional answer for this is the Freshman Composition course. But since instructional feedback has traditionally not been valued, this course is often taught by an instructor with a load of over a hundred students. That means each student receives only the most hurried and cryptic feedback on his or her writing assignment each week. Should we then be surprised that many of our college graduates write like first semester tenth-graders?

In the flipped classroom, students are expected to engage with their learning materials before they ever get to the lecture hall. When the system is working as designed, no lecture is necessary because the students are already comfortable with the material. The instructor can now spend the class session answering student questions, hearing reactions to what they have already studied, and providing additional insights based on what was of interest to the students in the lesson and what they chose to share. This learning-through-feedback idea has been a seismic shift in college teaching and learning.

Providing Feedback Online

Automated instructional feedback is the heartbeat of any interactive learning system. It is the content that the system delivers in response to student input, such as feedback on student answers to online questions at the end of a learning module. Instructional feedback helps fill the gaps that have been identified in a student's understanding. Systemic or "housekeeping" feedback, such as "Please respond to all items before pressing the Enter key" or "Please re-enter the date using the dd/mm/yyyy format" is also important, but will not be our concern here.
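To make the distinction concrete, here is a minimal sketch in Python of the two kinds of response. The function names and the feedback_bank lookup table are hypothetical, not features of any particular authoring system.

from datetime import datetime

def housekeeping_check(date_text):
    """Return a housekeeping message if the input is not in dd/mm/yyyy form."""
    try:
        datetime.strptime(date_text, "%d/%m/%Y")
        return None
    except ValueError:
        return "Please re-enter the date using the dd/mm/yyyy format."

def instructional_feedback(question_id, answer, feedback_bank):
    """Look up the remedial text written for this question and this wrong answer."""
    return feedback_bank.get(
        (question_id, answer),
        "Review the section of the module that this question draws on.")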

Whether computer-delivered instruction is lecture-based, totally flipped or something in between, providing instructional feedback automatically in an online environment, where the individualization of response is necessarily limited, presents unique challenges. Like any artificial intelligence system, computer-generated feedback is a blunt instrument compared to the real-time feedback of a skillful and attentive live instructor. But the blunt instrument can be considerably sharpened by understanding the goals, challenges and methods of doing online feedback right.

A tip of the cap is due to David Nicol and Debra Macfarlane-Dick of the University of Glasgow who started us all thinking about quality feedback many years ago.

5 Goals of Instructional Feedback

Before we look at creating effective feedback, let's examine what our feedback is trying to accomplish.

1) Clarify expectations. My students can only meet the standards I set if they know what those standards are. Anyone who has ever built a college course knows how challenging it can be to write a brief set of behavioral objectives that actually make your expectations clear. Most so-called "behavioral" objectives do not describe behaviors at all. Using weak verbs like "comprehend," "understand" or "appreciate," they dance around what the student is supposed to accomplish without ever really describing the bar that must be cleared. Perhaps this is intentional because it allows instructors to be flexible in giving out lots of high grades, which leads to doing better on student evaluations!

When poor performance in a test or quiz triggers the electronic delivery of instructional feedback, you get a second chance to clarify just what the student needs to be able to know and do to excel in your course.

2) Improve performance. The most important reason for providing instructional feedback is to help the student close the performance gap between current abilities and exit outcomes. The help which an instructor provides the student in achieving this is called "scaffolding." Imagine a house painter who can't quite reach a certain high place he needs to paint. Add a little scaffolding and he can get to that place and get it done. Performance improvement depends on additional performance opportunities right after interacting with the feedback. If too much time passes, the learning that comes from the feedback will never move from short-term memory to long-term memory and will be lost.

It is also important that the new performance opportunity be different from the original performance opportunity that initiated the feedback. A very common mistake with online assessment is that the bank of test items is too small. After receiving feedback, the student runs into exactly the same questions he or she got wrong the first time. Since most questions only have two remotely plausible answers, answering correctly the second time is a trivial task — which does not demonstrate that the student has correctly absorbed the skills or concepts behind the questions. Students can only prove true mastery by transferring their knowledge to the correct solution of a different set of problems.
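In practice, this comes down to drawing the retry questions from a bank large enough to exclude everything the student has already seen. Here is a minimal sketch, assuming a hypothetical item bank keyed by learning objective in which each item carries a simple "id" field:

import random

def build_retry_quiz(item_bank, objective, seen_ids, n_items=5):
    """Draw a fresh set of questions for one objective, skipping every item the
    student has already seen, so the retry tests transfer rather than recall."""
    unseen = [item for item in item_bank[objective] if item["id"] not in seen_ids]
    if len(unseen) < n_items:
        raise ValueError("Item bank is too small to offer a genuinely new retry.")
    return random.sample(unseen, n_items)

If the ValueError ever fires, that is your signal to write more items, not to relax the rule.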

3) Encourage self-assessment. The goal of any instructional process is to build reflective self-regulated learners who set their own standards of achievement. Students who have fully engaged with a subject and understand how they intend to use it on the job can make their own decisions about how deep their knowledge base and skill set need to go. The opposite of self-regulation is hyper-dependence. Hyper-dependent students are always looking to the instructor to tell them how good is good enough. They never outgrow asking junior high school questions, such as how long does the paper need to be and can I double-space it. I have been amazed at how many 30- to 40-year-old hyper-dependents are seeking advanced degrees.

Specific examples linked to instructional feedback, like fully coded sample projects, can be invaluable in promoting self-regulation by clarifying what can be accomplished with the course material. If the examples are well-constructed, they will suggest not only what must be done to pass the course, but what can be done to excel in it and why it is worth bothering to excel. Students are then in a position to decide just how far up the skill ladder they wish to climb.

4) Motivate learning while encouraging self-esteem. Nothing succeeds like success, even if it comes in very small increments. If the scaffolding you provide helps a student to get over the hump of a tough set of concepts and solve the next problem, you are boosting both present knowledge and future motivation. The tone of your feedback will also affect motivation. Students need encouragement the most when things are not going well, and that is exactly when they are likely to encounter instructional feedback. If the tone of your initial instruction is warm, bright and enthusiastic, the tone of your instructional feedback should be even warmer, brighter and more enthusiastic. We must always strive to remind the student that having difficulty with some aspect of the subject is a momentary learning challenge, not a character flaw. How you phrase your instructional feedback can go a long way toward accomplishing this.

5) Encourage more extended dialogue with the instructor and with peers. As I have often noted in my contributions to Campus Technology, my students despise collaborative learning. They would rather clean the bathroom with their tongues than have any portion of their grade be dependent on someone else's performance on some group project. Still, there are advantages to not having your learning take place in a complete vacuum. Not the least of these is the possibility of enthusiasm catching fire from student to student. Students can also share problems and their solutions among themselves, which can be a great time-saver for the instructor, just like the Adobe Online Product Forums save the company a fortune in phone support technicians.

It is always a good idea for an instructor working in an online environment to maintain regular personal contact with students. Without a constant flow of information about how the course materials are being received, it is easy to lose touch with what is working and what isn't. An example of this is the series of weekly study guides that I post for my graduate students for each course. In my first year of teaching, I was constantly revising these guides throughout the semester as I fine-tuned my instructional plans. Little did I know that no one was seeing my revisions because the students just printed copies of all the guides on the first day of class and worked off the printed copies from that moment forward. It was only through a chance interaction with a few students late in the semester that I learned this was true.

Instructional feedback can offer additional opportunities to interact. For example, feedback text can suggest that the student prepare a code sample based on a given model, send it to the instructor and then call to go over it together.

The Unique Challenge of Online Feedback

Automated online feedback will normally have a binary trigger. If a student gets one or two questions wrong out of a group of five, he might get one block of feedback. If he gets three or more questions wrong, he might get another. We are unlikely to be sophisticated enough in using our authoring systems to build Boolean structures where, if a student misses certain questions on one quiz and has a record of having missed certain questions on an earlier quiz, we can infer a more generally weak understanding that cuts across subtopics, and then provide targeted feedback. That is exactly what a highly skilled live instructor in a flipped classroom is doing all the time.
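As a rough illustration, the simple binary trigger — and the cross-quiz rule most of us never get around to building — might be sketched like this. The block names and the shape of the quiz history are hypothetical:

def select_feedback(wrong_count, subtopic, quiz_history):
    """Choose a feedback block from the simple threshold trigger, then apply the
    cross-quiz check a skilled live instructor performs instinctively."""
    if wrong_count == 0:
        return None
    block = "light_review" if wrong_count <= 2 else "full_remediation"

    # Boolean structure across quizzes: repeated misses on the same subtopic
    # in earlier quizzes suggest a weakness that cuts across lessons.
    missed_before = any(subtopic in quiz["missed_subtopics"] for quiz in quiz_history)
    if missed_before and wrong_count >= 2:
        block = "targeted_subtopic_review"
    return block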

Because our online feedback must be prepared in advance, it is necessarily based on an idea in the professor's mind about what students who are likely to give a certain type of wrong answer will need to know to get back on track. Unlike a face-to-face interaction, there is no opportunity for the instructor to adjust the feedback while delivering it, based on the student's real-time reaction to the feedback itself. The student can't say, "Stop right there, I have no idea what you are talking about!"

Tuning Your Feedback for Optimum Performance

In spite of the challenge, crafting high-quality instructional feedback is entirely possible if you keep a few guidelines in mind:

1) Determine your strategy through situational awareness. Instructional events don't occur in a vacuum. You need to consider everything you know about your audience, the situation that led them to enroll in your course, and what they hope to gain from the experience. This will help you decide what type of instructional feedback is appropriate, or if any is needed at all.

One exceptional example: When I served as a senior instructional designer at Boeing Aircraft, all employees were required to complete periodic compliance training on legal topics such as export control regulations and waste/fraud/abuse reporting guidelines. The company had to demand this training to comply with federal laws and regulations but fully understood that the topics were useless for the majority of employees.

Needless to say, non-management employees were highly unenthusiastic about this training that had no bearing on their jobs and was considered a waste of time. Everyone soon discovered that the most expedient strategy was to blow through all the training content slides without reading anything, and then guess the answers on the final quiz. More than half the time, common sense was sufficient to cobble together a passing score.

If you did fail the test, you would get no instructional feedback, but simply a message saying "Failing score, try again," along with a list of the questions you missed. While this would be completely inadequate if you were really trying to learn something, it was totally appropriate for this situation. The problem was not that the content slides failed to convey the material, but rather that the audience never took the time to read the slides to begin with. Of course, the company knew this and really didn't want us wasting a lot of time on this training either. Upon seeing the identical questions the second time around, it was "shooting fish in a barrel" to pass the test.

2) Keep your feedback neutral, quantitative and value-free. A Marine friend of mine says military training boils down to three messages: "Here's the gun," "There's the hill" and "You're the best!" Since the ability to do any job consists of both the skill to achieve it and the confidence to attack it to begin with, we can't forget the "you're the best" part.

Despite your best design efforts, students will sometimes perform badly. If you were interacting in person, you could deliver the news that improvements are needed in a careful, nuanced way that preserves both your relationship with the trainee and his or her self-esteem. In an online environment, the best way to handle student failure is with neutral, non-judgmental language, possibly by just using the test score numbers. No reasonable person will be offended by being told that the passing grade is 80 percent of the items correct and they only hit the mark on 60 percent. Most will react quite differently to a message like "Your performance is substandard and additional effort is required." A sketch of a neutral, numbers-only message appears at the end of this item.

No matter how carefully and supportively you construct instructional feedback, it is always going to be a guest whose welcome is a little tenuous. Since it is usually delivered at the moment when something has gone wrong for the student, feedback is always in danger of being interpreted as personal criticism. But it works best when it falls like rain — gentle enough to nourish your students' growth but not so strong as to drown their emerging sense of competence.
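Here is the kind of neutral, numbers-only message generator hinted at above; it is only a sketch, with an assumed passing threshold of 80 percent:

def score_message(correct, total, passing_fraction=0.8):
    """Report results in neutral, quantitative terms rather than judgments."""
    pct = round(100 * correct / total)
    target = round(100 * passing_fraction)
    if correct / total >= passing_fraction:
        return (f"You answered {pct} percent of the items correctly; "
                f"the passing mark is {target} percent. Well done.")
    return (f"You answered {pct} percent of the items correctly; the passing mark "
            f"is {target} percent. The review material below covers the items you missed.")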

3) Don't repeat failed strategies. A very common type of instructional feedback is to let students know that they have failed to achieve a required standard and then loop them back through the same content for review. Repeating the same content in the same way assumes that the reason students didn't "get it" the first time is that they weren't paying attention. This is rarely the case! Sending students through the same "content car wash" that didn't remove the dead bug goo from their grille the first time is unlikely to do so on the second trip. A far better method is to use instructional feedback to guide them through an alternative approach to the problem material. The combined effect of the two methods is likely to do the trick.
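One simple way to implement this is to keep an ordered list of alternative presentations for each objective and always serve the next one the student has not yet tried. The sketch below assumes hypothetical content IDs and a per-student attempt count:

def next_presentation(presentations, attempts_used):
    """Serve an alternative treatment of the material instead of looping the
    student back through the presentation that already failed.

    presentations: ordered content IDs for one objective, for example
        ["video_walkthrough", "worked_example", "interactive_simulation"]
    attempts_used: how many of those the student has already been through
    """
    if attempts_used < len(presentations):
        return presentations[attempts_used]
    return "contact_instructor"  # alternatives exhausted; escalate to a person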

4) Be consistent in your feedback formats. Create an integrated set of instructional feedback templates for yourself and use them consistently. Your statement of why the student's answer was wrong, guidance language on how to correct it, resource links and graphic elements should all have the same look and feel on the page, regardless of the point you are treating. If your students are familiar with your consistent style of presenting feedback, they can spend their time learning rather than trying to figure out an ever-changing feedback delivery system.
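A template can be as plain as a small data structure that every piece of feedback must fill in. The field names below are illustrative, not prescriptive:

from dataclasses import dataclass

@dataclass
class FeedbackTemplate:
    """One consistent layout for every piece of instructional feedback."""
    why_wrong: str              # why the submitted answer misses the mark
    how_to_fix: str             # guidance language for correcting it
    resource_links: list        # readings, videos, sample projects
    icon: str = "lightbulb"     # same graphic element on every feedback page

    def render(self):
        links = "\n".join(f"  - {link}" for link in self.resource_links)
        return f"{self.why_wrong}\n\n{self.how_to_fix}\n\nResources:\n{links}"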

5) Use "open composition" for your feedback, when it is appropriate. When you look at a painting in a museum, one of the first things to notice is whether it has a closed or an open composition. In a closed composition, all the figures and elements face inward toward the focal point, which is usually near the center of the canvas. The eye and the imagination are invited to stay inside the picture frame. In an open composition, figures and objects may face out toward the edges of the picture plane. Some elements might be cropped to suggest that there is more going on beyond the space that you can see, like a tree where an imagined trunk is outside the picture but has some branches that are painted in from one side. The eye is invited to consider the frame as just a window into a larger reality that is equally as interesting as the painted space itself.

Feedback can also have a closed or an open composition. Neither is always best; it all depends on what the training project is trying to achieve. Closed training works best for the remediation of weaknesses with specific skill objectives. Its job is to patch potholes in the student's skill set that have been uncovered by the assessment tools. Open training moves beyond reinforcing skills to providing enrichment opportunities and a greater depth of understanding. How to Assemble a Centrifuge might call for a closed training model, while Dealing with Difficult People in the Workplace would clearly benefit from an open one.

With open training, instructional feedback is not limited to when something goes wrong. It can also build on success when something goes right. For example, let's say you are delivering training on manufacturing defect elimination. Your assessment tool can determine which students seem to be exceptionally skilled or enthusiastic about the topic. These students can be offered extension activities to further develop their interest. The activities could be as simple as additional content links to check out, or as involved as an invitation to participate in advanced training modules on the topic, leading eventually to leadership roles in the company's defect elimination teams.
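In code, the open-composition idea simply means the feedback trigger also fires on strong performance, not just on failure. The thresholds and activity names in this sketch are purely illustrative:

def enrichment_offer(score, minutes_on_topic):
    """Extend the trigger to strong performance: offer extension activities to
    students who excel, not only remediation to students who struggle."""
    if score >= 0.9 or minutes_on_topic > 45:
        return ["advanced_module_invitation", "extra_reading_links"]
    return []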

In an asynchronous online course where a personal relationship with the instructor is not built into the model, opening up to even a modest amount of direct contact can make a world of difference in the student's sense of being cared for. The best commercial mass-market training companies know this. I am a great fan of the video courses put out by The Great Courses of Chantilly, Virginia. Their courses serve hundreds, if not thousands, of learners over a shelf life of several years. I have occasionally e-mailed their instructors with a personal question about a course in which I was enrolled and have never failed to get a kind, complete and gracious reply. Even though this amounted to only a 10-minute indirect interaction with the instructor in each instance, it had a major impact on how I felt about the courseware and the company.

6) Get some feedback on your feedback. The success of instructional feedback is as important as the success of the content itself. There are two ways to determine if your feedback is doing its job. You could do what Adobe and other major technical corporations do for their reference manuals: At the end of every article, they ask if the preceding learning object was helpful and capture the response on a scale from "Not At All" to "Very Much So." While this may be worth trying, you may have difficulty getting a satisfying response rate to this type of query. Students may be too busy completing their assignment to take time out to comment on the quality of your feedback. A better way would be to set up your authoring tool to note each page view of your instructional feedback with a counter and then link this data to each administration of your quizzes. If the students do significantly better on a group of assessment items after using your instructional feedback, you know that the feedback is helping.
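Assuming your authoring tool can export per-attempt records that include whether the feedback page was viewed, the comparison itself is simple. The record format here is hypothetical:

from statistics import mean

def feedback_effect(attempts):
    """Compare quiz scores on attempts made after viewing the instructional
    feedback against attempts made without viewing it.

    attempts: records like {"score": 0.8, "viewed_feedback": True}
    """
    with_fb = [a["score"] for a in attempts if a["viewed_feedback"]]
    without_fb = [a["score"] for a in attempts if not a["viewed_feedback"]]
    if not with_fb or not without_fb:
        return None  # not enough data to compare
    return mean(with_fb) - mean(without_fb)

A clearly positive difference suggests the feedback is earning its keep; a flat or negative one tells you it needs rework.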

Instructional feedback possibilities will only improve over time. As artificial intelligence capacity becomes ever more robust, an instructor will be able to build a library of student response samples for test items and match them with appropriate feedback. The day may come (for better or worse) when feedback provided within fully automated computer-based training may be nearly indistinguishable from what a live instructor might offer.
