Technology, Data-Centric Models, and the Equity of Information

A Q&A with George Siemens

In the face of instant communication, continuous connectivity, and data analytics, the key research question at UT-Arlington's LINK Research Lab is "What does it mean to be human in a digital age?" Here, LINK Lab Executive Director, PI/Researcher, and Professor George Siemens brings our attention to some key issues about our connectedness and how it is changing the way we interact, work, and learn in the digital age.

Mary Grush: Like every other sector, education is growing more connected all the time. How is this connectedness changing us? — Or is that question too broad?

George Siemens: Actually, the questions we are looking at today are fairly broad, but I don't think that's unusual in times of substantial change.

As a research space, the LINK Lab has a defining question that is broad: What does it mean to be human in a digital age?

We're looking at that question from a number of perspectives. One thing that comes to mind immediately is the new technology and different knowledge practices that you and I experience all the time. And going deeper, there are also some psycho-physiological impacts — we're feeling more stress than ever, as we may become a bit overwhelmed by the newly emerging tools we see on a daily basis.

Grush: What's an example of these psycho-physiological impacts? How might they change teaching and research practices?

Siemens: Think about the recent election in the U.S. There are toolsets now where everyone has a voice; things are much different than in the past because there are new ways we engage with one another technologically, and we don't have regulatory guidelines for these new systems comparable to what we were used to. This is a very different experience of the election process.

From a learning perspective, this particular aspect of new technology and toolsets creates winners and losers: a large percentage of learners are not able to be effective in this type of knowledge climate, and they are in effect being excluded from the education system.

Grush: So are you looking at student success at the LINK Lab?

Siemens: Yes, in addition to looking at new knowledge processes, we are also quite focused on success for all students: How do we bring all students along?

Grush: How do you approach that?

Siemens: An important part of that work now relies on learning analytics, artificial intelligence, and other models that help us improve practice.

Grush: What are some of the aspects — challenges or benefits — of connecting students to these models — with their real data? Again, maybe that's a larger question as you consider "connectedness" — as we all are suddenly more and more connected. Are we becoming what we connect with?

Siemens: This is a big, terrific question that we are all grappling with broadly. And there are interesting, more specific educational challenges as well.

There was a period a few years ago when we thought it would be great if we brought new, more "connected" toolsets into education practice. We wanted students to have the ability to use technology that they could connect with and control. We were very actively engaged in promoting what we called "Web 2.0" technologies. That was a big mission in education for some time.

But as we look back now, with ten-plus years' experience working with these toolsets, we are finding that the same toolsets that enable democracy and empower anyone to participate also generate an unprecedented amount of error in the public space, in terms of misinformation.

That became amply clear, as I mentioned, during the recent election, throughout the political spectrum.

There is a growing concern now that the toolsets we thought would provide democracy have in some ways provided something like the opposite. When all voices are raised, some of the important voices can get "drowned out."

Now, this doesn't mean I'd like to exclude anyone, but I think we certainly need to start looking at this question: How does an open, equitable digital space function when there are groups who are intentionally disrupting the space? We've seen some groups whose mandate is not to be equal participants but to achieve their intended outcomes through some fairly dubious means. They fly in the face of the humanist model that underpins most of Western democratic society.

This question has really come to the forefront. And you are right to suggest that in the digital world, in some sense we are what we connect to: what we read and the people we engage with on an ongoing basis, for example in the echo chambers we are a part of on Twitter and Facebook.

I've found that in this post-election timeframe, many people are re-examining who they are "following" — and whether that represents diverse, challenging voices or simply comfortable conversation spaces.

We can easily create false realities for ourselves with some of these toolsets and technologies. If we disagree with someone, we can basically ignore them. This can in some cases be a benefit, but there are still implications to be understood: misinformation, or incomplete views, can start to drive the behavior of people in different places along the spectrum. What happens if they are steeped in unbalanced information? We may be able to find parallels when we look back to the print age, but in our digital age what we have is a much more complex picture, and it is not an easy challenge to address.

So, those are a few of the questions we are looking at in terms of the impact of technology on the equity of information.

Grush: If there is a technology problem, is there a technology fix?

Siemens: What a lot of systems are starting to do now is switch to data-intensive models, whether under the umbrella of learning analytics, artificial intelligence, or machine learning approaches.

Now, that assumes that if we can get better-fidelity or higher-quality information into the hands of people, then they will make better choices, personally, politically, and otherwise.

But if you look, you can still find bias reflected in virtually all of the toolsets we use that have a data-centric model.

Grush: What's an example of that type of bias?

Siemens: You may remember that about eight months ago, Microsoft released an AI bot to the Internet — a sort of Twitter bot — to learn from and engage with people. It did learn, but it became a rather racist bot because it started to acquire the language used by people in that space. It's a very practical example of becoming what you connect to.

There are social justice components in conversations we are having now about our growing data-centric world view and how that's starting to shape how decisions are made — for example, in law enforcement and judicial outcomes; or in an education system for student success and completion.

Grush: Where are we now in all of this? What happens next?

Siemens: We have the beginnings of a new space now that's open and equitable, where everyone has a voice. And we've seen growing attempts to bring structure to that space through the use of data-centric models.

We are focusing on how we can improve the accuracy of information that emerges in this space. But even as we try to do this, we see many examples of how we are capturing in our algorithms almost mirror-like reflections of biases and prejudices that we may not have been aware of.

Where does this take us, and how do we remedy this? Among the things we've pursued at the LINK Lab are — for want of better words — the traditional contemplative practices. For example, we are developing a yoga MOOC, to be run as part of a MicroMasters on edX, that looks at the practice of yoga as a way of centering one's self in a fairly complex world. It emphasizes mindfulness practices that enable attention and focus.

There is a huge element of distraction in the common toolsets people use — constant pings on our mobile devices, alerts on our desktops. Distractions are competing with our ability to work. A 30-second e-mail that offers instant gratification may get more attention than a comprehensive, 80-page report. What do such choices mean for us?

We are recognizing that in a digital age, there needs to be growing attention to the most fundamental human attributes and how we nurture those while we relinquish selected parts of our work to technology — technology that can, in many ways, help us do certain things better than ever before.

Grush: This sounds like very basic research. Does the LINK Lab push out practical solutions to departments within UTA and elsewhere?

Siemens: We concern ourselves with both research and practice. This is the intent of our dLRN initiative, the Digital Learning Research Network, which focuses on the idea that research should have a practical application and real impact — and conversely, that practice space needs to feed into research.

dLRN started with a $1.6 million grant from the Bill & Melinda Gates Foundation. We brought research institutions like CMU, Stanford, and UPenn together with state systems including Arkansas, Georgia, and California, and from these we created teams in which both research and practice were represented.

We very much believe that practice in the digital learning space needs to have a basic science or research focus, together with an application focus. Research needs to have impact at a classroom level.

We don't see a sharp distinction in digital learning between what is research and what is practice. Part of the reason is that many of the practices we are engaged in haven't been researched — basic questions about learning activity, student motivation, and student agency haven't been addressed in some environments where software has been deployed to literally thousands of students. We want to understand what's happening in those spaces and how that impacts and changes the act of teaching, the student's sense of well-being, and the list goes on.

Just because something is possible technologically does not mean that it is going to be effective when deployed in an organizational setting. So, the research component and the human experience component become very important to us. At the LINK Lab, we try to emphasize projects that can have immediate impacts but at the same time have a true learning sciences orientation.


