By Alina von Davier
While the iconic HAL 9000 from 2001: A Space Odyssey may not yet be a fixture in our homes, many of us have intelligent assistants, such as Alexa or Google Home, that help us manage our daily lives. And that technology is quickly becoming commonplace in schools, as well. Some argue that intelligent assistants will be one of the most disruptive technologies in the near future.
What impact will these assistants have on education, and what challenges and opportunities do they pose for educators, practitioners, and researchers? As an innovation leader focused on the “what, why, and how” of robots and learning, I think about these questions constantly.
What are the robots of the day, the AI assistants?
Well…they may take many different forms: they may resemble the robots we have seen in the Star Wars movies, or they may be software running on a mobile platform or the web. For example, the app Waze is an AI-based driving helper, Siri is a voice-based agent for your phone, Alexa and others help us shop and browse the internet, Duolingo tutors us in foreign languages, and Amazon.com’s recommendation system helps us navigate the offerings of vendors. Not all assistants have a voice or a body, and not all are even called assistants. All of them comprise separate intelligent systems that perform different tasks but are coordinated to present us with one coherent “assistant.”
Why do we need machine-based assistants?
The overwhelming amount of data available to us on the internet needs to be carefully curated for optimal use. In traditional education, this was one of the main duties of the school: Students were told what subjects to study, in what order, what to read, how to place this information into a coherent system, and how to discover the relationships between parts of the system.
Enter the world of the AI-educational assistants. They curate the World Wide Web for each of us individually for different purposes. The better the curation, recommendations, and planning, the better the assistant is, and the more areas an assistant can tackle, the more accomplished it is. Additionally, the better the assistant can “learn” to apply information from one area to another, the more sophisticated it is. An outstanding assistant uses not only patterns in raw data but also logical structures and knowledge domains (see Doug Lenat’s work).
This is where educational tutors and companions have the potential to excel. As they learn from reliable data about a student (such as quality assessment data from testing companies like ACT) and engage with the cognitive theory of learning that underlies these data, they can explain the dependencies across knowledge domains, enabling better assessment of and advice to students in a myriad of situations.
Teachers can work in tandem with AI-based tutors to achieve optimal learning experiences and outcomes. They can focus on the social aspects of learning—using human-to-human conversation, for example, about Shakespeare’s characters and their relevance to students’ everyday lives—while the AI-assistant helps individual students prepare for such a discussion.
If a student has special needs, the role of the AI-assistant in the classroom can become more significant: a specialized AI-assistant can change the font, provide haptic support for visually impaired students, or present simpler-looking characters with more monotone voices to reduce overwhelming sensory input for autistic learners. Similarly, specialized micro-assistants ranging from medical support to facilitation of the classroom experience—implants, wearables, virtual reality, and sensors—will become part of the educational experience.
If the robots are coming, what are we supposed to do now?
As always, we as educators need to learn more about the assistants and their capabilities. We need to become educated customers and inquire about the quality of the data that feeds the AI-assistant: How reliable is it? How well protected is it? What are the privacy affordances and risks? Have fairness tests been conducted on the training data and algorithms to ensure that the tutor does not display bias against subgroups of people?
Educators need to request evidence of validity from educational technology (edtech) companies. Have papers describing the efficacy of the work been published in peer-reviewed journals? Were patents submitted? Educators need to expect transparency and ask for the reports that document the efficacy of the assistant. Was research conducted comparing the use of the particular assistant with other assistants and/or with a control group in a classroom setting? Were the samples of students in the studies large enough? How did different students perform? Who gained the most from working with an assistant? Who gained the least? Why? The edtech community should be transparent and should be held to high standards of research and development quality, beyond the marketing pitch.
Also, teachers need to be supported by the assistants, not the other way around. Teachers should decide when and how assistants are used in school so that the school experience remains social and interactive, while still incorporating assistants and specialized tutors.
What are the researchers supposed to do?
We need to work relentlessly to ensure the quality of the “invisible” infrastructure: the quality of measurement, the validity of the theory behind the assistants, the validity of the recommendations, and the fairness of the results. We need to build the theories of learning, experts’ input, and the psychometrics of learning and measurement into the backbone of these robotic assistants. And we need to work closely with teachers to understand their needs and to incorporate their best practices into the design of the assistants.
There are some good examples of such work from ACTNext—an interdisciplinary innovation unit at ACT—and from our partners. For example, the Learning Design Studio of SmartSparrow spends at least six months with instructors to develop highly interactive and adaptive courseware, following Learner-Centered Design principles. At ACTNext, the Holistic Educational Resources and Assessment (HERA) adaptive learning research prototype uses Evidence-Centered Design (e-ECD) to allow for a theory-based integration of learning, assessment, and complex tasks. ACT’s Holistic Framework is a strong example of a theoretical framework on which to base the curation of learning and testing materials—as has been done by OpenEd at ACT. As an example of sophisticated recommendations and diagnostics, consider the application programming interface created by ACTNext, the RAD API, which uses computational psychometrics (a blend of psychometric theory and AI) to identify gaps in a learner’s knowledge and recommend appropriately aligned resources for further learning. Additionally, consider the research-based prototype, the ACTNext Educational Companion App, an AI-assistant on a mobile platform.
The “robots” are already a part of our everyday lives, and they are indeed coming to schools. In fact, in many instances, they are already there. To leverage the potential of these new AI technologies to improve student learning, the Hippocratic Oath has never been more relevant for educators and the edtech community: “First, do no harm.”