Multimedia has long been considered a powerful instructional tool, but University of Florida researchers believe differences in learners’ visual attention and cognition can affect just how well multimedia environments foster learning outcomes for all.
Pavlo “Pasha” Antonenko, associate professor of educational technology and director of the Neuroscience Applications for Learning (NeurAL) Lab, and a team of researchers have received $821,412 from the National Science Foundation to design and test a novel, artificial intelligence (AI)-enabled, gaze-driven adaptive learning technology that provides individualized multimedia learning support to students in real time based on differences in their working memory capacity and visual attention patterns.
Working memory capacity provides the attentional control needed to select, organize and integrate information gained from multimedia materials such as text, video, audio or graphics. Just as learners have unique strengths and weaknesses, they also differ in working memory capacity and in the visual attention strategies they use, differences that can either facilitate or hinder learning.
“We assume that just because we’ve designed a nice PowerPoint or Google Slides presentation all students will effectively and efficiently understand everything, but in fact that’s not what happens because we do have a lot of individual differences that impact the way we learn,” said Antonenko, principal investigator of the project.
To address this gap, Antonenko will work alongside co-principal investigators Jonathan Martin, professor of geology, Kara Dawson, professor of educational technology, and Albert Ritzhaupt, professor of educational technology and computer science education, to develop GeoGaze — a display technology powered by AI that uses eye tracking to change multimedia learning materials in real time based on students’ gaze behavior and differences in their working memory capacity. Marc Pomplun, professor and chair of computer science at the University of Massachusetts Boston and principal investigator of the project’s sub-award, will serve as the project’s eye tracking expert.
Using AI, GeoGaze will analyze and predict effective visual attention strategies for each student and then adapt the presentation of information in real time to better support their learning.
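As a rough illustration of how such a gaze-driven adaptation loop might work, the sketch below classifies where a learner’s gaze lands on a slide and flags an under-attended information source for cueing. Everything here (the region boundaries, the 20% threshold, the function names) is a hypothetical assumption for illustration, not the actual GeoGaze implementation.

```python
# Minimal sketch of a gaze-driven adaptation loop. All names, regions and
# thresholds are illustrative assumptions, not the actual GeoGaze system.
import random
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float  # normalized screen coordinate, 0.0 (left) to 1.0 (right)
    y: float  # normalized screen coordinate, 0.0 (top) to 1.0 (bottom)

def region_of(sample: GazeSample) -> str:
    # Assume a slide layout with text on the left half, a diagram on the right.
    return "text" if sample.x < 0.5 else "diagram"

def choose_adaptation(samples: list[GazeSample], threshold: float = 0.2):
    # If a learner spends less than `threshold` of their gaze time on one
    # information source, suggest cueing (e.g., highlighting) that source.
    if not samples:
        return None
    on_text = sum(region_of(s) == "text" for s in samples) / len(samples)
    if on_text < threshold:
        return "highlight_text"
    if 1 - on_text < threshold:
        return "highlight_diagram"
    return None  # attention is reasonably balanced; no adaptation needed

# Simulated eye-tracker stream: this learner rarely looks at the diagram.
stream = [GazeSample(random.uniform(0.0, 0.55), random.random()) for _ in range(500)]
print(choose_adaptation(stream))  # typically prints 'highlight_diagram'
```

A real system would, of course, work with fixation and saccade events from an eye tracker rather than raw coordinates, and would condition its adaptations on each learner’s measured working memory capacity, as the project description outlines.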
“It’s a dangerous assumption, but we assume that if we show a person a screen that has some text and has a diagram, that they’re actually going to pay attention to either or both of these information sources,” Antonenko said. “… What we are finding in our eye tracking studies is — no — that’s not the case.”
The project, titled “Collaborative Research: GeoGaze: Gaze-Driven Adaptive Multimedia to Augment Geoscience Learning for Neurodiverse Learners,” will involve two studies, each enrolling 200 UF and Santa Fe College students. The first study will examine students’ eye movement patterns while they view a geoscience presentation on sea level rise and relate those patterns to learners’ working memory capacity to identify the visual attention strategies that best support learning. The second study will then use these findings to build and optimize the machine learning algorithm and the GeoGaze technology itself with a large sample of postsecondary students viewing geoscience content.
Antonenko said the team hopes the AI-powered technology will advance the science of adaptive learning and help educators everywhere provide students with the personalized learning support they need in real time.
“It’s important to individualize learning so when the time comes for us to actually pay attention to some information on the screen, which we do individually, we want to make sure that every student is supported based on their unique blend of individual differences in attention and cognition,” Antonenko said. “So, to say that students who need more support are in fact supported, and if we can have a technology that helps provide that support — well even better.”
The project is expected to be completed in 2024.