A framework for inclusive AI learning design for diverse learners

Abstract

As artificial intelligence (AI) becomes more prominent in children’s lives, an increasing number of researchers and practitioners have underscored the importance of integrating AI as learning content in K-12 education. Despite recent efforts to develop AI curricula and guiding frameworks, these educational opportunities often do not provide equally engaging and inclusive learning experiences for all learners. To promote equality and equity in society and increase competitiveness in the AI workforce, it is essential to broaden participation in AI education. However, a framework that guides teachers and learning designers in designing inclusive learning opportunities tailored to AI education is lacking. Universal Design for Learning (UDL) provides guidelines for making learning more inclusive across disciplines. Based on the principles of UDL, this paper proposes a framework to guide the design of inclusive AI learning. We conducted a systematic literature review to identify AI learning design-related frameworks and synthesized them into our proposed framework, which includes the core component of AI learning content (i.e., the five big ideas), anchored by the three UDL principles (the “why,” “what,” and “how” of learning), and six praxes with pedagogical examples of AI instruction. Alongside this, we present an illustrative example of applying our proposed framework in the context of a middle school AI summer camp. We hope this paper will guide researchers and practitioners in designing more inclusive AI learning experiences.

Authors

Yukyeong Song
University of Florida
y.song1@ufl.edu 

Lauren R. Weisberg
University of Florida
lweisberg@ufl.edu 

Shan Zhang
University of Florida
zhangshan@ufl.edu 

Xiaoyi Tian
University of Florida
tianx@ufl.edu 

Kristy Elizabeth Boyer
University of Florida
keboyer@ufl.edu 

Maya Israel
University of Florida
misrael@coe.ufl.edu 

Integrating Cybersecurity and Cryptology in Elementary Preservice Education: Influence on Perceptions, Confidence and Intent to Teach

Abstract

Cybersecurity educational efforts are urgently needed to introduce young people to the profession and to give students and teachers the cybersecurity knowledge to protect themselves from increasing cybercrime. In this study, 56 elementary preservice teachers participated in a 3-hour intervention within a technology integration course that introduced them to a curriculum, [TITLE], designed to teach cybersecurity and cryptology in upper elementary classrooms. The study used pre-post surveys, an engagement questionnaire, and observations to investigate the influence of this module on elementary preservice teachers’ perceptions of the importance of teaching cybersecurity in elementary school, confidence in their ability to teach cybersecurity, intention to teach cybersecurity in the future, and engagement during the intervention. While most preservice teachers had minimal prior experience with the content, there were positive changes in their perceptions, confidence, and intention to teach cybersecurity and cryptology content. There was also evidence of cognitive, behavioral, and affective engagement throughout the intervention. While longer interventions and opportunities to teach these concepts in authentic settings would likely be even more impactful for preservice teachers, this study suggests that even short interventions can have positive results and represents a step forward in bringing cybersecurity and cryptology content to preservice teacher education.

Authors

Christine Wusylko
University of Florida

Kara Dawson
University of Florida
dawson@coe.ufl.edu 

Pavlo Antonenko
University of Florida
p.antonenko@coe.ufl.edu 

Zhen Xu
University of North Carolina

The impact of near-peer virtual agents on computer science attitudes and collaborative dialogue

Abstract

Virtual learning companions, or pedagogical agents situated as “near peers”, have shown great promise for supporting learning, but little is known about their potential to scaffold other practices, such as collaboration. We report on the development and evaluation of a first-of-their-kind pair of virtual learning companions, designed to model good collaborative practices for dyads of elementary school learners, that are integrated within a block-based coding environment. Results from a study with fifteen dyads of children indicate that the learning companions fostered more higher-order questions and promoted significantly higher computer science attitude scores than a control condition. Qualitative analyses revealed that most children perceived the virtual learning companions as helpful, felt that the companions changed their interaction with their partners, and wanted to have the companions in their future work. These results highlight the potential for virtual learning companions to scaffold collaboration between young learners and provide direction for future investigation on the role that near-peer agents play in collaborative and task support.

Authors

Toni V. Earle-Randell
University of Florida
tearlerandell@ufl.edu 

Joseph B. Wiggins
University of Florida

Yingbo Ma
University of Florida

Mehmet Celepkolu
University of Florida
mckolu@ufl.edu 

Dolly Bounajim
North Carolina State University

Zhikai Gao
North Carolina State University

Julianna Martinez-Ruiz
University of Florida
juliannamartinez@ufl.edu 

Kristy Elizabeth Boyer
University of Florida
keboyer@ufl.edu 

Maya Israel
University of Florida
misrael@coe.ufl.edu 

Collin F. Lynch
North Carolina State University

Eric Wiebe
North Carolina State University

Artificial Intelligence Unplugged: Designing Unplugged Activities for a Conversational AI Summer Camp

Abstract

As conversational AI apps such as Siri and Alexa become ubiquitous among children, the CS education community has begun leveraging this popularity as a potential opportunity to attract young learners to AI, CS, and STEM learning. However, teaching conversational AI to K-12 learners remains challenging and largely unexplored, due in part to the abstract and complex nature of some conversational AI concepts, such as intents and training phrases. One promising approach to teaching complex topics in engaging ways is through unplugged activities, which have been shown to be highly effective in fostering CS conceptual understanding without using computers. Research efforts are underway toward developing unplugged activities for teaching AI, but few thus far have focused on conversational AI. This experience report describes the design and iterative refinement of a series of novel unplugged activities for a conversational AI summer camp for middle school learners. We discuss learner responses and lessons learned through our implementation of these unplugged activities. Our hope is that these insights support CS education researchers in making conversational AI learning more engaging and accessible to all learners.

Authors

Yukyeong Song
University of Florida

Xiaoyi Tian
University of Florida

Nandika Regatti
University of Florida

Gloria Ashiya Katuka
University of Florida

Kristy Elizabeth Boyer
University of Florida
keboyer@ufl.edu 

Maya Israel
University of Florida
misrael@coe.ufl.edu 

An engagement-aware predictive model to evaluate problem-solving performance from the study of adult skills’ (PIAAC 2012) process data

Abstract

The benefits of incorporating process information (i.e., process log data), the complex micro-level evidence from examinees, into large-scale assessments are well documented in research across large-scale assessment and learning analytics. This study introduces a deep-learning-based approach to predictive modeling of examinees’ performance in sequential, interactive problem-solving tasks from a large-scale assessment of adults’ educational competencies. The current methods disambiguate problem-solving behaviors using network analysis to inform the examinee’s performance across a series of problem-solving tasks. The unique contribution of this framework lies in the introduction of an “effort-aware” system, which incorporates information about the examinee’s task-engagement level to more accurately predict their task performance. The study demonstrates the potential of a high-performing deep learning model for learning analytics and examinee performance modeling in a large-scale problem-solving task environment, using data collected from the OECD Programme for the International Assessment of Adult Competencies (PIAAC 2012) test in multiple countries, including the United States, South Korea, and the United Kingdom. Our findings indicate a close relationship between examinees’ engagement levels and their problem-solving skills, as well as the importance of modeling them together for a better measure of problem-solving performance.
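The core "effort-aware" idea, combining behavioral features from process logs with an engagement indicator before predicting task success, can be sketched in a few lines. This is a minimal toy illustration, not the paper's deep learning model: the feature names, thresholds, and weights below are all invented for the example, and the deep sequence model is replaced by a single logistic score.

```python
import numpy as np

def effort_aware_features(log):
    """Build a toy feature vector from one examinee's task log.

    Feature names and the disengagement heuristic are illustrative
    assumptions, not the measures derived from PIAAC 2012 data.
    """
    n_actions = len(log["actions"])
    time_on_task = log["end_time"] - log["start_time"]
    # Crude engagement proxy: very few actions in very little time
    # is treated as disengaged (rapid-guessing-style behavior).
    engaged = float(n_actions >= 3 and time_on_task >= 10.0)
    return np.array([n_actions, time_on_task, engaged])

def predict_success(features, weights, bias):
    """Logistic score: P(task solved) from behavior + engagement features."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

log = {"actions": ["click", "drag", "submit"],
       "start_time": 0.0, "end_time": 42.5}
x = effort_aware_features(log)
p = predict_success(x, weights=np.array([0.05, 0.01, 1.5]), bias=-2.0)
```

The engagement flag enters the predictor as just another input dimension, so the model can learn that identical action sequences carry different evidence about skill depending on whether the examinee was engaged.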

Authors

Jinnie Shin
University of Florida
jinnie.shin@coe.ufl.edu 

Bowen Wang
University of Florida
bowen.wang@ufl.edu

Wallace N. Pinto Junior
University of Florida

Mark J. Gierl
University of Alberta

Automated Feedback for Student Math Responses Based on Multi-Modality and Fine-Tuning

Abstract

Open-ended mathematical problems are commonly used by teachers to assess students’ abilities. Previous automated assessments have primarily applied natural language processing to students’ textual answers. However, mathematical questions often involve answers containing images, such as number lines, geometric shapes, and charts. Several existing computer-based learning systems allow students to upload their handwritten answers for grading, yet methods for automated scoring of these image-based responses are limited, with even fewer multi-modal approaches that can handle both text and images simultaneously. In addition to scores, comments are another valuable scaffold for supporting students procedurally and conceptually, but their generation remains largely unautomated. In this study, we developed a multi-task model that simultaneously outputs scores and comments from students’ multi-modal artifacts (texts and images) by extending BLIP, a multi-modal visual reasoning model. Benchmarked against three baselines, we fine-tuned and evaluated our approach on a dataset of open-ended questions and students’ responses. We found that incorporating images alongside text inputs enhances feedback performance compared to using text alone, and that our model can effectively provide coherent and contextual feedback in mathematical settings.
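The multi-task, multi-modal structure described above, one shared fused representation feeding both a scoring head and a comment head, can be illustrated with a toy sketch. This is not the authors' fine-tuned BLIP model: the random projections stand in for pretrained encoders, the comment head picks from a fixed template list rather than generating text, and all dimensions and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for pretrained text/image encoders; in the paper these
# roles are played by fine-tuned BLIP components.
D_TXT, D_IMG, D_SHARED = 16, 32, 8
W_txt = rng.normal(size=(D_TXT, D_SHARED))
W_img = rng.normal(size=(D_IMG, D_SHARED))

def encode(text_feat, image_feat):
    """Fuse text and image features into one shared representation."""
    return np.tanh(text_feat @ W_txt + image_feat @ W_img)

# Two task heads sharing the fused representation:
W_score = rng.normal(size=(D_SHARED,))                    # scoring head
templates = ["Check your number line.",                   # comment head
             "Nice reasoning!",
             "Label the axes on your chart."]
W_comment = rng.normal(size=(D_SHARED, len(templates)))

def predict(text_feat, image_feat):
    h = encode(text_feat, image_feat)
    score = 1.0 / (1.0 + np.exp(-(h @ W_score)))          # score in (0, 1)
    comment = templates[int(np.argmax(h @ W_comment))]    # pick a comment
    return score, comment

s, c = predict(rng.normal(size=D_TXT), rng.normal(size=D_IMG))
```

Because both heads read the same fused representation, gradients from the scoring and commenting objectives would shape a single shared encoder during training, which is the essence of the multi-task setup.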

Authors

Hai Li
University of Florida

Chenglu Li
University of Utah

Wanli Xing
University of Florida
wanli.xing@coe.ufl.edu 

Sami Baral
Worcester Polytechnic Institute

Neil Heffernan
Worcester Polytechnic Institute