A Fair Clustering Approach to Self-Regulated Learning Behaviors in a Virtual Learning Environment

Abstract

While virtual learning environments (VLEs) are widely used in K-12 education for classroom instruction and self-study, young students’ success in VLEs depends heavily on their self-regulated learning (SRL) skills. Therefore, it is important to provide personalized support for SRL. One important precursor to designing personalized SRL support is understanding students’ SRL behavioral patterns. Extensive studies have clustered SRL behaviors and prescribed personalized support for each cluster. However, limited attention has been paid to algorithmic bias and the fairness of clustering results. In this study, we “fairly” clustered the behavioral patterns of SRL using fair-capacitated clustering (FCC), an algorithm that incorporates constraints to ensure fairness in the assignment of data points. We used data from 14,251 secondary school learners in a virtual math learning environment. FCC captured six clusters of SRL behaviors in a fair way: three clusters in the high-performing group (i.e., H-1) Help-provider, H-2) Active SRL learner, and H-3) Active onlooker) and three clusters in the low-performing group (i.e., L-1) Quiz-taker, L-2) Dormant learner, and L-3) Inactive onlooker). The findings provide a better understanding of SRL patterns in online learning and can potentially guide the design of personalized support for SRL.
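
To make the clustering idea concrete, here is a minimal, self-contained Python sketch of a capacity-constrained assignment step in the spirit of fair-capacitated clustering. It is illustrative only: the actual FCC algorithm also enforces fairness with respect to a protected attribute, and all data, dimensions, and parameters below are hypothetical.

```python
# Illustrative sketch only: a capacity-constrained assignment step in the
# spirit of fair-capacitated clustering (FCC). The real FCC algorithm also
# balances a protected attribute within clusters; this toy version enforces
# only a per-cluster capacity cap on top of k-means centroids.
import numpy as np
from sklearn.cluster import KMeans

def capacitated_assign(X, centroids, capacity):
    """Greedily assign each point to the nearest centroid with room left."""
    n = len(X)
    # Distance of every point to every centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = np.full(n, -1)
    counts = np.zeros(len(centroids), dtype=int)
    # Process points in order of how strongly they prefer their best cluster.
    order = np.argsort(dists.min(axis=1))
    for i in order:
        for c in np.argsort(dists[i]):        # nearest centroid first
            if counts[c] < capacity:          # respect the capacity cap
                labels[i] = c
                counts[c] += 1
                break
    return labels

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))        # stand-in for SRL behavior feature vectors
k = 6                                # six behavioral clusters, as in the study
centroids = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_
labels = capacitated_assign(X, centroids, capacity=int(np.ceil(len(X) / k)))
print(np.bincount(labels))           # roughly equal cluster sizes
```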

Authors

Yukyeong Song
University of Florida

Chenglu Li
University of Utah

Wanli Xing
University of Florida
wanli.xing@coe.ufl.edu 

Shan Li
Lehigh University

Hakeoung Hannah Lee
The University of Texas

Learner experience and motivational beliefs for a VR Lab in advanced undergraduate biology

Abstract

Recently, interest in understanding the prospects of virtual environments, specifically virtual reality laboratories (VR Labs) – those that involve a first-person practicum experience via a desktop computer or head-mounted display – in science education has increased. Seemingly ingrained in the process of implementing a VR Lab is a supposition of interest, intrinsic motivation, and confidence on the part of participants. These attributes influence participants’ ability to start and complete the task and support sensemaking and meaningful learning. Thus, this study used expectancy-value theory, with the intent of informing learning experience design, to assess the relationship between task value beliefs and usability beliefs for a VR Lab. These beliefs were also compared to those for physical laboratories, where the only difference found was a higher cost belief for the VR Lab. Increased cost beliefs suggest that participants perceived the VR Lab as requiring more from them, a negative consequence that would decrease their potential for seeing it as a viable learning alternative. This finding suggests that a student’s level of content knowledge may influence their VR Lab experience. Cost belief was significantly related to utility value, giving credence to the theoretical model in which cost is a value component. The VR Lab was determined to be marginally usable, but usability was not related to any motivational beliefs.
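
As an illustration of the kind of analysis reported above, the following Python sketch runs a paired comparison of cost beliefs between the VR Lab and a physical lab, then correlates cost with utility value. All data are synthetic, and the instrument, scales, and tests used in the actual study may differ.

```python
# Hypothetical sketch of the comparisons described above. The ratings below
# are simulated for illustration only; they are not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 120                                            # hypothetical sample size
cost_physical = rng.normal(3.0, 0.8, n)            # Likert-style cost beliefs
cost_vr = cost_physical + rng.normal(0.4, 0.6, n)  # VR rated as more costly

t, p = stats.ttest_rel(cost_vr, cost_physical)     # paired comparison
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Relationship between cost and utility value (both value components in EVT).
utility = 4.0 - 0.5 * cost_vr + rng.normal(0, 0.5, n)
r, p_r = stats.pearsonr(cost_vr, utility)
print(f"r = {r:.2f}, p = {p_r:.4f}")
```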

Authors

Shalaunda M. Reeves
University of Tennessee Knoxville

Charlotte A. Bolch
Midwestern University

Richard T. Bex II
Illinois State University

Kent J. Crippen
University of Florida
kcrippen@coe.ufl.edu 

Using Ant Colony Optimization to Identify Optimal Sample Allocations in Cluster-Randomized Trials

Abstract

When designing cluster-randomized trials (CRTs), one important consideration is determining the proper sample sizes across levels and treatment conditions to cost-efficiently achieve adequate statistical power. This consideration is usually addressed in an optimal design framework by leveraging the cost structures of sampling and optimizing the sampling ratios across treatment conditions and levels of the hierarchy. Traditionally, optimization is done through the first-order derivative approach by setting the first-order derivatives equal to zero to solve for the optimal design parameters. However, the first-order derivative method is incapable of properly handling the optimization task when statistical power formulas are complex, such as those for CRTs detecting mediation effects under the joint significance test. The current study proposes using an ant colony optimization (ACO) algorithm to identify optimal allocations. We evaluate the algorithm’s performance for CRTs detecting main and mediation effects. The results show that the ACO algorithm can identify optimal sample allocations for CRTs investigating main effects with the same design efficiency as those identified through the first-order derivative method. Furthermore, it can efficiently identify optimal sample allocations for CRTs investigating mediation effects under the joint significance test. We have implemented the proposed methods in the R package odr.
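
The sketch below illustrates the general idea of an ACO-style search for a power-maximizing allocation in a two-level CRT under a fixed budget. It is not the implementation in the R package odr; the cost structure and the normal-approximation power formula are standard simplifications, and all parameter values are hypothetical.

```python
# Minimal ant-colony-style search over (cluster size n, treated proportion p)
# maximizing power for a two-level CRT main effect under a fixed budget.
# Illustrative sketch only; not the odr package's implementation.
import numpy as np
from scipy.stats import norm

rho, delta, alpha = 0.2, 0.3, 0.05        # ICC, effect size, significance level
c1t, c2t = 10.0, 300.0                    # per-student / per-cluster cost, treatment
c1c, c2c = 5.0, 200.0                     # per-student / per-cluster cost, control
budget = 50_000.0
z_crit = norm.ppf(1 - alpha / 2)

n_grid = np.arange(2, 61)                 # candidate cluster sizes
p_grid = np.linspace(0.1, 0.9, 17)        # candidate treated proportions

def power(n, p):
    # Number of clusters affordable at this allocation under the budget.
    J = budget / (p * (c2t + n * c1t) + (1 - p) * (c2c + n * c1c))
    var = (rho + (1 - rho) / n) / (p * (1 - p) * J)
    lam = delta / np.sqrt(var)
    return norm.cdf(lam - z_crit) + norm.cdf(-lam - z_crit)

# Pheromone trails over the two discrete decision dimensions.
tau_n, tau_p = np.ones(len(n_grid)), np.ones(len(p_grid))
rng, best, evap = np.random.default_rng(0), (0.0, None, None), 0.1
for _ in range(200):                      # iterations
    for _ in range(20):                   # ants per iteration
        i = rng.choice(len(n_grid), p=tau_n / tau_n.sum())
        j = rng.choice(len(p_grid), p=tau_p / tau_p.sum())
        pw = power(n_grid[i], p_grid[j])
        if pw > best[0]:
            best = (pw, n_grid[i], p_grid[j])
        tau_n[i] += pw                    # deposit pheromone proportional to power
        tau_p[j] += pw
    tau_n *= (1 - evap)                   # evaporation keeps the search exploring
    tau_p *= (1 - evap)

print(f"power={best[0]:.3f} at n={best[1]}, p={best[2]:.2f}")
```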

Authors

Zuchao Shen
University of Georgia
zuchao.shen@gmail.com

Walter L. Leite
University of Florida
walter.leite@coe.ufl.edu 

Huibin Zhang
University of Tennessee Knoxville

Jia Quan
University of Kansas

Huan Kuang
Florida State University

Automatically Detecting Confusion and Conflict During Collaborative Learning Using Linguistic, Prosodic, and Facial Cues

Abstract

During collaborative learning, confusion and conflict emerge naturally. However, persistent confusion or conflict has the potential to generate frustration and significantly impede learners’ performance. Early automatic detection of confusion and conflict would allow us to support early interventions, which can in turn improve students’ experience with and outcomes from collaborative learning. Despite extensive studies modeling confusion during solo learning, further work is needed on collaborative learning. This paper presents a multimodal machine-learning framework that automatically detects confusion and conflict during collaborative learning. We used data from 38 elementary school learners who collaborated on a series of programming tasks in classrooms. We trained deep multimodal learning models to detect confusion and conflict using features automatically extracted from learners’ collaborative dialogues: (1) language-derived features, including TF-IDF, lexical semantics, and sentiment; (2) audio-derived features, including acoustic-prosodic features; and (3) video-derived features, including eye gaze, head pose, and facial expressions. Our results show that multimodal models combining semantics, pitch, and facial expressions detected confusion and conflict with the highest accuracy, outperforming all unimodal models. We also found that prosodic cues are more predictive of conflict, while facial cues are more predictive of confusion. This study contributes to the automated modeling of collaborative learning processes and the development of real-time adaptive support to enhance learners’ collaborative learning experience in classroom contexts.
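
The following sketch shows one common way to realize the multimodal idea described above: a late-fusion network with one encoder per modality and a shared classification head. The feature dimensions, layer sizes, and fusion strategy are assumptions for illustration, not the paper's actual architecture.

```python
# Minimal late-fusion sketch: per-modality encoders whose outputs are
# concatenated before a shared head with two binary outputs (confusion,
# conflict). Dimensions are hypothetical placeholders.
import torch
import torch.nn as nn

class MultimodalDetector(nn.Module):
    def __init__(self, text_dim=300, audio_dim=88, video_dim=136, hidden=64):
        super().__init__()
        # One small encoder per modality.
        self.text = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.audio = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.video = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        # Fused representation -> two logits: confusion, conflict.
        self.head = nn.Linear(3 * hidden, 2)

    def forward(self, text_x, audio_x, video_x):
        fused = torch.cat([self.text(text_x), self.audio(audio_x),
                           self.video(video_x)], dim=-1)
        return self.head(fused)            # train with BCEWithLogitsLoss

model = MultimodalDetector()
logits = model(torch.randn(8, 300), torch.randn(8, 88), torch.randn(8, 136))
print(logits.shape)                        # torch.Size([8, 2])
```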

Authors

Yingbo Ma
University of Florida

Yukyeong Song
University of Florida

Mehmet Celepkolu
University of Florida
mckolu@ufl.edu 

Kristy Elizabeth Boyer
University of Florida
keboyer@ufl.edu 

Eric Wiebe
North Carolina State University

Collin F. Lynch
North Carolina State University

Maya Israel
University of Florida
misrael@coe.ufl.edu 

Through the lens of artificial intelligence: A novel study of spherical video-based virtual reality usage in autism and neurotypical participants

Abstract

The current study explores the use of computer vision and artificial intelligence (AI) methods for analyzing 360-degree spherical video-based virtual reality (SVVR) data. The study aimed to explore the potential of AI, computer vision, and machine learning methods (including entropy analysis, Markov chain analysis, and sequential pattern mining) for extracting salient information from SVVR video data. The research questions focused on how autistic and neurotypical participants’ usage differed in terms of behavior sequences, object associations, and common patterns, and on the extent to which the predictability and variability of findings might distinguish the two participant groups and provide provisional insights into the dynamics of their usage behaviors. Findings from the entropy analysis suggest the neurotypical group showed greater homogeneity and predictability, while the autistic group displayed significant heterogeneity and variability in behavior. Results from the Markov chain analysis revealed distinct engagement patterns: autistic participants exhibited a wide range of transition probabilities, suggesting varied SVVR engagement strategies, whereas the neurotypical group demonstrated more predictable behaviors. Sequential pattern mining results indicated that the autistic group engaged with a broader spectrum of classes within the SVVR environment, hinting at their attraction to a diverse set of stimuli. This research provides a preliminary foundation for future studies in this area, as well as practical implications for designing effective SVVR learning interventions for autistic individuals.
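
Two of the analyses named above can be illustrated compactly: Shannon entropy of a coded behavior sequence and a first-order Markov transition matrix. The behavior codes and example sequence below are made up for illustration; the study's actual coding scheme may differ.

```python
# Illustrative sketch: Shannon entropy of the action distribution and a
# first-order Markov transition matrix over a coded behavior sequence.
import numpy as np
from collections import Counter

def shannon_entropy(seq):
    counts = np.array(list(Counter(seq).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def transition_matrix(seq, states):
    idx = {s: i for i, s in enumerate(states)}
    T = np.zeros((len(states), len(states)))
    for a, b in zip(seq, seq[1:]):         # count observed transitions
        T[idx[a], idx[b]] += 1
    # Row-normalize to transition probabilities (guard against empty rows).
    rows = T.sum(axis=1, keepdims=True)
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)

states = ["look", "move", "interact", "idle"]   # hypothetical behavior codes
seq = ["look", "move", "interact", "look", "move", "idle", "look", "interact"]
print(f"entropy = {shannon_entropy(seq):.2f} bits")
print(transition_matrix(seq, states).round(2))
```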

Authors

Matthew Schmidt
University of Georgia
matthew.schmidt@uga.edu 

Noah Glaser
University of Missouri

Heath Palmer
University of Cincinnati

Carla Schmidt

Wanli Xing
University of Florida
wanli.xing@coe.ufl.edu 

Learning experience design (LXD) professional competencies: an exploratory job announcement analysis

Abstract

The purpose of this study was to explore the competencies of professionals working in the nascent area of Learning Experience Design (LXD). Following systematic procedures and drawing inspiration from our conceptual framework of LXD, which emphasizes knowledge, skill, and ability (KSA) statements, we collected and coded N = 388 unique LXD job announcements over a four-month period. Using LXD job announcements as the unit of analysis, this research used a concurrent mixed-methods approach, first qualitatively coding each announcement for the presence or absence of a range of KSA statements and then using Exploratory Factor Analysis (EFA) for dichotomously scored data to discern the core competencies required of LXD professionals. Our processes resulted in a total of 69 knowledge statements, 38 skill statements, and 72 ability statements. Our findings unveil core competencies across the KSA domains, such as knowledge of multimedia production technologies and software, emerging technology design and development skills, and the ability to apply human-centered design to create learning products and experiences. We discuss our findings in relation to the more established domain of instructional design (ID) and present potentially unique facets of LXD in this regard. Limitations and delimitations are provided along with closing remarks.
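
As a rough illustration of the analytic pipeline, the sketch below runs an EFA on dichotomously scored (present/absent) indicators using the factor_analyzer package. For binary items a tetrachoric correlation matrix is typically preferred; this toy example, with synthetic data and hypothetical indicator names, skips that step for brevity.

```python
# Hedged sketch of EFA on present/absent (0/1) KSA indicators. Data are
# synthetic: 388 "announcements" x 12 hypothetical indicators driven by
# 3 latent factors. Not the study's data or exact procedure.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
latent = rng.normal(size=(388, 3))
# Sparse positive loadings so each indicator reflects only some factors.
loadings = rng.uniform(0.5, 1.0, size=(3, 12)) * (rng.random((3, 12)) > 0.6)
X = (latent @ loadings + rng.normal(0, 1, (388, 12))) > 0   # dichotomize
df = pd.DataFrame(X.astype(int), columns=[f"ksa_{i}" for i in range(12)])

fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=df.columns).round(2))
```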

Authors

Xiaoman Wang
University of Florida

Matthew Schmidt
University of Georgia
matthew.schmidt@uga.edu

Albert Ritzhaupt
University of Florida
aritzhaupt@coe.ufl.edu 

Jie Lu
Oklahoma State University

Rui Tammy Huang
University of Florida
rui.huang@coe.ufl.edu 

Minyoung Lee
University of Florida
minyounglee@ufl.edu