Through the lens of artificial intelligence: A novel study of spherical video-based virtual reality usage in autism and neurotypical participants

Abstract

The current study explores the use of computer vision and artificial intelligence (AI) methods for analyzing 360-degree spherical video-based virtual reality (SVVR) data. The study aimed to explore the potential of AI, computer vision, and machine learning methods (including entropy analysis, Markov chain analysis, and sequential pattern mining) in extracting salient information from SVVR video data. The research questions focused on how autistic and neurotypical usage differed in terms of behavior sequences, object associations, and common patterns, and on the extent to which the predictability and variability of findings might distinguish the two participant groups and provide provisional insights into the dynamics of their usage behaviors. Findings from the entropy analysis suggest that the neurotypical group showed greater homogeneity and predictability, while the autistic group displayed significant heterogeneity and variability in behavior. Results from the Markov chain analysis revealed distinct engagement patterns: autistic participants exhibited a wide range of transition probabilities, suggesting varied SVVR engagement strategies, whereas the neurotypical group demonstrated more predictable behaviors. Sequential pattern mining results indicated that the autistic group engaged with a broader spectrum of classes within the SVVR environment, hinting at their attraction to a diverse set of stimuli. This research provides a preliminary foundation for future studies in this area, as well as practical implications for designing effective SVVR learning interventions for autistic individuals.
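
The entropy and Markov chain analyses named above can be illustrated with a minimal sketch. The behavior labels and sequence below are invented for illustration and are not taken from the study's coding scheme or data.

```python
import math
from collections import Counter

# Hypothetical coded behavior sequence (e.g., gaze/interaction targets
# extracted from SVVR video). Labels are illustrative placeholders.
sequence = ["avatar", "object", "object", "menu", "avatar",
            "object", "menu", "menu", "avatar", "object"]

# Shannon entropy (bits) of the behavior distribution: higher values
# indicate more variable, less predictable usage.
counts = Counter(sequence)
total = len(sequence)
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())

# First-order Markov chain: estimate transition probabilities between
# consecutive behaviors from observed bigram counts.
transitions = Counter(zip(sequence, sequence[1:]))
outgoing = Counter(sequence[:-1])
probs = {(a, b): n / outgoing[a] for (a, b), n in transitions.items()}
```

A group whose sequences yield lower entropy and a few dominant transition probabilities would look "homogeneous and predictable" in this framing, while flatter distributions and many low-probability transitions would look heterogeneous.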

Authors

Matthew Schmidt
University of Georgia
matthew.schmidt@uga.edu 

Noah Glaser
University of Missouri

Heath Palmer
University of Cincinnati

Carla Schmidt

Wanli Xing
University of Florida
wanli.xing@coe.ufl.edu 

Learning experience design (LXD) professional competencies: an exploratory job announcement analysis

Abstract

The purpose of this study was to explore the competencies of professionals working in the nascent area of Learning Experience Design (LXD). Following systematic procedures and drawing inspiration from our conceptual framework of LXD, which emphasizes knowledge, skill, and ability (KSA) statements, we collected and coded N = 388 unique LXD job announcements over a four-month period. Using LXD job announcements as the unit of analysis, this research used a concurrent mixed-methods approach, first qualitatively coding LXD job announcements for the presence or absence of a range of KSA statements and then applying Exploratory Factor Analysis (EFA) for dichotomously scored data to discern the core competencies required of LXD professionals. Our processes resulted in a total of 69 knowledge statements, 38 skill statements, and 72 ability statements. Our findings unveil core competencies across the KSA domains, such as knowledge of multimedia production technologies and software, emerging technology design and development skills, and the ability to apply human-centered design to create learning products and experiences. We discuss our findings in relation to the more established domain of instructional design (ID) and present potentially unique facets of LXD in this regard. Limitations and delimitations are provided along with closing remarks.

Authors

Xiaoman Wang
University of Florida

Matthew Schmidt
University of Georgia
matthew.schmidt@uga.edu

Albert Ritzhaupt
University of Florida
aritzhaupt@coe.ufl.edu 

Jie Lu
Oklahoma State University

Rui Tammy Huang
University of Florida
rui.huang@coe.ufl.edu 

Minyoung Lee
University of Florida
minyounglee@ufl.edu 

Toward a strengths-based model for designing virtual reality learning experiences for autistic users

Abstract

This study presents a strengths-based framework for designing virtual reality experiences tailored to the needs and abilities of autistic individuals. Recognizing the potential of virtual reality to provide engaging and immersive learning environments, the framework aligns the strengths and preferences of autistic users with the affordances of virtual reality platforms. Drawing on the existing literature and empirical findings, the framework highlights key areas of alignment, including visual perception, anxiety management, attention to differences, concrete thinking, and response to positive feedback. The framework emphasizes the importance of involving autistic individuals in the co-design and co-creation of virtual reality technologies to ensure a more tailored and preferred user experience. By adopting a strengths-based approach and actively involving autistic individuals, the design and implementation of virtual reality interventions can better address their unique needs and foster positive outcomes. The study concludes by advocating for continued research and collaboration to advance the field of virtual reality technology for autistic individuals and to work toward shared goals with the autistic community.

Authors

Matthew Schmidt
University of Georgia
matthew.schmidt@uga.edu

Nigel Newbutt
University of Florida
nigel.newbutt@coe.ufl.edu 

Noah Glaser
University of Missouri

Should we account for classrooms? Analyzing online experimental data with student-level randomization

Abstract

Emergent technologies present platforms for educational researchers to conduct randomized controlled trials (RCTs) and collect rich data to study students’ performance, behavior, learning processes, and outcomes in authentic learning environments. As educational research increasingly uses methods and data collection from such platforms, it is necessary to consider the most appropriate ways to analyze these data to draw causal inferences from RCTs. Here, we examine whether and how analysis results are impacted by accounting for multilevel variance in samples from RCTs with student-level randomization within one platform. We propose and demonstrate a method that leverages auxiliary non-experimental “remnant” data collected within a learning platform to inform analysis decisions. Specifically, we compare five commonly applied analysis methods to estimate treatment effects while accounting for, or ignoring, class-level factors and observed measures of confidence and accuracy to identify best practices under real-world conditions. We find that methods that account for groups as either fixed effects or random effects consistently outperform those that ignore group-level factors, even though randomization was applied at the student level. However, we found no meaningful differences between the use of fixed or random effects as a means to account for groups. We conclude that analyses of online experiments should account for the naturally nested structure of students within classes, despite the notion that student-level randomization may alleviate group-level differences. Further, we demonstrate how to use remnant data to identify appropriate methods for analyzing experiments. These findings provide practical guidelines for researchers conducting RCTs in similar educational technologies to make more informed decisions when approaching analyses.
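
The core finding can be illustrated with a small simulation sketch. All numbers below (class counts, baseline variance, effect size) are invented for illustration; the sketch compares a naive difference-in-means with a class fixed-effects (within-class demeaning) estimator under student-level randomization with students nested in classes.

```python
import random
from statistics import mean, stdev

# Hypothetical setup: 20 classes with different baselines, 10 students
# per class, treatment randomized at the student level.
TRUE_EFFECT = 2.0

def one_replication(rng):
    naive_t, naive_c = [], []
    num = den = 0.0
    for _ in range(20):                      # classes
        baseline = rng.gauss(0, 5)           # class-level variation
        grp = []
        for _ in range(10):                  # students per class
            z = rng.random() < 0.5           # student-level assignment
            y = baseline + TRUE_EFFECT * z + rng.gauss(0, 1)
            grp.append((z, y))
            (naive_t if z else naive_c).append(y)
        # Within-class demeaning removes the class baseline entirely,
        # which is what including class fixed effects accomplishes.
        zbar = mean(float(z) for z, _ in grp)
        ybar = mean(y for _, y in grp)
        for z, y in grp:
            num += (z - zbar) * (y - ybar)
            den += (z - zbar) ** 2
    return mean(naive_t) - mean(naive_c), num / den

rng = random.Random(0)
naive_est, fe_est = zip(*(one_replication(rng) for _ in range(200)))
```

With randomization, both estimators are unbiased, but across replications the fixed-effects estimates scatter far less around the true effect, mirroring the abstract's conclusion that accounting for classes pays off even with student-level randomization.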

Authors

Avery H. Closser
Purdue University
aclosser@purdue.edu

Adam Sales
Worcester Polytechnic Institute

Anthony F. Botelho
University of Florida
abotelho@coe.ufl.edu 

Stealth Assessments’ Technical Architecture

Abstract

With advances in technology and the learning and assessment sciences, educators can develop learning environments that can accurately and engagingly assess and improve learners’ knowledge, skills, and other attributes via stealth assessment. Such learning environments use real-time estimates of learners’ competency levels to adapt activities to a learner’s ability level or provide personalized learning supports. To make stealth assessment possible, various technical components need to work together. The purpose of this chapter is to describe an example architecture that supports stealth assessment. Toward that end, the authors describe the requirements for both the game engine/server and the assessment engine/server, how these two systems should communicate with each other, and conclude with a discussion on the technical lessons learned from about a decade of work developing and testing a stealth-assessment game called Physics Playground.
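
The game engine/assessment engine split described above can be sketched in miniature. The message schema and the update rule below are assumptions for illustration only; Physics Playground's actual stealth assessment scores observables against Bayesian network evidence models, which a simple Beta-Bernoulli update merely stands in for here.

```python
import json

class AssessmentEngine:
    """Toy assessment server: consumes scored observables from a game
    engine and maintains a running competency estimate."""

    def __init__(self):
        # Beta(1, 1) prior over the probability of a successful observable.
        self.successes = 1
        self.failures = 1

    def handle(self, message: str) -> float:
        """Process one JSON telemetry message (hypothetical schema)."""
        event = json.loads(message)
        if event["observable"] == "success":
            self.successes += 1
        else:
            self.failures += 1
        return self.estimate()

    def estimate(self) -> float:
        # Posterior mean; the game engine could poll this in real time
        # to adapt level difficulty or trigger learning supports.
        return self.successes / (self.successes + self.failures)

# The game engine would send messages like these over a socket or HTTP:
engine = AssessmentEngine()
for msg in ['{"level": "ramp-1", "observable": "success"}',
            '{"level": "ramp-2", "observable": "failure"}',
            '{"level": "ramp-3", "observable": "success"}']:
    estimate = engine.handle(msg)
```

The essential architectural point survives the simplification: the game engine only emits evidence, the assessment engine owns the competency model, and the two communicate through a narrow, serializable message contract.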

Authors

Seyedahmad Rahimi
University of Florida
srahimi@coe.ufl.edu 

Russell G. Almond
Florida State University

Valerie J. Shute
Florida State University