Toward a strengths-based model for designing virtual reality learning experiences for autistic users

Abstract

This study presents a strengths-based framework for designing virtual reality experiences tailored to the needs and abilities of autistic individuals. Recognizing the potential of virtual reality to provide engaging and immersive learning environments, the framework aligns the strengths and preferences of autistic users with the affordances of virtual reality platforms. Drawing on the existing literature and empirical findings, the framework highlights key areas of alignment, including visual perception, anxiety management, attention to differences, concrete thinking, and response to positive feedback. The framework emphasizes the importance of involving autistic individuals in the co-design and co-creation of virtual reality technologies to ensure a more tailored and preferred user experience. By adopting a strengths-based approach and actively involving autistic individuals, the design and implementation of virtual reality interventions can better address their unique needs and foster positive outcomes. The study concludes by advocating for continued research and collaboration to advance the field of virtual reality technology for autistic individuals and to work toward shared goals with the autistic community.

Authors

Matthew Schmidt
University of Georgia
matthew.schmidt@uga.edu

Nigel Newbutt
University of Florida
nigel.newbutt@coe.ufl.edu 

Noah Glaser
University of Missouri

Should we account for classrooms? Analyzing online experimental data with student-level randomization

Abstract

Emergent technologies present platforms for educational researchers to conduct randomized controlled trials (RCTs) and collect rich data to study students’ performance, behavior, learning processes, and outcomes in authentic learning environments. As educational research increasingly uses methods and data collection from such platforms, it is necessary to consider the most appropriate ways to analyze these data to draw causal inferences from RCTs. Here, we examine whether and how analysis results are impacted by accounting for multilevel variance in samples from RCTs with student-level randomization within one platform. We propose and demonstrate a method that leverages auxiliary non-experimental “remnant” data collected within a learning platform to inform analysis decisions. Specifically, we compare five commonly applied analysis methods to estimate treatment effects while accounting for, or ignoring, class-level factors and observed measures of confidence and accuracy to identify best practices under real-world conditions. We find that methods that account for groups as either fixed effects or random effects consistently outperform those that ignore group-level factors, even though randomization was applied at the student level. However, we find no meaningful differences between the use of fixed or random effects as a means to account for groups. We conclude that analyses of online experiments should account for the naturally nested structure of students within classes, despite the notion that student-level randomization may alleviate group-level differences. Further, we demonstrate how to use remnant data to identify appropriate methods for analyzing experiments. These findings provide practical guidelines for researchers conducting RCTs in similar educational technologies to make more informed decisions when approaching analyses.
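To make the comparison concrete, here is a minimal sketch, not the study’s code, of the three modeling choices the abstract contrasts: ignoring classes, modeling them as fixed effects, and modeling them as random effects. The dataset and column names (score, treated, class_id) are hypothetical placeholders for a student-level RCT.

```python
# Minimal sketch (illustrative assumptions, not the authors' analysis):
# estimate a treatment effect three ways on student-level RCT data.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per student; 'treated' is a 0/1 indicator.
df = pd.read_csv("experiment.csv")

# 1) Ignore class structure entirely.
ols = smf.ols("score ~ treated", data=df).fit()

# 2) Classes as fixed effects (one dummy per class).
fe = smf.ols("score ~ treated + C(class_id)", data=df).fit()

# 3) Classes as random intercepts (mixed-effects model).
re = smf.mixedlm("score ~ treated", data=df, groups=df["class_id"]).fit()

# Compare the treatment-effect estimates and their standard errors.
for name, res in [("ignore", ols), ("fixed", fe), ("random", re)]:
    print(name, round(res.params["treated"], 3), round(res.bse["treated"], 3))
```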

Authors

Avery H. Closser
Purdue University
aclosser@purdue.edu

Adam Sales
Worcester Polytechnic Institute

Anthony F. Botelho
University of Florida
abotelho@coe.ufl.edu 

Stealth Assessments’ Technical Architecture

Abstract

With advances in technology and the learning and assessment sciences, educators can develop learning environments that can accurately and engagingly assess and improve learners’ knowledge, skills, and other attributes via stealth assessment. Such learning environments use real-time estimates of learners’ competency levels to adapt activities to a learner’s ability level or provide personalized learning supports. To make stealth assessment possible, various technical components need to work together. The purpose of this chapter is to describe an example architecture that supports stealth assessment. Toward that end, the authors describe the requirements for both the game engine/server and the assessment engine/server, explain how these two systems should communicate with each other, and conclude with a discussion of the technical lessons learned from about a decade of work developing and testing a stealth-assessment game called Physics Playground.
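The chapter details the actual architecture; the sketch below only illustrates the general shape of the handoff it describes: the game engine posts an observable gameplay event to the assessment engine, which returns updated competency estimates the game can use for adaptation. The endpoint URL, message fields, and threshold are hypothetical assumptions, not Physics Playground’s actual protocol.

```python
# Hypothetical sketch of a game-engine -> assessment-engine exchange.
# All names (endpoint, fields, competency labels) are illustrative.
import json
import urllib.request

event = {
    "learner_id": "L-042",
    "task_id": "ramp-level-3",
    "observables": {"solved": True, "attempts": 2, "time_sec": 95.0},
}

req = urllib.request.Request(
    "http://localhost:8080/evidence",  # assumed assessment engine/server
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # Assumed reply: current competency estimates, e.g.
    # {"physics_understanding": 0.71}
    estimates = json.load(resp)

# The game engine can then adapt the next activity to the estimate.
if estimates.get("physics_understanding", 0.5) > 0.7:
    next_task = "harder_level"
else:
    next_task = "easier_level_with_support"
```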

Authors

Seyedahmad Rahimi
University of Florida
srahimi@coe.ufl.edu 

Russell G. Almond
Florida State University

Valerie J. Shute
Florida State University

Stealth Assessment and Digital Learning Game Design

Abstract

Stealth assessment is an innovative way to measure a set of student competencies through gameplay. The process starts with a competency model that comprises everything one wants to measure during the assessment: the theoretical concepts being assessed. The competency model is the glue of the stealth assessment, as alignment of the task model and evidence model to the competency model is key to creating a valid stealth assessment. The following chapter examines the different design challenges and approaches used for creating and implementing a stealth assessment through the lens of two design cases. In the first case, a stealth assessment is implemented in an already developed game. In the second case, an existing stealth assessment is expanded and a new competency model is needed to represent the expanded content. The chapter concludes with a discussion comparing the two approaches to implementing stealth assessment and implications for future design.
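As a toy illustration of the alignment the abstract emphasizes, the sketch below represents the three models as simple data structures and checks that every observable a task produces feeds a competency variable. All names and scoring rules are hypothetical, not drawn from either design case.

```python
# Toy sketch (hypothetical names only) of competency/evidence/task
# model alignment in an evidence-centered design.
competency_model = {
    "systems_thinking": ["identify_parts", "model_relations"],
}

evidence_model = {
    # observable -> (competency variable it informs, scoring rule)
    "correct_diagram": ("identify_parts", lambda obs: 1 if obs else 0),
    "links_drawn": ("model_relations", lambda obs: min(obs / 5, 1.0)),
}

task_model = {
    "map_the_ecosystem": ["correct_diagram", "links_drawn"],
}

# Alignment check: an observable with no evidence rule, or an evidence
# rule pointing at no competency variable, signals a validity gap.
variables = {v for vs in competency_model.values() for v in vs}
for task, observables in task_model.items():
    for obs in observables:
        assert obs in evidence_model, f"{task} produces unscored {obs}"
        assert evidence_model[obs][0] in variables, f"{obs} is unaligned"
```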

Authors

Ginny L. Smith
Florida State University

Valerie J. Shute
Florida State University

Seyedahmad Rahimi
University of Florida
srahimi@coe.ufl.edu 

Chih-Pu Dai
University of Hawaiʻi at Mānoa

Renata Kuba

Getting the First and Second Decimals Right: Psychometrics of Stealth Assessment

Abstract

Stealth assessments, like all assessments, must have three essential psychometric properties: validity, reliability, and fairness. Evidence-centered assessment design (ECD) provides a psychometrically sound framework for designing assessments based on a validity argument. This chapter describes how using ECD in the design of a stealth assessment helps designers meet these psychometric goals. It also discusses how to evaluate a stealth assessment’s validity, reliability, and fairness after it is designed and implemented.
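As a concrete illustration of two of those post-hoc checks, the sketch below computes an internal-consistency reliability estimate (Cronbach’s alpha) across task-level scores and a convergent-validity correlation with an external measure. The data are simulated and the design is an assumption; the chapter’s own evaluation methods may differ.

```python
# Illustrative sketch with simulated data (not the chapter's code):
# reliability and convergent validity of task-level assessment scores.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: learners x tasks matrix of task-level scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=200)  # simulated learner abilities
tasks = ability[:, None] + rng.normal(scale=1.0, size=(200, 12))
external = ability + rng.normal(scale=0.7, size=200)  # e.g., a physics test

print("alpha:", round(cronbach_alpha(tasks), 2))
print("validity r:",
      round(np.corrcoef(tasks.mean(axis=1), external)[0, 1], 2))
```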

Authors

Seyedahmad Rahimi
University of Florida
srahimi@coe.ufl.edu 

Russell G. Almond
Florida State University

Valerie J. Shute
Florida State University

Chen Sun

Pedagogical discourse markers in online algebra learning: Unraveling instructors’ communication using natural language processing

Abstract

Despite the proliferation of video-based instruction and its benefits, such as promoting student autonomy and self-paced learning, the complexities of online teaching remain a challenge. To be effective, educators require extensive training in digital teaching methodologies. As such, there is a pressing need to examine and comprehend the intricacies of instructors’ communication patterns within this context. This research addresses that need by studying pedagogical discourse in online video lectures in Algebra classes using computational linguistic tools and natural language processing (NLP). Using transcripts from 125 Algebra 1 video lectures, comprising 4,962 instances of pedagogical discourse, from five instructors at Math Nation, a virtual math learning environment, we analyzed the conveyance of linguistic, attitudinal, and emotional nuances. With the aid of 26 Coh-Metrix and SÉANCE features, we classified educators’ language choices, achieving an accuracy of 86.7%. Furthermore, variations in language choices, as signified by discourse markers, were examined through a K-means clustering approach. The resulting 17 clusters were grouped into interpersonal, structural, and cognitive pedagogic functions. Through this exploration, we demonstrate the promising potential of NLP in efficiently deciphering pedagogical communication patterns in video lectures. These insights open a new avenue for research aimed at assessing the efficacy of digital instruction by scrutinizing pedagogical discourse characteristics in computer-based learning environments.
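The sketch below outlines the shape of such a pipeline: a supervised classifier over linguistic features, followed by K-means clustering of discourse-marker instances. The feature matrix and labels are simulated placeholders for the 26 Coh-Metrix and SÉANCE features, and the classifier choice is an assumption, not the authors’ exact method.

```python
# Minimal sketch of a classify-then-cluster pipeline (simulated data,
# illustrative assumptions; not the study's code or results).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(4962, 26))    # placeholder: one row per discourse
y = rng.integers(0, 3, size=4962)  # placeholder language-choice labels

Xs = StandardScaler().fit_transform(X)

# Supervised step: classify educators' language choices from features.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cv accuracy:", cross_val_score(clf, Xs, y, cv=5).mean())

# Unsupervised step: group discourse-marker instances into clusters
# (the study reports 17) for interpretation as pedagogic functions.
labels = KMeans(n_clusters=17, n_init=10, random_state=0).fit_predict(Xs)
```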

Authors

Jinnie Shin
University of Florida
jinnie.shin@coe.ufl.edu

Renu Balyan
SUNY Old Westbury
balyanr@oldwestbury.edu

Michelle P. Banawan
Asian Institute of Management
mbanawan@aim.edu

Tracy Arner
Arizona State University
tarner@asu.edu 

Walter L. Leite
University of Florida
walter.leite@coe.ufl.edu 

Danielle S. McNamara
Arizona State University
dsmcnamara1@gmail.com