Thesis
Computational Models of Human Learning: Applications for Tutor Development, Behavior Prediction, and Theory Testing
Aug. 16th, Gates Hillman Center 4405, 1pm
Poster | Document
Committee
Ken R. Koedinger | HCII (Chair)
Vincent Aleven | HCII
John R. Anderson | Psychology/HCII
Pat Langley | CS, UOA (External)
Abstract
Intelligent Tutoring Systems are effective for improving students’ learning outcomes (Koedinger & Anderson, 1997; Pane et al., 2013; Bowen et al., 2013). However, constructing tutoring systems that are pedagogically effective has been widely recognized as a challenging problem (Murray, 1999; Murray, 2003). In this thesis, I explore the use of computational models of apprentice learning, or computer models that learn interactively from worked examples and feedback, to support tutor development. In particular, I investigate their use for authoring expert models via demonstrations and feedback (Matsuda et al., 2014), predicting student behavior within tutors (VanLehn et al., 1994), and testing alternative learning theories (MacLellan et al., 2017).
To support these investigations, I present the Apprentice Learner Architecture, which posits the types of knowledge, performance, and learning components needed for apprentice learning and enables the generation and testing of alternative models. I use this architecture to create two models: the DECISION TREE model, which non-incrementally learns when to apply its skills, and the TRESTLE model, which instead learns incrementally. The models are identical in all other respects, and both draw on the same small set of prior knowledge for all simulations (six operators and three types of relational knowledge). Despite their limited prior knowledge, I demonstrate their use for efficiently authoring a novel experimental design tutor and show that they are capable of achieving human-level performance in seven additional tutoring systems that teach a wide range of knowledge types (associations, categories, and skills) across multiple domains (language, math, engineering, and science).
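To make the contrast between the two models concrete, the minimal Python sketch below shows a pluggable "when-learning" component. It is an illustration under stated assumptions, not the thesis's Apprentice Learner code: the class names, feature encoding, and choice of off-the-shelf classifiers are all hypothetical. The batch variant refits a decision tree on its entire history after each example, while the incremental variant updates one example at a time, as a rough stand-in for TRESTLE's incremental concept formation.

```python
# Illustrative sketch only: hypothetical stand-ins for the two when-learning
# strategies described above, not the actual Apprentice Learner implementation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import BernoulliNB


class BatchWhenLearner:
    """Non-incremental: retrains on the full example history every time."""

    def __init__(self):
        self.X, self.y = [], []
        self.clf = DecisionTreeClassifier()

    def update(self, features, correct):
        self.X.append(features)
        self.y.append(int(correct))
        self.clf.fit(np.array(self.X), np.array(self.y))  # refit from scratch

    def applicable(self, features):
        if not self.X:
            return False
        return bool(self.clf.predict(np.array([features]))[0])


class IncrementalWhenLearner:
    """Incremental: updates its statistics one example at a time
    (a simple proxy for TRESTLE's incremental concept formation)."""

    def __init__(self):
        self.clf = BernoulliNB()  # assumes binary match features
        self.trained = False

    def update(self, features, correct):
        self.clf.partial_fit(np.array([features]), [int(correct)], classes=[0, 1])
        self.trained = True

    def applicable(self, features):
        if not self.trained:
            return False
        return bool(self.clf.predict(np.array([features]))[0])
```

In this sketch, only the when-learning component differs between the two configurations; everything else an agent does (inducing skills from demonstrations, processing feedback) would be shared, mirroring the controlled comparison described above.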
I shows that the models are capable of predicting which versions of the Fraction Arithmetic and Box and Arrows tutors are more effective for human students’ learning. Further, I use a mixed-effects regression analysis to evaluate the fit of the models to the available human data and show that across all seven domains the TRESTLE model better fits the human data than the DECISION TREE model, supporting the theory that humans learn the conditions under which skills apply incrementally, rather than non-incrementally as prior work has suggested (Matsuda et al., 2009; Li, 2013). This work lays the foundation for the development of a Model Human Learner—similar to Card, Moran, and Newell’s (1986) Model Human Processor—that encapsulates psychological and learning science findings in a format that researchers and instructional designers can use to create effective tutoring systems.
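As an illustration of the kind of model-fit analysis referred to above, the short sketch below fits a mixed-effects regression of student error on practice opportunity and a model's predicted error, with per-student random effects. It is not the thesis's exact specification; the data file and column names are assumptions for the example.

```python
# Hedged sketch: relating observed errors to a model's predictions with a
# mixed-effects regression. File name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per (student, skill, opportunity), with an
# observed error indicator and the simulated model's predicted error.
df = pd.read_csv("tutor_transactions.csv")

model = smf.mixedlm(
    "error ~ opportunity + predicted_error",  # fixed effects
    data=df,
    groups=df["student"],                     # random intercept per student
    re_formula="~opportunity",                # random slope for learning rate
)
result = model.fit()
print(result.summary())  # inspect coefficients and log-likelihood
```

Fitting the same specification once with each model's predictions and comparing the resulting fits is one way such a comparison across the seven domains could be operationalized.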