Simulations for the Precise Modeling of Exercises Including Time, Grades and Number of Attempts.
DOI: https://doi.org/10.9781/ijimai.2024.10.002

Keywords: Content Modeling, Learning Analytics, Simulated Students, Smart Content

Abstract
Students’ interactions with exercises can reveal useful features for redesigning the exercises or deploying them more effectively during the learning process. Precise modeling of an exercise includes how grades evolve depending on the number of attempts and the time spent on it. A missing aspect is how a precise relationship among grades, number of attempts, and time spent can be inferred from student interactions using machine learning methods, and how that relationship differs across factors. In this study, we analyzed the application of different machine learning methods for modeling different scenarios, varying the probability of answering correctly, the dataset size, and the data distribution. The results show that the models converged when the probability of random guessing was low. For exercises with an average of two attempts, the model converged at around 200 interactions; increasing the number of interactions beyond 200 did not further improve the model's accuracy.
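The setup described above can be illustrated with a minimal sketch: a simulated-student generator that produces (attempts, time, grade) interaction tuples under an assumed guessing probability, plus a simple model that estimates the average grade conditioned on the number of attempts. The success probability, retry penalty, and time range below are hypothetical choices for illustration, not the parameters used in the study.

```python
import random


def simulate_interactions(n, p_guess, max_attempts=5, seed=0):
    """Generate n synthetic (attempts, time_spent, grade) tuples for one exercise.

    Each simulated student retries until correct (or max_attempts is reached);
    every attempt succeeds with probability p_guess (random guessing) plus an
    assumed fixed skill term. Time spent grows with the number of attempts, and
    the grade drops by a fixed penalty per extra attempt.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        p_correct = min(1.0, p_guess + 0.4)            # guessing + assumed skill
        attempts = 1
        while rng.random() > p_correct and attempts < max_attempts:
            attempts += 1
        time_spent = attempts * rng.uniform(20, 60)    # seconds, illustrative range
        grade = max(0.0, 1.0 - 0.25 * (attempts - 1))  # assumed penalty per retry
        data.append((attempts, time_spent, grade))
    return data


def grade_model(data):
    """A minimal 'model': average grade conditioned on the number of attempts."""
    by_attempts = {}
    for attempts, _, grade in data:
        by_attempts.setdefault(attempts, []).append(grade)
    return {a: sum(grades) / len(grades) for a, grades in by_attempts.items()}


# Compare the model fitted on 200 interactions with one fitted on far more:
# if the per-attempt estimates barely change, the model has converged.
small = grade_model(simulate_interactions(200, p_guess=0.1, seed=1))
large = grade_model(simulate_interactions(2000, p_guess=0.1, seed=2))
```

Convergence can then be checked by comparing `small` and `large` attempt-by-attempt, mirroring the study's observation that extra interactions beyond the convergence point stop changing the model.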