Snap!2020: Computer-Based Testing

Panel with Irene Ortega, Gurkaran Singh Goindi, 🐼 Shein Lin Phyo 🐧, Maxson Yang, Benjamin Belfus, Shannon Hearn, Jonas Ong, Qitian Liao, Alyssa Sugarman, Dan Garcia, Bojin Yao, Eduardo Huerta

In STEM higher education, courses conduct both formative and summative assessments in a manner that thwarts mastery learning and magnifies equity gaps in student preparation. In short, this is “constant time, variable learning”—course pacing is the same for all students regardless of learning speed, all students receive a small number of “one-shot” summative assessments at the same time, and not all will master the material (or even pass). In contrast, mastery learning is “constant learning over variable time”—some students may take longer than others to reach the same level of mastery, but they can eventually do so with increased practice and instructor support. The challenge with implementing mastery learning is that increased practice in STEM courses means solving more practice problems, but developing good practice problems requires instructor effort, to say nothing of giving the students feedback on their performance on those problems.

To address these challenges, Dan Garcia’s lab at UC Berkeley is developing paradigm-based question generators (PQGs) to enable both formative- and summative-assessment mastery learning for “The Beauty and Joy of Computing.” Students will have as much practice and time with Snap! concepts as necessary to achieve mastery, rather than following a rigid schedule that may result in variable mastery. Our hypothesis for this project is that PQGs will result in higher retention, stronger learning outcomes, higher participation in computing for underrepresented and minority students, and more effective use of instructor time to identify and assist struggling students.
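
To make the idea concrete, the sketch below shows one way a question generator could pair a randomized prompt with an automatic solver, so each student can draw fresh, auto-gradable variants of the same concept. This is an illustrative assumption only; the function names and the example question are hypothetical and do not come from the lab's actual PQG implementation.

```python
import random

# Hypothetical sketch of a parameterized question generator:
# a template produces a randomized prompt, and a solver computes
# the answer key so feedback can be immediate and automatic.

def generate_list_indexing_question(rng=random):
    """Produce one randomized variant of a 'report item N of a list' question."""
    items = rng.sample(["apple", "pear", "plum", "fig", "kiwi", "date"], k=4)
    index = rng.randint(1, len(items))           # Snap! lists are 1-indexed
    prompt = f"Given the list {items}, what does 'item {index} of' report?"
    answer = items[index - 1]                    # solver computes the answer key
    return prompt, answer

def check(student_response, answer):
    """Instant, automated feedback on each generated variant."""
    return student_response.strip().lower() == answer.lower()

if __name__ == "__main__":
    prompt, answer = generate_list_indexing_question()
    print(prompt)
    print("Correct!" if check(answer, answer) else "Try another variant.")
```

Because variants are generated on demand, a student who has not yet reached mastery can simply request more practice, and the instructor's time shifts from writing and grading problems to helping students who remain stuck.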
