An Item Response Theory Evaluation of a Language-Independent CS1 Knowledge Assessment
1:45 PM - 2:10 PM
Fri Mar 1, 2019
Hyatt: Greenway B/C (2nd floor)


Tests play an important role in computing education, measuring achievement and differentiating between learners with varying levels of knowledge. But tests may contain flaws that confuse learners, or may be too difficult or too easy, making test scores less valid and reliable. We analyzed the Second Computer Science 1 (SCS1) concept inventory, a widely used assessment of introductory computer science (CS1) knowledge, for such flaws. The prior validation study of the SCS1 used Classical Test Theory and was unable to determine whether differences in scores resulted from question properties or from learner knowledge. We extended this validation by modeling question difficulty and learner knowledge separately with Item Response Theory (IRT) and by conducting expert review of problematic questions. We found that three questions measured knowledge unrelated to the rest of the SCS1, and that four questions were too difficult for our sample of 489 undergraduates from two universities.
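To illustrate the key idea the abstract describes, separating item difficulty from learner ability, here is a minimal sketch of the standard two-parameter logistic (2PL) IRT model. This is a generic textbook formulation, not necessarily the exact model the authors fit; the function names and parameter values are illustrative.

```python
import math

def p_correct(theta, b, a=1.0):
    """2PL IRT item response function.

    Probability that a learner with ability `theta` answers an item
    with difficulty `b` (and discrimination `a`) correctly. Under this
    model, score differences can be attributed to the item (b, a) or
    to the learner (theta) separately.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# For an average learner (theta = 0):
easy_item = p_correct(0.0, b=-1.0)       # ≈ 0.73, most learners succeed
too_hard_item = p_correct(0.0, b=2.0)    # ≈ 0.12, item difficulty exceeds
                                         # typical ability, as with the four
                                         # overly difficult SCS1 questions
print(easy_item, too_hard_item)
```

An item whose estimated difficulty `b` sits far above the ability distribution of the sample tells us little about differences among those learners, which is one way IRT flags questions as "too difficult."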

Benjamin Xie Graduate Research Assistant, University of Washington Information School
Matt J. Davidson PhD Candidate, College of Education, University of Washington
Min Li University of Washington
Amy Ko Associate Professor, University of Washington
