|Posted By: Jay Powell on February 19, 2016|
|It is common practice to omit students' "wrong" answers when scoring "High Stakes" tests and to give only the total frequency of the "right" ones.|
This approach is used for two reasons:
1. The test is presumed to be a representative sample of the content domain.
2. "Wrong" answer selection is assumed to be "blind" guessing.
The first reason is insufficient because the "right" answers could all come from memorization in the absence of understanding. Knowing which answers were "right" helps decision makers profile students' knowledge.
The second assumption is FALSE because students usually read the item, formulate an answer, and then match that answer against the options. "Guessing" enters at this matching step, when the options are ambiguous to the test taker.
What is actually being measured is the students' INTERPRETATIONS of the test items, shaped by verbal fluency, cultural background, cognitive maturity, and sometimes deeper understanding than expected.
Properly scored, these tests are powerful diagnostic tools.
Profiles drawn from the associations among all answers establish "proficiency" better than total scores do, and with repeated measurement they reveal patterns of progress or decline.
Students who choose "wrong" answers on easy questions while correctly answering difficult ones are often reading more into the easy questions than the item developer intended.
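As a rough illustration of this kind of pattern analysis, here is a minimal sketch in Python. It is not the author's scoring system; it only flags the specific signature described above, using hypothetical response data and made-up thresholds (an item is "easy" if 80% or more answer it correctly, "hard" if 40% or fewer do).

```python
# Hypothetical sketch: flag students whose answer patterns suggest
# over-interpretation (missing easy items while answering hard ones
# correctly). Data and thresholds are illustrative, not real.

def proportion_correct(responses):
    """Fraction of students answering each item correctly (item p-values)."""
    n = len(responses)
    items = len(responses[0])
    return [sum(r[i] for r in responses) / n for i in range(items)]

def overinterpretation_flags(responses, easy=0.8, hard=0.4):
    """Flag students who miss at least one 'easy' item (high p-value)
    yet answer at least one 'hard' item (low p-value) correctly."""
    p = proportion_correct(responses)
    easy_items = [i for i, v in enumerate(p) if v >= easy]
    hard_items = [i for i, v in enumerate(p) if v <= hard]
    flags = []
    for r in responses:
        missed_easy = any(r[i] == 0 for i in easy_items)
        got_hard = any(r[i] == 1 for i in hard_items)
        flags.append(missed_easy and got_hard)
    return flags

# 1 = "right", 0 = "wrong"; rows are students, columns are items.
responses = [
    [1, 1, 1, 0],   # typical: easy items right, hard item wrong
    [1, 1, 1, 0],
    [0, 1, 1, 1],   # misses an easy item but answers the hard one
    [1, 1, 1, 1],
    [1, 0, 1, 0],
]
print(overinterpretation_flags(responses))  # → [False, False, True, False, False]
```

A total-correct score would rank the flagged student (3 correct) below the unflagged perfect scorer, but identically to the two "typical" students, hiding exactly the pattern this analysis surfaces.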
In short, the effectiveness of superbly designed test items is obscured because learning is both remembering and a transformation process in which ways of thinking change. The total-correct score provides no information about this second type of learning; scoring for the reasoning behind answer selection does.
Our team has a scoring system that recovers this lost information.