Short Answer Questions

Short answer questions (or SAQs) can be used in examinations or as part of assessment tasks.

They are generally questions that require students to construct a response: one that is concise and focused, and that may be factual, interpretive, or a combination of the two.

SAQs can also be used in a non-examination situation. A series of SAQs can comprise a larger assessment task that is completed over time.

Advantages

  • Can reveal a student’s ability to describe, explain, reason, create, analyse, synthesise, and evaluate.
  • Give students opportunities to demonstrate higher-level skills and knowledge.
  • Allow students to elaborate on responses, albeit in a limited way.
  • Provide an opportunity to assess a student’s writing ability.
  • Can be less time consuming to prepare than other item types.
  • Can be structured in a variety of ways to elicit a range of responses, from a few words to a paragraph.

Limitations

  • Can limit the range of content that can be assessed.
  • Favour students who have good writing skills.
  • Can be difficult to moderate.
  • Can be time consuming to assess.
  • Need to be well written if answers of different standards are to be clearly differentiated when marking.

Writing Effective Short Answer Questions

  • Effective short answer questions should provide students with a focus (types of thinking and content) to use in their response.
  • Avoid indeterminate, vague questions that are open to numerous and/or subjective interpretations.
  • Select verbs that match the intended learning outcome and direct students in their thinking.
  • If you use ‘discuss’ or ‘explain’, give specific instructions as to what points should be discussed/explained.
  • Delimit the scope of the task to avoid students going off on an unrelated tangent.
  • Know what a good response would look like and what it might need to refer to.
  • Practise writing a good response yourself so that you have an exemplar and are aware of how long it may take to answer.
  • Provide students with practice questions so they are familiar with question types and understand time limitations.
  • Distribute marks based on the time required to answer.
  • Review the question using the following questions:
    • Does the question align with the learning outcome/s?
    • Is the focus of the question clear?
    • Is the scope specific and clear enough for students to be able to answer in the time allocated?
    • Is there enough direction to guide the student to the expected response?

Your questions can assess a range of cognitive skills through the action verbs you choose.

Writing Multiple-Choice Questions

1. Limit the number of distractors. Use between three and five options per question. It is difficult to come up with good distractors, and additional options increase reading time. Research shows that three-option items are about as effective as four- or five-option items.

2. Make the distractors appealing and plausible. If the distractors are far-fetched, students can too easily locate the correct answer, even if they have little knowledge. The best distractors help diagnose where students went wrong in their thinking.

3. Make sure there is only one correct answer. Avoid items in which two or more options are arguably correct, with one merely more correct than the others. Note that this differs from a distractor that contains an element of truth.

4. Make the choices grammatically consistent with the stem. This improves readability and reduces confusion.

5. Put the choices in meaningful order when possible. This could be numerical, chronological, or conceptual order.

6. Avoid using “all of the above” or “none of the above”. An “all of the above” option means students must read every option, which increases test-taking time and penalizes slow readers. If students know that two answers are correct, they may incorrectly select “all of the above”; if they know one answer is incorrect, they know that “all of the above” is also incorrect. The option “none of the above” does not test whether the student knows the correct answer, only that the distractors aren’t correct.

7. Avoid using “which of the following” items. This increases test-taking time and penalizes slow readers.

8. Avoid using words such as always, never, all, or none. Most students know that few things are universally true or false so distractors with these words can easily be eliminated as plausible answers.

9. Make the distractors mutually exclusive. If options overlap, a student who judges one to be true may assume the overlapping option is true as well, and since only one answer can be correct, both can be eliminated.

10. Make distractors approximately equal in length. Students often select the longest option as the correct answer.

Strategies, Ideas, and Recommendations from the Faculty Development Literature

General Strategies

  • Make sure that at least some test items require higher-level learning.
    Research suggests that faculty tend to construct questions that require memorized knowledge. One way of maintaining a balance is to set up a grid, using categories from Bloom's taxonomy as headings for columns and course objective titles for rows (a small sketch follows this list). This may help ensure an adequate number of understanding, application, and synthesis items.
  • Write test items throughout the term.
    Good test items are difficult to write, and you will find the task easier if you spread out the work. Target writing three to five items a week.
  • Give students advice on how to take a multiple-choice or matching test.
    Instructors should give the following recommendations to students preparing to take a multiple-choice exam:
    1. Go through the test once and answer all the questions you can.
    2. Go through the test again; spend a reasonable amount of time on each problem, but move on if you get stuck.
    3. Save time at the end to double-check your answers and make sure you haven't made any clerical errors.
    4. Change your answers if you wish; research shows that most students gain more than they lose on changed answers.
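
To check the balance the grid is meant to provide, a quick tally works. Below is a minimal sketch in Python; the objective names and planned items are hypothetical placeholders, not taken from any particular course.

```python
# Minimal sketch of a test blueprint grid: rows are course objectives,
# columns are Bloom's taxonomy levels, cells count planned items.
from collections import Counter

BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

# (objective, Bloom level) for each planned item -- illustrative data only.
planned_items = [
    ("Objective 1: terminology", "remember"),
    ("Objective 1: terminology", "understand"),
    ("Objective 2: problem solving", "apply"),
    ("Objective 2: problem solving", "analyze"),
    ("Objective 3: research design", "evaluate"),
]

grid = Counter(planned_items)
objectives = sorted({obj for obj, _ in planned_items})

# Print one row per objective and flag Bloom levels with no items at all,
# a quick check that the test is not limited to memorized knowledge.
for obj in objectives:
    print(obj, {level: grid[(obj, level)] for level in BLOOM_LEVELS})

uncovered = [level for level in BLOOM_LEVELS
             if not any(grid[(obj, level)] for obj in objectives)]
if uncovered:
    print("No items planned at these levels:", uncovered)
```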

Multiple-Choice Test Items

  • In the directions, instruct students to select the "best answer" rather than the "correct answer."
    Asking for the "correct answer" is more likely to invite debate from contentious students.
  • In the directions, let students know whether they can guess.
    Don't design exams that punish students for guessing. Rather, encourage students to use partial knowledge to figure out a response and make an informed guess.
  • Express the full problem in the stem.
    Make sure that students can understand the problem before reading the alternative answers. Usually, direct questions are clearer than sentence completions.
  • Put all relevant material in the stem.
    Do not repeat phrases in the options if the phrase can be stated in the stem.
  • Keep the stem short. Unnecessary information confuses students and wastes their time. Compare the following:

    Poor: Monetary and fiscal policies are commonly used in the U.S. for stabilization purposes. Leaving aside fiscal policies for the moment, which of the following monetary policies would be most effective in combating inflation?

    Better: Which of the following monetary policies would be most effective in combating inflation?

  • Limit the number of response alternatives.
    Research shows that three-choice items are about as effective as four-choice items. A four-response answer format is the most popular. Never give students more than five alternatives.
  • Make the distractors appealing and plausible.
    Distractors should represent errors commonly made by students. The best distractors are statements that are too general or too specific for the requirements of the problem, statements that are accurate but do not fully meet the requirements of the problem, and incorrect statements that seem right to the poorly prepared student. On rare occasions an implausible distractor can relieve tension among students.
  • Make all choices equal in length and parallel in structure.
    Do not give away the best choice by making it longer, more detailed, or filled with more qualifiers than the alternatives.
  • Avoid trick questions or negative wording.
    Negative wording often confuses students and makes items unnecessarily complex. If you do use negatives, underline or capitalize them or put them in bold so students don't overlook them. Always avoid having negatives in both the stem and the options.
  • Refrain from using words such as "always," "never," "all," or "none."
    Savvy students know that few ideas or situations are absolute or universally true.
  • Make the choices grammatically consistent with the stem.
    Read the stem and each of the choices aloud to be sure that each is correct in the use of a or an, singular and plural, and subject-verb agreement.
  • Avoid giving "all of the above" or "none of the above" as choices.
    These items do not discriminate well among students with differing knowledge. Students need only compare two choices: if both are acceptable, then "all of the above" is the logical answer, even if the student is unsure about a third choice.
  • Vary the position of the best answer.
    Research shows that faculty tend to locate the best answer in the b or c position. Instead, use a deck of cards to locate correct responses randomly (for example, hearts = first position, spades = second position, and so on) unless you are arranging the choices in some meaningful order (for example, numerical, chronological, or conceptual). A short randomization sketch follows this list.
  • Keep the test length manageable.
    Students can complete between one and two multiple-choice items per minute. (Source: Lowman, 1984).
  • Take advantage of machine scoring capabilities.
    Contact Tom Paul (#4864) in the Computer Center, McGraw 208, to have your scantron tests corrected and data analyzed for test improvement.
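
If you prefer to randomize answer positions by machine rather than with a deck of cards, the sketch below shows one way to do it. The item format (a stem, a correct option, and a list of distractors) and the sample economics content are assumptions for illustration only.

```python
# Minimal sketch: shuffle the options of an item so the best answer
# lands in a random position, then report the answer key.
import random

item = {
    "stem": "Which of the following monetary policies would be most effective in combating inflation?",
    "correct": "Raising the discount rate",               # illustrative content
    "distractors": ["Lowering reserve requirements",
                    "Buying government securities",
                    "Reducing the federal funds rate"],
}

options = [item["correct"]] + item["distractors"]
random.shuffle(options)                                    # randomize the key position

letters = "ABCD"
for letter, text in zip(letters, options):
    print(f"{letter}. {text}")
print("Answer key:", letters[options.index(item["correct"])])
```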

Matching Test Items

  • Give clear instructions.
    Let students know the basis on which items are to be matched, where to write answers, and whether a response may be used more than once.
  • Keep the two sets of items homogeneous.
    For example, Column 1 may list events and Column 2 may list dates; do not combine events, dates, and names in one column.
  • Try to order the responses.
    If you order the items in Column 2 alphabetically, chronologically, or conceptually, students will be able to read the series quickly and locate answers rapidly.
  • Create more responses than premises.
    In general, give students five to ten alternatives in Column 2. If you include distractors in Column 2, let students know that some of the entries in Column 2 do not apply.
  • Be conscious of layout and format.
    Always keep both columns on the same page so that students don't have to flip back and forth. Place answer blanks to the left of each entry in Column 1. Place Column 2 on the right-hand side of the page. Use capital letters for the responses (they are easier to discern than lowercase letters) and numbers for the premises (for later discussion).

Post-Test Item Analysis

  • After you have scored the exams, evaluate the test items.
    An item analysis can help you improve your tests by showing which items are too easy or too hard and how well an item distinguishes between students at the top and bottom. Contact the LEARN Center for help with this type of analysis.
  • Look at the difficulty of each item.
    Calculate the percentage of students answering each item correctly. The goal is to construct a test in which only a few items are answered correctly by more than 90 percent or fewer than 30 percent of students. Optimally difficult items are those that about 50 to 75 percent of the class answers correctly. Items are considered moderately difficult if between 70 and 85 percent of students give the correct response. An item may be difficult for a variety of reasons: it may be unclearly written; the content may be challenging; or the students may be unprepared. In interpreting item difficulty indices, consider all three possibilities. (A worked sketch of the difficulty and discrimination calculations follows this list.)
  • Look at how well each item discriminates between high and low scores.
    The statistical technique called item discrimination lets you know whether individual test items discriminate between top and bottom students. The discrimination ratio will fall between -1.0 and +1.0. The closer the ratio is to +1.0, the more effectively that question distinguishes students who know the material (the top group) from those who don't (the bottom group). Ideally, each item will have a ratio of at least +.5.
  • Use the results to improve your tests.
    Use both the difficulty level and the discrimination ratio to drop or revise items. As a rule of thumb, items with a difficulty level of between 30 percent and 70 percent can be expected to have an acceptable discrimination ratio, that is, at least +.3. Items with difficulty levels below 30 percent or above 70 percent can be expected to have low discrimination ratios. If an item has a high difficulty level and a low discrimination ratio (below +.3), the item needs to be revised. You may find that many items fall on the borderline: discrimination ratios just under +.3 and difficulty levels of between 30 percent and 70 percent. Those items do not necessarily need revision.
  • Solicit students' comments about the test.
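
The difficulty and discrimination calculations described above can be done by hand or in a few lines of code. The sketch below is one possible implementation, not the LEARN Center's procedure: it assumes item scores are stored as per-student lists of 0/1 values, uses an upper/lower 27 percent split (one common convention), and applies the rule of thumb from the text to flag mid-difficulty items with discrimination below +.3. The data is illustrative.

```python
# Minimal sketch of post-test item analysis: difficulty (= percent correct)
# and a discrimination index (top-group minus bottom-group proportion correct).

def item_analysis(responses, group_fraction=0.27):
    """responses: one list per student of 0/1 item scores (1 = correct)."""
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(student) for student in responses]

    # Rank students by total score, then take the top and bottom groups.
    ranked = [student for _, student in
              sorted(zip(totals, responses), key=lambda pair: pair[0], reverse=True)]
    k = max(1, round(group_fraction * n_students))
    top, bottom = ranked[:k], ranked[-k:]

    results = []
    for i in range(n_items):
        difficulty = 100 * sum(student[i] for student in responses) / n_students
        discrimination = (sum(s[i] for s in top) - sum(s[i] for s in bottom)) / k
        # Rule of thumb from the text: mid-range difficulty with a
        # discrimination ratio below +.3 suggests the item needs revision.
        flag = "consider revising" if 30 <= difficulty <= 70 and discrimination < 0.3 else ""
        results.append((i + 1, difficulty, discrimination, flag))
    return results

# Illustrative scores for six students on three items.
scores = [
    [1, 1, 0],
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 1],
    [1, 0, 0],
    [0, 0, 1],
]
for item_no, diff, disc, flag in item_analysis(scores):
    print(f"Item {item_no}: difficulty {diff:.0f}%, discrimination {disc:+.2f} {flag}")
```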

Sources

The Strategies, Ideas and Recommendations Here Come Primarily From:

Gross Davis, B. Tools for Teaching. San Francisco: Jossey-Bass, 1993.

McKeachie, W. J. Teaching Tips. (10th ed.) Lexington, Mass.: Heath, 2002.

Walvoord, B. E., and Johnson Anderson, V. Effective Grading. San Francisco: Jossey-Bass, 1998.

And These Additional Sources...

Clegg, V. L., and Cashin, W. E. "Improving Multiple-Choice Tests." Idea Paper, no. 16. Manhattan: Center for Faculty Evaluation and Development in Higher Education, Kansas State University, 1986.

Fuhrmann, B. S., and Grasha, A. F. A Practical Handbook for College Teachers. Boston: Little, Brown, 1983.

Jacobs, L. C., and Chase, C. I. Developing and Using Tests Effectively: A Guide for Faculty. San Francisco: Jossey-Bass, 1992.

Lowman, J. Mastering the Techniques of Teaching. San Francisco: Jossey-Bass, 1984.

Ory, J. C. Improving Your Test Questions. Urbana: Office of Instructional Resources, University of Illinois, 1985.

Seyer, P. C. Item Analysis. San Jose, Calif.: Faculty and Instructional Development Office, San Jose State University, 1981.

Svinicki, M. D. "The Test: Uses, Construction and Evaluation." Engineering Education, 1976, 66(5), 408-411.

Welsh, A. L. "Multiple Choice Objective Tests." In P. Saunders, A. L. Welsh, and W. L. Hansen (eds.), Resource Manual for Teacher Training Programs in Economics. New York: Joint Council on Economic Education, 1978.

Wergin, J. F. "Basic Issues and Principles in Classroom Assessment." In J. H. McMillan (ed.), Assessing Students' Learning. New Directions for Teaching and Learning, no. 34. San Francisco: Jossey-Bass, 1988.