Dr. Barbara Martin, a former professor of educational technology and educational psychology in the College of Education at the University of Central Florida and at Kent State University, specializes in instructional design (ISD), criterion-referenced testing, evaluation strategies, distance education, and instructional theory. She has written articles on ISD and educational technology, as well as a book on designing instruction for affective behaviors.
In part 3 of this five-part series, Good Teaching Matters: Five Teaching Practices to Improve the Quality of a Training Course, Barbara shares her expertise and experience in aligning technical information with effective training. Whether you’re an educator or a student, you’ll find her advice valuable.
In the last column, Learning Objectives, I said that objectives and assessments are two sides of the same coin. The primary reason that objectives are written is to identify competencies so that they can be taught and tested. All objectives have at least three components:
- the Conditions: the circumstances (e.g., the situation, resources) under which the student will be tested;
- the Action or Behavior: the competency that the student must achieve; and
- the Criteria: how proficient the learner must be in the competency.

When correctly written, objectives establish how the learner will be tested to determine whether a competency has been met.
Given that objectives establish how learners should be tested, it is helpful to know a little about assessment before you start writing objectives. There are two primary ways of assessing learner performance. The first is to give a standard test with items such as multiple-choice and matching questions. The second is to use performance or product checklists or rubrics to assess whether a student can accurately perform a task (such as running electrical wire) or produce a tangible object (such as a blueprint or a site drawing for a PV array). It is beyond the scope of this short column to give a tutorial on writing test questions and checklists. However, there are many good sources that explain how to write multiple-choice, fill-in-the-blank, and essay questions (notice I left out true-false questions, because there is a 50-50 chance of getting the right answer just by guessing!). Just Google “writing good tests,” “writing good test items,” and/or “developing a skills checklist” and you’ll find many good tutorials.
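The aside about true-false guessing can be made concrete with a little arithmetic. Here is a minimal sketch (the 10-item test length and 70% pass mark are illustrative assumptions, not from the column) comparing a pure guesser’s chance of passing a true-false quiz versus a four-option multiple-choice quiz:

```python
from math import ceil, comb

def prob_pass_by_guessing(n_items, p_correct, pass_fraction):
    """Chance a pure guesser gets at least ceil(pass_fraction * n_items)
    items right -- the upper tail of a binomial distribution."""
    need = ceil(n_items * pass_fraction)
    return sum(comb(n_items, k) * p_correct**k * (1 - p_correct)**(n_items - k)
               for k in range(need, n_items + 1))

# Hypothetical 10-item quiz with a 70% pass mark:
true_false  = prob_pass_by_guessing(10, 0.50, 0.7)  # ~0.17 (about 1 in 6)
four_option = prob_pass_by_guessing(10, 0.25, 0.7)  # ~0.0035 (about 1 in 285)
```

The gap explains the advice: a guesser passes a short true-false quiz surprisingly often, while four-option items make blind guessing a losing strategy.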
In this column, I’d like to focus on two key principles related to testing and assessment in training situations that may influence how you write your learning objectives and how you structure assessment in your own classroom. These two principles are promoting transfer and less is more.
Write test items and assessments that promote transfer. In training situations at companies where I have taught testing, instructors often say to me, “I don’t know why my students performed so poorly when they went back to the job. I tested them during the class. They knew the information on the test.” When I look at their assessments, what I usually find is that they do not represent job performance. Instead, they test definitions and isolated parts of problems. However, employees are rarely asked to state definitions when they get back to the job, nor are they asked to do parts of a problem (for example, a calculation out of context, or finding a source in a manual without applying it to a specific situation or problem). While these part skills are important, they are only truly useful if they can be applied in the context of a larger job-related task. Therefore, it is important that students are tested, as nearly as possible, on the entire job performance. We want our assessments to reflect job performance so that students will transfer those skills to the job.
Let’s take an example from a PV installation. If students have to make decisions on the job about how to do a site survey, place disconnects, size wire, and so on, then the test should reflect those skills in the context of an installation “on paper.” This is called situation-based or problem-based testing. An actual installation problem is presented, and students have to make the same kinds of decisions on the test that they would have to make on the job. They have to apply what they learned to a simulated real-life situation.
Write fewer test items, but more powerful ones. This principle is a corollary to the first. Instructors often get bogged down writing a large number of test items: they test every definition, every theory and principle, and every calculation. Writing so many items is an arduous process. However, if you assess student learning with situation-based or problem-based items, you can write one or two very important items that promote transfer to the job, rather than many items that never put learning into the context of the job.
In summary, here’s an example of an objective:
Given the specifications for a PV system, a site survey, and a series of 20 questions about a PV installation (conditions), each student (audience) will make correct installation decisions (action or behavior) by answering 19 of the 20 questions correctly (criteria).
On the test, we are going to give at least one example of an installation (a “situation” or “problem”) that reflects what the learner will have to do on the job. We are going to ask 20 multiple-choice questions about that installation, and students will have to get 19 of the 20 correct to pass the test (i.e., to be competent). Because the questions are framed in the context of what learners will have to do on the job, the test promotes transfer; and because it is built around one powerful, application-based situation, it follows the principle that less is more.
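As a sketch of how that 19-of-20 criterion plays out when scoring, here is a minimal routine (the answer key and responses are made up for illustration; nothing here comes from an actual exam):

```python
def score_against_criterion(answer_key, responses, required_correct):
    """Count matching answers and apply a mastery criterion,
    e.g. 19 of 20 correct on the situation-based test."""
    correct = sum(1 for key, resp in zip(answer_key, responses) if key == resp)
    return correct, correct >= required_correct

# Hypothetical 20-item test keyed a-d; one wrong answer still passes:
key = list("abcdabcdabcdabcdabcd")
answers = key.copy()
answers[5] = "x"  # learner misses one item
correct, passed = score_against_criterion(key, answers, required_correct=19)
# correct == 19, passed is True; a second miss would fail the criterion
```

Note that the criterion (19 of 20, i.e., 95%) is part of the objective itself, so the scoring rule is fixed before the test is ever given.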
In the next column, we’ll talk about the role of practice and feedback in designing effective instruction.