An ISD model based on analysis, design, development, implementation, and evaluation (ADDIE) has been used successfully to design and develop instruction in the renewable energy ﬁeld. ADDIE can be visually depicted in many different ways. Figure 3 shows one example of how the ﬁve processes are interrelated.
Let’s look at some of the key aspects of designing instruction using these ISD processes.
The Analysis Phase
The needs and the goals of the proposed instruction are identiﬁed in the Analysis phase. Several questions must be answered to determine whether instruction is needed to address a performance problem (a person’s inability to do a speciﬁc job or task).
- Is training really necessary?
- Who are the students?
- What equipment, resources, and facilities are available?
1. Is training really necessary?
ISD begins by examining the root cause of a performance problem.
The key concept here is to identify the knowledge, skills, and attitudes (KSAs) that are necessary for successful job performance and determine whether or not those performing the job have the key KSAs. If they do not, training may or may not be necessary. Typically, when workers do not have the necessary KSAs, training is needed; however, it is possible that a worker is missing one small step in a process or is deﬁcient in a single skill. Rather than produce a whole training course or program, we might be able to provide a job aid or send an e-mail that will correct the problem.
2. Who are the students?
Once it is established that training is needed, a course developer must determine what experience and knowledge the prospective students bring to the training. Important information to gather includes:
- The level of student expertise in the subject matter
- The amount of experience they have in this area
- The attitudes students have toward training
- Any special needs and constraints
Knowing students’ experience and competency helps us design the most appropriate instruction for them. For a more complete discussion of learner analysis and strategies for gathering information, see Good Teaching Matters: Section 1: Know Your Students.
3. What equipment, resources, and facilities are available?
This question refers to the facilities, equipment (such as projectors, flip charts, and smart boards, as well as power tools, inverters, and PV arrays), and resources (texts, PowerPoint presentations, computers, websites, databases, test banks, practical exercises) that are available for training. Before planning learning activities and assessment plans, course developers will want to know what educational and technical equipment and resources are available for use in training and in testing. (Technical equipment refers to electrical and mechanical equipment that may influence job performance.)
Other questions include:
- Is there adequate lab space for the particular type of instruction?
- Are there enough qualiﬁed trainers for course delivery?
- What conditions will the students face when they go back to the job and how can the instruction address and, if possible, mirror those conditions?
The Solar Energy Education and Training Best Practices document on Lab Development can help you determine whether you have adequate space and equipment for your training.
In summary, the purpose of the analysis phase of systematic program planning is to get as complete a picture as possible of any job performance problems and determine whether the problems are due to students’ lack of KSAs or to something else. Once it is determined that a training course can remedy a performance problem, then an assessment of the students (KSAs and prerequisites) and the setting is performed. This ensures that the instruction starts at the right point and that the necessary equipment and facilities are available.
The Design Phase
The purpose of the Design phase is to specify learning objectives and criterion-referenced testing procedures. As much as any phase in the instructional design process, this is the crux of systematic planning. Without well-stated learning objectives and well-designed assessment instruments, it is nearly impossible to have an effective education or training course.
Here are the key questions to ask during the design phase:
- Is there a task analysis to guide the design process, or do I need to create one?
- What competencies and objectives must students master?
- What assessment, test items, and checklists can I use to determine whether students are competent?
1. Is there a task analysis to guide the design process, or do I need to create one?
A job task analysis (JTA) breaks a main task down into subordinate tasks and shows the relationships among the tasks. It provides a list of competencies (knowledge, skills, and tasks) that are required for a particular job.
The competencies specified in the task analysis are used as the basis for defining instructional objectives and prerequisites. Figure 4 presents a partial example of a competency chart showing prerequisite relationships. Most JTAs do not show the prerequisite relationships among the skills and tasks.
What if a JTA is not available? There are many good resources that can help you design a task analysis. This is not an easy process, however, and usually requires the help of subject matter experts (SMEs). The Solar Energy Education and Training Best Practices document on Curriculum and Program Development discusses job task analysis and gives examples of the NABCEP task analyses that are available.
2. What competencies and objectives must the students master?
The task analysis lists the competencies that students must master and the prerequisites they should have when they start a course. Course developers translate those competencies into learning objectives — speciﬁc and measurable statements that describe what students must accomplish at the end of a course. Learning objectives have four components:
- Audience – Who is the target audience?
- Behavior or action – What behavior or action must the student perform?
- Conditions – What conditions (including equipment, time constraints, and resources) will the student have during the assessment?
- Degree or standard – What standard or criterion level is required?
Although there is not a one-to-one match between any single task on the PV Installer task analysis and the sample learning objective above, the objective states that the student must make correct installation decisions (similar to those that might be required on the job under similar conditions). Several different tasks from the JTA are being assessed in this one objective. Notice that students are not required to go out to a job to perform the tasks. This is a cognitive objective. Students are being tested on their mastery of conceptual information and knowledge, not on their physical skills.
Cognitive objectives measure the knowledge and intellectual skills that students have. Bloom’s Taxonomy is often used to differentiate between different types of cognitive objectives. Figure 5 shows six cognitive levels, presented from highest to lowest. The Taxonomy can also be presented in the reverse order, from the simplest behaviors to the most complex. When designing instruction, we select the most complex behavior we want students to use on the job and write learning objectives to reflect those tasks or skills. When teaching, we start with the simplest skills and ensure that students master those before going on to the higher ones.
The importance of these cognitive levels cannot be overstated. Because we want students to perform well on the job, we want them to operate at the higher levels of the cognitive domain. Rather than simply recalling facts or deﬁnitions or demonstrating comprehension of an idea, we want students to apply the information they learn, troubleshoot problems, solve problems, and make decisions. Every course in every program should have at least one primary objective that is written at the application level or higher.
For more information about how to write and use good learning objectives, go to Good Teaching Matters: Section 2, Learning Objectives.
3. What assessment, test items, and checklists can I use to determine whether students are competent?
Learning objectives and assessment instruments are different sides of the same coin. Learning objectives state the conditions, behavior/actions, and standards that students must meet to become competent in a particular task or skill. When the items on a test can be used to evaluate whether or not students have mastered the task or skill stated in the objective, the assessment instruments are referred to as criterion-referenced. See the table below for an example.
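The criterion-referenced idea can be made concrete with a short sketch. The code below is a hypothetical illustration only (the objective names, the item-to-objective mapping, and the 80% mastery standard are all assumptions, the kind of standard an objective’s “degree” component might state): each test item is tied to a learning objective, and mastery is judged against the stated criterion for that objective rather than against other students’ scores.

```python
# Hypothetical sketch of criterion-referenced scoring: every item maps to a
# learning objective, and mastery is judged against the objective's stated
# standard, not against a class curve.

def criterion_referenced_result(answers, key, objective_of_item, standard=0.8):
    """Return per-objective mastery (True/False) given a student's answers,
    an answer key, an item-to-objective mapping, and a mastery standard
    (assumed here to be 80% correct)."""
    correct_by_obj = {}
    total_by_obj = {}
    for item, correct_answer in key.items():
        obj = objective_of_item[item]
        total_by_obj[obj] = total_by_obj.get(obj, 0) + 1
        if answers.get(item) == correct_answer:
            correct_by_obj[obj] = correct_by_obj.get(obj, 0) + 1
    return {obj: correct_by_obj.get(obj, 0) / total >= standard
            for obj, total in total_by_obj.items()}

# Example: four items covering two hypothetical PV objectives
key = {1: "b", 2: "d", 3: "a", 4: "c"}
objective_of_item = {1: "size the inverter", 2: "size the inverter",
                     3: "select wire gauge", 4: "select wire gauge"}
answers = {1: "b", 2: "d", 3: "a", 4: "b"}   # item 4 missed

print(criterion_referenced_result(answers, key, objective_of_item))
# → {'size the inverter': True, 'select wire gauge': False}
```

Note the design choice: the student “passed” one objective and not the other, which tells the instructor exactly where remediation is needed — the kind of decision a single overall percentage score cannot support.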
When writing test items, many types of items can be used, including multiple-choice and short-answer questions. Multiple-choice items are more difficult to write but easier to grade; the reverse is true for short-answer questions. True-false items should not be used because the learner has a 50-50 chance of guessing the correct answer. There are many good resources about good test writing, including:
- University of Washington Center for Teaching & Learning
- Test Designer: How to Write Good Test Questions
- The Learning Management Corporation: Writing Effective Questions
- The University of Tennessee at Chattanooga: Walker Teaching Resource Center
Another way to assess student knowledge and skills is to use criterion-referenced product and performance checklists. These are used when students must perform a task (such as stripping wire) or produce a product (such as a blueprint or site survey). Even when students are asked to physically perform a task (demonstrate how to install a PV array), they are often being tested on their cognitive skills as well as their psychomotor or physical skills (their ability to use tools). We want to know whether students can make correct installation decisions as well as whether they can use the tools and equipment correctly.
A recent trend in testing is to use test items and checklists that ask students to perform on the assessment as closely as possible to the real-life situation they will face on the job. These are called problem-based or situation-based tests and use scenarios and situations that mirror actual job performance.
In the table below are two job aids that may help you design assessment instruments. The ﬁrst is a set of general guidelines for test development. The second provides general guidelines for developing any type of test item — from multiple-choice to matching and short-answer items.
To learn more about assessment and testing, go to Good Teaching Matters: Section 3 Design Test and Evaluation Measures that Promote Transfer.
The Development Phase
During the Development phase, lessons, learning activities and strategies, and media are selected, constructed, and produced for the training. Some key questions that should be answered are:
- How do I create a lesson plan? How should the content be organized?
- What instructor and student activities should be included?
- How do I provide practice for students?
- What media should I use when teaching?
- How can I present conﬁrming and corrective feedback?
This paper provides a brief overview of the development phase of systematic program planning. This phase and the implementation phase are discussed in detail in the Solar Energy Education and Training Best Practices document on Becoming an Effective Teacher.
1. How do I create a lesson plan? How should the content be organized?
Key steps in creating lessons and lesson plans include:
- Introducing students to the topic
- Explaining what students need to learn and telling them what the objectives are
- Presenting the instructional materials
- Giving students an example of correct performance
- Letting students practice what they’ve learned
- Providing corrective or constructive feedback
- Assessing student performance
- Providing summaries and reviews
2. What instructor and student activities should be included?
Instructors can either directly present the information to students or use an inquiry/discovery method. Using the inquiry method takes more time as students are “led” to the correct performance using various indirect methods.
Lectures have been used for centuries and are a staple of instruction. However, as students’ attention spans get shorter and shorter, lectures need to be shorter and interspersed with a variety of learning techniques that let students practice new skills and use the new knowledge they are acquiring.
Studies have shown that students rarely learn from lectures alone, regardless of the quality of the lecture or the lecturer. One frequently cited study of physics instruction reached the same conclusion: lectures alone are not an effective technique.
One of the easiest ways to engage learners is to use questions and answers. Kenneth E. Vogler offers an excellent resource in Asking Good Questions – ASCD, with questions matched to Bloom’s taxonomy. Some examples of PV questions matched to Bloom’s Taxonomy can be found in the table.
3. How do I provide practice for students?
Regardless of the strategy used, instructors must give students the opportunity to practice the skills they are learning. Practice can occur in the classroom when students engage in decision making about the size of an inverter or the length and amount of wire needed for a particular application. Practice can also occur in a lab when students physically connect PV system components or strip wire. Instructors must also provide feedback that either conﬁrms for the students that they are on the right track or that corrects incorrect behavior.
4. What media should I use when teaching?
Media available for teaching include PowerPoint presentations, videos, slides, pictures, flip charts, and job-related objects. Adapting media rather than creating it from scratch is usually both time- and cost-efficient. Some ways to adapt media are to:
- Add or delete parts
- Add practice and/or feedback opportunities
- Rewrite to match student characteristics
- Add content-relevant examples
When designing PowerPoint slides, limit each slide to around eight to ten words. If students need a complete set of notes, prepare two sets of slides: one for students to take with them and one to use during the presentation. Avoid at all costs projecting text-dense slides and then reading them aloud during the instruction.
To learn more about creating good PowerPoint presentations, see the Best Practices document on Becoming an Effective Teacher and/or Good Teaching Matters: Section 5: Create Simple PowerPoint Presentations.
5. How can I present conﬁrming and corrective feedback?
Nearly all educators believe that providing quality practice and immediate feedback is critical to student learning. Confirming feedback acknowledges that students are correct or that their performance is acceptable. Corrective feedback helps students redirect and adjust their incorrect thinking or wrong answers. It is often followed by a related question or a complementary task to ensure that students have achieved mastery.
To learn more about providing practice and feedback, go to the Best Practices document on Becoming an Effective Teacher and/or Good Teaching Matters: Section 4: Include Practice and Feedback in the Training.
The Implementation Phase
During the Implementation phase, instruction is presented to the students. The answers to some of the questions listed below are determined during the development phase of the process but are carried out during the implementation phase:
- How do I motivate students?
- How do I introduce the lesson?
- What kinds of questions are best to use?
- How do I use PowerPoint slides or other presentation media?
- How do I summarize and review each lesson or presentation?
- How do I use my time wisely during the lesson?
This section provides a brief overview of the implementation phase of systematic program planning. This phase is discussed in detail in the Best Practices document, Becoming an Effective Teacher.
1. How do I motivate students?
John Keller, from Florida State University, has developed a motivational instruction model (ARCS) based on four components:
- Gaining Attention
- Establishing Relevance
- Facilitating Conﬁdence
- Ensuring Satisfaction
2. How do I introduce the lesson?
An introduction sets the stage for the learning that is to occur. It creates a mindset for the students to receive new information. It taps into relevant knowledge and background information and can motivate students. Common types of introductions include:
- Asking questions
- Presenting a puzzling situation
- Providing a rationale
- Giving an overview of the lesson or topic (an advance organizer)
- Telling a relevant story or anecdote
- Using an analogy to link the unfamiliar with the familiar
3. What kinds of questions are best to use?
What you want students to learn will help you decide what type of questions to ask. Many people use Bloom’s Taxonomy as the basis for developing questions. Some examples are listed in the table on page 15.
In addition to matching questions to Bloom’s levels, other types of questions include:
- Convergent vs. divergent (one answer vs. multiple answers or ideas)
- Sequencing and patterns (questions that build on each other rather than questions that stand alone)
- Narrow to broad (speciﬁc questions that lead to general ideas or trends)
- General to speciﬁc or broad to narrow (global ideas or concepts followed by speciﬁc examples)
4. How do I use PowerPoint slides or other presentation media?
When using PowerPoint or other presentation media in the classroom, it is helpful to:
- Summarize or paraphrase what is on the screen (Do not read the slide.)
- Face the audience
- Make sure everyone can see the projected image
- Turn the projector off when you are not using it (Students will look at the bright light rather than at you.)
- Use a pointer
5. How do I summarize or review each lesson or presentation?
To help students remember what they have learned, a summary or review gives the main points that have been presented or discussed. Instructors should have a list of the key points, but they do not need to give the list to the students. They can prompt students to develop their own summaries by asking questions or asking them to solve a new problem. The important thing is that students have a clear idea of the main points included in the lesson and how those points relate to job performance.
6. How do I use my time wisely during the lesson?
If you are running late, pick out the most important aspects of the lesson and focus on those. Use designations from the job task analysis (critical, very important, and important) to decide how much time to spend on any task or objective.
Use more examples and fewer explanations. Students can assess their understanding to see if the examples ﬁt their frames of reference.
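One simple way to apply those JTA designations is to weight the remaining lesson time by task priority. The sketch below is a hypothetical illustration; the 3:2:1 weights and the objective names are assumptions, not part of any Best Practices document.

```python
# Hypothetical sketch: split the minutes you have left across the remaining
# objectives in proportion to their JTA priority designations.

PRIORITY_WEIGHT = {"critical": 3, "very important": 2, "important": 1}  # assumed weights

def allocate_minutes(remaining_minutes, objectives):
    """objectives: list of (name, designation) pairs taken from the JTA.
    Returns minutes per objective, proportional to priority weight."""
    total_weight = sum(PRIORITY_WEIGHT[d] for _, d in objectives)
    return {name: remaining_minutes * PRIORITY_WEIGHT[d] / total_weight
            for name, d in objectives}

plan = allocate_minutes(30, [("wire sizing", "critical"),
                             ("inverter selection", "very important"),
                             ("site survey review", "important")])
# 30 minutes split 3:2:1 → 15, 10, and 5 minutes respectively
```

The point is not the arithmetic but the discipline: when time runs short, critical tasks keep the largest share instead of whatever happens to come last being dropped.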
The Evaluation Phase
During the Evaluation phase, data are collected to determine if the lessons, course, or program have been successful. Some of the questions that are asked include:
- How do I know if my course has been successful?
- Which experts should review the materials before a course is presented to students?
- Which changes should be made to improve the course after it is presented?
- Do the results justify the time and effort spent developing the course?
1. How do I know if my course has been successful?
Donald Kirkpatrick has developed a four-level model for evaluating instructional programs (see Figure 6): reactions, learning (objectives-based), transfer, and results/payoff.
Several types of evaluation data are collected at these levels to determine if a course has been successful. Program evaluation is based partly on whether or not the students successfully met the objectives as originally specified.
- For reactions, students are asked their perceptions of the course. For example: Were the objectives clearly stated? Were the instructional materials easy to read and follow? Were the classroom and labs adequate? Was the instructor knowledgeable and engaging? Surveys that let students describe their feelings about the instruction are often called “smile sheets.”
- Objectives-based evaluations help to determine if the participants met the learning objectives that were established. For example: Was there an increase in knowledge, skills, and attitudes? Were the correct learning objectives and prerequisites identiﬁed?
- In transfer evaluation, evaluators want to know if students can transfer KSAs from the classroom to the job.
- Results/payoff evaluation examines what the educational institution gained from the ﬁnal results of the course and whether this was worth the cost and effort to develop it.
For more information about Kirkpatrick’s evaluation levels and methods for measuring these levels, see: www.mastermindsink.com/Evaluation.pdf
2. Which experts should review the materials before a course is presented to students?
Evaluation is an on-going process in systematic program planning. A course is usually evaluated as it is being developed (formative evaluation) as well as after the course is presented to the target audience (summative evaluation).
During formative evaluation, SMEs knowledgeable about the content and education specialists competent in systematic program planning should evaluate the course. Content experts check to make sure the subject matter is correct. Instructional design experts check to make sure that all the program components are aligned.
A summative evaluation is conducted after the course has been presented. Data are collected to see how successful the course was. Students’ reactions to the course and their assessment data are gathered. Once students get to the job, they are evaluated to see how well the KSAs have transferred to the work environment. Return on investment (ROI) studies are also part of summative evaluation; these are usually conducted by personnel experienced in such analyses.
3. Which changes should be made to improve the course after it is presented?
The ﬁrst and most important change to any course involves content. Content that is incorrect or out of date must be changed.
Student reactions to the course (Kirkpatrick’s first level of program evaluation) may be helpful in determining which strategies were useful and which were not. However, SMEs must evaluate the content. The data collected allow a course developer to decide which changes, if any, should be made to produce a more effective, higher-quality course.
4. Do the results justify the time and effort spent developing the course?
This question is addressed in Kirkpatrick’s Level 4 evaluation, results. This is a high-level evaluation of the course and often revolves around the “payoff” that the administrator or institution receives from presenting a course. This payoff might include determining the ROI or ﬁnding out if the course produced additional market share. Course developers are often involved in designing the evaluation instruments and implementing them for a Level 4 evaluation. Typically, however, someone higher up in the organization runs this part of the evaluation.