Training Evaluation: the process of determining whether “instructional objectives” have been met. (Instructional Design, Assessment in Education) Measuring whether the training works and its goals have been achieved. (Prometheus, Products-ADDIE Model) Answers the questions: Does it work? Were goals and objectives achieved? What was accomplished? May include tests, observations, surveys, and interviews with managers. (Marshall, 9/22/2011)

Assessment: the use of methods, including tests, to evaluate the current level of student learning; used in planning future steps in instruction. (Johnson, 304) Testing learners to see whether they have achieved a desired performance. A component of evaluation. Used to evaluate changes in skills and knowledge. (Marshall, 10/20/2011) A procedure for assessing a person’s aptitude, competence, skill, or knowledge. (Oxford, definition for ‘test.’) Also referred to as ‘test’ and ‘testing.’

Authentic Assessment: an assessment that measures one’s ability to perform a task in a real-life situation. (Johnson, 12)

Criterion-Referenced Assessment: a type of testing that compares the learner's performance to a set of defined criteria. (Marshall, 12/8/2011) (An evaluation method) that matches each learner against pre-specified criteria. Compares an individual’s performance to the acceptable standard of performance for specific tasks, requires completely specified objectives, and results in a ‘yes-no’ decision about competence. Compares a learner’s actual performance with that spelled out in the “instructional objectives.” The “instructional systems design” model assumes an emphasis on criterion-referenced assessment. (Marshall, 10/20/2011) A comprehensive assessment system through which candidates demonstrate their proficiencies in the area being measured. (Johnson, 17) Editor's note - examples include the Red Cross Lifesaving Certificate and the PADI Scuba Certification. Also referred to as ‘objective-referenced assessment,’ ‘performance assessment,’ and ‘performance-based assessment.’
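
The ‘yes-no’ competence decision described above can be sketched in code. This is a minimal illustration only; the objective names and cutoff scores below are hypothetical, loosely modeled on the lifesaving-certificate example.

```python
# Criterion-referenced assessment sketch: each learner's score is compared
# against a pre-specified cutoff for each objective, yielding a yes/no
# decision about competence. Objective names and cutoffs are made up.

CRITERIA = {
    "tread_water_2_min": 100,  # must fully complete the task
    "timed_rescue_tow": 80,    # minimum acceptable score
}

def is_competent(scores: dict[str, int]) -> bool:
    """Competent only if EVERY objective meets its criterion (yes-no decision)."""
    return all(scores.get(obj, 0) >= cutoff for obj, cutoff in CRITERIA.items())

print(is_competent({"tread_water_2_min": 100, "timed_rescue_tow": 85}))  # True
print(is_competent({"tread_water_2_min": 100, "timed_rescue_tow": 70}))  # False
```

Note that no learner is compared to any other learner: the standard is fixed in advance, as the definition requires.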

Norm-Referenced Assessment: (an evaluation method that) compares one individual’s performance to the performance of other people. Compares learners to each other. For example, grading on a normal distribution curve. (Marshall, 10/20/2011) Editor's note - other examples include the ‘IQ,’ ‘GRE,’ ‘GMAT,’ ‘SAT,’ ‘ACT,’ and ‘LSAT’ tests, and entrance exams for medical school. Also referred to as ‘normative assessment.’
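
In contrast to a fixed cutoff, a norm-referenced result is defined relative to the peer group. A common relative measure is percentile rank; the cohort scores below are made-up sample data for illustration only.

```python
# Norm-referenced assessment sketch: a learner's standing is expressed
# relative to peers (here, as a percentile rank), not against a fixed
# standard. The cohort scores are hypothetical sample data.

def percentile_rank(score: float, peer_scores: list[float]) -> float:
    """Percentage of peers who scored strictly below the given score."""
    below = sum(1 for s in peer_scores if s < score)
    return 100.0 * below / len(peer_scores)

cohort = [55, 62, 70, 70, 75, 81, 88, 90, 94, 97]
print(percentile_rank(88, cohort))  # 60.0 -> scored above 6 of 10 peers
```

The same raw score would earn a different percentile in a stronger or weaker cohort, which is exactly what distinguishes this method from criterion-referenced assessment.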

Objectivist Assessment: a (cognitive) objective-based approach that assumes a reality that can be known and measured. Measures are analytical or quantitative in nature. (Marshall, 10/11/2011)

Summative Assessment: data about student performance that is used to make a judgment about a grade, promotion to the next grade, graduation, or college entrance. (Johnson, 311)

Evaluation Levels: educational technologists often use a four-level model of evaluation. Each successive level represents a more rigorous test of the instruction. (Marshall, 11/24/2011)

Evaluation Level 1 Reactions: (evaluating) how well the learners liked the instruction, their attitudes toward the instructor, the materials, and the environment in which the (instruction) was given. (Negative feedback may indicate) issues with motivation, with how the materials are delivered, or with who is delivering them. (Marshall, 11/24/2011)

Evaluation Level 2 Learning: (evaluating) whether or not learners mastered the instructional objectives. Measured by giving a test during or at the end of the instruction. (Negative evaluation) is usually (related to) “analysis” and “design” issues. (Marshall, 11/24/2011)

Evaluation Level 3 Transfer: (evaluating whether or not the learners were able to) take their new skills and knowledge back to the workplace with them. (Negative evaluation may indicate) a need for more practice, alternative instructional strategies, (or lack of a) job aid. (Marshall, 11/24/2011)

Evaluation Level 4 Results: finding out whether (the instruction enabled) accomplishment of the organization’s goals. (Negative evaluation) may address recommendations from the performance analysis beyond just the skills-and-knowledge “driver.” Level 4 is seldom measured in practice. (Marshall, 11/24/2011)

Evaluation Plan: consists of four steps. A variety of tools, or instruments, (are used) to evaluate a product or program at different levels. (Marshall, 11/24/2011)

Evaluation Plan Step 1: identify the question you wish to answer or the aspect of the project you wish to evaluate. (Marshall, 11/24/2011)

Evaluation Plan Step 2: choose or create an appropriate instrument with which to gather data to answer the questions. Computer-based instructional products often include built-in assessment tools, (enabling evaluators) to see what learners are looking at, and when and for how long they access training areas such as help, examples, remediation, enrichment materials, and so forth. (Marshall, 11/24/2011)

Evaluation Plan Step 3: establish a timeline for gathering data on each question, analyzing it, and writing the “evaluation report.” If appropriate, assign a person to take responsibility for each data-gathering, evaluation, and report-writing task. (Marshall, 11/24/2011)

Formative Evaluation: a type of evaluation conducted during the design and development of an instructional product, rather than at the end. It consists of some type of ‘prototype’ evaluation. (Marshall, 12/8/2011) Its main purpose is to catch deficiencies so that the proper learning interventions can take place that allow learners to master the required skills and knowledge. (Big Dogs, Formative Evaluation) Collection of data showing what a student has learned, used to determine the instruction required next. (Johnson, 311)

Summative Evaluation: an end-of-project evaluation effort. (Marshall, 12/8/2011) A method of judging the worth of a program at the end of the program activities (summation). The focus is on the outcome. (Big Dogs, Summative Evaluation) Taking place near or at the end of a project, summative evaluations show how well the learning goals of the project were met, and document (the project’s) impact and lessons learned. This information can inform the planning of subsequent projects, saving staff time and organizational money. (Institute for Learning Innovation, Evaluation) Also referred to as ‘external evaluation.’

Evaluation Report: introduces the evaluation, states the objectives, describes the instruments and methods used, reports the data, and synthesizes the results in the form of conclusions or recommendations. (Marshall, 11/24/2011)