Evaluating Organizational Learning
Evaluation and Continuous Improvement - Learning in Organizations (Ford, 2020)
Evaluation:
Needs Assessment:
Runs in parallel with developing the evaluation plan.
The evaluation plan answers questions about the purpose of the evaluation, what data to collect, and how intensive the evaluation effort should be.
Five Questions Addressed by Evaluation Plan:
Relevance: Reflects learner needs based on the needs assessment.
Content Validity: Judges how well the program's content domain matches job requirements.
Ratings of Job Relevance: Directly asking learners about job relevance.
Emphasis: Assesses appropriate emphasis on knowledge and skills.
Learning Validity:
Identifies expected level of learning in relation to success standards.
Measures different knowledge constructs through various assessment methods.
Transfer Validity:
Assesses changes in behavior on the job after learning.
Examines direct application, learning from observation, explaining ideas to others, and leading teams.
Job Performance and Organizational Payoff:
Measures job performance proficiency and contribution to team goals.
Considers economic impact or changes in performance for organizational payoff.
Return on Investment (ROI):
Expresses program value as net benefits (monetary benefits minus program costs) relative to costs.
Steps involve developing a valuation plan and estimating ROI conservatively.
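As a worked illustration of the net-benefits logic above, the sketch below plugs hypothetical cost and benefit figures into the common percentage formulation of ROI; the figures are assumptions for illustration only, not values from the text.

```python
# Hypothetical figures for illustration only.
program_costs = 50_000        # design, delivery, and participant time
monetary_benefits = 80_000    # conservatively estimated value of performance gains

net_benefits = monetary_benefits - program_costs

# ROI as a percentage of program costs (common formulation).
roi_percent = (net_benefits / program_costs) * 100
print(f"ROI = {roi_percent:.0f}%")  # -> ROI = 60%
```

Estimating benefits conservatively, as the notes advise, keeps the resulting ROI figure defensible to stakeholders.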
Success Case Method:
Determines if program-intended changes are achieved.
Identifies success cases through surveys or records, relying on self-reported data.
Informative Evaluation:
Determines evaluation purpose and develops appropriate measures.
Collects high-quality data for informed choices about program retention and modification.
Stakeholders and Quality of Measurement:
Identifies interested parties and their expectations.
Focuses on developing criterion measures with high validity.
Proportionate Evaluation:
Creates measures, designs studies, and analyzes data proportionate to learning needs and organizational capabilities.
Choice Points in Evaluation:
Evaluation efforts can be simple or complex based on priorities, resources, and organizational commitments.
Strong evaluation plans are essential for effective interventions.
Internal Validity and Threats:
Considers whether the intervention made a difference and evaluates potential threats.
Threats to internal validity include history, testing, instrumentation, differential selection, and program integrity.
Evaluation Designs:
Learner Post-Assessment/Case Study Design:
Only post-test, cannot show change.
Learner Pre-and Post-Assessment Design:
Traditional design with pre- and post-tests.
An internal referencing strategy (comparing gains on trained versus untrained content) can strengthen the design.
Pre-Test/Post-Test, Control Group Design:
One group completes the pre-test, training, and post-test; the other completes the pre- and post-tests without training (an analysis sketch follows this list of designs).
Remaining threats include differential selection and regression to the mean.
Randomized Control Group Design:
Similar to the above, but with random assignment of learners to the learning and control groups.
Solomon Four-Group Design:
Highly rigorous design addressing most validity threats.
Time Series Quasi-Experimental Design:
The learning group completes four pre-assessments, then the learning program, then four post-assessments.
Helps eliminate threats like testing effects or regression to the mean.
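A minimal analysis sketch for the control-group designs above, assuming hypothetical assessment scores; the scores, group sizes, and the choice to compare gain scores with an independent-samples t-test are illustrative assumptions rather than the text's prescription.

```python
# Hypothetical analysis sketch for a pre-test/post-test, control group design.
import numpy as np
from scipy import stats

trained_pre  = np.array([62, 55, 70, 58, 66, 61])
trained_post = np.array([78, 69, 84, 71, 80, 75])
control_pre  = np.array([60, 57, 68, 59, 64, 63])
control_post = np.array([63, 58, 70, 61, 66, 64])

# Compare gain scores between groups: did the learning group improve more
# than the control group beyond what testing or history alone would produce?
trained_gain = trained_post - trained_pre
control_gain = control_post - control_pre

t, p = stats.ttest_ind(trained_gain, control_gain)
print(f"Mean gain (trained) = {trained_gain.mean():.1f}, "
      f"(control) = {control_gain.mean():.1f}, t = {t:.2f}, p = {p:.3f}")
```

With random assignment, a difference in mean gains that is unlikely under chance supports the claim that the learning program, rather than history or testing effects, produced the change.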
Continuous Improvement:
Learning Systems Model:
Evaluation feeds into a continuous improvement model.
Feedback loops to design, delivery, and evaluation for program modification.
Feedback Loop:
Strong focus on summative and formative processes and external validity.
Summative and Formative Evaluation:
Summative targets overall outcomes, comparing interventions.
Formative focuses on understanding why outcomes were or were not achieved.
External Validity Issues:
Summative evaluation provides information on program effectiveness.
External validity involves generalizability, requiring multiple studies in different settings.
Rapid Evaluation and Assessment Methods (REAM):
Aims to balance speed and accuracy when assessing needs, planning, implementing, and evaluating.
Involves real-time evaluations, systematic organization, data collection, and debriefing sessions.
Best Practice Guidelines:
Articulate Purpose and Identify Stakeholders:
Clearly define evaluation purpose and identify interested parties.
Build relevant evaluation measures.
High-Quality Measures:
Create measures with high levels of reliability and validity.
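For the reliability side of this guideline, one common internal-consistency check is Cronbach's alpha; the sketch below computes it from hypothetical item-level responses, and both the data and the four-item scale are assumptions for illustration.

```python
import numpy as np

# Hypothetical responses: rows = learners, columns = items on one measure.
items = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = items.shape[1]                         # number of items
item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale

# Cronbach's alpha: internal-consistency reliability of the summed scale.
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

A reliability check like this complements, rather than replaces, evidence of validity such as content-validity judgments tied to the needs assessment.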
Realistic Evaluation Plan:
Develop a realistic evaluation plan considering available resources.
Appropriate Design:
Minimize threats to internal validity through appropriate design.
Consider quasi-experimental design with multiple time points when necessary.
Formative Evaluation:
Use formative evaluation during a pilot program to improve instruction quality.
Test rather than assume the generalizability of evaluation findings.
Considerations for Evaluation Designs:
Match evaluation efforts with learning priorities and organizational capabilities.
Managers and supervisors should provide post-learning assessments of transfer.
Resources must be available to take action based on evaluation data.
Internal Validity and Threats:
Assess threats to internal validity, including history, testing, instrumentation, selection, and program integrity.
Continuous Improvement:
Implement feedback loops for ongoing program modification.
Balance summative and formative evaluation approaches for effective continuous improvement.
Rapid Evaluation (REAM):
Consider rapid evaluation methods for urgent needs, using mixed-method approaches.
Focus on being rapid, participatory, team-based, iterative, and appropriate for urgent situations.