Thursday, June 6, 2019
Types and Significance of Evaluation of Training Programs
INTRODUCTION

Training is vital for each and every organization. With the changing socio-economic and technological relevance of education, the definitions, scope, methods and evaluation of training programmes have also changed. One of the earlier classic definitions of training is bringing lasting improvement in skills in jobs. Present-day definitions take a multi-dimensional perspective, encompassing the needs of individuals, teams, organizations and the society.

The steps in training program development are planning, programme execution, and programme evaluation and follow-up. The evaluation of a training system helps measure the "association gap", defined by Reich as the gap between what the trainer teaches and what the trainee learns. Evaluations help to measure Reich's gap by determining the value and effectiveness of a learning programme, using assessment and validation tools to provide data for the evaluation. Evaluation of training systems, programmes or courses tends to be a demand of a social, institutional or economic nature.

A training program is not complete until you have evaluated its methods and results. A key to obtaining consistent success with training programs is a systematic approach to measurement and evaluation.

Training Evaluation Approach

Evaluation methods should be determined based on the goals of the training effort and should meet the demands of the various stakeholders involved. Every organization has multiple stakeholders, and not everyone within the organization has the same information needs.
Typically, organizational stakeholder groups include the training department, employees and business units. Their information requirements fall into two categories: whether the competencies have been learned, and whether the learning has been applied toward improved performance.

DEFINITION

* Goldstein (1993) defines evaluation as the systematic collection of descriptive and judgmental information necessary to make effective decisions related to the selection, adoption, value and modification of various instructional activities.
* Kirkpatrick (1996) defines evaluation as determination of the effectiveness of a training programme. Evaluation of training is any attempt to obtain information on the effects of a training programme, and to assess the value of the training in the light of that information.
* According to van Dyk et al. (1997), definitions of evaluation have several implications: Evaluation is an ongoing process; it is not done at the end of a course only. The evaluation process is directed towards a specific goal and objectives. Evaluation requires the use of accurate and appropriate measuring instruments to collect information for decision making. Evaluation is a form of quality control. Evaluation is concerned not only with the evaluation of students but with the wider training system as a whole.

TYPES OF EVALUATION

1) Formative evaluation

Formative evaluation provides ongoing feedback to the program designers and developers, to ensure that what is being created really meets the needs of the intended audience.

* Formative evaluation may be defined as any combination of measurements obtained and judgments made before or during the implementation of materials, methods, or programs to control, assure or improve the quality of program performance or delivery.
* It answers such questions as: Are the goals and objectives suitable for the intended audience? Are the methods and materials appropriate to the event?
* Can the event be easily replicated? Formative evaluation furnishes information for program developers and implementers.
* It helps determine program planning and implementation activities in terms of (1) target population, (2) program organization, and (3) program location and timing.
* It provides short-loop feedback about the quality and implementation of program activities, and thus becomes critical to establishing, stabilizing, and upgrading programs.

2) Process evaluation

Process evaluation provides information about what occurs during training. This includes giving and receiving verbal feedback.

* Process evaluation answers the question, "What did you do?" It focuses on the procedures and actions being used to produce results.
* It monitors the quality of an event or project by various means. Traditionally, working as an observer, the evaluator describes this process and measures the results in oral and written reports.
* Process evaluation is the most common type of training evaluation. It takes place during training delivery and at the end of the event. Most of you probably have done it in one form or another. The question we try to answer is, "What did you do?"
* Following is a sample list of the kinds of information collected to answer this question:
* Demographic information (characteristics of participants and their personal location)
* What was taught and how long it took
* Whether or not the objectives were met
* Who did what to whom, and when

3) Outcome evaluation

Outcome evaluation determines whether or not the desired results (e.g., what participants are doing) of applying new skills were achieved in the short term.

* Outcome evaluation answers the question, "What happened to the knowledge, attitudes, and behaviors of the intended population?"
* Specific and observable changes in behaviors that lead toward healthier or more productive lifestyles and away from problem-causing actions indicate a successful program.
* For example, a successful project is one that succeeds in causing a higher percentage of students to use safety devices. Such a project would produce both outcomes and impacts. Outcome evaluation is a long-term undertaking.
* Outcome evaluation answers the question, "What did the participants do?"
* Because outcomes refer to changes in behavior, outcome evaluation data is intended to measure what training participants were able to do at the end of training, and what they actually did back on the job or in their community as a result of the training.

4) Impact evaluation

Impact evaluation determines how the results of the training affect the strategic goal, e.g. the health promotion goal of reducing the incidence and prevalence of HIV/AIDS.

* Impact evaluation takes even longer than outcome evaluation, and you may never know for sure that your project helped bring about the change. The focus is on changes that have occurred in key social indicators which are used to gauge the levels of problem occurrence.
* Examples of impacts are a reduction in the incidence of HIV/AIDS and an increase in condom use among students.
* Impacts occur through an accumulation of outcomes. Impact evaluation is meant to answer the question, "How did what was taught in the training affect the problem?" (Think back on the problem statements you developed.)
* Impact evaluation tries to measure whether or not training has affected the initial problem you identified. In other words, an impact evaluation is meant to assess the extent to which what was learned is making a difference at the level of the community, targeted groups, or beneficiaries of the intervention. Though this type of evaluation usually takes a long time and costs a lot of money, it is the type that really focuses, for instance, on assessing whether or not there has been a reduction in the incidence and prevalence of specific problems in the community.
* The idea here is that the impact of training will hopefully be far reaching and make a difference in people's lives.

NEED FOR EVALUATION

Since evaluation is an integral part of the whole process of training and development, the details have to be conceived well before the actual training activity, rather than tagged on ritualistically at the end of training. The trainer should be fairly clear about:
How to evaluate
What to evaluate
When to evaluate
Answers to these questions depend on the need for evaluation.

Why Should A Training Program Be Evaluated?

* To identify the program's strengths and weaknesses.
* To assess whether the content, organization, and administration of the program contribute to learning and the use of training content on the job.
* To identify which trainees benefited most or least from the program.
* To gather data to assist in marketing training programs.
* To determine the financial benefits and costs of the programs.
* To compare the costs and benefits of training versus non-training investments.
* To compare the costs and benefits of different training programs to choose the best program.

Principles of Evaluation

Schuman, E. A. describes evaluation as an integral part of an operating system, meant to aid trainers and training managers to plan and adjust their training activities so as to increase the probability of achieving the desired action or goals. In order to integrate training practices with business policy and objectives, evaluation has to be based on sound principles such as:

1. The trainer/evaluator must be clear about the purpose of evaluation to be able to set the standards and criteria of evaluation.
2. For an objective evaluation, the methodology and criteria of evaluation should be based on observable and, as far as possible, measurable standards of assessment which have been agreed upon by the evaluators and the users of the training system.
3.
Evaluation has to be accepted as a process rather than an end product of training.
4. As a process, it has to be continuous. A one-shot assessment cannot guide trainers in improving subsequent programmes; evaluation therefore has to begin before the actual training activity and end well after the conclusion of the visible training activity.
5. The training objectives should be an outcome of overall organizational goals, to permit tangible evaluation of training results.
6. Evaluation data should be directive rather than conclusive. It must be comprehensive enough to guide trainers in the collection of information that will enable them to comment on current training effectiveness and to improve subsequent training.
7. A good evaluation system is tailor-made and should provide specific data about its strengths and weaknesses. Generalizations drawn from one training activity may be inapplicable for training across different levels and to meet different standards. Besides, evaluators should refrain from using single instances for conclusions and generalizations.
8. A good evaluative system should provide sufficient scope for self-appraisal by the trainer/evaluator.
9. Evaluative data should try to match quantitative and qualitative information.
10. The role of the evaluator needs to be based on a sound working relationship with the participants, trainers, senior line managers and policy makers. Usually a researcher or a fresher is attached to the trainer to carry out end-of-course evaluation. This evaluator may have the expertise to develop and design evaluative tools and techniques, but that alone would be insufficient for promoting utilization of evaluation results. The evaluator's acceptance by the participants, and the interpersonal sensitivity and trust needed for open sharing of feedback, are a must. This would make their role one of giving and receiving feedback rather than just receiving feedback. They have to be proactive rather than argumentative.
11.
Effective communication and coordination are essential. Training and evaluation plans should be discussed so that there is commonality of purpose amongst the trainers, the evaluators and those sponsoring the trainees.
12. The reporting system for evaluative data should be simple, clear, adequate and available for interpretation. It requires the evaluator to be sensitive to the feelings of management, and to be tactful and honest. As far as possible, the terminology used should be concise and free from jargon.
13. Realistic targets must be set. A sense of urgency is no doubt desirable, but deadlines that are unrealistically tight will result in poor quality.
14. Finally, a trainer who is sincere about training and evaluation will always insist on complete, objective and continuous feedback on the progress and deficiencies of training, so as to maintain the momentum of the training programme, its evaluation and subsequent improvement.

Benefits of Evaluation

Improved quality of training activities
Improved ability of the trainers to relate inputs to outputs
Better discrimination of training activities between those that are worthy of support and those that should be dropped
Better integration of training offered and on-the-job development
Better co-operation between trainers and line managers in the development of staff
Evidence of the contribution that training and development are making to the organization

Kirkpatrick's Four-Level Training Evaluation Model

The four levels of Kirkpatrick's evaluation model essentially measure:
1. Reaction of the student: what they thought and felt about the training
2. Learning: the resulting increase in knowledge and/or capability
3. Behavior: the extent of behavior and capability improvement and implementation/application
4. Results: the effects on the business or environment resulting from the trainee's performance

Level 1 Evaluation: Reactions

This level measures how participants in a training program react to the training.
Every program should at least be evaluated at this level, to answer questions regarding the learners' perceptions and to improve training. This level gauges whether the participants liked the training and whether it was relevant to their work. Negative reactions reduce the possibility of learning.

Evaluation tools:
Program evaluation sheets
Face-to-face interviews
Participant comments throughout the training
Ability of the course to maintain interest
Amount and appropriateness of interactive exercises
Ease of navigation in Web-based and computer-based training
Participants' perceived value and transferability to the workplace

This type of evaluation is inexpensive and easy to administer, using interaction with the participants, paper forms and online forms.

Level 2 Evaluation: Learning

Level 2 evaluations are conducted before training (pre-test) and after training (post-test) to assess the amount of learning that has occurred due to a training program. Level 2 evaluations assess the extent to which learners have advanced in knowledge, skills or attitude. Level 2 evaluation methods range from self-assessment to team assessment, and from informal to formal assessment.

Evaluation tools:
Individual pre- and post-training tests for comparisons
Assessment of action-based learning such as work-based projects and role-plays
Observations and feedback by peers, managers and instructors

Level 3: Behavior

Level 3 involves the extent to which learners implement or transfer what they learned. This level differentiates between knowing the principles and techniques and using them on the job. Potential methodologies include formal testing or informal observation. This level of evaluation takes place post-training, when the learners have returned to their jobs, and is used to determine whether the skills are being used and how well. It typically involves contact with the learner and someone closely involved with the learner, such as the learner's supervisor.
Evaluation tools:
Individual pre- and post-training tests or written reports
Face-to-face interviews
Observations and feedback from others
Focus groups to gather information and share knowledge

Level 4 Evaluation: Results

This evaluation measures the success of the training program in terms that executives and managers can see, such as increased production, increased sales, decreased costs, improved quality, reduced frequency of accidents, higher profits or return on investment, positive changes in management style or in general behavior, increased engagement levels of direct reports, and favorable feedback from customers, peers and subordinates.

Methods of Evaluation of Training Programs

It is extremely important to assess the result of any training program. The participants must be made aware of the goals and objectives of the training program, and on completion of the training program they should be asked about its impact. Evaluation of any program is a difficult task, and more so of a training program. The first step toward evaluation of a training program is to define its goals and objectives. These goals and objectives should be stated in such a format that they can be measured statistically. Also, both the trainer and the trainees must be well acquainted with their role in the training program.

In the evaluation of any training program, the first requirement is to collect valid and reliable data. The required data can be collected by using the following techniques:
1. Self assessment answer sheets.
2. Questions confronted by the trainees.
3. Assessing the collected information and observation.
4. Final result based on earlier information plus the new data.

Each method of data collection has its advantages and disadvantages, which need to be taken into consideration. The merits and demerits of each method are as follows.

Merits of Self Assessment
1. The cost element is quite low.
2.
Data can be easily collected.
3. The time utilisation of the trainer and trainee is negligible.
4. Outside interference is completely avoided.
5. Effective relationships develop between the trainees.
6. A well designed answer sheet can produce healthy results.

Demerits of Self Assessment
1. Self assessment is basically self evaluation, which can rest on biased responses. The assessment must have enough reliability to allow the right conclusions to be drawn in regard to individual assessment.
2. The responses given by the trainees can be based on dissimulation or misinterpretation of the questions asked. Thus self assessment questions should be short and easy to understand. Also, no information should be sought which would embarrass the trainees.
3. The information provided by the trainees cannot be evaluated in terms of its correctness. Not all trainees prefer to give the required information, lest it be used against them at any point of time.

All these problems can be easily solved. Self assessment is adhered to by practically all training programs; what is important is to make proper, effective use of this technique, as the trainees provide valuable information which the trainer can use to mould training strategy.

The second requirement for evaluating a training program concerns evaluation when part of the training program has been completed. The time factor must be decided before the program is initiated, and the evaluation criteria must be determined before the training program begins. The first evaluation will give the trainers adequate information on whether the program is moving in the right direction. At the same time, trainees will be able to assess the value of the program in terms of its needs and usefulness. It is extremely important to realize whether the trainees have understood the need and importance of the training program.
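As a rough illustration of the pre-test/post-test comparison described under Level 2, the following sketch computes an average raw gain and a normalized gain for a group of trainees. The scores, the five-trainee group, and the 100-point scale are invented for illustration; they are not taken from this essay.

```python
# Hypothetical Level 2 evaluation: compare pre- and post-training
# test scores to estimate how much learning occurred.

def learning_gain(pre_scores, post_scores, max_score=100):
    """Return (average raw gain, average normalized gain).

    Normalized gain for one trainee = (post - pre) / (max_score - pre),
    i.e. the fraction of the possible improvement actually achieved.
    """
    if len(pre_scores) != len(post_scores):
        raise ValueError("each trainee needs both a pre and a post score")
    raw_gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    norm_gains = [
        (post - pre) / (max_score - pre) if pre < max_score else 0.0
        for pre, post in zip(pre_scores, post_scores)
    ]
    n = len(raw_gains)
    return sum(raw_gains) / n, sum(norm_gains) / n

# Invented scores for five trainees on a 100-point test.
pre = [40, 55, 60, 45, 70]
post = [65, 75, 80, 60, 85]
raw, norm = learning_gain(pre, post)
print(f"average raw gain: {raw:.1f} points")
print(f"average normalized gain: {norm:.2f}")
```

A positive average gain alone does not prove the training caused it; as the text notes, the evaluation criteria and timing must be fixed before the program begins.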
At this stage, adequate data should be collected from the trainees to make a proper evaluation of the training program. To collect data, interview and questionnaire methods can be most effective. Interviews can be conducted by seeking information face to face, by telephone, or by other strategies such as group discussions. Each of these methods has its own merits and demerits.

Merits of Interviews
1. Face-to-face interviews ensure some response; if any responses need to be clarified, the trainer can do so instantly. Similarly, if the trainees want any clarification, the same can be given immediately. This helps in ensuring correct information.
2. As far as telephone interviews are concerned, though there is a lack of personal touch, the trainee does not feel pressure from the interviewer to give answers that suit the trainer. The trainer can answer all those questions that are complex in nature. These answers have far more validity, as the responses are given without any pressure.

Demerits of Interviews
1. The interview is a lengthy and costly process, as it requires trained and skilled personnel to get results that are reliable.
2. Another important drawback is the possibility of the trainer becoming too involved in the interview.
3. Data collected through interview methods may be out of date and hence difficult to interpret.

A survey was done using a detailed questionnaire as a tool. The survey helped in establishing an understanding of all the four levels of evaluation: reaction, learning, changes and results. The survey used the entire population of participants who attended the training programs of the institution over the selected three years. The institution on average trained 3000 participants every year from across the country in its 100 training programs per year. The questionnaire had three main parts:
I. Personal details, to build the profile of the participants.
II.
Effectiveness of the program was studied with key questions on whether the objectives of rural development were met within the program. The participants were asked to rate the program content and design on the basic inputs of knowledge, skills and attitudes.
III. Professional relevance of the training was evaluated with key questions asking how relevant the program content was for meeting local needs, and whether there was enough practical application which could be used for working or for transferring the knowledge to functionaries further down the line. It also probed whether the learning could be shared with other colleagues in the organization, and lastly whether the course had helped organizational performance.

Merits and Demerits of Questionnaires

Questionnaires in one form or another appear in all kinds of research and surveys. Hence it is extremely vital that the questionnaire is framed with utmost care, so that it measures the variable in exactly the way it has been designed to. Once the initial design has been properly framed, a pre-test must be conducted to find out whether the questions mean the same thing to the trainer and the trainee. If found inappropriate, the questionnaire should be redesigned and a pilot survey conducted. If found appropriate, the full survey should be conducted; if found inappropriate, the questionnaire should be redesigned again. The reliability and validity of the questionnaire should be properly evaluated before going in for the full survey.

In regard to the collection of data, it may be observed that, as with any method of data collection, it is vital to plan how the data is to be collected. However, since this method does not usually involve the design of a formal survey instrument such as a questionnaire, it is all too easy to leap straight in without a plan.
This can lead to a considerable waste of time and, even worse, the wrong data being collected. So the message is: plan and design your desk research in the same way as you would any more formal survey.

Database

In the first instance, the database of 9000 participants was cleaned for missing names and incomplete addresses. The questionnaire was then posted to all the participants, together with a stamped self-addressed envelope. Three reminders were also posted, over a period of three months, to the trainees who had not replied. Questionnaires were also sent to e-mail ids wherever available. The replies received were tabulated in SPSS format and analyzed.

BARRIERS TO EFFECTIVE TRAINING EVALUATION

* Lewis and Thornhill (1994) state that evaluation results that do not reflect positive changes or positive results may be a function of an incorrect decision to conduct training. This decision may have been taken higher in the organization's hierarchy.
* Companies fail to do training evaluations correctly and thus do not obtain valid business or performance results (Sims, 1993).
* According to Mann (1996), the question of what to evaluate is crucial to the evaluation strategy. The failure of training programme evaluations can be attributed to inadequate planning or design, lack of objectivity, evaluation errors of one sort or another, improper interpretation of results, inappropriate use of results, lack of sponsorship and lack of budget (Abernathy, 1999; Goldstein, 1993; Sims, 1993).

ISSUES OR DILEMMAS IN EVALUATING TRAINING PROGRAMS

A. Perceptions and attitudes of learners about evaluation. For example, trainees seem to respond best to evaluation when the instrument or technique is clear, sensible, agreed on (or expected), well planned, and integrated in the training design, and when they understand the purpose of evaluation and see it as part of the training process.
B. Is learning measurable, observable? Can we measure or objectify the important learnings?
C.
Is training cost effective? For example, does it increase productivity, reduce absenteeism, lower turnover?
D. Confidentiality and other uses of evaluation: ethical uses?
E. Who can really measure adult learning but the learner?
F. Systems-level evaluation of programs: the pilot phase, the model phase, the institutionalization phase.

FOLLOW-UP: A COMPONENT OF EVALUATION

A. Evaluation of Training on the Job
Behavioral change
Results of application
B. Help in Practical Applications
External services such as coaching and consultancy
Help by superiors and colleagues
C. Further Personal Development
On the job
Further training courses
D. Liaison with Former Participants
Personal contacts
Associations
Information and conferences
Alumni/peer mentoring

Assessing the Costs and Benefits of Training

To conduct a thorough evaluation of a training program, it is important to assess the costs and benefits associated with the program. This is difficult to do, but may be important for showing top management the value of training for the organization. For example, in one case, the net return of a training program for bank supervisors was calculated to be $148,400 over a 5-year period. Generally, a utility model would be used to estimate the value of training (benefits minus costs).

Some of the costs that should be measured for the training program include needs assessment costs, salaries of training designers, purchase of equipment (computers, video, handouts), program development costs, evaluation costs, trainers' costs (e.g., salaries, travel, lodging, meals), facilities rental, trainee wages during training, and other trainee costs (e.g., travel, lodging, meals).

It is important to compare the benefits of the training program with its costs. One benefit that should be estimated is the dollar payback associated with the improvement in trainees' performance after receiving training.
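The utility model described above (value of training = benefits minus costs) can be sketched as a simple calculation. The cost categories below echo the list in the text, but every figure, the annual-benefit estimate, and the function name are hypothetical; this is not the bank-supervisor calculation cited in the essay.

```python
# Minimal sketch of a training utility model:
# net return = total benefit over the period minus total cost.

def training_net_return(cost_items, annual_benefit, years):
    """Net return of a program, given one-off costs and a recurring
    annual benefit (e.g. the dollar payback of improved performance)."""
    total_cost = sum(cost_items.values())
    total_benefit = annual_benefit * years
    return total_benefit - total_cost

# Invented cost figures, loosely following the categories in the text.
costs = {
    "needs_assessment": 5_000,
    "designer_salaries": 20_000,
    "equipment_and_materials": 8_000,
    "trainer_costs": 12_000,            # salaries, travel, lodging, meals
    "facilities_rental": 4_000,
    "trainee_wages_during_training": 15_000,
}

net = training_net_return(costs, annual_benefit=40_000, years=5)
print(f"net return over 5 years: ${net:,}")
```

As the text notes, the benefit side also depends on how long the improved performance is maintained, so the `years` parameter is really an estimate of the duration of the training's impact.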
Since the results of the experimental design will indicate any differences in behavior between those trained and those untrained, the HR professional can estimate, for that particular group of employees (e.g., managers, engineers), what this difference is worth in terms of the salaries of those employees. Another factor that should be considered when estimating the benefits of training is the duration of the training's impact, that is, the length of time during which the improved performance will be maintained. While probably no programmes will show benefits forever, those that produce longer-term improved performance will have greater value to the organization.

Conclusion

The evaluation of any training program has certain aims to fulfill. These are concerned with determining the change in organizational behavior and the change needed in the organizational structure. Hence the evaluation of any training program must tell us whether the program has been able to deliver its goals and objectives in terms of the costs incurred and the benefits achieved. The analysis of the information is the central part of any evaluation program. The analyzed data should be summarized and then compared with the data of other training programs of a similar nature. On the basis of these comparisons, problems and strengths should be identified, which would help the trainer in his future training programs.
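The comparison step described in the conclusion can be sketched as follows: summarize each program's evaluation data and rank similar programs side by side. The program names, metrics and figures are all invented for illustration.

```python
# Hypothetical comparison of evaluation summaries across two
# similar training programs, as suggested in the conclusion.

def summarize(name, reaction_avg, learning_gain, net_return):
    """Bundle one program's key evaluation figures:
    Level 1 reaction score, Level 2 learning gain, Level 4 net return."""
    return {"name": name, "reaction": reaction_avg,
            "gain": learning_gain, "net_return": net_return}

programs = [
    summarize("Program A", 4.2, 19.0, 136_000),
    summarize("Program B", 3.8, 22.5, 95_000),
]

# Rank by net return, the kind of result executives and managers
# are said to care about at Kirkpatrick's Level 4.
for p in sorted(programs, key=lambda p: p["net_return"], reverse=True):
    print(f'{p["name"]}: reaction {p["reaction"]}, '
          f'gain {p["gain"]} points, net ${p["net_return"]:,}')

best = max(programs, key=lambda p: p["net_return"])
print("highest net return:", best["name"])
```

Note that the two programs disagree across metrics (B shows a larger learning gain but a smaller net return), which is exactly why the essay recommends comparing programs on more than one level before drawing conclusions.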