I believe in creating checklists when planning to design and deliver professional development. This is especially true when we are trying to understand the factors that can potentially influence the transfer of training from the PD to the learning environment. Checklists are great reminders, and anchors, for ensuring that the most important components of the training design are present. For instance, providing a range of choice is important for educators: choice is considered a motivating factor for promoting transfer of learning. My own checklist would include:
- Advance ‘think-abouts’ to get participants thinking about the topic before the webinar (design factor)
- Ensuring all links work, being familiar with the technology, and checking that speakers, webcams, headsets, etc. are working (design factor)
- Allotting time for activities that engage participants (trainee characteristic)
- Content differentiated for new and experienced educators (trainee characteristic)
- Extension in place for those that need it (trainee characteristic)
- Relevance and meaning to support 21st century learners (work-environment characteristic)
- Opportunities for critical thinking (design factor)
- Provide actual student work samples (design factor)
- Allow for conversation and dialogue (design factor)
- Next steps, consolidation and reflection (design factor)
The needs assessment takes us right back to the typical problem-solving cycle. We first determine the problem, then determine the causes of the problem, analyse resources and circumstances, and choose among possible solutions. Finally, we can design and plan our training.
Transfer of learning
Moving from basic knowledge and skills toward a greater emphasis on cognitive strategies, the organization of knowledge, and application to new situations.
For instance, in reading, we assume that all children use the same processes in learning to read, and ultimately need to master the same basic skills for fluency. I wonder if we need to carry the same assumptions based on the PD we are receiving or delivering?
Organizational structures are essential to help students understand higher-order tasks and to take the learning to deeper levels.
Setting higher-order learning objectives means the expected outcomes concern the organization of knowledge, which is different from the amount of declarative knowledge. I do agree that this is a great predictor of subsequent task performance.
I love the idea of embedding the skills we all need to know within a framework of choice, allowing participants to have a choice and a voice in the training they receive.
More thorough analyses are warranted to take into account individual differences, organizational objectives, and the characteristics of the tasks to be learned. We need to consider training programs that can become more ‘personalized’, and focus beyond instructional techniques and measuring reactions, toward learning specific skills and helping participants transfer those skills to multiple situations — not just making sure that the original need was fulfilled. We must also ask what one means by “it works”.
There is simply a complex system of variables that is not tapped into when school boards decide on professional development training for teachers: variables such as motivation, intervention readiness, job attitudes, personality characteristics and ability, all of which impact performance.
While it is appealing to automate the data, we still need trainers and trainees to take that data and determine how it will be used effectively to a) improve the trainers’ work, and b) improve the cognition, skills, etc. to be transferred. They are the ones who best understand the contexts in which they are working. However, many of us trainees and trainers do not have the skills to evaluate rigorously, without bias, and this often leads to there being no evaluation at all. So many of us design training programs for other educators, yet lack the expertise and knowledge to ensure the effectiveness of our training.
As a teacher, I would not want to merely look at my students’ post-training knowledge. I would always want to engage them in higher-order tasks, including role-plays, performances, and hands-on building. I would want to look at knowledge assessed immediately, but also see whether they have retained that knowledge over time, and whether they can demonstrate the skills and behaviours as they transfer to new situations in the long term.
I too understand that sometimes training may not be meant to effect organizational change. Sometimes, for instance, employee morale needs to be boosted, so some training may just need to spark passion and motivation in employees.
I too am glad that beneficence was added as a core principle for training. As an educator, student well-being needs to be at the forefront of all training. After all, everything we do is supposed to be for the benefit of the students. No matter what we are learning, or what training we are giving or receiving, the ethical principle of beneficence always needs to be at the forefront, and we always need to ask ourselves: how will this benefit the students?
This is much deeper than agreeing to ‘do no harm’; it means actively promoting the well-being of others. Beneficence is also not one-size-fits-all: what is good for some clients and stakeholders will not be beneficial to others. I am personally grappling with this right now in terms of how to decolonize the curriculum to honour and recognize First Nations, Métis, and Inuit students. While my teaching practice may not be doing harm per se according to our settler definitions of harm, I am coming to realise that I am not demonstrating beneficence either. I would like to see more training programs that are aware of these ethical implications.
I considered why charismatic trainers become so popular, even when there is little substantive evidence that their training causes learning.
I found myself thinking about Ken Robinson’s popular TED talk about creativity. It was well delivered, and it was, I am sure, not his intention for it to be a training program. Yet how many people used this talk, in parts or as a whole, to promote positive reactions in training? How many people are strongly influenced by their own reactions, even to the point of abandoning good practice? As a result of the extremely positive reactions to this talk, how many training programs emerged from it to promote creativity in our classrooms without proper attention to other factors and variables? When we train educators, we need to be cognizant of how we are acting ethically and promoting beneficence toward our own students.
The assumption of causality has contributed to an overuse of reaction measures as the sole evaluative measure of training effectiveness. This implies a connection between the reaction and the learning. It diverts our attention away from making training effective because it pressures us to develop training that is entertaining and easy-going so participants enjoy the experience. But learning is hard. Real learning will take us to uncomfortable places.
Ethically, there is a potential for damage due to the misleading information that results from training where too much focus is on reaction.
Aiming for a ‘good enough’ evaluation will provide us with some evidence to help us make decisions about training. I do think this is the best we can hope for, especially since training has to change so often to address new trends, pedagogies, technologies and the systemic, governmental variables that inevitably change.
I believe that in education we have a dizzying array of data to draw upon to inform our needs analysis. We have ALPs, EQAO, TLPs, TPAs, report cards, and many other literacy and numeracy standardized tests, including Running Records, CASI, Prime and more.
I think too often the training refers too much to cognitive analysis and reaction (as in the KP model), and not enough to organizational analysis. There are systemic organizational constraints and conflicts that are neither formally identified nor ameliorated before training. I wonder whether these elements create conditions in which workers grow cynical over time, preventing change from training as trainees age.
I was trying to figure out why I was so stuck on this one, and I surmised that in education these variables are always clearly laid out for us. One only has to look at Ministry, Board, School, OCT and ETFO descriptions, goals and roles to see that the role of an educator has been seriously analyzed and defined. However, so many variables come into play, especially with the rapid changes that have come with technology, that our duties and tasks can easily outgrow what has previously been laid out. Just as you mentioned, it is important to continually find ways to collect reliable and valid program data. Staff input is important, but can be difficult to manage in meaningful ways. It may not be cost-effective, and in such a large system, how do you use the data in meaningful ways?
I want to understand more deeply how the Ministry of Education develops its analyses, and how this gets translated to our school boards. We undergo a lot of training in our board, but I have never seen control groups, experimental groups, the gathering of multiple lines of evidence, and so on. Where does this take place, and how does the Ministry protect against validity threats?
In teaching, meeting students’ academic, social and emotional needs is the key driving factor behind much of our training. Issues crop up, however, when we are too focused on reactions to training, as in KP’s model, or when we rarely measure the trainees’ learning but instead evaluate student learning at the end of the year as an indication of ‘good’ teaching. The possible variables are vast. I am no longer certain that I could rule out validity threats in my online webinar and connected-communities training. I still want to understand how the Ministry obtains and evaluates its data, outside of EQAO and other standardized measures.
Paper test scores can hardly represent the whole picture of actual impact when I apply this to my setting as a teacher in the education system.
Because it is impossible to represent the whole picture with one form of training evaluation, I think it is important that we all work to the best of our ability with the options available to us. The training programs that I provide are on a volunteer basis and are therefore not mandatory. However, it is important to understand whether they improve knowledge, communication skills and application to the job.
I would first consider what will be deemed proof of effectiveness, which would include transfer of knowledge to the work setting, and then define my evaluation strategies accordingly. I would use a one-group pre-post design, with the assumption that the individuals taking my training will be different at the end of it. In my pre- and post-tests, I would include relevant and irrelevant items from the same subject area to minimize external contaminants, thereby reducing the risk of Type I error. I would also ensure that the test is not too easy, minimizing the ceiling effect and other differential effects such as maturation. To reduce the problem of Type II error, I would ensure that the difficulty level of my pretest is fairly high.
Next, I would use a retrospective pre-test to see how candidates would have rated themselves before the training, had they known then what they know now.
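The relevant-versus-irrelevant-items idea described above can be sketched numerically. Below is a minimal, hypothetical Python example — all scores are invented for illustration, and the paired t statistic is just one plausible way to compare pre and post measurements:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical one-group pre-post data: each trainee answers items relevant
# to the training and items from the same subject area that the training did
# not cover. All numbers are invented for illustration.
pre_relevant    = [52, 48, 60, 55, 50, 58, 47, 62]
post_relevant   = [70, 66, 75, 72, 69, 74, 65, 78]
pre_irrelevant  = [54, 50, 59, 53, 51, 57, 49, 60]
post_irrelevant = [56, 49, 61, 54, 53, 58, 50, 61]

def paired_t(pre, post):
    """Paired t statistic for a one-group pre-post comparison."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Training-specific learning should show up on the relevant items...
gain_rel = mean(post_relevant) - mean(pre_relevant)
t_rel = paired_t(pre_relevant, post_relevant)

# ...while the irrelevant items should stay roughly flat. A similar jump on
# irrelevant items would point to contaminants (retesting, maturation) rather
# than learning - the external contaminants the design tries to screen out.
gain_irr = mean(post_irrelevant) - mean(pre_irrelevant)
t_irr = paired_t(pre_irrelevant, post_irrelevant)

print(f"mean gain (relevant items):   {gain_rel:.2f}")
print(f"mean gain (irrelevant items): {gain_irr:.2f}")
```

In this invented data, the large gain on relevant items paired with a near-zero gain on irrelevant items is the pattern that supports a training effect rather than a contaminant.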
- Are you able to include a control group? If yes, how? If no, why not?
It is just not feasible for me to include a control group. I would need permission from my employers, and would need to check with my union that this would be ethical. Then I would need to find comparable participants, which may invite a new set of biases. The time and cost of such a task would, in my opinion, be insurmountable.
- What would be your relevant and irrelevant training content?
My relevant training content would relate to direct knowledge, skills, communication, application and transfer of knowledge, versus how participants ‘feel’ about the subject or what they believe about the training.
- Assuming that you don’t have a control group, what pattern of results do you expect to find if the training is effective?
If my training is effective, I expect the candidates to demonstrate that they have changed: a higher score on the post-test in terms of knowledge, skills, communication, application and transfer on the relevant items, with little or no change on the irrelevant items.
I am not convinced that, without a control group, the data cannot reflect training impact. The research on the internal referencing strategy (IRS) has demonstrated that it stands up to more complex tests, though it does appear to be more vulnerable to Type II error; I would be more concerned about Type I error. I would guard against response-shift bias by adding a parallel test for pre-post measurement.
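The retrospective (‘then’) pre-test mentioned earlier can also be illustrated with invented self-ratings. A gap between the traditional pre-ratings and the retrospective then-ratings is one rough indicator of response-shift bias — all numbers here are hypothetical:

```python
from statistics import mean

# Hypothetical 1-5 self-ratings from the same five trainees:
# (traditional pre-rating, retrospective "then" rating, post-rating)
pre_then_post = [
    (4, 2, 4),
    (3, 2, 5),
    (4, 3, 5),
    (5, 3, 4),
    (3, 2, 4),
]

pre  = [p for p, t, q in pre_then_post]
then = [t for p, t, q in pre_then_post]
post = [q for p, t, q in pre_then_post]

# Naive change uses the traditional pre-test; adjusted change uses the
# retrospective rating, which reflects what trainees now realise they knew.
naive_gain    = mean(post) - mean(pre)
adjusted_gain = mean(post) - mean(then)

# A positive shift suggests trainees over-rated themselves before training,
# which would mask real learning in the naive pre-post comparison.
shift = mean(pre) - mean(then)

print(f"naive gain: {naive_gain:.2f}")
print(f"adjusted gain: {adjusted_gain:.2f}")
print(f"response shift: {shift:.2f}")
```

In this made-up example the naive gain understates the adjusted gain precisely because trainees initially over-rated themselves — the pattern a retrospective pre-test is designed to reveal.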
I do agree that academia and practitioners need to work together more closely.