Friday 9 November 2012

Do HE teacher development programmes have an impact on student learning?

Last Wednesday I attended an HE Academy/Guardian newspaper debate, representing my VC. It will feed into a Guardian article to be published on 20th November. It raised some interesting questions, especially for those of us involved in these programmes. It was a good debate too, mainly because the wide range of people involved held very different, sometimes challenging, views on the value of such programmes and on learning and teaching enhancement in general. Participants included university managers, students, educational developers and lecturers.

The debate was prompted by research commissioned by the HEA, which pointed to the limited evidence of direct impact on student learning (you can see it here), hence the call for more research in this area.

Please look out for the article and use the online discussion forum to respond on 20th November.

In the meantime here are some of the views/questions which really got me thinking during the debate and which I think the SEDA community might also like to ponder:

  1. Is it even possible to find causal links between our teacher development programmes (or indeed T&L initiatives and learning and teaching units) and the student learning experience? How do you separate the programme or the initiative from the other factors which create the learning experience? If it is possible, when might those links emerge? I began to think, for example, of Mentkowski's (2000) Learning that Lasts work, which tracked students five years after leaving.
  2. If we can't measure impact, are we making ourselves vulnerable in the current political/economic climate? The debate included people who thought that some PG Certs were of low quality and should be closed down: mired in narrow social science paradigms, too academic (because people need teaching skills, not scholarship), and valued within a therapeutic, self-serving discourse (like homeopathy) rather than for their impact on student learning. Comments included: 'Often it's one man and a dog running these courses, designed on a wing and a prayer.'
  3. Is it true that our scholarship has focused too much on the impact on participants and on our own roles, rather than on the impact on student learning?
  4. Do we need programmes, or merely spaces for academics to come together, do scholarship of L&T through action research, and hone skills and ideas?
  5. What might the impacts on student learning look like, and how might we measure them? Should we not start by figuring out what a good student experience should look like and then work backwards to the educational development required?
  6. How do we encourage a culture where Vice Chancellors push L&T and value it?
  7. How much do we need to work with students to understand how to evaluate their HE learning experiences? The UK National Student Survey is too narrow, and suggests it's about satisfaction rather than engagement and learning.
  8. What provision might be shared across the sector? Not all HE providers are in a position to offer PG Certs or apply the UKPSF - should we be looking at shared services?
  9. Has there been too much focus on individuals and not enough on programmes?
  10. The HEA doesn't have a monopoly on the UKPSF - what might be the implications in the UK?
  11. Should the HEA or another body offer a national programme?

2 comments:

  1. Great questions, Julie :D I’d like to pick up on a couple of them that struck a particular chord with me.

    I would say that it's certainly possible to find causal links – or at least evidence of them – between teacher development programmes and the student learning experience, but – and here’s the thing – in order to do so you have to design both the programmes, and the evaluation of those programmes, with that specific goal in mind.

    For example, last year I wanted to evaluate the embedding of Web 2.0 technologies (specifically blogs and video sharing) into our programme, with specific reference to the impact on participants’ teaching practice, so I had them answer the question: ‘How has undertaking this activity changed your perception of the role of technology in learning?’ Just asking this simple question exposed some really interesting and detailed data. I won’t bore you with all of it, but here’s a quick and dirty example. Of the 24 responses to the blogging activities:

    - 11 had clear and specific intentions to use blogs or other online learning tools with their students as a direct result of undertaking the unit, with six of these having already begun to put their plans into practice at the time they responded to the survey (six weeks after completion of the unit);
    - 7 felt more positive about online learning, and/or were now more open to exploring learning technologies in their own teaching practice;
    - 4 felt more aware of the issues around using these kinds of technology in learning and teaching, although they were undecided about whether the benefits could outweigh the challenges;
    - 2 felt that blogging had not worked for them, focusing on the challenge, as dyslexics, of communicating in a text-based medium.

    Having this information to hand has enabled me to track the progress of those with clear intentions as they follow through in their own practice, and to push them to gather data on what this means for their own students' experience; it also offers the potential for following up with the cohort a few months or years down the line.

    Embedding light-touch pedagogic research into teaching courses is a great way of ensuring measurement of impact on student experience too.

    In response to your fourth question, my answer would be a resounding ‘yes, we do need programmes’, because we’re all human and, while we all want to develop our practice and be the best we can be, there are also other things we want that conflict with that. I got some really interesting data from last year’s PG Cert cohort that demonstrated their recognition of the value of being ‘forced’ to do something...! All in hindsight, of course; it was a bit hellish at the time...

    ReplyDelete
  2. I also enjoyed the round-table discussion – and it proved to be a lively debate! However, mulling it over after the event, I was less convinced that the hard evidence people seemed so keen on was actually obtainable. The more I think about it, the more problematic I find the notion of evaluating impact.

    The problem is that the impact of any PG Cert will be heavily dependent on the institutional culture in which it is situated. For example, if I took my same programme, taught by the same staff, into a different context (say from Plymouth to Leeds University), the impact of that programme would be considerably different. In the same way, I find that the potential for impact on lecturers from some disciplines within my own institution is far greater than it is with others – and this is because the disciplinary cultures operating in those departments make them more or less open to teaching development and enhancement.

    So this leads me to wonder whether it would be possible to make realistic comparisons across the sector in terms of impact – or whether, if you did so, you would be evaluating the institutional culture, and the fit between this and the development programme offered, as much as anything intrinsic to the programme or staff. However, this doesn’t mean it wouldn’t be worth doing, only that the limitations of what you could infer from the research findings should be taken into account.

    ReplyDelete