The HE Green Paper: (Don’t) Read it and Weep – Part 1: The TEF & Social Mobility

The Disorder Of Things

Britain’s Conservative government recently released its much-awaited (or much-dreaded) ‘green paper’ on higher education (HE), a consultation document that sets out broad ideas for the sector’s future. Masochistically, I have read this document – so you don’t have to. This first post describes and evaluates the centrepiece of the green paper, the Teaching Excellence Framework (TEF), and measures on ‘social mobility’.

The HE Green Paper: (Don’t) Read it and Weep – Part 2: Completing the Market

The Disorder Of Things

This post continues where Part 1 left off.

The real goal of the green paper is to accelerate the formation of a fully functioning market in HE – as has already been discussed elsewhere by the brilliant Andrew McGettigan. The opening move was HEFCE’s QA consultation earlier this year which, as I explained on TDOT, was an attempt to dilute quality standards to make it easier for ‘alternative’ (i.e. private) providers to enter the market. But whereas HEFCE hid behind technocratic jargon, the green paper openly announces the government’s ‘clear priority’ to ‘widen the range’ of HE providers (p.50). ‘Our aspiration is to remove all unnecessary barriers to entry’ and create a ‘level playing field’ (p.42).

How research-informed practice stood up to the pseudo-science of inspection: defending an ungraded approach to the evaluation of teachers

Introduction

This post tells the story of a university partnership of teacher educators’ experience of an Ofsted inspection of its Initial Teacher Education (ITE) provision in March 2013. Building on a position paper that was written at the time of the inspection, this post outlines how we defended our position on not grading our student teachers and shares some of the underpinning principles of our philosophy. Given the recent shift in Ofsted policy to remove the grading of individual lesson observations from school inspections, this post is very timely, as it discusses some of the challenges faced by a department that has not only never used the Ofsted 4-point scale to assess its student teachers during observations, but has also resisted the use of numerical grading scales across its programmes as a whole.

Few areas of practice have caused as much debate and unrest amongst teachers in recent years as that of lesson observation, particularly graded observations and the way in which they have been used as summative assessments to rank teachers’ classroom performance against the Ofsted 4-point scale. Recent research in the field has described how graded lesson observations have become normalised, highlighting Ofsted’s hegemonic influence and control over education policy and practice (e.g. O’Leary 2013). At the same time, they have been critiqued for embodying a pseudo-scientific approach to measuring performance, as well as giving rise to a range of counterproductive consequences that ultimately militate against professional learning and teacher improvement (e.g. O’Leary and Gewessler 2014; UCU 2013). 

Context

Unlike the vast majority of other university ITE providers in England, the post-compulsory education (PCE) department at the University of Wolverhampton has never used graded observations on its programmes. The underpinning rationale for adopting an ungraded approach to the assessment of our student teachers did not emerge arbitrarily but was developed collaboratively over a sustained period of time. This approach was underpinned by a core set of principles and shared understandings about the purpose and value of our ITE programmes, as well as being informed by empirical research into the use and impact of lesson observations in the Further Education (FE) sector and on-going discussions with our partners and student teachers. Given that our approach went against the grain of normalised models of observation, we knew that our programmes would be subject to heightened scrutiny and interrogation by Ofsted when it was announced that all the university’s ITE programmes would be inspected in March 2013.

The tone was set soon after the arrival of the inspection team on the first day, when the lead inspector asked the PCE management team to rate the quality of its provision against Ofsted’s 4-point scale. This was despite the fact that the team had chosen not to apply this grading scale in its self-evaluation document (SED), which all providers were required to complete and submit at the end of each year and to which Ofsted had access before the inspection. But why did the partnership adopt this stance? It is important to emphasise that our resistance to embracing Ofsted’s ‘dominant discourses’ (Foucault 1980) and normalised practice was not based on any wilful refusal to comply with or obey its authority as the regulator of quality for ITE provision, but was driven by more fundamental concerns regarding the legitimacy and reliability of its assessment framework and the impact of that framework on teachers in training. Needless to say, this epistemological positioning did not sit easily with the inspection team, as it presented them with certain challenges that they were unaccustomed to, some of which are discussed further below.

Evaluating performance

It was a strongly held view across our partnership that the use of a metrics-based approach was neither the most appropriate nor the most effective means of fostering our student teachers’ development, nor indeed of measuring the level of performance required to meet the ‘pass’ threshold criteria of our programmes. Our partnership staff comprised largely experienced teacher educators who were comfortable with and confident in making judgements about the progress and performance of their students against the pass/fail assessment framework used on the programmes. In some ways this was akin to the notion of ‘fitness to practise’ used by other professions such as healthcare. This ‘fitness to practise’ was initially mapped against the professional standards in use at the time in the FE sector (LLUK 2006) and more recently against the Education and Training Foundation’s (ETF) revised standards (ETF 2014). As the PCE partnership had been actively engaged with these standards through year-on-year collaborative work to revise and refine their application to its ITE programmes, there was a shared ownership of the assessment by those working on the programme. In contrast, we were not convinced that the Ofsted 4-point scale could be applied with the same rigour, reliability and appropriateness to assess students’ attainment as our existing assessment framework and criteria, whereby students were either judged to have satisfied the criteria or not. In other words, whilst all those teacher educators working on the programmes were clear as to what constituted a pass or fail and were confident in applying these criteria accurately and consistently, the same could not be said about the interpretation and application of Ofsted’s 4-point scale.

In their study into the grading of student teachers on teaching practice placements in Scotland, Cope et al. (2003: 682) found that the success of such practice depended on ‘a clearly reliable and valid system of assessment of the practice of teaching’ and concluded that ‘the evidence available suggests that this does not currently exist’. This is not a phenomenon specific to observation as a method of assessment, but reflects widely held beliefs among key researchers in the field of assessment such as Gipps (1994: 167), who argued back in the 1990s that ‘assessment is not an exact science and we must stop presenting it as such.’ The danger, of course, is that the inherent limitations of practices such as numerically grading performance are often overlooked and the resulting judgements are given far more weight and authority than they can realistically claim to have or indeed deserve.

Prioritising teacher development

Our ITE programmes are built on a developmental philosophy in which the student teacher’s growth is prioritised. Staff working on the programmes are committed to helping their students to develop their pedagogic skills and subject knowledge base. It was therefore their belief that judging students against a performative, numerical 1-4 grading scale would compromise that commitment and jeopardise the supportive focus of the teacher educator’s and mentor’s relationships with their students. The partnership also benefitted from being involved in and discussing the latest research into lesson observation, as one of the university’s members of staff specialised in this particular area.

As mentioned above, recent research into the use of graded observation in FE reveals how it has become normalised as a performative tool of managerialist systems fixated with attempting to measure teacher performance rather than actually improving it (e.g. O’Leary 2012). The teacher educators and mentors in the PCE partnership saw their primary responsibility as that of helping to nurture their student teachers as effective practitioners rather than having to rank their performance according to a series of judgemental labels (e.g. ‘outstanding’, ‘inadequate’) that were principally designed to satisfy the needs of external agencies such as Ofsted within the marketised FE landscape, and that carried with them absolutist judgements inappropriate to their isolated, episodic nature. This emphasis on measuring teacher performance was also seen as responsible for what Ball (2003) refers to as ‘inauthenticity’ in teacher behaviour and classroom performance during assessed observations. This is typically manifested in the delivery of the rehearsed or showcase lesson, as the high-stakes nature of such observations results in a reluctance to take risks for fear of being given a low grade. Teachers are thus aware of the need to ‘play the game’, which can result in them following a collective template of good practice during observation. Yet being prepared to experiment with new ways of doing things in the classroom and to take risks in one’s teaching is widely acknowledged as an important constituent of the development of both the novice and the experienced teacher.

Furthermore, findings from two separate studies on observation in FE (e.g. O’Leary 2011; UCU 2013) have revealed some of the distorting and counterproductive consequences of grading on in-service teachers’ identity and professionalism. Staff in the PCE partnership, many of whom are FE teachers themselves, were determined to protect their student teachers from such consequences during their time on the programme. This did not mean, however, that they avoided discussing the practice of grading teacher performance with them or confronting some of the challenging themes and issues associated with it. On the contrary, this was a topic that was addressed explicitly through professional development modules and wider discussions about assessment and professionalism as part of the on-going critically reflective dialogues that occurred between teacher educators, mentors and students throughout the programme.

Developing critically reflective teachers

The university’s PCE ITE programmes are underpinned by the notion of critical reflection. Brookfield (1995) argues that what makes critically reflective teaching ‘critical’ is an understanding of the concept of power in a wider socio-educational context and recognition of the hegemonic assumptions that influence and shape a teacher’s practices. The PCE partnership viewed the use of graded observations as an example of one such hegemonic assumption. Thus the perceived or intended outcomes of graded observations (i.e. improving the quality of teaching and learning, promoting a culture of continuous improvement amongst staff etc.) were not always the actual outcomes as experienced by those involved in the observation process. And then, of course, there was the thorny issue of measurement.

The ongoing fixation with attempting to measure teacher performance is symptomatic of a wider neoliberal obsession with trying to quantify and measure all forms of human activity, epitomised in the oft-quoted saying that ‘you can’t manage what you can’t measure’, a maxim that has its roots in a marketised approach to educational improvement and one which seems to shape Ofsted’s inspection framework. During the inspection, it became apparent that the PCE partnership’s ungraded approach was problematic for Ofsted. When I asked the lead inspector directly at a feedback meeting whether the use of a grading scale was considered an essential feature of being able to measure teachers’ progress and attainment, he categorically stated that this was NOT the case and that Ofsted prescribed no such policy. Yet he later contradicted this in his final report by maintaining that, as the partnership did not grade, it was ‘difficult to measure student progress from year to year or the value that the training added in each cohort’.

In spite of the presentation of interwoven sources of qualitative evidence (tutor/mentor/peer evaluations, self-evaluations, integrated action/development plans, critically reflective accounts etc.) illustrating these student teachers’ journeys throughout their programmes of study, the inspection team was reluctant or even unable to conceptualise the notion of improvement unless the outcome was expressed in the form of a number. And why is that? Because, of course, reading such qualitative accounts is more time-consuming and ‘messier’ than the reductive simplicity of allocating a number to something, however spurious that number might be. This reveals the extent to which ‘managerialist positivism’ (Smith and O’Leary 2013) has become an orthodoxy and Ofsted its agent of enforcement. Despite that, the partnership team defended its practice and emphasised how the broad range of evidence captured in the combination of formative and summative assessments provided a rich tapestry of these student teachers’ progress and attainment throughout the programme, and ultimately one that was more meaningful than the allocation of a reductive number.

References

Ball, S. (2003) The teacher’s soul and the terrors of performativity, Journal of Education Policy, 18(2), pp. 215-228.

Brookfield, S. D. (1995) Becoming a Critically Reflective Teacher. San Francisco, CA: Jossey-Bass.  

Cope, P., Bruce, A., McNally, J. and Wilson, G. (2003) Grading the practice of teaching: an unholy union of incompatibles. Assessment & Evaluation in Higher Education, 28(6), pp. 673-684.

Education and Training Foundation (ETF) (2014) Professional Standards for Teachers and Trainers in Education and Training – England. Available at: http://www.et-foundation.co.uk/wp-content/uploads/2014/05/4991-Prof-standards-A4_4-2.pdf.

Foucault, M. (1980) Power/Knowledge – Selected Interviews and Other Writings 1972-1977. Brighton: The Harvester Press.

Gipps, C. (1994) Beyond Testing: Towards a Theory of Educational Assessment. London: Falmer Press.

Lifelong Learning UK (LLUK) (2006) New overarching professional standards for teachers, tutors and trainers in the lifelong learning sector. London: LLUK.

O’Leary, M. (2011) The Role of Lesson Observation in Shaping Professional Identity, Learning and Development in Further Education Colleges in the West Midlands, unpublished PhD Thesis, University of Warwick, September 2011.

O’Leary, M. (2012) Exploring the role of lesson observation in the English education system: a review of methods, models and meanings. Professional Development in Education, 38(5), pp. 791-810.

O’Leary, M. (2013) Surveillance, performativity and normalised practice: the use and impact of graded lesson observations in Further Education Colleges. Journal of Further and Higher Education, 37(5), pp. 694-714.

O’Leary, M. & Gewessler, A. (2014) ‘Changing the culture: beyond graded lesson observations’. Adults Learning, Spring 2014, 25, pp. 38-41.

Smith, R. & O’Leary, M. (2013) New Public Management in an age of austerity: knowledge and experience in further education, Journal of Educational Administration and History, 45(3), pp. 244-266.

University and College Union (UCU) (2013) Developing a National Framework for the Effective Use of Lesson Observation in Further Education. Project report, November 2013. Available at: http://www.ucu.org.uk/7105.

Observation rubrics – a response to @joe_kirby

@joe_kirby’s recent post makes reference to my book in the context of a wider discussion regarding the ongoing use of lesson observation in the English education system. As all readers will be aware, observation is a hot topic that continues to generate much debate across the profession, albeit often for the counterproductive consequences of its predominantly performative use. The fact that teachers like Joe and others have written numerous blogs about it recently reinforces the idea that it continues to provoke strong emotions across the education sector. In his post Joe selects a series of quotes/extracts from the book in an attempt to encapsulate some of the thematic discussion and the main arguments I present. It’s no mean feat trying to capture the key arguments and topics covered in the book’s nine chapters in a blog entry, but Joe’s inclusion of the following summarising statement from the book towards the start does a good job of setting the tone:

The high-stakes nature of performance management-driven observation for monitoring and measuring militates against professional development; school leaders must challenge the hegemony of graded observations and redesign observation as a tool for reciprocal learning, decoupled from summative high-stakes grading.

For those people who would like to read detailed reviews of the book, there is one here and another here. And, of course, there are shorter ones on Amazon too. Following on from Joe’s blog, I’d just like to add a couple of points of clarification and extend the discussion further. This post is certainly not intended to be a long one, nor is it going to repeat things I’ve written and spoken about before regarding observation. If anyone is interested in reading my previous work on the topic, you can access journal articles, reports, talks etc. for FREE on my academia.edu web page, where I regularly post my publications/output.

Firstly, Joe makes a point about feeling ‘uneasy’ with the ‘prescription’ of the Ten principles of ‘good teaching’ graphic, which appears in Chapter 6, ‘Being an Effective Teacher – Models of Teacher Effectiveness’. I’m unsure whether Joe has misread that particular extract, but this is what the book actually says in relation to the ‘Ten principles’:

Despite the difficulties previously discussed in defining good teaching, this does not mean to say that it is impossible or pointless, as Moore suggested above, to devise a set of ‘guiding principles’. It is one thing to produce a prescriptive list of ‘dos’ and ‘don’ts’ as to what constitutes good teaching but another to theorize about some of its underpinning principles. With this in mind, what might such a set of principles or assumptions comprise?

Table 6.1 below is my attempt to produce a broad set of principles of ‘good teaching’, though not necessarily in a particular order. Table 6.1 is not meant to be an exhaustive list, but should provide a broad framework for discussing the topic within and across institutions. It might also be used as a set of prompts on which to base the development of a more tailored instrument for assessing practice across the institution through the medium of classroom observation and other relevant mechanisms. (O’Leary 2014a: pp. 97-98)

[Table 6.1: Ten Principles of ‘Good Teaching’]

Thus this list of Ten Principles of ‘Good Teaching’ is simply a stimulus for debate and certainly not meant to be used prescriptively. Much of the discussion in that particular chapter of the book explores research into the notion of teacher effectiveness and makes it clear that it is a contested terrain with conflicting findings from a host of international studies. Joe contends that ‘any selective list of what makes good teaching will never be agreed upon’. One thing that is clear from past and current research into teacher effectiveness and attempts to define it is that it is indeed a thorny topic that divides opinions. However, that shouldn’t stop us from discussing it and developing our knowledge and understanding of the qualities and attributes of the effective teacher. The recent MET project, funded by the Gates Foundation in the USA, is proof of how important an area of research this is considered to be for educators on an international scale. Besides, it is better for teachers to be engaged in this discussion and actively contributing than to leave it in the hands of policy makers to decide.

I agree with Joe’s point that one of the most important questions we need to be asking is how observation can be used as a tool for improving teaching, and a third of my book is dedicated to exactly that focus. I have argued previously here and here that the single most significant obstacle preventing us from doing so is what I refer to as the ‘assessment straitjacket’ that for decades has constrained the perception and implementation of observation in the English education system. Breaking free from that assessment straitjacket is essential if we are to fully exploit the benefits of observation as a source of evidence.

There needs to be a ‘thinking outside the box’ when it comes to how observation may be used as a source of evidence in the educational arena. Tinkering with prevailing normalised models of observation is, at best, only likely to have minimal impact and offer short-term solutions to longstanding issues. Although removing the graded element would certainly represent a step in the right direction, for example, it cannot be considered a panacea in itself. In a similar vein, recent calls for the abolition of lesson observation from the inspection process are a classic example of ‘throwing the baby out with the bath water’ and as such represent a knee-jerk reaction to a much more complex problem than the one they claim to solve. Ultimately, what both of these strategies fail to address are the deep-rooted political and epistemological issues surrounding the use of observation as a method of assessment. At the heart of any such discussion is the acceptance that the use of observation is not purely an act of pedagogy but one that is underpinned by issues of hierarchical power and professional trust. Until these issues are acknowledged and discussed by education professionals in an open forum, any attempts at reforming the way in which the sector makes use of observation are unlikely to progress. (O’Leary 2014b: pp. 220-221)

Finally, when discussing the reliability of graded observations in his post, Joe refers to the work of @ProfCoe. Prof Coe is often referred to in other bloggers’ posts about observation. As someone who has been researching and writing about lesson observation, I obviously have to keep up to speed with current research on the topic in the UK and internationally, yet I had never come across any research on observation by Rob Coe. Just to make sure I hadn’t missed something, I tweeted him in March earlier this year and, as you can see from his response, he openly admits to having done ‘no proper research’. The references to Coe’s work are thus based largely on a PowerPoint presentation and a blog rather than empirical research. If people are interested in knowing more about current research in the UK, then this research study, carried out in the Further Education sector, is a good place to start, as it is the most extensive research into the topic carried out to date.

References

O’Leary, M. (2014a) Classroom Observation: A Guide to the Effective Observation of Teaching and Learning. London: Routledge.

O’Leary, M. (2014b) ‘Power, policy and performance: learning lessons about lesson observation from England’s Further Education colleges’. Forum, 56(2), pp. 209-222.

Embracing expansive approaches to the use of lesson observation

(This article first appeared in CPD Matters/InTuition – IfL, Issue 8, Summer 2013, pp. 21-22)

Introduction

In last summer’s issue of CPD Matters I discussed the topic of graded lesson observations in further education and argued that the continued emphasis on measuring teacher competence and performance via the Ofsted 4-point scale had not only become a perfunctory, box-ticking exercise in many colleges, but had also given rise to a range of counterproductive consequences that were impacting negatively on the professional identity and work of tutors in the sector (O’Leary 2012).

In that article I used the juxtapositional terms ‘restrictive’ and ‘expansive’ to describe those approaches to observation that hinder or help professional learning and development. Much of the discussion focused on examples of restrictive approaches and their impact on practitioners, which meant there was less room to discuss the features of expansive approaches. It is to this important area that this follow-up article turns its attention, as I look to present contextualised examples and reflect on why the adoption of a more expansive approach to the use of observation in FE is likely to yield more meaningful and sustained improvements in the quality of teaching and learning than current performative models that continue to dominate the sector.

Defining features

Given the brevity of this article, I have decided to limit my discussion to three specific features:

1) Differentiated observation

2) Prioritising feedback and feed forward

3) Removing the graded element

Space does not allow for a detailed discussion of these three features, but you should at least be able to get an overview. For a deeper exploration please see examples of my other work (e.g. O’Leary 2013; 2014 – listed at the end of this article).

1. Differentiated observation

Differentiated observation runs counter to conventional models in that it involves identifying a specific focus for the observation rather than carrying out an all-inclusive assessment based on a generic template, as is currently the norm. The observee is given greater ownership and autonomy in deciding the focus and negotiating the session in which they wish to be observed. The purpose and context thus shape the way in which the focus is decided. So, for example, in the case of the trainee or less experienced teacher, it might make more sense for the observer to play a more decisive role in deciding the focus than they would if they were observing experienced colleagues. What are some of the advantages of and reasons for using a differentiated approach to observation?

First, a differentiated approach is built on the premise that each teacher is likely to have differing strengths and weaknesses in their pedagogic skills and knowledge base. Just as the most effective teachers differentiate in their teaching, so too does it make sense to apply this approach to the way in which teachers’ practice is observed. Second, maximising teacher ownership of the observation process is an important feature of facilitating professional learning that is likely to endure. All teachers have a responsibility for their continuing professional development, and they are likely to value this more highly if they feel they are given some ownership of the decision-making process. Third, the collaborative nature of professional learning means that it is not an individual act or the sole responsibility of the teacher but one that involves colleagues working together. So, for example, there may be times when the focus of differentiated observation is driven by wider objectives across a department, such as a departmental improvement plan. These objectives may stem from a range of sources (e.g. self-assessment, inspection reports, appraisal meetings) and may be divided into separate strands or themes (e.g. use of formative assessment, use of ICT, behaviour management) to address through observation. In this instance a team or department of teachers may choose particular themes to focus on.

2. Prioritising feedback and feed forward

Feedback is arguably the most important part of the observation process as it is generally regarded as having the most tangible impact on professional development. In a previous research study I carried out, three quarters of respondents across 10 colleges said that feedback lasted no longer than 20 minutes. It is difficult to imagine a professional dialogue of any substantive consequence occurring in such a short space of time. But why is so little time given to feedback if it is recognised as being such an important part of the observation process?

The simple answer is that the time available for feedback and professional dialogue is squeezed because so much time is spent on the collation and completion of the accompanying paper trail and performance management data associated with observations. This is further exacerbated by insufficient time being allocated to the observation process from the outset in many colleges. Feedback, occurring towards the end of the process, invariably ends up losing out. But there are longer-term gains to be made from allocating adequate time to feedback in the observation process.

My research has found that those colleges that attach as much significance to the feedback and feed forward stages as they do to the observation itself are often the most successful in improving the quality of teaching and learning, along with fostering a culture of continuous and collaborative improvement amongst their staff. What those colleges have in common is the fact that the importance of feedback and feed forward is not just paid lip service to in their observation policies, but is enacted in practice by allocating appropriate time remission on staff timetables in each academic year.

3. Removing the graded element

One of the biggest obstacles to embracing an expansive approach revolves around the issue of grading. My research identified a correlation between an overreliance on using lesson observation grades as a key performance indicator and low levels of trust and professional autonomy in some colleges. Yet when the graded element was removed, levels of trust between colleagues improved and some of the negative associations surrounding observation vanished, as illustrated in the extract below from a research interview with two observers:

Abdul: We started to not give numerical grades as we felt people concentrated on the number not the feedback and we felt that that worked really well but then the principal decided one day that Ofsted wouldn’t like that and everything came to a halt. We have now moved completely away from that again and everything is performance driven and that’s a shame because that’s where I think we made all of our advances in improving the quality of teaching by getting people on side, being formative as opposed to punitive.

Molly: We did it for just under a year and the impact was quite startling. The quality of learning that was going on rose because staff listened to the developmental feedback rather than focusing on ‘oh I’ve got a three’. We had got staff on side with observations and they were no longer terrorised of having someone in the classroom. They became far more accepting but like Abdul has just said, all that progress has been undone now with the return to grading.

The idea that the summative element can overshadow the formative feedback is well documented in the field of assessment. The grade can take on such importance that it threatens to undermine the value of feedback and the professional dialogue. Abdul and Molly’s account reveals how removing the graded element can be liberating and help to break down some of the negative barriers (i.e. anxiety, fear, suspicion etc.) associated with observation. In their case, it enabled them as observers to gain the trust of tutors and to engage in meaningful, collaborative work, which subsequently led to improvements in the quality of teaching. By concentrating on the feedback and not the grade, the formative aspect of the observation process took on a greater significance and tutors were more disposed to engaging in professional dialogue about their practice.

Conclusion

The way in which staff experience and engage with the use of observation is inevitably influenced by the teaching and learning cultures of the institution itself. The commitment of senior management to promote particular notions of professionalism and professional learning is crucial in establishing an institutional ethos towards observation, which is cascaded, both implicitly and explicitly, to observers and observed alike. The key question for senior managers to consider is therefore a very simple one: what kind of culture do I want to foster amongst staff when it comes to the use of observation? Expansive or restrictive?

References

O’Leary, M. (2012) ‘Time to turn worthless lesson observation into a powerful tool for improving teaching and learning’. InTuition/CPD Matters – IfL, Issue 9, Summer 2012, pp. 16-18. 

O’Leary, M. (2013) Expansive and restrictive approaches to professionalism in FE colleges: the observation of teaching and learning as a case in point. Research in Post-Compulsory Education, 18(4), pp. 348-364.

O’Leary, M. (2014) Classroom Observation: A guide to the effective observation of teaching and learning. London: Routledge.

Coaching for sustainable development or just working on the observation profile: What are we really doing in the FE sector?

joannemilesconsulting

Current challenges

In the FE sector, are we coaching teachers with real development in mind or just to move them from one observation grade box to another on our spreadsheets? This may sound harsh but conversations with coaches in a range of colleges have highlighted this concern and made me feel somewhat troubled at the direction of travel. With increasing pressure within the sector to accelerate improvement, it is easy for coaches to feel that it is imperative they help their coachees to secure that magic grade two, which is taken as a sign of “coaching success”, of the teacher “having improved”. This can lead to an almost exclusive focus on fixing the “faults” seen in the lesson that was graded as a three or four, to the exclusion of deeper, more reflective work on developing the teacher’s practice.

To me, this seems to be a misguided use of coaching…

Commentary on #ukfechat forum discussion on graded lesson observations – 28th February 2014

As I lay recuperating in my sick bed last night, I decided to ‘observe’ from afar as a vibrant community of FE Twitter folk debated the #ukfechat topic of the week: ‘Observations: Is it time to ditch the grade?’ As someone who has been actively researching, talking and writing about the topic of observations for the last decade, both in the UK and abroad, I was tempted to get involved, but my sinusitis persuaded me that it was best to remain on the periphery of the discussion as an ‘insider looking in’, if you know what I mean!

As all those working in FE and indeed schools will know, lesson observation is a hotly debated topic. The fact that there was such a lively and diverse debate on last night’s forum should therefore come as no surprise to anyone. In some ways the debate was a microcosm of a wider discussion that continues to reverberate around the corridors of colleges and schools across the country. I recently had a memorable first-hand experience of this when I was analysing and writing up data from the largest study ever to be conducted into lesson observation, not just in FE but in the English education system as a whole. In the first part of the project, participants were asked to complete an online survey, at the end of which was an empty box for them to write any comments they had about observation in general. Oh my, did I underestimate the volume of responses that small box alone would generate?! Just under half of all those who completed the survey (approx. 4,000 in total) chose to write detailed comments, which when added together totalled over 100,000 words. So, let’s just say there’s no shortage of opinion when it comes to the topic of observations.

I’m conscious that this is a blog entry and I don’t want it to turn into a long piece of academic writing; if people are interested in that type of thing, then they can look at some of the articles I’ve written recently or, better still, buy my book! So let’s return now to the forum discussion. I made some notes early this morning of things that stood out for me and I just want to touch on some of them, not necessarily in any particular order.

The ‘assessing learning/the lesson’ myth

One of the issues that cropped up on several occasions was the old cliché of ‘assessing the learning not the teaching’. For some time now, we have been sold this spurious argument with graded observations that it is the ‘learning’ in the lesson that is being assessed and graded and NOT the teacher. This is a complete fallacy and it needs to be put to bed once and for all. Firstly, if it is the ‘learning’ in the lesson that is being judged, then why does the grade follow the teacher? Why are teachers labelled as ‘outstanding’ or ‘inadequate’ and rewarded or reprimanded accordingly? This is a divisive practice that is commonly reinforced by some employers explicitly naming their ‘outstanding’ teachers, even celebrating their achievements in ‘awards ceremonies’. Besides, if the emphasis is meant to be on the learning taking place rather than the individual performance of the teacher, why are the outcomes of graded lesson observations directly linked to capability procedures in some workplaces?

Any attempts to separate the act of teaching from learning are not only artificial, but crudely ignore the symbiotic relationship between the two. As Ted Wragg (1999) once proclaimed, ‘the act of teaching is inseparable from the whole person and to attack the one is to demolish the other’ (p. 91). The idea that ‘learning’ can be accurately measured through the medium of observation is also highly contested; the reality is that we are light years away from ever being able to make such a claim with any degree of authority. The pseudo-scientific art of grading seduces us into believing that observer judgements have greater objectivity and reliability than they can actually claim to have. And why is that? It is because, on the surface, numbers have a ‘scientific’ quality to them, which makes people less likely to question what they are deemed to represent. In the case of graded observations, there is an assumption that the use of the Ofsted 4-point scale has some kind of objective value comparable to the use of a calibrated measuring instrument such as a thermometer. Yet this is clearly a myth. Grades are, of course, dependent on the subjective interpretation of observers, so their application can never be wholly reliable.

Wanting to be graded or know the grade

Of course, there are some teachers who are keen to know the grade even if they’re not being graded. This is indicative of what I’ve referred to in previous work as ‘normalised behaviour’. In other words, such teachers have become institutionalised into expecting a grade to be attached to an observation, regardless of the context or approach. They are unable and/or unwilling to conceptualise the use of observation outside of a performative context and see an umbilical link between their classroom ‘performance’ and attempts (because that’s all they are) to measure it. I can understand the ‘reward’ incentive of this for some, but I think such a mentality does little to foster a collegial and collaborative culture in the workplace. I’m not opposed to the notion of competition per se, but I firmly believe that there is a time and a place for it and this is not it.

Tweeters that stood out

Overall I thought the level of debate was fantastic and I enjoyed observing it from afar. A few tweeters deserve a brief mention, though, as their balanced and critically reflective positions shone through in their tweets: @hannahtyreman, @Shanie_Nash and @cazzwebbo. @GrahamRazey also deserves a mention for raising the all-important point about cultures of teaching and learning being the lynchpin of any successful model of observation.

Concluding thoughts

What we need is a fundamental reform of the way in which observation is used. Tinkering with the present system is pointless and only likely to have a minor impact. At the heart of such fundamental reform is the need to reconfigure the contexts and cultures of teaching and learning in which observation occurs, as Graham so rightly alluded to in his tweets. It’s not just about moving from one formulaic model to another but root and branch reform; a fundamental reconceptualisation of how we engage with observation and that inevitably requires removing the assessment straitjacket that currently constrains how people perceive it and what it’s used for.

Making more effective use of classroom observation – a differentiated approach as a tool for professional learning

A differentiated approach to observation

A differentiated approach to observation goes against the grain of most conventional models of observation insomuch as it involves the identification of a specific focus for the observation rather than attempting to carry out a holistic judgement of the teacher’s competence and performance via a standardised assessment tool. The focus of the differentiated observation is decided by the observee, but it can also be negotiated and/or discussed with the observer (depending on the underlying purpose and context) and can even involve the wider team/department. The underlying purpose and context are likely to shape the way in which the focus is decided. So, for example, in the case of the trainee teacher or NQT whose teaching is being assessed as part of an on-going programme, it may be appropriate for the observer to play a more substantive role in deciding the focus than they otherwise might do if they were observing experienced practitioners who have identified a specific area of practice that is of particular relevance to their CPD.

The rationale for a differentiated approach to observation is multi-faceted. Firstly, a differentiated approach is built on the premise that each teacher is likely to have differing strengths and weaknesses in their pedagogic skills and knowledge base, in much the same way that any group of learners is likely to differ. Just as the most effective teachers incorporate differentiation into their teaching, so too does it make sense to incorporate it into the way in which teachers’ practice is observed. Secondly, maximising teacher ownership of the observation process is seen as an important feature of facilitating professional learning that is likely to endure. All teachers have a responsibility for their CPD, and they are likely to value this more highly if they feel they are given some ownership of the decision-making process. Thirdly, the collaborative nature of professional learning means that it is not an individual act or the sole responsibility of the teacher but one that involves colleagues working together. So, for example, there may be times when the focus of differentiated observation is driven by wider objectives across a team or department, such as a departmental improvement plan. These objectives may stem from a range of sources (e.g. self-assessment, inspection reports, appraisal meetings, student evaluations) and may be divided into separate strands or themes (e.g. use of formative assessment, use of ICT, behaviour management) to address through observation. In this instance a team or department of teachers may choose particular themes to focus on.

Example protocol for differentiated observation

Notes for the observee

The purpose of this observation is formative. YOU decide the focus of the observation and what you would like your observer to concentrate on whilst observing. The rationale for this approach is to allow you to choose an aspect of your teaching that you are keen to explore in more depth. This could be something that you are keen to improve, want to know more about, or have some concerns about. For instance, you may be interested in studying how you give instructions, how you manage and deal with feedback, your use of a particular resource or form of technology, or your methods of assessing learners. The important thing is that you choose something that is meaningful and relevant to your development.

Notes for the observer

In keeping with the principles of a collaborative and supportive observation scheme, the most appropriate approach to recording data is one that avoids making judgemental comments about the observed session, as is often associated with observations that are evaluative in purpose. The purpose of this observation is NOT to evaluate the classroom performance of your colleague, but to stimulate meaningful reflection on their chosen aspect(s) of practice. In your role as the observer you are encouraged to record notes of what you actually observe; these notes should simply represent a factual record of what occurs during the observation and NOT a subjective interpretation of events (see Table 7.1 below). The notes are then used to help guide the follow-up discussion between you and your colleague as they reflect on the lesson and the particular aspect of their teaching that they have asked you to observe and keep notes on.

Table 7.1 Form for differentiated observations

Teacher:
Observer:
Date:
Title:
Level:
Number in Group:
Focus of Observation:

Field Notes:

This excerpt was taken from pp. 115-117 of Classroom Observation: A Guide to the Effective Observation of Teaching and Learning, by Matt O’Leary (London: Routledge). http://www.routledge.com/books/details/9780415525794/
A brave new world for Ofsted? A response to SMW’s Views on ‘Preferred Teaching Styles’

Below is a blog entry posted by @HelenMyers last Saturday in which she includes a letter reportedly sent by Sir Michael Wilshaw to Ofsted inspectors, a letter which she says she was very encouraged by. Helen has since tweeted that the authenticity of the letter has been confirmed by Ofsted. So, without further ado, let’s move on to the letter itself and my response. So as not to confuse my thoughts and words with SMW’s (easily done, I know!), I’ve written mine in navy blue at the end of the letter.

Saturday, 25 January 2014

Message from HMCI – Sir Michael Wilshaw

Over the last 18 months, I have emphasised in a number of speeches that Ofsted is not prescriptive about the way that teaching is delivered and does not recommend a suite of preferred teaching styles. Inspectors should only be concerned with the impact that teaching has on children’s learning, progress and outcomes. Our new guidance on the inspection of teaching in schools reinforces this. I quote:

‘Inspectors must not give the impression that Ofsted favours a particular teaching style. Moreover, they must not inspect or report in a way that is not stipulated in the framework, handbook or guidance. For example, they should not criticise teacher talk for being overlong or bemoan a lack of opportunity for different activities in lessons unless there is unequivocal evidence that this is slowing learning over time.

It is unrealistic, too, for inspectors to necessarily expect that all work in all lessons is always matched to the specific needs of each individual. Do not expect to see ‘independent learning’ in all lessons and do not make the assumption that this is always necessary or desirable. On occasions, too, pupils are rightly passive rather than active recipients of learning. Do not criticise ‘passivity’ as a matter of course and certainly not unless it is evidently stopping pupils from learning new knowledge or gaining skills and understanding.’

Nevertheless, I still see inspection reports, occasionally from HMI, which ignore this and earlier guidance and, irritatingly, give the impression that we are still telling teachers how to teach. Let me give you a few examples from recent reports I have just read:

• ‘Teaching will improve if more time is given to independent learning’
• ‘Insufficient time was given to collaborative learning’
• ‘Students are not given sufficient opportunity to support their classmates in their learning’
• ‘Pupils are not sufficiently engaged in their own learning’
• ‘Teaching requires improvement because pupils do not get enough opportunities to work alone or in groups’
• ‘Weak teaching is characterised by teachers talking too much.’

It is quite acceptable for a teacher to talk a lot as long as the children are attentive, interested, learning and making progress. If not, it is quite legitimate for inspectors to say that poor planning and lesson structure meant that children lost focus and learnt very little.

There is so much more that could be said about teaching without infringing the professional judgement of teachers to decide the most appropriate style of teaching to get the best out of their students. For example:

• Do lessons start promptly?
• Are children focused and attentive because the teaching is stimulating?
• Is the pace of the lesson good because the teacher is proactive and dynamic in the classroom?
• Is homework regularly given?
• Is literacy a key component of lessons across the curriculum?
• Do teachers use display and technology to support teaching?
• Are low expectations resulting in worksheets being used rather than textbooks?
• Are the most able children provided with work which stretches them and allows them to fulfil their true potential?
• Are children expected to take books home to do their homework and return them the following day?
• Does marking give a clear indication of what the children have to do to improve and are clear targets being set?
• Is the structure of the lesson promoting good learning and are children given sufficient time to practise and reinforce what is being taught?
• Do teachers have sufficient expertise to be able to impart to students the necessary knowledge and skills to succeed?
• Does the school have a robust professional development programme which is improving the quality of teaching by disseminating good practice across the school or college?
• Are teaching assistants supporting teaching effectively or are they simply ‘floating about’?

In summary, inspectors should report on the outcomes of teaching rather than its style. So please, please, please think carefully before criticising a lesson because it doesn’t conform to a particular view of how children should be taught.

In saying all this, I recognise that a report-writing orthodoxy has grown up over the years which owes as much to the formulaic approach of the national strategies as to any guidance that Ofsted has given inspectors. We must continue to break free of this and encourage inspectors to use their freedom to report in language that has meaning and relevance to the institutions we inspect and the parents and students who read our reports.

Only by doing this can we hope to use inspection to raise standards.

My Response

Let me start by saying that I agree wholeheartedly with SMW’s comment that neither Ofsted nor indeed any other inspectorate or government agency involved in evaluating educational provision should be seen to prescribe ‘a suite of preferred teaching styles’. As the Chief Inspector reminds us, ‘Inspectors should only be concerned with the impact that teaching has on children’s learning, progress and outcomes’. How we ascertain that ‘impact’ is a matter for another discussion, but for now, SMW should be commended on making such a forceful statement and providing such clear examples (e.g. teacher talk time, independent learning) to illustrate his position.
 
As many experienced teachers would no doubt acknowledge, different teaching methods and approaches, like fashion trends, come and go, and even come back into fashion again over the course of a career. Thus, to espouse the virtues of the current methodological flavour of the month is a precarious position to adopt and one that is inevitably bound to have a limited shelf life, not to mention the shifting sands of “evidence” on which ‘new pedagogies’ have traditionally been based. There is a distinct lack of convincing research evidence that points to a “right” way to teach. On the contrary, much of the research that has been undertaken across different disciplines and subject areas has invariably concluded that no one method or style of teaching is significantly more successful than others, and that it is the quality of exposure to the subject matter that matters most. So, in that sense, SMW is right to call for an end to some of the highly subjective judgements made by inspectors in their reports ‘because it doesn’t conform to a particular view of how children should be taught’.

 
I suspect this is a position that many working in schools, colleges and academies would welcome, as it sends a clear message that from now on inspectors will be scrutinised in the way in which they evaluate the quality of the teaching and learning experience. One of the significant repercussions of such a shift in policy is that, at least in theory, institutions will no longer be expected to conform to and comply with models of normalised practice. Inspectors will be expected to base their judgements on the quality of teaching and learning they witness on a case-by-case basis and, in so doing, consider a range of contextual factors that impact on the learning experience. If this ‘theory’ is to come to fruition in practice, then it could mark a significant turning point for the inspection process. However, in order for this to happen, there are two key elements that need to be considered and embedded into the process and, unfortunately, I think this is where there’s a missing link in SMW’s letter.
 
Firstly, if Ofsted inspectors are to make informed judgements as to the effectiveness of the particular teaching methods, approaches and styles that they observe in practice during inspections, then they must engage in substantive professional dialogue with practitioners, students and senior staff. In order to ensure robust data triangulation and to guard against their own personal biases influencing their interpretation of what they see, they will need to ascertain the rationale for the chosen methods and evaluate their effectiveness by asking the very people involved at the heart of the process. Inevitably this will make the inspection process more time consuming, but not to do so risks perpetuating allegations of bias and subjectivity in their judgements and falling back into the very trap that SMW seems so eager to step out of.
 
Secondly, and finally, many forms of assessment or evaluation are beset with issues surrounding validity and reliability. As Gipps once said, ‘assessment is not an exact science and we must stop presenting it as such’ (1994: 167). But that should not stop us from at least wanting to try to improve it, particularly when, as is the case in inspections, the stakes are so high for those being inspected. It cannot be assumed, for example, that there is a shared understanding among inspectors as to the meaning and interpretation of value-laden terms such as ‘good’ and ‘outstanding’. These terms, together with the assessment criteria that underpin them, need to be carefully defined when used, and attempts made to establish a collective understanding. But I see no discussion about this in Ofsted circles, and this is where there is a second missing link in SMW’s letter. There is no reference to assessment criteria and the need to open a debate regarding what constitutes ‘good’ or ‘effective’ teaching, or even how we might approach the thorny issue of standardisation. Without dealing with such issues, we inevitably come full circle to relying on the subjective interpretations of inspectors, with the result that we end up taking one step forward and two steps back.

The National Student Survey and the Growing Importance of Student Voice in Universities

Student voice has become a prized asset in higher education. While some argue that the pendulum has swung too far in terms of the power attributed to it, it is clearly here to stay and universities have to decide how best to deal with it. Ever since the advent of the National Student Survey in 2005, its stock has risen rapidly. Since the introduction of higher tuition fees and the reduction in funding from the Higher Education Funding Council for England, universities have come to take the results of the survey more and more seriously, convinced that its impact on application numbers becomes more important every year. So concerned are some institutions with maximising National Student Survey response rates that they have created specific posts to drive home its importance and prompt students to complete it.

But while both universities and student bodies have focused much attention on marketing and promoting the survey, less attention has been paid to how best to engage students with the process of evaluation and the evidence they should draw on to ensure that their responses are suitably informed and represent a balanced and accurate reflection of their university experiences.

The National Union of Students has acknowledged that many students neglect the survey, largely because they do not realise just how seriously their responses are taken, or indeed how important they are. Student leaders have therefore concentrated on raising awareness, aiming to maximise response rates.

This is important because results are only published for those courses in which the response rate is at least 50%. But how well-equipped are students to comment insightfully on aspects of pedagogic and subject knowledge expertise? How do we know, for example, that they are not basing their views on superficial and arbitrary criteria such as the lecturer’s personality rather than an informed understanding of pedagogy or subject knowledge?

Of course students should be given a platform to express their views about their learning experiences, but let us not fool ourselves into thinking that they will somehow be able to produce a fair, valid and reliable assessment of the competence and performance of their teachers at the end of it. My research into the use of lesson observation in the English further education sector (https://wlv.academia.edu/MattOLeary) highlighted how difficult this is even for the most highly experienced observers working with tried and tested assessment criteria over a sustained period of time.

We therefore need more transparent dialogue among university staff and students about the nature and purpose of the National Student Survey, why it is important to gather feedback on students’ experiences and what impact that data has on the experiences of future students, along with how best to approach answering the questions. I am not suggesting for one moment that university staff attempt to “coach” students in the content of their responses, but that they see the process as providing a stimulus for generating meaningful, reciprocal discussion about wider issues, such as the teaching and learning experience and student evaluation.

For example, the first two sections of the survey ask students about the quality of teaching, assessment and feedback. Surely these are aspects of practice that both parties need to discuss throughout the course? In the realm of teacher education it is generally accepted that it is sometimes helpful for teachers to share with students the rationale behind their decisions on choice of teaching methods or learning resources — in other words, why they are doing what they are doing and why they think it is the best way to go about tackling a particular topic.

To stimulate initial discussion, lecturers should give their students an insight into why they choose to employ particular teaching styles and what they consider to be the most effective ways of providing feedback. This should not be presented in a vacuum purely to prepare students for the National Student Survey, but should be embedded into live courses so that the discussion is put in context and makes sense. Equally, as part of such discussion, students should be given the opportunity to put forward their opinions, ask questions and seek clarification, with a view to them feeling a genuine sense of inclusion in the ongoing development of the curriculum.

This type of open, reciprocal dialogue between staff and students is crucial. Without it universities risk students basing their responses to the student survey not on an informed understanding of the complex decision-making processes that teaching staff invariably undergo when planning, delivering and assessing a programme of study, but on a hunch.

Dr Matt O’Leary

CRADLE, University of Wolverhampton

This piece was originally written for an online publication entitled ‘Research Fortnight: HE Policy’ in December 2013.