This week saw the publication of Ofsted’s report on the responses to its consultation, Better Inspection for All. The report summarises the responses to its online survey, which ran from 9 October to 5 December 2014. It provides a descriptive overview of the key outcomes to emerge from Ofsted’s consultation on proposals for inspection reform, and it is likely to be used as the basis for preparing the new Common Inspection Framework from September 2015, though the extent to which the consultation has actually influenced and/or changed Ofsted’s inspection policy remains less clear.
Such policy reform is a significant event with repercussions for everyone involved, so it is only right that not just the teaching profession but the general public as a whole should have been consulted. That the online questionnaire generated only 4,390 responses is, however, somewhat surprising and disappointing, especially given that the proposed reforms represent the biggest overhaul of the inspection framework in years. That said, at least on this occasion Ofsted has decided to share the findings from its consultation publicly, which is more than can be said of another key area of policy that it has recently reformed, namely its approach to the grading of individual lessons observed during school inspections.
The last few years have witnessed a lot of discussion amongst policy makers and practitioners over the use of lesson observation as a method of assessing the quality of teaching and learning (e.g. O’Leary 2014). In Ofsted, much of this discussion has converged around how observation is used as a source of evidence during inspections and particularly the issue of grading individual lesson observations, which subsequently led to the inspectorate recently adopting an ungraded approach for its school inspections. A position paper written last summer by Ofsted’s then National Director for Schools, Michael Cladingbowl, set out the rationale for the change in policy:
Like many others, I have strong views about inspection and the role of inspector observation in it. I believe, for example, that inspectors must always visit classrooms and see teachers and children working. Classrooms, after all, are where the main business of a school is transacted. It is also important to remember that we can give a different grade for teaching than we do for overall achievement, particularly where a school is improving but test or examination results have not caught up. But none of this means that inspectors need to ascribe a numerical grade to the teaching they see in each classroom they visit. Nor does it mean aggregating individual teaching grades to arrive at an overall view of teaching. Far from it. Evaluating teaching in a school should include looking across a range of children’s work (Cladingbowl 2014: 2)
It is no exaggeration to say that Ofsted’s decision to remove grading from individual observations was met with widespread approval by school teachers and was generally perceived as a step in the right direction. In many ways this reaction was to be expected as graded observations had become one of the most polemical areas of practice for the profession in recent years (e.g. O’Leary & Brooks 2014). Yet the timing of Ofsted’s shift in position was interesting, as it arguably occurred at a point when the inspectorate was eager to improve its public image by engaging more with the teaching profession, particularly a community of influential edubloggers, in the wake of growing criticism of its credibility and legitimacy as a regulator of quality and standards in schools (e.g. Waldegrave & Simons 2014). However, the experience of the Further Education (FE) sector in England has been somewhat different to that of the schools’ sector, which has led some to allege that double standards are at play when it comes to Ofsted’s position on the grading of individual lessons in FE inspections.
According to an online article by Stephen Exley that appeared in the TES just before Christmas last year, Ofsted’s national director for learning and skills, Lorna Fitzjohn, remained undecided as to whether the FE sector was ‘mature enough’ to cope without graded observations. Despite the shift in policy away from graded observations in school inspections in August 2014, it seems that Ms Fitzjohn is still unconvinced as to whether or not FE should follow the same path. She therefore announced that ‘further pilots of ungraded observations would be carried out’ this year ‘in order to help Ofsted reach a final decision’. This position is reiterated in this week’s report, Better Inspection for All.
I have a certain degree of sympathy with the dilemma facing Ms Fitzjohn. For starters, she’s having to contend with one of the most controversial and emotive issues to affect the FE workforce over the last twenty years. Added to this are the ongoing tensions associated with the way in which this highly contentious mechanism is perceived and experienced by staff at all levels in FE. For instance, how do you go about dealing with what seems to be a general split between the opinions of senior managers and those of practitioners regarding the continued use of observation in the sector?
As I stated in an earlier TES article in September 2014, this was a dilemma that Ofsted needed to confront directly and transparently if its ongoing pilot of ungraded observations in FE and its subsequent evaluation were to retain any credibility at all, and if the inspectorate was not to be seen to prioritise the views of senior managers over those of the sector’s teaching staff. Alas, I’m sorry to say, all the evidence so far seems to point towards my prediction having become a reality. Ms Fitzjohn seems to be allowing the voices of senior managers to dictate the proceedings. By suggesting that the ‘jury is out’ and questioning whether the sector is ‘mature enough’ to cope without graded observations, she is, wittingly or not, acting as a mouthpiece for the vested interests of those influential college principals and directors who, by virtue of their position, are more likely to gain exposure to her and the opportunity to express their opinions than the average FE tutor.
But what can we read into this? Does this mean that Ms Fitzjohn is more inclined to listen to and act upon the views of senior managers in FE than those of teachers? Is it simply a case of her hearing a mixed bag of views and genuinely finding it difficult to identify a consensus amongst them? Or is there a more fundamental issue at the heart of this whole debate regarding the way in which Ofsted carries out evaluations and how this relates to its approach to policy making?
In May 2014 I met with Ofsted’s then national director for schools, Mike Cladingbowl. Mike came over to see me at the University of Wolverhampton to talk about my research on lesson observation and was keen to get my views on how Ofsted might review its use of observation as part of the inspection process. The one-to-one meeting we had lasted over two hours and during the course of it we talked about a range of topics, much of which centred on issues connected to assessment and specifically the area of teacher evaluation, a particular research interest of mine. Some of the things we discussed were still not public knowledge at the time. For example, Mike was in the process of preparing a press release announcing the pilot of ungraded observations in school inspections, which he shared with me and we discussed during the meeting.
In the weeks that followed the meeting, we had a number of discussions (by phone and email) regarding the inspection pilot. Mike sought my advice about how best to evaluate the pilot and at one point sent me a set of questions that he intended to include as part of the evaluation to canvass the opinions of all those involved in the pilot. Towards the beginning of July 2014, I emailed Mike with my feedback on the evaluation questions and suggestions as to what more needed to be included as part of an impact evaluation. The summer break kicked in and I didn’t hear anything more until Sir Michael Wilshaw’s announcement at the end of August that the removal of grades from observations during the pilot had ‘proved incredibly popular’ and as of September 2014, Ofsted would no longer be grading individual teachers’ lessons during inspections.
Despite my repeated requests to Mike to share the findings from the schools’ pilot, Ofsted has still not done so and in a tweet on 23rd September 2014, he declared that Ofsted had ‘no immediate plans to publish the formal evaluation of the pilot’. I’m still none the wiser as to why the findings of the evaluation have not been shared publicly. Surely they are deemed important enough to share with the teaching profession as a whole? Why would you bother to carry out an evaluation in the first place if you didn’t intend to share the findings with the very people it affects? Besides, as a matter of ethical responsibility, aren’t the participants who were involved in the schools’ pilot entitled to know WHY it ‘proved incredibly popular’ and whether it was popular with everyone involved or specific groups?
Until the findings from the schools’ pilot are shared openly, the specific rationale for why Ofsted decided to stop grading individual lessons in school inspections will remain unclear. Conspiracy theories will continue to abound as to whether the decision was due to the pressure of external criticism rather than the substantive data collected and analysed as part of the pilot. We will never know, for example, how the new ungraded approach compared to the previous graded approach across the different groups involved, or what some of the challenges and/or areas of (dis)agreement were found to be by inspectors in adopting an ungraded approach. The fact remains that until this detailed information is released, all we have to go on is Sir Michael Wilshaw’s soundbite from August 2014 that it ‘proved incredibly popular’, which hardly seems to embody the robust and rigorous approach to evaluating evidence that Ofsted prides itself on when conducting inspections. But then again, maybe this reveals a more accurate picture than we realise of how policy decisions are made by Ofsted? One thing is for sure though: with the pilot of ungraded observations ongoing in the FE sector, Lorna Fitzjohn still has the opportunity to dispel any allegations of a lack of transparency and/or double standards by openly sharing the findings of that pilot with FE and the wider public. To fail to do so will only serve to feed the rumour mill further and do little to persuade those who argue that when it comes to education policy, it’s one rule for schools and another for FE.
Cladingbowl, M. (2014) Why I want to try inspecting without grading teaching in each individual lesson. No. 140101, June 2014, Ofsted. Available online at: http://www.ofsted.gov.uk/resources/why-i-want-try-inspecting-without-grading-teaching-each-individual-lesson [Accessed 23 August 2014].
O’Leary, M. (2014) ‘Power, policy and performance: learning lessons about lesson observation from England’s Further Education colleges’. Forum, 56(2), 209–222.
O’Leary, M. & Brooks, V. (2014) ‘Raising the stakes: classroom observation in the further education sector’. Professional Development in Education, 40(4), 530–545.
Waldegrave, H. & Simons, J. (2014) Watching the watchmen: The future of school inspections in England. London: Policy Exchange.