The HE Green Paper: (Don’t) Read it and Weep – Part 1: The TEF & Social Mobility

The Disorder Of Things

Britain’s Conservative government recently released its much-awaited (or much-dreaded) ‘green paper’ on higher education (HE), a consultation document that sets out broad ideas for the sector’s future. Masochistically, I have read this document – so you don’t have to. This first post describes and evaluates the centrepiece of the green paper, the Teaching Excellence Framework (TEF), and measures on ‘social mobility’.


The HE Green Paper: (Don’t) Read it and Weep – Part 2: Completing the Market

The Disorder Of Things

This post continues where Part 1 left off.

The real goal of the green paper is to accelerate the formation of a fully functioning market in HE – as has already been discussed elsewhere by the brilliant Andrew McGettigan. The opening move was HEFCE’s QA consultation earlier this year which, as I explained on TDOT, was an attempt to dilute quality standards to make it easier for ‘alternative’ (i.e. private) providers to enter the market. Whereas HEFCE hid behind technocratic jargon, however, the green paper openly announces the government’s ‘clear priority’ to ‘widen the range’ of HE providers (p.50). ‘Our aspiration is to remove all unnecessary barriers to entry’ and create a ‘level playing field’ (p.42).


A Small Victory in a Bigger Battle – The End of Graded Observations in FE Inspections

On Wednesday afternoon I received the following email from UCU’s policy officer Angela Nartey:

From: Angela Nartey [mailto:ANartey@UCU.ORG.UK]

Sent: 20 May 2015 14:51

To: O’Leary, Matthew (Dr)

Subject: Ofsted graded lesson observations

Dear Matt,

I wanted to let you know that this morning at an Ofsted Standing Group of Teaching Associations we received some great news. The meeting included an overview of the new inspection framework.

In advance of the meeting we submitted the following questions:

· Will Ofsted make a judgement on the quality of teaching, learning and assessment using graded lesson observations?
· Will Ofsted use graded observations of lessons in any part of the inspection process?

The verbal response we received in relation to both questions was ‘no’. We asked again, adding ‘in the further education and skills sector’, and again we were given a definitive ‘no’ response.

There we have it! Although only verbal at this stage, this is an excellent step forward, and we can only thank you once again for your work which has been the only academic interrogation of the practice. The inspection handbook and instruments will be published in mid-June and so we hope to see written confirmation at that point.

Best wishes,

Angela Nartey
Policy officer
Carlow Street
London NW1 7LH
020 7756 2595
07789 553 172

As I read the email out to a group of colleagues with whom I had been in an all-day meeting, it was met simultaneously with a chorus of cheers and a collective sigh of relief. I got home that night and showed my wife the email. After congratulating me, she immediately said, ‘So what are you going to do now you’ve won this argument?’ She was, of course, referring to the fact that for the last decade much of my work and research has centred on exposing the shortcomings of reductionist practices like graded observations and highlighting the counterproductive effects they have on the professional lives of teachers. The research that I carried out for UCU into the use and impact of observation on the FE workforce is the largest study ever done on observation in the UK and has played an important role in influencing views and informing the wider debate. So given that I have dedicated so much of my time to writing and talking about this topic, it was a perfectly good question to ask. What now?

Make no mistake, the removal of graded lesson observations from the FE inspection process is a welcome and important step in the right direction. Lorna Fitzjohn and her colleagues at Ofsted are to be commended for listening and responding to the views and experiences of practitioners and the compelling evidence. Yet, without wanting to sound like a party pooper, this is only the beginning: a small victory in a much bigger battle that lies ahead.

I have argued for some time now that simply removing grades from the observation process is not a panacea in itself. Until wider issues relating to judgement and how we attempt to capture the complexities of teaching and learning in the context of teacher evaluation are confronted, then the removal of grades runs the risk of being little more than a superficial change.

When I met with Mike Cladingbowl (Ofsted’s previous National Director for Schools) last May and he told me in advance of the public announcement that Ofsted were planning to remove graded observations from school inspections, I asked him how he intended to prepare inspectors for the change in policy and what he thought the wider repercussions would be for Ofsted’s assessment framework. What I was getting at with these questions was: 1) a change in procedure does not equate to a change in practice and/or mindset. In other words, simply asking observers not to grade lessons any more does not deal with the wider issue of how they conceptualise their role; 2) the decision to remove individual lesson grades from the inspection process has more far-reaching consequences for the way in which Ofsted seeks to assess the quality of educational provision. If, as Mike Cladingbowl argued in his position paper last June, attaching a grade to a one-off, episodic event like a lesson observation is no longer deemed fit for purpose, this inevitably raises the question: why stop at observations? Why not extend the removal of grades to the inspection process as a whole? There is a strong case for moving towards an assessment framework that simply operates on a ‘good enough/not good enough’ basis.

Despite Ofsted’s change in policy, there is a concern amongst some in the profession that this won’t necessarily lead to a change in the mindset and working practices of some senior managers/leaders in certain colleges and schools. Old habits die hard, and the reliance of some on the annual grading of teachers has become ingrained in the performance management systems of many institutions. From a management perspective, there is undoubtedly an allure to the quick and easy nature of attaching a number to a teacher’s performance, which may prove a stubborn practice to change. But the real challenge that lies ahead concerns the way in which the profession conceptualises the use of a mechanism like observation. Grades or no grades, the next stage of the debate needs to confront the long-standing issue of how the profession breaks free from the assessment straitjacket that has conceptually constrained the way in which it has engaged with observation for decades.


Double standards? An insight into Ofsted’s approach to policy making: the grading of individual lessons in England’s colleges and schools as a case in point

This week saw the publication of Ofsted’s report on the responses to its consultation, ‘Better Inspection for All’. The report summarises the responses to its online survey, which ran from 9 October to 5 December 2014. It provides a descriptive overview of the key outcomes to emerge from Ofsted’s consultation on proposals for inspection reform and it is likely to be used as the basis for preparing its new Common Inspection Framework from September 2015, though the extent to which the consultation has actually influenced and/or changed Ofsted’s inspection policy remains less clear.

Such policy reform is a significant event that has repercussions for everyone involved, thus it is only right that not just the teaching profession but the general public as a whole should have been consulted. That the online questionnaire generated only 4,390 responses is somewhat surprising and disappointing, however, especially given that the proposed reforms represent the biggest overhaul of the inspection framework in years. That said, at least on this occasion Ofsted has decided to share the findings from its consultation publicly, which is more than can be said of another key area of policy that it has recently reformed, namely its approach to the grading of individual lessons observed during school inspections.

The last few years have witnessed a lot of discussion amongst policy makers and practitioners over the use of lesson observation as a method of assessing the quality of teaching and learning (e.g. O’Leary 2014). In Ofsted, much of this discussion has converged around how observation is used as a source of evidence during inspections, and particularly the issue of grading individual lesson observations, which led to the inspectorate recently adopting an ungraded approach for its school inspections. A position paper written last summer by Ofsted’s then National Director for Schools, Michael Cladingbowl, set out the rationale for the change in policy:

Like many others, I have strong views about inspection and the role of inspector observation in it. I believe, for example, that inspectors must always visit classrooms and see teachers and children working. Classrooms, after all, are where the main business of a school is transacted. It is also important to remember that we can give a different grade for teaching than we do for overall achievement, particularly where a school is improving but test or examination results have not caught up. But none of this means that inspectors need to ascribe a numerical grade to the teaching they see in each classroom they visit. Nor does it mean aggregating individual teaching grades to arrive at an overall view of teaching. Far from it. Evaluating teaching in a school should include looking across a range of children’s work (Cladingbowl 2014: 2)

It is no exaggeration to say that Ofsted’s decision to remove grading from individual observations was met with widespread approval by school teachers and was generally perceived as a step in the right direction. In many ways this reaction was to be expected as graded observations had become one of the most polemical areas of practice for the profession in recent years (e.g. O’Leary & Brooks 2014). Yet the timing of Ofsted’s shift in position was interesting, as it arguably occurred at a point when the inspectorate was eager to improve its public image by engaging more with the teaching profession, particularly a community of influential edubloggers, in the wake of growing criticism of its credibility and legitimacy as a regulator of quality and standards in schools (e.g. Waldegrave & Simons 2014). However, the experience of the Further Education (FE) sector in England has been somewhat different to that of the schools’ sector, which has led some to allege that double standards are at play when it comes to Ofsted’s position on the grading of individual lessons in FE inspections.

According to an online article by Stephen Exley that appeared in the TES just before Christmas last year, Ofsted’s national director for learning and skills, Lorna Fitzjohn, remained undecided as to whether the FE sector was ‘mature enough’ to cope without graded observations. Despite Ofsted’s shift in policy away from graded observations in school inspections in August 2014, it seems that Ms Fitzjohn is still unconvinced as to whether or not FE should follow the same path. She therefore announced that ‘further pilots of ungraded observations would be carried out’ this year ‘in order to help Ofsted reach a final decision’. This week’s ‘Better Inspection for All’ report reiterates that position.

I have a certain degree of sympathy with the dilemma facing Ms Fitzjohn. For starters, she’s having to contend with one of the most controversial and emotive issues to affect the FE workforce over the last twenty years. Added to this are the ongoing tensions associated with the way in which this highly contentious mechanism is perceived and experienced by staff at all levels in FE. For instance, how do you go about dealing with what seems to be a general split of opinion between the views of senior managers and those of practitioners regarding the continued use of observation in the sector?

As I stated in an earlier TES article in September 2014, this was a dilemma that Ofsted needed to confront directly and transparently if its ongoing pilot of ungraded observations in FE, and its subsequent evaluation, was to retain any credibility at all, and if the inspectorate was not to be seen to prioritise the views of senior managers over those of the sector’s teaching staff. Alas, the evidence so far all seems to point towards my prediction having become a reality. Ms Fitzjohn seems to be allowing the voices of senior managers to dictate the proceedings. By suggesting that the ‘jury is out’ and questioning whether the sector is ‘mature enough’ to cope without graded observations, she is, unwittingly or not, acting as a mouthpiece for the vested interests of those influential college principals and directors who, by virtue of their position, are more likely to gain exposure to her, and the opportunity to express their opinions, than the average FE tutor.

But what can we read into this? Does this mean that Ms Fitzjohn is more inclined to listen to and act upon the views of senior managers in FE than those of teachers? Is it simply that she is hearing a mixed bag of views and genuinely finding it difficult to identify a consensus among them? Or is there a deeper issue at the heart of this whole debate regarding the way in which Ofsted goes about carrying out evaluations and how this relates to its approach to policy making?

In May 2014 I met with Ofsted’s then national director for schools, Mike Cladingbowl. Mike came over to see me at the University of Wolverhampton to talk about my research on lesson observation and was keen to get my views on how Ofsted might review its use of observation as part of the inspection process. The one-to-one meeting we had lasted over two hours and during the course of it we talked about a range of topics, much of which centred on issues connected to assessment and specifically the area of teacher evaluation, a particular research interest of mine. Some of the things we discussed were still not public knowledge at the time. For example, Mike was in the process of preparing a press release announcing the pilot of ungraded observations in school inspections, which we discussed and he shared with me during the meeting.

In the weeks that followed the meeting, we had a number of discussions (by phone and email) regarding the inspection pilot. Mike sought my advice about how best to evaluate the pilot and at one point sent me a set of questions that he intended to include as part of the evaluation to canvass the opinions of all those involved in the pilot. Towards the beginning of July 2014, I emailed Mike with my feedback on the evaluation questions and suggestions as to what more needed to be included as part of an impact evaluation. The summer break kicked in and I didn’t hear anything more until Sir Michael Wilshaw’s announcement at the end of August that the removal of grades from observations during the pilot had ‘proved incredibly popular’ and as of September 2014, Ofsted would no longer be grading individual teachers’ lessons during inspections.

Despite my repeated requests to Mike to share the findings from the schools’ pilot, Ofsted has still not done so and in a tweet on 23rd September 2014, he declared that Ofsted had ‘no immediate plans to publish the formal evaluation of the pilot’. I’m still none the wiser as to why the findings of the evaluation have not been shared publicly. Surely they are deemed important enough to share with the teaching profession as a whole? Why would you bother to carry out an evaluation in the first place if you didn’t intend to share the findings with the very people it affects? Besides, as a matter of ethical responsibility, aren’t the participants who were involved in the schools’ pilot entitled to know WHY it ‘proved incredibly popular’ and whether it was popular with everyone involved or specific groups?

Until the findings from the schools’ pilot are shared openly, the specific rationale for why Ofsted decided to stop grading individual lessons in school inspections will remain unclear. Conspiracy theories will continue to abound as to whether it was due to the pressure of external criticism rather than the substantive data collected and analysed as part of the pilot. We will never know, for example, how the new ungraded approach compared to the previous graded approach across the different groups involved. Nor will we know what some of the challenges and/or areas of (dis)agreement were found to be for inspectors in adopting an ungraded approach. The fact remains that until this detailed information is released, all we have to go on is Sir Michael Wilshaw’s soundbite from August 2014 that it ‘proved incredibly popular’, which hardly seems to embody the robust and rigorous approach to evaluating evidence that Ofsted prides itself on when conducting inspections. But then again, maybe this reveals a more accurate picture than we realise of how policy decisions are made by Ofsted?

One thing is for sure though: with the pilot of ungraded observations ongoing in the FE sector, Lorna Fitzjohn still has the opportunity to dispel any allegations of a lack of transparency and/or double standards by openly sharing the findings of that pilot with FE and the wider public. To fail to do so will only serve to feed the rumour mill further and do little to persuade those who argue that when it comes to education policy, it’s one rule for schools and another for FE.


Cladingbowl, M. (2014) Why I want to try inspecting without grading teaching in each individual lesson, June 2014, No. 140101, Ofsted. Available online at: Accessed 23/8/2014.

O’Leary, M. (2014) ‘Power, policy and performance: learning lessons about lesson observation from England’s Further Education colleges’. Forum, 56(2), 209-222.

O’Leary, M. & Brooks, V. (2014) ‘Raising the stakes: classroom observation in the further education sector’. Professional Development in Education, 40(4), pp. 530-545.

Waldegrave, H. & Simons, J. (2014) Watching the watchmen: The future of school inspections in England. London: Policy Exchange.

How research-informed practice stood up to the pseudo-science of inspection: defending an ungraded approach to the evaluation of teachers


This post tells the story of a university partnership of teacher educators’ experience of an Ofsted inspection of its Initial Teacher Education (ITE) provision in March 2013. Building on a position paper that was written at the time of the inspection, this post outlines how we defended our position on not grading our student teachers and shares some of the underpinning principles of our philosophy. Given the recent shift in Ofsted policy to remove the grading of individual lesson observations from school inspections, this post is very timely as it discusses some of the challenges faced by a department that has not only never used the Ofsted 4-point scale to assess its student teachers during observations, but has also resisted the use of numerical grading scales across its programmes as a whole.

Few areas of practice have caused as much debate and unrest amongst teachers in recent years as that of lesson observation, particularly graded observations and the way in which they have been used as summative assessments to rank teachers’ classroom performance against the Ofsted 4-point scale. Recent research in the field has described how graded lesson observations have become normalised, highlighting Ofsted’s hegemonic influence and control over education policy and practice (e.g. O’Leary 2013). At the same time, they have been critiqued for embodying a pseudo-scientific approach to measuring performance, as well as giving rise to a range of counterproductive consequences that ultimately militate against professional learning and teacher improvement (e.g. O’Leary and Gewessler 2014; UCU 2013). 


Unlike the vast majority of other university ITE providers in England, the post-compulsory education (PCE) department at the University of Wolverhampton has never used graded observations on its programmes. The underpinning rationale for adopting an ungraded approach to the assessment of our student teachers did not emerge arbitrarily but was developed collaboratively over a sustained period of time. This approach was underpinned by a core set of principles and shared understandings about the purpose and value of our ITE programmes, as well as being informed by empirical research into the use and impact of lesson observations in the Further Education (FE) sector and on-going discussions with our partners and student teachers. Given that our approach went against the grain of normalised models of observation, we knew that our programmes would be subject to heightened scrutiny and interrogation by Ofsted when it was announced that all the university’s ITE programmes would be inspected in March 2013.

The tone was set soon after the arrival of the inspection team on the first day when the lead inspector asked the PCE management team to rate the quality of its provision against Ofsted’s 4-point scale. This was despite the fact that the team had chosen not to apply this grading scale in its self-evaluation document (SED), which all providers were required to complete and submit at the end of each year and to which Ofsted had access before the inspection. But why did the partnership adopt this stance? It is important to emphasise that our resistance to embracing Ofsted’s ‘dominant discourses’ (Foucault 1980) and normalised practice was not based on any wilful refusal to comply with Ofsted’s authority as the regulator of quality for ITE provision, but driven by more fundamental concerns regarding the legitimacy and reliability of its assessment framework and its impact on teachers in training. Needless to say, this epistemological positioning did not sit easily with the inspection team, as it presented them with certain challenges that they were unaccustomed to, some of which are discussed further below.

Evaluating performance

It was a strongly held view across our partnership that a metrics-based approach was neither the most appropriate nor the most effective means of fostering our student teachers’ development, nor indeed of measuring the level of performance required to meet the ‘pass’ threshold criteria of our programmes. Our partnership staff comprised largely experienced teacher educators who were comfortable with, and confident in, making judgements about the progress and performance of their students against the pass/fail assessment framework used on the programmes. In some ways this was akin to the notion of ‘fitness to practise’ used by other professions such as healthcare. This ‘fitness to practise’ was initially mapped against the professional standards in use at the time in the FE sector (LLUK 2006) and more recently against the Education and Training Foundation’s (ETF) revised standards (ETF 2014). As the PCE partnership had been actively engaged with these standards through year-on-year collaborative work to revise and refine their application to its ITE programmes, there was a shared ownership of the assessment by those working on the programme. In contrast, we were not convinced that the Ofsted 4-point scale could be applied with the same rigour, reliability and appropriateness to assess students’ attainment as our existing assessment framework and criteria, whereby students were either judged to have satisfied the criteria or not. In other words, whilst all those teacher educators working on the programmes were clear as to what constituted a pass/fail and were confident in applying these criteria accurately and consistently, the same could not be said about the interpretation and application of Ofsted’s 4-point scale.

In their study into the grading of student teachers on teaching practice placements in Scotland, Cope et al (2003: 682) found that the success of such practice depended on ‘a clearly reliable and valid system of assessment of the practice of teaching’ and concluded that ‘the evidence available suggests that this does not currently exist’. This is not a phenomenon specific to observation as a method of assessment, but reflects widely held beliefs among key researchers in the field of assessment such as Gipps (1994: 167), who argued back in the 1990s that ‘assessment is not an exact science and we must stop presenting it as such.’ The danger, of course, is that the inherent limitations of practices such as numerically grading performance are often overlooked and the resulting judgements are given far more weight and authority than they can realistically claim to have or indeed deserve.

Prioritising teacher development

Our ITE programmes are built on a developmental philosophy in which the student teacher’s growth is prioritised. Staff working on the programmes were committed to helping their students to develop their pedagogic skills and subject knowledge base. It was therefore their belief that judging students against a performative, numerical grading scale of 1-4 would compromise that commitment and jeopardise the supportive focus of the teacher educator’s and mentor’s relationship with their students. The partnership also benefitted from being involved in and discussing the latest research into lesson observation, as one of the university members of staff specialised in this particular area.

As mentioned above, recent research into the use of graded observation in FE reveals how it has become normalised as a performative tool of managerialist systems fixated on attempting to measure teacher performance rather than actually improving it (e.g. O’Leary 2012). The teacher educators and mentors in the PCE partnership saw their primary responsibility as that of helping to nurture their student teachers as effective practitioners rather than having to rank their performance according to a series of judgemental labels (e.g. ‘outstanding’, ‘inadequate’) that were principally designed to satisfy the needs of external agencies such as Ofsted within the marketised FE landscape and carried with them absolutist judgements inappropriate to their isolated, episodic nature. This emphasis on measuring teacher performance was also seen as responsible for what Ball (2003) refers to as ‘inauthenticity’ in teacher behaviour and classroom performance during assessed observations. This is typically manifested in the delivery of the rehearsed or showcase lesson, as the high stakes nature of such observations results in a reluctance to take risks for fear of being given a low grade. Teachers are thus aware of the need to ‘play the game’, which can result in them following a collective template of good practice during observation. Yet being prepared to experiment with new ways of doing things in the classroom and taking risks in one’s teaching is widely acknowledged as an important constituent of the development of both the novice and the experienced teacher.

Furthermore, findings from two separate studies on observation in FE (e.g. O’Leary 2011; UCU 2013) have revealed some of the distorting and counterproductive consequences of grading on in-service teachers’ identity and professionalism. Staff in the PCE partnership, many of whom are FE teachers themselves, were determined to protect their student teachers from such consequences during their time on the programme. This did not mean, however, that they avoided discussing the practice of grading teacher performance with them or confronting some of the challenging themes and issues associated with it. On the contrary, this was a topic that was addressed explicitly through professional development modules and wider discussions about assessment and professionalism as part of the on-going critically reflective dialogues that occurred between teacher educators, mentors and students throughout the programme.

Developing critically reflective teachers

The university’s PCE ITE programmes are underpinned by the notion of critical reflection. Brookfield (1995) argues that what makes critically reflective teaching ‘critical’ is an understanding of the concept of power in a wider socio-educational context and recognition of the hegemonic assumptions that influence and shape a teacher’s practices. The PCE partnership viewed the use of graded observations as an example of one such hegemonic assumption. Thus the perceived or intended outcomes of graded observations (i.e. improving the quality of teaching and learning, promoting a culture of continuous improvement amongst staff etc.) were not always the actual outcomes as experienced by those involved in the observation process. And then, of course, there was the thorny issue of measurement.

The ongoing fixation with attempting to measure teacher performance is symptomatic of a wider neoliberal obsession with trying to quantify and measure all forms of human activity, epitomised in the oft-quoted saying that ‘you can’t manage what you can’t measure’, a maxim that has its roots in a marketised approach to educational improvement and one which seems to shape Ofsted’s inspection framework. During the inspection, it became apparent that the PCE partnership’s ungraded approach was problematic for Ofsted. When I asked the lead inspector directly at a feedback meeting whether the use of a grading scale was considered an essential feature of being able to measure teachers’ progress and attainment, he categorically stated that this was NOT the case, nor did Ofsted prescribe any such policy. Yet he later contradicted this in his final report by maintaining that, as the partnership did not grade, it was ‘difficult to measure student progress from year to year or the value that the training added in each cohort’. In spite of the presentation of interwoven sources of qualitative evidence (tutor/mentor/peer evaluations, self-evaluations, integrated action/development plans, critically reflective accounts etc.) illustrating these student teachers’ journeys throughout their programmes of study, the inspection team was reluctant, or even unable, to conceptualise the notion of improvement unless the outcome was expressed in the form of a number. And why is that? Because, of course, reading such qualitative accounts is more time-consuming and ‘messier’ than the reductive simplicity of allocating a number to something, however spurious that number might be. This reveals the extent to which ‘managerialist positivism’ (Smith and O’Leary 2013) has become an orthodoxy and Ofsted its agent of enforcement.

Despite that, the partnership team defended its practice and emphasised how the broad range of evidence captured in the combination of formative and summative assessments provided a rich tapestry of these student teachers’ progress and attainment throughout the programme, and ultimately one that was more meaningful than the allocation of a reductive number.


Ball, S. (2003) The teacher’s soul and the terrors of performativity, Journal of Education Policy, 18(2), pp. 215-228.

Brookfield, S. D. (1995) Becoming a Critically Reflective Teacher. San Francisco, CA: Jossey-Bass.  

Cope, P., Bruce, A., McNally, J. and Wilson, G. (2003) Grading the practice of teaching: an unholy union of incompatibles. Assessment & Evaluation in Higher Education, 28(6), pp. 673-684.

Education and Training Foundation (ETF) (2014) Professional Standards for Teachers and Trainers in Education and Training – England. Available at:

Foucault, M. (1980) Power/Knowledge – Selected Interviews and Other Writings 1972-1977. Brighton: The Harvester Press.

Gipps, C. (1994) Beyond Testing: Towards a Theory of Educational Assessment. London: Falmer Press.

Lifelong Learning UK (LLUK) (2006) New overarching professional standards for teachers, tutors and trainers in the lifelong learning sector. London: LLUK

O’Leary, M. (2011) The Role of Lesson Observation in Shaping Professional Identity, Learning and Development in Further Education Colleges in the West Midlands, unpublished PhD Thesis, University of Warwick, September 2011.

O’Leary, M. (2012) Exploring the role of lesson observation in the English education system: a review of methods, models and meanings. Professional Development in Education, 38(5), pp. 791-810.

O’Leary, M. (2013) Surveillance, performativity and normalised practice: the use and impact of graded lesson observations in Further Education Colleges. Journal of Further and Higher Education, 37(5), pp. 694-714.

O’Leary, M. & Gewessler, A. (2014) ‘Changing the culture: beyond graded lesson observations’. Adults Learning, Spring 2014, 25, pp. 38-41.

Smith, R. & O’Leary, M. (2013) New Public Management in an age of austerity: knowledge and experience in further education, Journal of Educational Administration and History, 45(3), pp. 244-266.

University and College Union (UCU) (2013) Developing a National Framework for the Effective Use of Lesson Observation in Further Education. Project report, November 2013. Available at: