Impact, Impact, Impact

Having initially focused on intent and then implementation, Ofsted are already moving increasingly on to impact as a key area of focus during inspection. This was seen in recent inspections in three of the schools I work with regularly. Inspectors were now looking at the data the schools showed them, as well as asking how that data was used by leaders. This is a shift away from earlier inspection practice under this framework. Their train of thought is moving very much towards, “So what is the impact of…?” when discussing aspects of the school’s work with leaders. Recently, the inspection handbook was amended slightly, with sub-headings for the Quality of Education grade descriptors now including the word ‘Impact’.

This highlights a potential future issue in just about every school I work with: namely, the accuracy of summative assessment information. As we move towards the end of another school year, teachers and subject leaders will be collecting and collating summative data for each class. This normally involves gathering the percentage of pupils working below, at and above age expectations in each subject. In reading and maths this is supplemented by data collection points at the end of each term, often including some form of test results alongside teacher assessment. For writing and all other subjects it is normally based on teacher assessment alone, although some schools use assessment tasks and tests built into their unit plans.
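
The arithmetic behind those percentages is simple, but it is worth being explicit about it when comparing classes. Below is a minimal sketch in Python, with invented band labels and judgements and no particular tracking system assumed; the calculation is just a count converted to percentages:

```python
from collections import Counter

# Hypothetical teacher-assessment judgements for one class in one subject.
# Band labels are illustrative; schools use their own terminology.
judgements = ["below", "at", "at", "above", "at", "below", "at", "above"]

def band_percentages(judgements):
    """Return the percentage of pupils in each band, rounded to 1 d.p."""
    counts = Counter(judgements)
    total = len(judgements)
    return {band: round(100 * n / total, 1) for band, n in counts.items()}

print(band_percentages(judgements))
# {'below': 25.0, 'at': 50.0, 'above': 25.0}
```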

However, having discussed this data with numerous subject leaders and senior leaders in a wide variety of schools, I have yet to find anyone who would put their mortgage on its absolute accuracy, nor can many leaders clearly state what they do with the information.

Some subject leaders have moved further on with this, though. Any gaps the data suggests for groups of pupils not achieving expected standards (based on the school’s own curriculum model) form the basis for three lines of enquiry. Is it due to:

  • The teaching
  • The curriculum, or
  • The pupils

The last one sounds harsh, but there could have been high mobility in that class, an increase in SEND need, long-term absences or various other reasons why some pupils did not achieve as well as expected, given their starting points that year.

If it was because of the curriculum, then what is being done to address this? Subject leaders obviously need to bring any issues and possible solutions to the attention of senior leaders.

If teaching was the issue, then what specifically was the problem? Was it subject knowledge, lack of time, a misunderstanding about the level of challenge…?

I recently worked with a small MAT looking at the data that core subject leaders were collating. The system they use presents each class as a coloured bar, coded for pupils working well below expectations, just below, at, or at greater depth (GD).

The CEO and I met with leaders from the schools over a couple of days, with each meeting including a discussion about pupils’ progress and attainment.

What we soon realised was that there needed to be greater consistency in the information different schools and different leaders were putting into the system. Some were too cautious, so pupils appeared to have gone backwards since the previous year. Some entered all pupils as working well below at the beginning of the year, not realising that termly data was supposed to be based on what had been taught so far, i.e. whether pupils were on track for the end of the year. Some had no pupils at GD, or removed any who had been, because they thought that judgement had to wait until the end of the key stage. And so the inconsistencies went on.

Below I have included the example we put together to try to explain what the ‘bar’ in the tracking should represent. It is not definitive or exemplary, but it might be useful when considering whatever system you have and how rigorously and consistently it is being used.

All assessments are based on curriculum content that has been taught so far. They are not based on end-of-year content until, obviously, the end of the final term in the summer.

The expectation would be that pupils who are Green and Blue from previous data points would stay Green and Blue respectively, although there could be some pupils who move from Green to Blue.

Likewise, there should be pupils who move from Orange to Green as the support, catch-up, adaptation or interventions have an impact over time.

Any large changes in the bands from one data drop to another, or from one year to another, should be cause for discussion and investigation by subject/phase leaders in the first instance.
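
That last check lends itself to a simple automated first pass. The sketch below is a hedged illustration in Python; the band names, the 15-point threshold and the input format are illustrative assumptions, not features of any particular tracking product:

```python
# Minimal sketch: flag large shifts in band percentages between two data drops.
# Band names, threshold and input format are illustrative assumptions.
BANDS = ["well below", "just below", "at", "GD"]
THRESHOLD = 15.0  # percentage-point shift that triggers a conversation

def flag_shifts(previous, current, threshold=THRESHOLD):
    """Return bands whose share of the class moved by more than
    `threshold` percentage points between two data drops."""
    flags = []
    for band in BANDS:
        shift = current.get(band, 0.0) - previous.get(band, 0.0)
        if abs(shift) > threshold:
            flags.append((band, shift))
    return flags

autumn = {"well below": 10.0, "just below": 30.0, "at": 50.0, "GD": 10.0}
spring = {"well below": 5.0, "just below": 10.0, "at": 55.0, "GD": 30.0}

for band, shift in flag_shifts(autumn, spring):
    print(f"'{band}' moved {shift:+.1f} points - worth a conversation")
```

Anything flagged is a prompt for a conversation, not a verdict; a big jump may simply reflect effective intervention.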

As a result of these discussions, subject leaders across the MAT will, going forward, carry out light-touch sampling at the end of the year, just to moderate teacher assessments.

At the end of term, senior leaders will choose three or four foundation subjects. The respective subject leaders will carry out a book scrutiny and pupil-voice interviews with three or four children in each year group to see whether the evidence supports teacher assessments. This will then give the school assurance about the validity and accuracy of the assessment information. The next step will be deciding what to do with that information, but that’s for another blog.
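
For schools that want the sampling to be genuinely light touch and free of selection bias, the pupils could simply be drawn at random. A minimal sketch, assuming nothing more than a list of names per year group (all names and numbers here are invented):

```python
import random

# Hypothetical class lists; in practice these would come from the school's MIS.
year_groups = {
    "Year 3": [f"Y3 pupil {i}" for i in range(1, 31)],
    "Year 4": [f"Y4 pupil {i}" for i in range(1, 31)],
    "Year 5": [f"Y5 pupil {i}" for i in range(1, 31)],
}

def moderation_sample(year_groups, per_year=4, seed=None):
    """Pick `per_year` pupils at random from each year group for
    book scrutiny and pupil-voice interviews."""
    rng = random.Random(seed)
    return {year: rng.sample(pupils, per_year)
            for year, pupils in year_groups.items()}

for year, sample in moderation_sample(year_groups, seed=42).items():
    print(year, sample)
```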

I’m sure other schools will have alternative and probably more effective methods to help validate the information they collect. If so, please share!

Continue the Conversation

For more information about the courses Tim is running, click here.

To book Tim or one of our consultants to work with your school, email us at consultancy@focus-education.co.uk.

You can find us on Twitter @focuseducation1 or get in touch with the Focus Education office on 01457 821 818.