How to make data work in a network of schools

Monday 12th November 2018

Rich Davies

As Ark's Director of Insight, Rich Davies is responsible for bridging the gap between data and action. He leverages data analysis to advise senior leaders and works to ensure that all Ark leaders, teachers and students can make data-informed decisions. He has overseen the development of Ark's award-winning data analytics systems and has led reviews of assessment, curriculum, destinations and many other key areas.

Prior to joining Ark, Rich was a Project Leader at the Boston Consulting Group, working across the Non-Profit, Retail, Energy and other sectors. He has also previously worked for Aspire Public Schools – a US-based network of Charter Schools – and was an Education Pioneers Fellow. He holds an MBA and MA Education from Stanford University as well as an MEng from Oxford University.
 

The DfE recently published an important new report, Making data work, by the Teacher Workload Advisory Group, chaired by Professor Becky Allen. At Ark, we believe it’s possible to use data more effectively while also reducing workload. That’s why we were pleased to participate in the group and why we support the report.

The first principle listed in the report is the most important — that the purpose and use of all data must be clear. At Ark, we believe that improved outcomes are made possible by informed action; that informed action is made possible by insightful analysis; and that insightful analysis is made possible by accurate data. The fundamental purpose of our assessment data is therefore to inform the teaching and leadership actions that will improve student outcomes.

But our assessment data has not always fulfilled this purpose. A few years ago, Ark was at risk of being one of the multi-academy trusts (MATs) implicitly criticised by the report. We centrally collected teacher-assessed sub-levels six times a year and many of our schools locally tracked additional lesson grades and/or granular checklists. Our analysis tools did not give teachers all of the answers they needed, so they created their own spreadsheets. The result was a lot of work for (at best) limited value.

Around this time, Ark’s then Head of Assessment, Daisy Christodoulou, was researching how to increase the quality of our assessments. Her book, Making Good Progress?, describes her conclusions in detail, but the main principles we derived from this research were:

  1. Different assessments for different purposes
    Summative and formative assessments serve different needs and should be clearly delineated
  2. Common summative assessments
    The most accurate comparison is to ask students the exact same questions
  3. Cumulative tests for summative assessments
    Testing only what has recently been taught works for formative but not for summative
  4. Age-related grading bands for summative assessments
    Comparing to the national peer-group negates ‘need’ for fictional flightpaths
  5. Frequent, specific, non-graded formative assessments
    Ongoing checks for understanding are vital, but grading this is meaningless

When turning these into practice, we had to consider the third and fourth principles listed in “Making data work” — ensuring that the volume and frequency of data collection was proportionate and that the collection and analysis processes were as efficient as possible. We needed to minimise the time teachers spent on collection and analysis, freeing up their valuable time for informed action. To achieve this, we chose to leverage:

  • Consistency: Common definitions, common assessments, common calendar, common measures, common dashboards/tools
  • Scale: Large sample size (~2,000 students per year group), central administration, targeted network resource, network collaboration
  • Technology: “Enter once, use many times”, single data warehouse — integrating multiple sources, automated calculations/logic, interactive analysis tools
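To make the “enter once, use many times” idea concrete, here is a minimal sketch of how a single data warehouse might join separately entered sources into one record that every downstream view reads from. The field names, sources and data are invented for illustration and are not Ark’s actual schema.

```python
# Each source is entered once: the roster comes from the school
# information system, the marks from one assessment window.
roster = [
    {"student_id": 1, "name": "Ada", "year_group": 7},
    {"student_id": 2, "name": "Ben", "year_group": 7},
]
marks = [
    {"student_id": 1, "subject": "maths", "raw_mark": 42},
    {"student_id": 2, "subject": "maths", "raw_mark": 35},
]

def build_warehouse(roster, marks):
    """Join the two sources on student_id into one merged record
    per mark, so dashboards never re-collect the same data."""
    by_id = {r["student_id"]: r for r in roster}
    return [{**by_id[m["student_id"]], **m} for m in marks]

warehouse = build_warehouse(roster, marks)
```

Once merged, the same records can feed a teacher drill-down tool and a leadership one-pager without anyone re-entering data.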

In practice, this led to Ark:

  • Reducing the number of summative assessments from six to three (and now in many cases two) per year
  • Introducing nationally standardised tests for all key stage 1 & 2 reading and maths assessments
  • Developing curriculum-aligned network tests for core subjects at key stage 3 (annually sense-checked against a sample of nationally standardised English and maths test results)
  • Using common exam board materials for curriculum-aligned network tests at key stage 4 & 5
  • Bringing together all network-wide subject teachers once per assessment window to align on assessment marking/moderation and post-assessment action planning
  • Building network-wide systems that automatically calculate all raw marks and age-related grading bands (post-hoc), as well as performance vs. baselines and (teacher-facing) targets
  • Creating visually consistent dashboards and analysis tools, tailored for different audiences (e.g. interactive teacher tools that drill down to individual questions/students vs. higher-level one-pagers for management and governors)
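The post-hoc calculation of age-related grading bands can be sketched as follows: raw marks are collected first, then band boundaries are derived from the cohort’s actual distribution rather than fixed in advance. The band labels and percentile cut points below are illustrative assumptions, not Ark’s actual scheme.

```python
from statistics import quantiles

def assign_bands(raw_marks, band_labels=("below", "meeting", "exceeding")):
    """Assign each student a band by comparing their raw mark to
    percentile cut points computed post-hoc across the whole cohort.
    Illustrative: cut points here are the 25th and 75th percentiles."""
    q1, _, q3 = quantiles(raw_marks.values(), n=4)
    bands = {}
    for student, mark in raw_marks.items():
        if mark < q1:
            bands[student] = band_labels[0]
        elif mark < q3:
            bands[student] = band_labels[1]
        else:
            bands[student] = band_labels[2]
    return bands
```

The large sample size mentioned above (~2,000 students per year group) is what makes this kind of post-hoc banding statistically stable.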

This new approach to assessment is now in its third year at primary level and its second year at secondary level. The impact at primary level has been most notable, with the vast majority of school leaders feeling that the increase in data quality has contributed to improved student outcomes. Most also feel that it has enabled them to discontinue other time-consuming assessment activities.

The approach is less mature at secondary level and is subject to additional challenges, including curriculum alignment, marking consistency and the complexity of entry patterns and tiering. However, we believe that these challenges can be addressed through further collaboration within and beyond the network.

In the meantime, we must heed the second principle in “Making data work” — that the precision and limitations of our data are well understood. We believe that the model we have developed provides improved trade-offs between accuracy and efficiency, but we don’t pretend that it provides a perfect measurement of student learning, nor does it completely eliminate the workload associated with assessment data. This model is a work in progress and we will continue to listen to our teachers and school leaders as we develop it further, as well as drawing upon research and evidence from elsewhere — including this week’s DfE report.

To reiterate, our assessment data’s main purpose is to inform teaching and leadership actions — i.e. which students need what teacher support, which teachers need what leadership support and which leaders need what network support. As long as it continues to serve this purpose, we will keep doing everything we can as a network towards making data work for our schools, our teachers and, most importantly, our students.

Want to see more like this? Sign up to Teach – our monthly newsletter featuring practical tips, insights and ideas from Ark, our partners and our friends – at www.arkonline.org/newsletter.