Exploring error identification to improve data and evidence on children in care

Full Application: Not funded at this stage

Our discovery highlighted that keeping Looked After Children (LAC) data accurate is time-consuming and difficult. Consequently, leadership often don’t have the reliable insights they need for key decisions. (See p.22 for full user needs.) In alpha, we’ll explore different solutions to this problem and test our biggest assumptions and risks, following agile methodology and the GDS Service Standard.

The discovery partners (GMCA, Manchester, Stockport and Wigan) will lead the work, with the DfE. We’ll carry out user testing and assess technical feasibility with councils of different sizes and structures (mets/unitaries/counties/trusts) to ensure we solve a common problem – including partners West Berkshire, Milton Keynes, Isle of Wight (IOW), Buckinghamshire, Bracknell Forest, East Sussex and others, e.g. Slough.

Our discovery concluded that an error-identification tool could meet user needs by enabling year-round error cleaning – currently not possible (p.38). We’ll build and test a prototype that checks data against the DfE’s LAC data-validation code and local validation rules. We’ll explore ways the tool can improve error cleaning for analysts, such as automating cleaning and sending notifications to social workers.
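To illustrate the kind of check the prototype would run, below is a minimal sketch in Python (a likely prototyping language, though not yet decided). The rule shown (an episode cannot cease before it starts) and the field names are illustrative assumptions for this application, not the DfE’s actual validation code or data schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class Episode:
    """One row from a council's LAC episodes extract (illustrative fields only)."""
    child_id: str
    date_started: date
    date_ceased: Optional[date]  # None while the episode is still open


def check_episode_dates(episodes: List[Episode]) -> List[str]:
    """Illustrative validation rule: an episode cannot cease before it starts.

    Returns one human-readable error per failing row, so analysts could fix
    errors year-round rather than waiting for the annual return window.
    """
    errors = []
    for ep in episodes:
        if ep.date_ceased is not None and ep.date_ceased < ep.date_started:
            errors.append(
                f"Child {ep.child_id}: episode ceased on {ep.date_ceased} "
                f"but started on {ep.date_started}"
            )
    return errors


if __name__ == "__main__":
    sample = [
        Episode("C001", date(2019, 4, 1), date(2019, 3, 1)),  # fails the rule
        Episode("C002", date(2019, 5, 10), None),              # still open, passes
    ]
    for error in check_episode_dates(sample):
        print(error)
```

In alpha we’d test whether running many such rules (the DfE’s published validations plus local ones) against a council’s extract gives analysts an error list they can act on year-round.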

To plan for alpha, we held a workshop to identify key hypotheses underlying these ideas and develop a testing approach:

  • Better data will lead to better decisions

In discovery, leadership said they need accurate data to improve decisions on LAC (p.21). In alpha, we’ll use A/B testing with cleaned and uncleaned data to quantify the impact of data quality on decisions. 

  • Analysts will use the tool and clean errors identified

Analysts need to identify errors in LAC data year-round (p.41). In alpha, we’ll test how to automate cleaning, track usage and error counts over time to see whether data quality improves, and conduct user research into analysts’ experience of using the prototype to assess the tool’s impact.

  • Automatic notifications will help social workers fix errors 

Analysts spend time chasing social workers and business support to fix errors (p.51). In alpha, we’ll use Wizard of Oz prototype testing and observation with social workers to test whether automatic notifications improve data quality at the input stage.

  • The tool will be feasible and scalable across all councils

In alpha, we’ll use semi-structured interviews, data analysis and moderated usability testing across the ten partner councils and existing networks (e.g. Regional Information Groups (RIGs), the South-East Sector-Led Improvement Partnership (SESLIP) and the National Performance and Information Management Group (NPIMG)) to test scalability, and assess applicability to other statutory returns (134 in total, p.11).

Our discovery highlighted two key linked unmet user needs:

 

  1. Analysts need the ability to identify and fix errors year-round, to prevent errors from building up.

    Impact: They can only fix errors in an intensive, time-consuming 3-month period, leaving little time for analysis.

  2. Leadership need accurate, up-to-date data so they can rely on evidence when making decisions.

    Impact: Leadership find “data quality makes the analysis unreliable”, meaning “evidence on how well things are working is limited.”

There is a gap in the market to address this problem. These needs exist in all 14 councils we’ve spoken to directly – regardless of case management system – and there is no common error-checking tool. A common solution is feasible: every council submits the same dataset to the DfE, so error checking is applicable to all.

 

There are also significant potential benefits from the other 134 annual statutory returns required of councils. In discovery, we investigated the Children in Need and School Censuses, which require time-intensive error-checking. 

 

Our core user group is Children’s Services analysts, but social workers, leadership and LAC are also beneficiaries. Analysts clean and prepare data on LAC for leadership and the DfE. Our aim is to make cleaning more effective for them (p.19-20). This will also benefit:

 

  • Social workers, who also clean data (p.18).

 

  • Leadership, who use this data to make operational, strategic and commissioning decisions about LAC services (p.21).

 

 

  • LAC who need the best support possible and an accurate record of their childhood (p.15).

 

Our hypotheses evolved in discovery. Before discovery, we knew leadership don’t have timely access to all the data and evidence needed to ensure LAC get the best support. Our original hypothesis was that a better common data model could provide leadership the evidence they need.

 

Our discovery confirmed the evidence gap (pp.94, 103), but revealed a more complex situation with several distinct problems. Of these, improving data quality is the most pressing: it will drive immediate benefits and start to fix the underlying plumbing.

 

We considered whether the DfE opening their validation portal year-round would solve the problem. However, this would only solve part of it: it would not make cleaning the data any faster or easier.

We’ll improve the analyst user journey by enabling year-round error identification and more effective, efficient error cleaning. This should free up analysts to build the evidence leadership need.

Discovery highlighted three levels of benefits (p.27):

  • Short-term: analysts and social workers save time cleaning data, freeing up time for analysis and working with families.

  • Medium-term: better quality data makes analysis and tools more reliable, giving leadership the evidence needed to improve services. Tools that would benefit include Stockport CS dashboards, the Children’s Services Analysis Tool, the Local Authority Interactive Tool and the Unit Cost Calculator.

  • Long-term: better LAC services mean better outcomes and associated cashable savings. Current outcomes are poor (4x more crime, 5x more exclusions, 40x more homelessness, etc. (p.5)) and costly for government (~£1bn/year to MoJ, DWP and HMRC alone). Data quality also affects education and social care policy: our discovery showed significant inaccuracies in DfE/ONS data.

Benefits scale: Given common data formats and processes, an error-identification tool could unlock benefits in all councils, and the benefits could scale to the 134 other statutory returns. We’ll open-source the tool and promote it through existing communities of practice and networks, starting with the 10 Greater Manchester (GM), 21 North-West and 19 SESLIP councils.

Savings: Our benefits case used GDS and Treasury Green Book guidance to quantify savings and costs:

  • Short-term: based on the proportion of errors already eliminated by at least one council, the number of common errors and the time spent by analysts and social workers cleaning data, we estimate time savings of 220 days/council/year, equivalent to £57,000/council/year (p.40).

  • Medium-term: better, data-enabled, targeted support, e.g. Multi-Systemic Therapy (p.8), would provide significant societal and financial benefits – LAC support is expensive (£100,000-£600,000/child), so the savings potential is substantial.

  • Long-term: the largest benefits are long-term outcomes for LAC, with lower rates of crime, school exclusion, homelessness, unemployment, special educational needs, health issues and mental illness.

In discovery, we didn’t believe we could accurately quantify medium- and long-term benefits – we’ll test these in alpha. Applying conservative Green Book confidence factors to the £57,000/council/year gives a savings estimate of £22,500/council/year (p.40).

Depending on the scale achieved, total savings would be as follows (arithmetic sketched after this list):

  • Downside (10 GM councils): £225,000/year
  • Base-case (30 councils): £675,000/year
  • National: £3.4m/year
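The scaling arithmetic is simply the risk-adjusted per-council saving multiplied by the number of councils. A minimal sketch is below; the ~150-council national figure is our assumption for the number of English councils with children’s services responsibilities, not a figure from the benefits case.

```python
# Risk-adjusted saving per council per year, from the benefits case (p.40)
SAVING_PER_COUNCIL = 22_500  # £/council/year

scenarios = {
    "Downside (10 GM councils)": 10,
    "Base case (30 councils)": 30,
    "National (~150 councils, assumed)": 150,
}

for name, councils in scenarios.items():
    print(f"{name}: £{SAVING_PER_COUNCIL * councils:,}/year")
# Downside: £225,000/year; base case: £675,000/year; national: £3,375,000 (~£3.4m)/year
```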


Costs (details p.35):

  • Discovery, alpha & beta development: £360,000 (one-off)
  • Live development: £100,000/year
  • Set-up costs (onboarding & service change): £5,500/council
  • Ongoing costs (support & hosting): £2,000/year/council

Investment case (p.37; calculation structure sketched below):

  • Downside scenario (only scale across GM):
    Tool repays £417,000 investment in 3.3 years (1.5x 5-year ROI)
  • Base case (scale to 30 councils):
    Tool repays £517,000 investment in a year (5.1x 5-year ROI)

  • National scale:
    Tool repays £1.2m investment within a year (12.5x 5-year ROI)
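For transparency, these payback and ROI figures come from comparing cumulative savings with cumulative costs. A simplified sketch of that calculation is below, assembling base-case inputs from the cost and savings figures quoted above; the full benefits model on p.37 applies further adjustments, so its quoted figures differ slightly from this sketch.

```python
def payback_and_roi(one_off_cost, annual_cost, annual_saving, years=5):
    """Simplified payback period and simple multi-year ROI.

    The full benefits case (p.37) applies further Green Book adjustments,
    so its quoted figures will not match this sketch exactly.
    """
    net_annual_saving = annual_saving - annual_cost
    payback_years = one_off_cost / net_annual_saving
    roi = (net_annual_saving * years) / one_off_cost
    return payback_years, roi


# Base case: 30 councils, using the cost and savings figures above
councils = 30
one_off = 360_000 + 5_500 * councils   # development + set-up
annual = 100_000 + 2_000 * councils    # live development + support/hosting
saving = 22_500 * councils             # risk-adjusted savings
payback, roi = payback_and_roi(one_off, annual, saving)
print(f"Payback ≈ {payback:.1f} years, 5-year ROI ≈ {roi:.1f}x")  # ≈1.0 years, ≈4.9x
```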

We tried some great new tools, ceremonies and collaborative, iterative ways of working in discovery. These enabled agile working and helped ensure we were always focused on user need, continually learning, and able to pivot when necessary. We’ll build on these in alpha, as well as trying promising approaches we’ve seen in other Local Digital projects.

 

Tools

We’ll continue to use:

  • Huddle as a shared project collaboration space, making our materials open
  • A public Kanban Trello board so it’s easy for everyone to share the plan
  • Email rather than Slack, as we know from discovery that not all IT departments allow Slack

We’ll use more of:

  • YouTube to livestream our show-and-tells, as it’s easy and well-known
  • GitHub for our prototypes and guidance
  • Pipeline for project updates and videos.

 

Ceremonies and approaches
We’ll continue to use an agile approach, working in sprints with daily standups and regular show-and-tells and retrospectives, following Kanban project management. 

 

In discovery, 1-2-4-All and Walking Brainstorms worked well for idea generation, while lean canvases and WWWWWH helped us build a more thorough understanding of potential solutions – we’ll continue with these.

 

In alpha, we’ll try new ceremonies and liberating structures (e.g. from www.liberatingstructures.com, www.sessionslab.com, www.funretrospectives.com), such as the Six Thinking Hats method when designing prototypes, to help us consider the user need and solution from different perspectives.

 

Team collaboration
We’ll continue to meet remotely for sprint-planning sessions at the start of each sprint to set objectives, and to hold in-person show-and-tells at the end of each phase to share findings, with futurespectives to collaboratively plan the next phase. We’ll invite wider networks to our show-and-tells and livestream them to share learnings more widely. This will be particularly valuable for collaborating with our partners across the country.

 

We’ll continue using retrospectives based on the FLAP, KALM and 3 Ls models, which we found helpful in discovery to identify what worked well and what to change. In alpha, we’ll use a Team Purpose and Culture workshop template at our kick-off meeting to establish how we’ll work together and ensure everyone is aligned on goals; this worked well for Stockport’s Local Digital project with Leeds.

 

Team structure and governance
Having small, close teams, with one representative per council, worked well as a team structure in discovery – we’ll continue this. During alpha, the project will be a standing item in SLT meetings to ensure oversight and buy-in, and to help build the business case for sustainability.

Support from the LDCU in discovery was very helpful. In particular, training and networking events, help with comms, and feedback enabled us to develop the necessary skills more effectively, share our findings more widely and get valuable insight and challenge from an expert ‘outside’ perspective.

 

Training: In discovery, we attended the LDCU’s 3-day GDS Academy Agile for Teams training. This gave us a much better understanding of agile approaches, enabling us to use Kanban project management and to better ensure our user research effectively captured user needs. We’d welcome the opportunity to put further team members through this training, the User Research – Working Level training, and the Introduction to User-Centred Design training.

 

Community events: We really valued the Local Digital Fund events we attended. The kick-off event in London and the Roadshow in Bradford were great opportunities to meet and learn from other councils signed up to the Declaration. The kick-off prompted us to think through key project-planning elements, and the roadshow showed us approaches other councils had taken in their projects. We’d be keen to attend further community events.

 

Sharing learnings: In discovery, the LDCU retweeting and commenting on our blogs was very helpful, enabling us to share our findings more widely and get feedback from wider networks. One blog post led to us being contacted by the Children’s Society, whom we met with to discuss our respective pieces of work in Children’s Services, including better use of data and digital. We’d encourage more of this in alpha.

 

Feedback: Feedback from the LDCU helped ensure we were following agile methodology, working in the open and producing effective outputs. Specifically, feedback from Sam and Rushi on our user research report and benefits case meant we included demographic information in our user personas, explained our user research approach in more detail, labelled more clearly where conservative assumptions had been used in the benefits case, and showed what the estimates would look like with less conservative assumptions. This helped make our outputs as valuable as possible for other councils. We’ve since had very positive feedback. We’d be keen for feedback throughout alpha.

Project team membership: In alpha, we’d be keen to bring our Local Collaboration Manager more closely into the project team, to help steer our approach, maximise learning between projects and leverage the LDCU’s expertise in public sector digital transformation.