With the controversy surrounding last year's English exam results still fresh in the mind, the government has just issued a seven-point framework for the use of automated and algorithmic decision making.

Jointly developed by the Cabinet Office, the Central Digital and Data Office and the Office for Artificial Intelligence, and aimed at civil servants and ministers, the principles outlined in the framework provide a useful reference point for purchasers and suppliers of automated decision-making technology in any context, public or private.

While some of the guidance (comply with the law?) might sound obvious, the seven principles go to the heart of the potential impacts of automated decisions on the rights of individuals, whether as 'citizens' or 'customers'. They include:

  • the need to pre-test systems and algorithms to flush out unintended consequences or outcomes;
  • the implementation of processes to remove inherent bias or prejudice which could result in unjust or unequal decisions;
  • ensuring data is handled securely and in accordance with relevant legal obligations, for example the interplay with Article 22 of the UK GDPR where decisions are entirely automated;
  • future-proofing, by building in continuous monitoring and fixed review points to re-assess efficacy and the delivery of intended outcomes.

This is just a framework for government at this stage. But automated decisions are currently regulated by a patchwork of legislation and industry guidance, and the seven principles follow a pattern laid down by international cross-government organisations like the OECD. The trickle-down of these principles into government model IT contracts, and into private sector contracts in regulated sectors like financial services and healthcare, can't be far away.

Either way, IT procurement teams intending to deploy automated decision-making as part of new projects should be prepared to demonstrate how they are managing these risks, including through their contract terms.