In recent years, the UK’s Home Office has begun using automated systems to make immigration decisions. These systems promise faster, more accurate and cheaper decision-making, but in practice they have exposed people to distress, disruption and even wrongful deportation.
This book identifies a pattern of risky experimentation with automated systems in the Home Office. It analyses three recent case studies: a voice recognition system used to detect fraud in English language testing; an algorithm for identifying ‘risky’ visa applications; and automated decision-making in the EU Settlement Scheme.
The book argues that a precautionary approach is essential to ensure that society benefits from government automation without exposing individuals to unacceptable risks.