AI Used to Detect Welfare Fraud in the UK Shows Bias

An internal assessment by the UK’s Department for Work and Pensions (DWP) has identified potential biases in an artificial intelligence system used to detect welfare fraud.

The system, which evaluates claims for universal credit advances, was found to disproportionately select individuals for fraud investigations based on factors such as age, disability, marital status, and nationality, according to a report by The Guardian.

The findings, uncovered through a fairness analysis conducted in February 2024, revealed “statistically significant outcome disparity” in the AI system’s recommendations. This contrasts with earlier statements from the DWP, which had maintained that the system did not pose risks of discrimination, citing human oversight as a safeguard in the decision-making process.

The DWP has not conducted fairness analyses for other AI systems it uses to detect fraud. Despite the findings, the department has defended the technology’s use, describing it as a “reasonable and proportionate” measure to address an estimated £8 billion in annual losses from fraud and error.

The Guardian reported that the fairness analysis raises broader questions about the potential for AI systems to produce biased outcomes. The DWP has not publicly responded to the report, but it previously stated that human officials have the final say on payment decisions, which it argued mitigates the risk of unfair treatment.