The use of opaque artificial intelligence (AI) systems by Försäkringskassan, Sweden’s Social Insurance Agency, must be immediately discontinued, Amnesty International said today, following an investigation into Sweden’s welfare system by Lighthouse Reports and Svenska Dagbladet, which found that the system unjustly flagged marginalized groups for benefits fraud investigations.
The investigation reveals that the system disproportionately flagged certain groups for further investigation regarding social benefits fraud, including women, individuals with foreign backgrounds (those born abroad or whose parents were born in other countries), low-income earners, and individuals without university degrees. Amnesty International supported the investigation by reviewing the project team's analysis and methodology, providing input and suggestions, and discussing the findings within a human rights framing.
“The Swedish Social Insurance Agency’s intrusive algorithms discriminate against people based on their gender, ‘foreign background’, income level, and level of education. This is a clear example of people’s right to social security, equality and non-discrimination, and privacy being violated by a system that is clearly biased,” said David Nolan, Senior Investigative Researcher at Amnesty Tech.
The machine learning system has been used by the Swedish Social Insurance Agency since at least 2013. The system assigns risk scores, calculated by an algorithm, to social security applicants to detect social benefits fraud.
Försäkringskassan conducts two types of checks: a standard investigation by case workers, which does not presume criminal intent and allows for the possibility that individuals have simply made mistakes, and a second type handled by the “control” department, which takes on cases where criminal intent is suspected. People with the highest risk scores as designated by the algorithm have been automatically subject to investigations by fraud controllers within the welfare agency, under an assumption of “criminal intent” right from the start.
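As an illustration only, the automatic routing described above can be thought of as a simple threshold rule applied to an algorithmic risk score. Försäkringskassan has not disclosed its model, features or cutoff, so every name and value in the sketch below is hypothetical.

```python
# Hypothetical illustration of threshold-based routing of flagged cases.
# The model, feature set and cutoff used by Försäkringskassan are not public;
# everything here is an assumption for explanatory purposes only.

RISK_THRESHOLD = 0.9  # hypothetical cutoff for the "highest risk" scores

def route_case(risk_score: float) -> str:
    """Route a benefits application based on its algorithmic risk score."""
    if risk_score >= RISK_THRESHOLD:
        # Cases above the cutoff go straight to the "control" department,
        # i.e. they are investigated under a presumption of possible criminal intent.
        return "control_department"
    # All other cases follow the standard case-worker review,
    # which allows for honest mistakes.
    return "standard_review"

print(route_case(0.95))  # -> control_department
print(route_case(0.40))  # -> standard_review
```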
A ‘witch hunt’
Fraud investigators who see files that have been flagged by the system have enormous power. They can trawl through a person’s social media accounts, obtain data from institutions such as schools and banks, and even interview an individual’s neighbors as part of their investigations.
“The entire system is akin to a witch hunt against anyone who is flagged for social benefits fraud investigations,” said David Nolan.
People who are incorrectly flagged by the biased social security system have complained that they end up facing delays and legal hurdles in accessing their welfare entitlement.
“One of the main issues with AI systems being deployed by social security agencies is that they can aggravate pre-existing inequalities and discrimination. Once an individual is flagged, they’re treated with suspicion from the start. This can be extremely dehumanizing,” said David Nolan.
Although the project team at Lighthouse Reports and Svenska Dagbladet submitted freedom of information requests, the Swedish authorities have not been fully transparent about the inner workings of the system.
Although the welfare agency refused those requests, the SvD and Lighthouse Reports team nonetheless managed to access aggregate data on the outcomes of fraud investigations conducted on a sample of cases flagged by the algorithm, along with demographic characteristics of the people subject to the system. This was possible only because the Inspectorate for Social Security (ISF) had previously requested the same data.
Using this data, the team was able to test the algorithmic system against six standard statistical fairness metrics, including demographic parity, predictive parity, and false positive rates. Each indicator takes a different approach to measuring bias and discrimination within an algorithmic system, and the findings confirmed that the Swedish system disproportionately targets already marginalized groups within Swedish society.
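To illustrate what such fairness checks involve, the sketch below computes three of the metrics named above (demographic parity, predictive parity, and false positive rate) from aggregate flagging and investigation outcomes. The group labels and counts are invented for the example; they are not the investigation's data.

```python
# Minimal sketch of three standard fairness metrics, computed per demographic group.
# The counts below are made up for illustration; they are NOT the investigation's data.

from dataclasses import dataclass

@dataclass
class GroupOutcomes:
    flagged_guilty: int      # flagged by the algorithm, wrongdoing found
    flagged_innocent: int    # flagged by the algorithm, no wrongdoing found
    unflagged_guilty: int    # not flagged, wrongdoing found via other checks
    unflagged_innocent: int  # not flagged, no wrongdoing

    @property
    def total(self) -> int:
        return (self.flagged_guilty + self.flagged_innocent
                + self.unflagged_guilty + self.unflagged_innocent)

def demographic_parity(g: GroupOutcomes) -> float:
    """Share of the group that gets flagged at all (selection rate)."""
    return (g.flagged_guilty + g.flagged_innocent) / g.total

def predictive_parity(g: GroupOutcomes) -> float:
    """Of those flagged, the share where wrongdoing is actually found (precision)."""
    flagged = g.flagged_guilty + g.flagged_innocent
    return g.flagged_guilty / flagged if flagged else 0.0

def false_positive_rate(g: GroupOutcomes) -> float:
    """Share of people with no wrongdoing who are nevertheless flagged."""
    innocent = g.flagged_innocent + g.unflagged_innocent
    return g.flagged_innocent / innocent if innocent else 0.0

# Invented aggregate counts for two hypothetical demographic groups.
groups = {
    "group_a": GroupOutcomes(40, 160, 10, 790),
    "group_b": GroupOutcomes(45, 55, 15, 885),
}

for name, g in groups.items():
    print(name,
          f"selection rate={demographic_parity(g):.2f}",
          f"precision={predictive_parity(g):.2f}",
          f"FPR={false_positive_rate(g):.2f}")
```

Large gaps between groups on any of these measures (for example, one group being flagged far more often, or flagged people in one group turning out to be innocent far more often) are what the fairness literature treats as evidence of disparate treatment or impact.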
Embedded bias
There are longstanding concerns about embedded bias in the system used by Sweden’s Försäkringskassan. A 2018 report by the ISF concluded that “in its current design [the algorithm] does not meet equal treatment.” The Swedish Social Insurance Agency argued that the analysis was flawed and rested on dubious grounds.
A data protection officer who previously worked for the Swedish Social Insurance Agency, meanwhile, warned in 2020 that the entire operation violates the EU’s General Data Protection Regulation, because the authority has no legal basis for profiling people.
Because of the high risk they pose to people’s rights, AI systems used by authorities to determine access to essential public services and benefits must, under the newly adopted European Artificial Intelligence Act (AI Act), meet strict technical, transparency and governance rules, including an obligation on deployers to carry out an assessment of human rights risks and guarantee mitigation measures before using them. Furthermore, specific systems that would be considered tools for social scoring are prohibited by the Act.
“If the system used by the Swedish Social Insurance Agency continues, then Sweden may sleepwalk towards a scandal similar to the one in the Netherlands, where tax authorities falsely accused tens of thousands of parents and caregivers from mostly low-income families of fraud, and disproportionately harmed people from ethnic minorities,” said David Nolan.
“Given the opaque response from the Swedish authorities, not allowing us to understand the inner workings of the system, and the vague framing of the social scoring ban under the AI Act, it is difficult to determine where this specific system would fall under the AI Act’s risk-based classification of AI systems. However, there is enough evidence to suggest that the system violates the right to equality and non-discrimination. Therefore, the system must be immediately discontinued.”
Background
On November 13, Amnesty International’s report Coded Injustice exposed how AI tools used by the Danish welfare agency are creating pernicious mass surveillance, risking discrimination against people with disabilities, racialized groups, migrants and refugees.
On October 15, Amnesty International and fourteen other coalition partners led by La Quadrature du Net (LQDN) submitted a complaint to the Council of State, the highest administrative court in France, demanding that the risk-scoring algorithmic system used by the French family benefits agency CNAF be stopped.
In August 2024, the AI Act, the EU’s regulation on artificial intelligence, came into force. Amnesty International, as part of a coalition of civil society organizations led by the European Digital Rights Network (EDRi), has been calling for EU artificial intelligence regulation that protects and promotes human rights.
In 2021, Amnesty International’s report Xenophobic Machines exposed how racial profiling was baked into the design of the algorithmic system used by the Dutch tax authorities to flag childcare benefit claims as potentially fraudulent.
Contact: [email protected]