Swedish authorities urged to discontinue AI welfare system


Sweden’s algorithmically powered welfare system is disproportionately targeting marginalised groups in Swedish society for benefit fraud investigations, and must be immediately discontinued, Amnesty International has said.

An investigation published by Lighthouse Reports and Svenska Dagbladet (SvB) on 27 November 2024 found that the machine learning (ML) system used by Försäkringskassan, Sweden’s Social Insurance Agency, is disproportionately flagging certain groups for further investigation over social benefits fraud, including women, individuals with “foreign” backgrounds, low-income earners and people without university degrees.

Based on an analysis of aggregate data on the outcomes of fraud investigations where cases were flagged by the algorithms, the investigation also found the system was largely ineffective at identifying men and wealthy people who had actually committed some form of social security fraud.

To detect social benefits fraud, the ML-powered system – introduced by Försäkringskassan in 2013 – assigns risk scores to social security applicants, which then automatically triggers an investigation if the risk score is high enough.

Those with the highest risk scores are referred to the agency’s “control” department, which takes on cases where there is suspicion of criminal intent, while those with lower scores are referred to case workers, where they are investigated without the presumption of criminal intent.
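In outline, this amounts to threshold-based triage on a model score. The short Python sketch below illustrates that general pattern only; Försäkringskassan has not disclosed its model, features or cut-offs, so the thresholds, field names and routing labels here are entirely hypothetical.

```python
# Illustrative sketch only: the real Försäkringskassan model, features and
# thresholds are not public. All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    risk_score: float  # assumed to come from an ML model, scaled 0.0-1.0

# Hypothetical cut-offs, chosen purely for illustration
INVESTIGATION_THRESHOLD = 0.5   # above this, an investigation is triggered
CONTROL_DEPT_THRESHOLD = 0.8    # highest scores go to the "control" department

def route(application: Application) -> str:
    """Route an application by risk score, mirroring the triage described
    in the article (not the agency's actual logic)."""
    if application.risk_score >= CONTROL_DEPT_THRESHOLD:
        return "control department (suspicion of criminal intent)"
    if application.risk_score >= INVESTIGATION_THRESHOLD:
        return "case worker (no presumption of criminal intent)"
    return "no fraud investigation"

print(route(Application("A-001", risk_score=0.91)))
```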

Once cases are flagged to fraud investigators, they then have the power to trawl through a person’s social media accounts, obtain data from institutions such as schools and banks, and even interview an individual’s neighbours as part of their investigations. Those incorrectly flagged by the social security system have complained that they then end up facing delays and legal hurdles in accessing their welfare entitlement.

“The entire system is akin to a witch hunt against anyone who is flagged for social benefits fraud investigations,” said David Nolan, senior investigative researcher at Amnesty Tech. “One of the main issues with AI [artificial intelligence] systems being deployed by social security agencies is that they can exacerbate pre-existing inequalities and discrimination. Once an individual is flagged, they’re treated with suspicion from the start. This can be extremely dehumanising. This is a clear example of people’s right to social security, equality and non-discrimination, and privacy being violated by a system that is clearly biased.”

Testing against fairness metrics

Using the aggregate data – which was only possible because Sweden’s Inspectorate for Social Security (ISF) had previously requested the same data – SvB and Lighthouse Reports were able to test the algorithmic system against six standard statistical fairness metrics, including demographic parity, predictive parity and false positive rates.
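Metrics of this kind can be computed directly from aggregate counts of who was flagged and what the investigations concluded. The sketch below shows, for illustration only, how three of the named metrics – demographic parity (flag rate per group), predictive parity (precision per group) and false positive rate – might be calculated; the counts are invented and are not the SvB/Lighthouse data.

```python
# Minimal sketch of computing group fairness metrics from aggregate
# flagging/outcome counts. All figures below are invented for illustration.

def rate(numerator: int, denominator: int) -> float:
    return numerator / denominator if denominator else float("nan")

# Hypothetical aggregate counts for two demographic groups:
# flagged            - applications flagged by the system
# flagged_fraud      - flagged applications where fraud was confirmed
# flagged_no_fraud   - flagged applications where no fraud was found
# total              - all applications from the group
# total_no_fraud     - all applications from the group without fraud
groups = {
    "group_a": {"flagged": 400, "flagged_fraud": 60, "flagged_no_fraud": 340,
                "total": 10_000, "total_no_fraud": 9_800},
    "group_b": {"flagged": 150, "flagged_fraud": 60, "flagged_no_fraud": 90,
                "total": 10_000, "total_no_fraud": 9_850},
}

for name, g in groups.items():
    flag_rate = rate(g["flagged"], g["total"])                    # demographic parity: P(flagged | group)
    precision = rate(g["flagged_fraud"], g["flagged"])            # predictive parity: P(fraud | flagged)
    fpr = rate(g["flagged_no_fraud"], g["total_no_fraud"])        # false positive rate: P(flagged | no fraud)
    print(f"{name}: flag rate={flag_rate:.3f}, precision={precision:.3f}, FPR={fpr:.3f}")
```

Large gaps in these per-group rates are what such an audit treats as evidence of disproportionate targeting.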

They noted that while the findings showed the Swedish system is disproportionately targeting already marginalised groups in Swedish society, Försäkringskassan has not been fully transparent about the inner workings of the system, having rejected numerous freedom of information (FOI) requests submitted by the investigators.

They added that when they presented their analysis to Anders Viseth, head of analytics at Försäkringskassan, he did not question it, and instead argued there was no problem identified.

“The selections we make, we don’t consider them to be a disadvantage,” he said. “We look at individual cases and assess them based on the likelihood of error, and those who are selected receive a fair trial. These models have proven to be among the most accurate we have. And we have to use our resources in a cost-effective way. At the same time, we do not discriminate against anyone, but we follow the discrimination law.”

Computer Weekly contacted Försäkringskassan about the investigation and Amnesty’s subsequent call for the system to be discontinued.

“Försäkringskassan bears a significant responsibility to prevent criminal activities targeting the Swedish social security system,” said a spokesperson for the agency. “This machine learning-based system is one of several tools used to safeguard Swedish taxpayers’ money.

“Importantly, the system operates in full compliance with Swedish law. It is worth noting that the system does not flag individuals but rather specific applications. Furthermore, being flagged does not automatically lead to an investigation. And if an applicant is entitled to benefits, they will receive them regardless of whether their application was flagged. We understand the interest in transparency; however, revealing the specifics of how the system operates could enable individuals to bypass detection. This position has been upheld by the Administrative Court of Appeal (Stockholms Kammarrätt, case no. 7804-23).”

Nolan said if use of the system continues, then Sweden may be sleepwalking into a scandal similar to the one in the Netherlands, where tax authorities used algorithms to falsely accuse tens of thousands of parents and caregivers from mostly low-income families of fraud, which also disproportionately harmed people from ethnic minority backgrounds.

“Given the opaque response from the Swedish authorities, not allowing us to understand the inner workings of the system, and the vague framing of the social scoring ban under the AI Act, it is difficult to determine where this specific system would fall under the AI Act’s risk-based classification of AI systems,” he said. “However, there is enough evidence to suggest that the system violates the right to equality and non-discrimination. Therefore, the system must be immediately discontinued.”

Under the AI Act – which came into force on 1 August 2024 – the use of AI systems by public authorities to determine access to essential public services and benefits must meet strict technical, transparency and governance rules, including an obligation by deployers to carry out an assessment of human rights risks and guarantee there are mitigation measures in place before using them. Specific systems that are considered tools for social scoring are prohibited.

Sweden’s ISF previously found in 2018 that the algorithm used by Försäkringskassan “in its current design [the algorithm] does not meet equal treatment”, although the agency pushed back at the time by arguing the analysis was flawed and based on dubious grounds.

A data protection officer who previously worked for Försäkringskassan also warned in 2020 that the system’s operation violates the European General Data Protection Regulation, because the authority has no legal basis for profiling individuals.

On 13 November, Amnesty International exposed how AI tools used by Denmark’s welfare agency are creating pernicious mass surveillance, risking discrimination against people with disabilities, racialised groups, migrants and refugees.
