Decoding the dialect: The AI translational paradox - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
The other day, I came across this article from the Guardian on the use of AI translation in asylum applications. In the article, a Brazilian refugee who tried to seek asylum in the US was misunderstood not just by people, but by the AI translation tools acting as interpreters. The article highlights AI's current inability to capture regional accents and dialects. In this case, that failure led to the refugee spending six months in ICE detention, unable to communicate with anyone.
AI technology is, of course, still developing, and it is understandable that it will take time to minimise errors. However, deploying the technology before that process is complete has already led to serious consequences in life-and-death matters. The Guardian isn't the first to capture a story like this. A quick Google search surfaces several articles and stories of people who were denied asylum because AI translation made minor errors with outsized impacts. In one case, AI changed 'I' to 'We', leading officers to believe that more than one person was seeking asylum. In another, AI claimed a woman was trying to escape abuse from her boss, when in fact it was her father she was fleeing.
Generative AI has the potential to massively improve upon the older, imperfect technology of machine translation. But cases like these show that we must have conversations around the ethics of using this new and developing technology in such complex situations.
Likely in response to these concerns, OpenAI updated its user policies in late March 2023 with rules that prohibit the use of ChatGPT in 'high-risk government decision-making' work, including work related to migration and asylum. This is a start, but I can't help but wonder how many people were impacted by these errors before the policy update. Further, as OpenAI's capacity to enforce this rule is unclear, it won't help those trapped in processes that simply ignore the policy for the sake of convenience.
Despite this, I am a proponent of new technology; I think there’s still more to learn than to lose. If we hadn’t embraced technology, then I wouldn’t have the job I have today. I wouldn’t be able to send out these thoughts into the ether. And I wouldn’t have a title for this Digest because I used ChatGPT to make it up for me (I did combine two of the options, though, so is it really cheating?).
BUT I don't think our current conversations around the downsides of AI capture the effects the technology has on those who suffer most. I don't think we're doing enough to highlight how implementing unfinished technology can harm those escaping wars, abuse, and poverty. In those cases, one wrong decision can be catastrophic for people who are already just barely surviving. The conversation must include these groups if we are to make new technology better for everyone. To that end, until AI technology and the conversation around it are more developed, we will need human review and input. This will likely always be true in areas where compassion, empathy, and understanding are vital. So, while I think we should certainly embrace new technology, we must do so in a human-led, technology-enabled way to minimise errors and protect the most vulnerable amongst us.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media and follows a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.