Artificial Intelligence (AI) in the asylum system
The UK has a significant backlog in processing asylum claims. In June 2023, 215,500 asylum cases were ‘in progress’ with the UK Home Office, more than half of them awaiting an initial decision. Therefore, the identification of fair and effective strategies to speed up the process – including the mobilisation of AI-based technologies – is critical.
The evolving field of AI technologies – encompassing machine learning, deep learning and artificial neural networks – could be a positively disruptive branch of data science. Its use allows for improvements in the speed, efficiency and reliability of decision-making. Artificial Intelligence can identify patterns across large data sets and adjust its own algorithms to improve accuracy. This could assist with various stages of asylum processing, including information gathering, data sharing, planning, analysis and decision-making. AI-based streamlining of asylum casework through real-time language translation could enable efficient communication between applicants and officials regardless of language barriers, reducing the cost of translation services. Artificial Intelligence can assist in translating and organising vast quantities of documents, such as personal testimonies or legal paperwork, helping to ensure accuracy and consistency across multiple languages. Advanced AI translation tools are also capable of learning from context, which may facilitate a greater understanding of the nuances and cultural specifics of an applicant's narrative.
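To illustrate the kind of document translation described above, the sketch below uses the open-source Hugging Face transformers library with a publicly available Helsinki-NLP French-to-English model. The model choice, function name and sample text are assumptions made for demonstration only; they do not represent any agency's actual tooling.

```python
# Illustrative sketch only: batch-translating source-language passages into
# English with an off-the-shelf open-source model. The model and helper
# function here are assumptions for demonstration purposes.
from transformers import pipeline

# Load a pretrained French-to-English translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

def translate_documents(documents: list[str]) -> list[str]:
    """Translate a batch of source-language passages into English."""
    results = translator(documents, max_length=512)
    return [r["translation_text"] for r in results]

if __name__ == "__main__":
    sample = ["Je demande l'asile parce que ma vie est en danger dans mon pays."]
    for original, english in zip(sample, translate_documents(sample)):
        print(original, "->", english)
```

In practice, any such output would still require review by qualified interpreters, particularly where nuance or cultural context affects the meaning of a testimony.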
In May 2023, the UK Home Office organised a ‘hackathon’ to explore possible ways AI-based technologies can help reduce the asylum backlog. Later that year, the AI Safety Summit 2023 highlighted safety concerns posed by the speed of global advances in AI. A clear issue is whether the protection of fundamental rights can be ensured. Legal frameworks such as the European Commission's proposed AI Act aim to regulate AI use while also providing a global safety standard. Deploying AI in asylum processes raises critical concerns about potential harm to already vulnerable groups. Artificial Intelligence systems could lead to unfairly prejudiced decisions by perpetuating or amplifying biases in the data on which they are trained. This danger is compounded by the ‘black-box’ nature of many machine learning processes. Ensuring that AI algorithms are transparent and accountable is essential to maintain the integrity of the asylum process and to protect asylum seekers’ rights.
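One basic form of the accountability described above is a disparity audit: comparing a decision-support system's recommendation rates across applicant groups. The sketch below is a minimal, hypothetical example of such a check; the data, group labels and threshold are invented for illustration, and a real audit of asylum tooling would require far richer statistical and legal analysis.

```python
# Illustrative sketch only: a basic disparity check on hypothetical
# decision-support outputs, comparing positive-recommendation rates across
# applicant groups. All data and the threshold are invented for demonstration.
from collections import defaultdict

def positive_rate_by_group(records):
    """records: iterable of (group_label, recommended_grant: bool)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        positives[group] += int(recommended)
    return {g: positives[g] / totals[g] for g in totals}

if __name__ == "__main__":
    # Hypothetical model outputs keyed by applicant group.
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = positive_rate_by_group(sample)
    disparity = max(rates.values()) - min(rates.values())
    print(rates, "parity gap:", round(disparity, 2))
    if disparity > 0.2:  # arbitrary illustrative threshold
        print("Warning: large gap between groups; review for potential bias.")
```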
This research reviews the existing and emerging applications of Artificial Intelligence (AI) in the context of asylum.