Corneliu Bjola gives evidence on use of AI in diplomacy at UK parliament

Associate Professor Corneliu Bjola gave evidence to the Foreign Affairs Committee of the UK Parliament on artificial intelligence and diplomacy last month, outlining the risks and opportunities AI might offer.

Professor Bjola suggested consular affairs, which involves repetitive work requiring significant human resources, would be a low-risk area in which AI could prove useful. Using AI in public diplomacy, to absorb information about countries and how they are perceived, could be considered medium risk.

He said that crisis management, and understanding how patterns lead to conflict, was attracting a lot of interest but was potentially high risk. 

He highlighted a number of issues. Firstly, the problem of explainability: policy-makers need to know why a particular conclusion has been reached and a course of action suggested, and this is difficult if a decision has been reached via AI.

Secondly, he noted the need to understand what kind of data is being used. He cited the example of the Italian Ministry of Foreign Affairs, which is using a databank of news stories in an effort to predict potential crises. He pointed out the possible problems of relying on official news sources from China, for example, which may not give an accurate picture, or of using news stories in English only.

Thirdly, he spoke about the weighting of information used in algorithms, noting that this is subjective and requires expert knowledge. 

Ultimately, he said it was important to distinguish between AI informing decisions and AI driving decisions, and he warned of the dangers of eliminating humans from the loop.

Professor Bjola was also asked about the regulation of AI, particularly whether democratic governments would need to collaborate with bad faith actors. 

He suggested three elements need to be considered: conceptualisation – what exactly you are regulating, how granular you should be, and how you can regulate without stifling innovation; format – which other actors you should collaborate with; and implementation – how you can ensure that any measures you design take effect.

He also discussed data agreements, which regulate how governments collect, store and process data from their citizens, and proposed that this type of collaboration could be a useful starting point for understanding how to build safe and transparent digital ecosystems that could foster human-centric AI development.

Read a transcript of the session