Nobel Peace Prize Recipient Discusses War Crimes in the AI Era

The Sutardja Center for Entrepreneurship and Technology welcomed Oleksandra Matviichuk, recipient of the 2022 Nobel Peace Prize, to discuss the role of artificial intelligence in war crimes in Ukraine, the global implications of disinformation, and human rights.

On April 16, 2024, The Sutardja Center for Entrepreneurship and Technology (SCET) hosted an event titled “Tracking War Crimes in the AI Era. The Race to Record History and Keep it Intact.” Gigi Wang, Industry Fellow and Faculty at SCET, moderated an insightful panel discussion with esteemed panelists Oleksandra Matviichuk, leader of the Centre for Civil Liberties and 2022 Nobel Peace Prize Recipient, Alexa Koenig, adjunct professor at UC Berkeley School of Law and co-faculty director of the Human Rights Center Investigations Lab, and Gauthier Vasseur, executive director of the Fisher Center for Business Analytics at the Haas School of Business. 

Tracking War Crimes in the Age of AI

“We find ourselves in a digital world polluted with lies.”

Oleksandra Matviichuk 

Artificial intelligence has profoundly transformed the legal landscape of justice and accountability in the context of war crimes. For too long, victims of war crimes have fought in vain for justice while perpetrators evade prosecution. Now, what previously required expensive tools and extensive coding is accessible through a simple natural-language query – an advance that allows war crime evidence to be collected at a much larger scale, enhancing its accessibility and reliability. With these powerful digital technologies at our fingertips, leaders are better equipped to fight for justice on behalf of individual victims. However, such an operation will require global infrastructure and careful regulation, as well as measures to address the psychosocial toll of reviewing graphic content. 

Gigi Wang, Alexa Koenig, Oleksandra Matviichuk and Gauthier Vasseur discuss the impact of AI on war crimes and human rights.

AI is a double-edged sword – the power of these technologies, while offering revolutionary solutions to age-old problems, has also opened doors to uncharted, treacherous digital territory. In particular, deepfakes – synthetic audio, images, and video generally created with malicious intent – have undermined the integrity of information during war. Audio deepfakes pose an especially hazardous threat, as they offer fewer points of verification than images. Moreover, the speed at which social media algorithms disseminate disinformation has debilitating geopolitical consequences: by the time disinformation is debunked, it is already too late. The prevalence of “weapons of mass disinformation” threatens the trustworthiness of facts from two sides: deepfakes not only perpetuate violence and distrust, but also undermine legitimate reporting when people dismiss authentic content as fake.

Building Trust in Our Leaders and the Facts

“It’s time to take responsibility.” 

Oleksandra Matviichuk 

Alexa Koenig, who has researched how digital technologies affect human rights, described a three-step verification process: examination of the technical data, content and contextual analysis, and source analysis. However, building trust is critical to dispelling disinformation – proving factual legitimacy with advanced verification methods does not mean that people will abandon a false narrative. Once beliefs are cemented, it can be difficult to convince people to change their minds; evocative content often bypasses logical reasoning, feeding a confirmation bias that social media algorithms amplify.

Koenig noted, “Trust is a relationship.” Especially in times of crisis, our institutions and politicians must be trustworthy; a lack of trust will undermine leaders’ abilities to have authority and resolve division. 

Furthermore, media literacy is paramount in a digital world plagued by distrust. Social media sites in particular are hotbeds of disinformation and hate. Individuals, especially members of younger generations, must recognize the ramifications of engaging with deceptive content. Empowering individuals to conduct investigations into the media they consume is vital to halting disinformation in its tracks – if we are not cognizant of the consequences of our actions, we too are complicit in the propagation of these attacks on truth. As Gauthier Vasseur, executive director of the Fisher Center for Business Analytics, put it, “Let’s stop feeding the beast.” 

However, not all of the responsibility can be placed on the individual, as Alexa Koenig points out: these changes necessitate broader cultural shifts, reinforced by structural interventions at the legal level. Policymakers must advocate for legislation promoting transparency and accountability, and institutions must increase support for research into the ethical implications of AI. Corporations, too, have a responsibility to establish norms that curb the spread of disinformation. Short-term profits are never worth unleashing long-term catastrophes. 

From left to right: Alexa Koenig, Oleksandra Matviichuk, Gauthier Vasseur (photo by Vicky Liu/Berkeley SCET)

Where Innovation and Collaboration Come Together

“We have a historical responsibility for each person affected by this war.”

Oleksandra Matviichuk 

The implications of cybersecurity threats and widespread disinformation reach far beyond Ukraine’s borders – the war in Ukraine is not simply “Ukraine’s problem” but an international issue that represents a broader fight for justice. In the words of Oleksandra Matviichuk, Ukraine is engaged in a “fight for freedom in all senses” – the freedom to preserve the Ukrainian identity, the freedom to uphold democratic choice, and the freedom to live in a society in which rights are protected. 
