The spread of misinformation and disinformation is already having real-world political, social, geopolitical, and security consequences. Of particular concern are Deepfakes, which use AI to create realistic face and/or voice swaps, producing synthetic or manipulated media in which a person appears to say or do something they never actually said or did. The rapid development and spread of Deepfakes is one of the greatest threats to open societies and needs to be addressed at every level.

The possibilities for business, political and geopolitical manipulation are limitless, and if the proliferation of these tools goes unchecked it will threaten the safety and stability of all open societies. As highlighted in Tate Nurkin’s ‘Challenge of Disinformation Campaign in the Age of Synthetic Media’, these tiered risks require tiered solutions – solutions that incorporate technology, collaboration, heightened social media company efforts and even government regulation to help contain the threat from disinformation campaigns and, increasingly, Deepfakes.

These include:

Detection Technology and Data Sharing
Continued development of Deepfake detection technologies and approaches is critical to meeting the challenge of this rapidly evolving and improving technology. So too will be the sharing of data about uses of synthetic media across social media companies and, potentially, government agencies. The CSET report posits the idea of a “deep fake zoo” that “aggregates and makes freely available datasets of synthetic media as they appear online”. More detail is required, of course, on how such a concept would be implemented (and by whom), but it offers an interesting approach to ensuring coordination and collaboration, which will be central to meeting the AI-enabled disinformation challenge at scale.

Social Media Company Intervention and Fact Checking
Social media companies have become much more active in addressing the challenges of disinformation and misinformation, labelling media as inaccurate and/or synthetic and taking down networks of inauthentic behaviour and content. Adjusting platform algorithms so they no longer amplify the most controversial or extreme content, including through personalised advertising based on user data, will also be critical in reducing the impact of disinformation campaigns and synthetic media.

But challenges remain, both in detecting false or misleading social media activity and in addressing it. Manual fact-checking by social media companies and research institutes is time-consuming and inefficient, while automated fact-checking algorithms still have gaps in capability. Furthermore, disparities in policies between social media companies can allow false material to keep circulating on one platform even after it has been removed from others. Such inconsistencies will be difficult to resolve as companies seek to differentiate their platforms’ content moderation in a competitive environment, and they create space for false, misleading, and synthetic content to spread even after it has been detected. Addressing this will require the public, private and governmental sectors to develop more active and coordinated measures applied consistently across all social media platforms, and the response will undeniably have to evolve over time.

Government Engagement
While many democratic societies will have considerable reservations about any government efforts to moderate content, there is a role for government in creating legal and policy frameworks and incentives to help manage disinformation and the rising threat of Deepfakes. Governments globally are beginning to take the issue of disinformation seriously, particularly as geopolitical competition intensifies. A range of actors are attempting to capitalise on the growing sense of vulnerability and uncertainty many people throughout the world feel in order to achieve their specific political or geopolitical objectives, and this intersects with political events and social unrest in countries around the world. It is therefore imperative that governments take coordinated measures and create borderless policies to stem the rise of disinformation.

In conclusion, Deepfakes are already significantly affecting our societies and our trust in online information, and the technology will only continue to advance. As artificial intelligence becomes more powerful and accessible to a growing range of actors, managing disinformation and protecting open societies will become more challenging than ever.

Meeting the challenge of AI-enabled disinformation will require a multi-faceted response from all areas of the community, including governments, social media companies, academia and applied research centres. New technologies will need to be developed to detect, disrupt, and counter synthetic media and AI-enabled disinformation. Lastly, and perhaps most importantly, this data must be shared across the public and private sectors and governments, and social media companies must take more active and coordinated measures to help identify, label and stop the spread of synthetic media.

Andrew Vasko
Managing Director
IScann Group


To request the whitepaper ‘Challenge of Disinformation Campaign in the Age of Synthetic Media’, please email: