Much of the attention on deepfakes has focused on their role in spreading propaganda. A paper from Northwestern University and the Brookings Institution highlights how they can be used for more insidious ends.
“The ease with which deepfakes can be developed for specific individuals and targets, as well as their rapid movement — most recently through a form of AI known as stable diffusion — point toward a world in which all states and nonstate actors will have the capacity to deploy deepfakes in their security and intelligence operations,” the authors explain. “Security officials and policymakers will need to prepare accordingly.”
Security risks
The researchers developed a deepfake video featuring the deceased Islamic State terrorist Abu Mohammed al-Adnani. The video took words spoken by Syrian President Bashar al-Assad and made them appear to be delivered by al-Adnani.
The process was simple enough that the researchers produced the lifelike video within hours. They argue that militaries and security agencies should assume that competitors can quickly create deepfake videos of any official or leader.
“Anyone with a reasonable background in machine learning can — with some systematic work and the right hardware — generate deepfake videos at scale by building models similar to TREAD,” they explain. “The intelligence agencies of virtually any country, which certainly includes U.S. adversaries, can do so with little difficulty.”
Disinformation campaigns
The authors think that deepfakes will be used by both state and non-state actors to further their disinformation campaigns. Deepfakes have the potential to cause conflict by making wars seem legitimate, creating confusion, diminishing public support, dividing communities, and discrediting leaders. In the short term, experts in security and intelligence can try to detect deepfakes by creating and training algorithms to identify fake videos, images, and audio. However, this approach is unlikely to be successful in the long term.
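The paper does not include code, but the short-term detection approach it describes amounts to training a binary classifier to separate authentic media from generated media. The sketch below illustrates that idea with a from-scratch logistic regression on synthetic feature vectors; the feature values, shapes, and "fake" distribution shift are all invented for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: a deepfake detector is, at its core, a binary
# classifier trained on features extracted from media (e.g. frequency
# artifacts or blink statistics). Synthetic features stand in for both classes.
rng = np.random.default_rng(0)
real = rng.normal(loc=0.0, scale=1.0, size=(200, 8))  # hypothetical real-media features
fake = rng.normal(loc=0.8, scale=1.0, size=(200, 8))  # generator leaves a small statistical shift
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(200), np.ones(200)])     # label 1 = fake

# Train logistic regression by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(fake)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

The catch the authors point to is visible even in this toy: the detector only learns whatever statistical shift the current generator leaves behind, so a new generator with a smaller shift degrades it.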
“The result will be a cat-and-mouse game similar to that seen with malware: When cybersecurity firms discover a new kind of malware and develop signatures to detect it, malware developers make ‘tweaks’ to evade the detector,” they continue. “The detect-evade-detect-evade cycle plays out over time…Eventually, we may reach an endpoint where detection becomes infeasible or too computationally intensive to carry out quickly and at scale.”
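The malware analogy above can be made concrete with a toy signature detector. In the sketch below, the "signature" is a byte pattern, a one-byte tweak evades it, and the defender must add a new signature; the payload strings and signature names are invented for illustration.

```python
# Illustrative sketch of the detect-evade-detect cycle the authors describe.
# All byte strings here are invented placeholders, not real artifacts.

def matches_signature(sample: bytes, signatures: set) -> bool:
    """Flag a sample if it contains any known signature pattern."""
    return any(sig in sample for sig in signatures)

signatures = {b"GEN-ARTIFACT-v1"}  # hypothetical detector database

original = b"...frame data...GEN-ARTIFACT-v1...frame data..."
tweaked  = b"...frame data...GEN-ARTIFACT-v2...frame data..."  # attacker's tweak

print(matches_signature(original, signatures))  # True: detected
print(matches_signature(tweaked, signatures))   # False: evades detection

# The defender catalogs the new pattern, and the cycle repeats.
signatures.add(b"GEN-ARTIFACT-v2")
print(matches_signature(tweaked, signatures))   # True again, until the next tweak
```

Each round of this loop costs the defender analysis time, which is the basis for the authors' worry that detection eventually becomes too slow or too expensive to run at scale.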
They make a number of recommendations to try to improve matters:
- Raise public awareness and improve digital literacy and critical thinking skills
- Implement systems that track the provenance of digital assets, recording which individuals or organizations have handled them
- Promote fact-checking and verification among journalists and intelligence analysts before publishing information
- Use additional sources of information, such as authentication codes, to verify the authenticity of digital assets
- Encourage journalists to adopt practices similar to those used in intelligence products, such as stating confidence levels for their judgments
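The "authentication codes" recommendation can be sketched with a standard message authentication code: the originator publishes a tag computed over the asset with a secret key, and anyone holding the key can check that the asset was not altered. The key and asset bytes below are invented for illustration; real provenance schemes typically use public-key signatures rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical shared secret and asset bytes, for illustration only.
key = b"hypothetical-shared-secret"
asset = b"raw video bytes..."

# The originator publishes this tag alongside the asset.
tag = hmac.new(key, asset, hashlib.sha256).hexdigest()

def verify(asset: bytes, tag: str, key: bytes) -> bool:
    """Recompute the MAC and compare in constant time."""
    expected = hmac.new(key, asset, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(asset, tag, key))              # True: untampered asset
print(verify(asset + b"tamper", tag, key))  # False: modified asset fails
```

Unlike statistical detection, this check does not degrade as generators improve: a deepfake of an official would simply lack a valid tag.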
The authors stress the importance of government policies that provide strong oversight and accountability over the creation and distribution of deepfake content. Before governments use deepfakes as a countermeasure, they suggest, clear guidelines should be in place. To achieve this, the authors propose the creation of a “Deepfakes Equities Process,” similar to the equities processes that already exist for cybersecurity.
“The decision to generate and use deepfakes should not be taken lightly and not without careful consideration of the trade-offs,” they conclude. “The use of deepfakes, particularly those designed to attack high-value targets in conflict settings, will affect a wide range of government offices and agencies. Each stakeholder should have the opportunity to offer input, as needed and as appropriate. Establishing such a broad-based, deliberative process is the best route to ensuring that democratic governments use deepfakes responsibly.”