Hot Take: Combating the Threat of Deepfake Technology

Deepfake videos, which use AI to create realistic but fabricated footage of real people, have become a growing concern because of their potential to spread misinformation and manipulate public opinion. DeepMedia, co-founded by Yale graduates Rijul Gupta and Emma Brown, is dedicated to detecting deepfakes. Its flagship product, DeepIdentify.AI, is a deepfake detection service, and the company recently secured a $25 million, three-year contract with the US Department of Defense (DoD). The technology analyzes photos and videos for the subtle inconsistencies and artifacts that mark them as synthetic. DeepMedia also offers DubSync, an AI translation and dubbing service, which in turn generates the deepfakes the company uses to train its detectors. DeepMedia aims to work with governments and large institutions to build ethics into AI development and to counter the threats deepfakes pose.
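The description of analyzing footage for telltale inconsistencies maps onto a common detection pattern: score individual frames with a binary real/fake classifier, then aggregate the scores into a verdict for the whole clip. The sketch below illustrates only that general pattern, not DeepMedia's actual system; TinyFrameClassifier, score_video, the stride parameter, and clip.mp4 are all hypothetical stand-ins.

```python
# Minimal sketch of frame-level deepfake scoring (hypothetical,
# not DeepIdentify.AI's implementation). A placeholder CNN stands
# in for a trained detector so the pipeline runs end to end.

import cv2          # pip install opencv-python
import numpy as np
import torch
import torch.nn as nn

class TinyFrameClassifier(nn.Module):
    """Placeholder model standing in for a real deepfake detector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 1),  # one logit: P(fake) after sigmoid
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def score_video(path: str, model: nn.Module, stride: int = 30) -> float:
    """Sample every `stride`-th frame; return the mean fake-probability."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % stride == 0:
                frame = cv2.resize(frame, (224, 224))
                # BGR uint8 HxWxC -> float CHW tensor in [0, 1]
                t = torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0
                scores.append(model(t.unsqueeze(0)).item())
            idx += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

if __name__ == "__main__":
    model = TinyFrameClassifier().eval()
    print(f"mean fake score: {score_video('clip.mp4', model):.3f}")
```

Averaging per-frame scores is the simplest aggregation; production systems typically add face detection, temporal-consistency checks, and audio analysis on top of this skeleton.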
Deepfake technology can cause real harm beyond its current use in parody videos and fake pornography. Deployed in information operations during wars or political conflicts, falsified footage could incite violence, casualties, or terror attacks. Deepfakes can also be used to sway elections or discredit public figures through fabricated videos of incendiary remarks or inappropriate behavior. Just as corrosively, their mere existence lets bad actors deny authentic footage, undermining factual information and sowing confusion. To address these threats, DeepMedia is collaborating with the US DoD ecosystem, allied and partner nations in Europe and Asia, and a large social media company to detect deepfakes in circulation. As the technology evolves rapidly, detection firms like DeepMedia play a crucial role in safeguarding public safety and information integrity.