Jason Crawforth is the Founder and CEO of Swear.com, a company working to restore confidence in digital media authenticity.

We recently witnessed a significant shift in the U.S. approach to artificial intelligence (AI) development with the revocation of Executive Order 14110. Enacted in 2023, this order provided crucial administrative guidance on the safe, ethical, secure and trustworthy development of AI technologies.
It also represented the nation's sole official stance on the ethical considerations surrounding this rapidly evolving field. The revocation of this order leaves a significant void, raising concerns about the future of responsible AI practices and underscoring the critical importance of initiatives that focus on authenticating video content and securing data. AI is undeniably a powerful tool, offering immense potential for progress across various sectors.
However, the "dark side" of AI is already manifesting in the proliferation of fake videos, audio recordings and photographs generated by increasingly sophisticated algorithms. These deceptive creations can have profound consequences, from spreading misinformation and manipulating public opinion to damaging reputations and facilitating fraud. A report by IBM identified 10 significant AI risks, along with a variety of ways to manage them: creating practices that promote fairness, such as using representative training datasets and forming diverse development teams, and informing consumers about data collection practices for AI systems.
The absence of clear national ethical guidelines increases the risk of AI-generated content being misused, placing both individuals and industries in a vulnerable position. To navigate this uncertainty, businesses must not only understand how AI can support their goals but also recognize the ways it could pose risks. Developing clear strategies for AI use can help raise employee awareness, but equally important is taking the time to define how AI can be best leveraged to enhance the unique needs and strengths of the business.
Furthermore, what might effective guidelines look like? One step is to push AI companies, users and regulators alike to consider how AI can actively enhance lives, shifting the focus from control to meaningful progress. Moving forward, we need to strike a balance between the two. The challenge is further amplified by the existence of powerful AI tools like facial recognition, video analytics and data scraping systems.
While these technologies hold legitimate applications, they can easily be weaponized for unethical purposes. Facial recognition, for example, can be used for mass surveillance or discriminatory targeting. Video analytics can be manipulated to misrepresent events or fabricate evidence.
Data scraping systems can be employed to harvest personal information without consent, leading to privacy violations and identity theft. In the absence of robust safeguards and ethical frameworks, the potential for abuse is substantial. Industries and agencies that rely heavily on the authenticity and integrity of digital assets are particularly exposed to the consequences of unchecked AI development.
Below are just a few key examples.

In the legal profession, video and audio evidence play a crucial role in judicial proceedings. The rise of AI-generated deepfakes poses a direct threat to the integrity of evidence, potentially leading to wrongful convictions or miscarriages of justice. Similarly, the media industry faces the challenge of discerning authentic news footage from AI-generated fabrications, a task that becomes increasingly difficult as these technologies advance.
The spread of misinformation through manipulated media can erode public trust and destabilize social discourse. Law enforcement agencies also face new challenges in a world where AI can create convincing yet entirely fabricated evidence. The potential for deepfakes to be used in blackmail, extortion or even to frame innocent individuals is a serious concern.
Furthermore, the use of AI-powered surveillance technologies raises complex ethical questions about privacy and civil liberties. Without clear guidelines and oversight, the line between legitimate law enforcement and intrusive surveillance can become blurred. The financial sector is another area of significant vulnerability.
AI-generated deepfakes have already been used to impersonate executives, leading to significant financial losses. The increasing reliance on AI-powered trading algorithms also raises concerns about market manipulation and the potential for unforeseen consequences. As AI becomes more deeply integrated into the financial system, the need for robust security measures and ethical frameworks becomes paramount.
In this climate of increasing uncertainty, private companies can play a vital role in safeguarding the integrity of digital content. In a world where seeing is no longer believing, the ability to verify the authenticity of digital assets is more critical than ever. Efforts to develop robust authentication methods and secure data storage solutions are essential for protecting individuals, industries and institutions from the potentially devastating consequences of AI-generated deception.
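To make the idea of verifying digital assets concrete, here is a minimal sketch of one common building block: a keyed content fingerprint recorded when media is captured, which anyone holding the key can later recheck to detect tampering. The function names, placeholder key and data here are purely illustrative, not any specific vendor's method; production systems would use managed keys or public-key signatures and provenance metadata.

```python
import hashlib
import hmac

# Placeholder key for illustration only; real systems use managed keys
# or digital signatures rather than a hard-coded secret.
SECRET_KEY = b"demo-key"

def fingerprint(media_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Return a hex tag binding the key to the exact media bytes."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """True only if the media is byte-for-byte unchanged since tagging."""
    return hmac.compare_digest(fingerprint(media_bytes, key), tag)

original = b"frame-data-from-camera"  # stand-in for real media bytes
tag = fingerprint(original)

assert verify(original, tag)             # untouched media passes
assert not verify(original + b"x", tag)  # any alteration fails
```

The point of the sketch is that authenticity checks can be cryptographic rather than perceptual: instead of asking whether a video "looks real," a verifier asks whether the bytes match a tag created at capture time.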
The revocation of EO 14110 underscores the urgent need for a renewed focus on ethical AI development. While the absence of national-level guidance is a setback, it also presents an opportunity for industry leaders, researchers and policymakers to collaborate on developing effective solutions. This includes establishing clear ethical guidelines for AI development, investing in research on AI safety and security, and promoting public awareness about the risks and opportunities associated with this transformative technology.
The future of AI depends on our ability to harness its power for good while mitigating its potential harms. As AI continues to evolve, we must prioritize ethical considerations and work together to ensure that this powerful technology is used responsibly and for the benefit of all. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
The Unforeseen Consequences Of Revoking EO 14110: A Call For Content Authenticity
