We live in an era where the line between reality and illusion is increasingly blurred. Artificial intelligence has given us the ability to create highly convincing, sophisticated fake images, audio and video. This fake content, known as deepfakes, can be used for harmless humor and entertainment, but it can also be used to deceive, manipulate and extort. We face a crisis of truth: it is becoming more difficult by the day to discern the authenticity of the media proliferating across our screens.
Understanding and mitigating this threat are crucial to protecting yourself, your firm and your clients.
AI as a Weapon
These deepfakes are being weaponized against us. Malicious actors create fake content that imitates employees, executives and clients to exploit our weaknesses, manipulate us and defraud us. Because these attacks look so realistic, telling real from fake is difficult, and successful attacks are highly lucrative for the perpetrators. The threat is particularly significant for law firms because of the confidential and private nature of their business.
Imagine discussing confidential matters with a client over a video call, only to discover later that the person on the other end was an imposter. Or imagine taking a call from someone impersonating a senior partner who asks you to transfer funds; by the time you realize what has happened, the money is in a malicious actor's account.
Now imagine the impact such an incident would have on your firm and its reputation, plus the resulting legal issues. Not only does the firm have to recover from the incident, but it also must deal with the cultural fallout. Too often, when the situation is handled poorly, the employee is blamed for falling for the fraud and subjected to coaching, counseling, policy changes and even discipline.
As if the deepfake threat were not enough, there are other ways artificial intelligence is being weaponized against us. The rise of AI has allowed malicious users to become more efficient and effective with traditional cyberattacks such as phishing and ransomware. AI can use predictive models to estimate human behavior, identify which types of attacks and content are most likely to yield results and create the content for those attacks automatically, employing the wealth of data on the internet as source material. With AI, attackers can more easily target specific people and groups, and the frequency of attacks is increasing at an exponential rate.
How can we protect ourselves from these threats? Fortunately, there are many resources available to help identify and thwart these attacks. Let’s start with how to identify deepfakes.
Identifying Deepfakes
Being able to spot deepfakes is a major step in curbing their malicious use. Understanding what media is authentic and what is not can be challenging. Here are a few tips:
- Educate yourself and your firm: By first informing yourself and your firm about what deepfakes are and how to spot them, you increase your awareness and your ability to mitigate the threat. There are many educational resources, such as KnowBe4, Hook Security and the MIT Media Lab's "Detect Fakes" program.
- Be suspicious: Develop a healthy suspicion of the media you see and question the source of content you encounter. We can no longer take everything at face value and must do our due diligence to verify content.
- Pay attention to the details: Especially in images and videos, look for details that do not make sense or do not match: too many fingers, arms and elbows bent at odd angles, teeth that look wrong or an overall sense that something is "too perfect." AI does not handle fine details well, especially in the backgrounds of images and videos, and those inconsistencies are often dead giveaways for deepfakes.
- Question the reality: Look at the image or video and ask yourself how plausible it seems. If the subject or content appears far-fetched, treat that as a warning sign and dig a little deeper to determine authenticity. If it is a voice call or meeting, ask questions that only the real person on the other end would know the answers to.
- Use AI detection tools: Many AI-powered detection tools can analyze media for signs of manipulation or markers indicating that content was generated by AI. These tools look for inconsistencies that may reveal a deepfake and can help you confirm authenticity.
Mitigating the Threat
Now that you have the knowledge and ability to identify a deepfake and know when you are being manipulated, what’s next? Here is what to do:
- End the interaction: Disengage from the situation, whether physically or digitally.
- Verify through official channels: Also known as out-of-band communication, use a known, trusted, alternative communication method to verify legitimacy. For example, if your manager asks you to do something in an email, pick up the phone and verbally confirm it. Or if you are on a video call, send a separate text message to confirm.
- Document and report: Notify the appropriate people at your firm about the potential attack; doing so helps others identify and mitigate the same or similar attacks.
- Invest in technology: Stay ahead of the curve by investing in the latest cybersecurity technologies. AI detection tools, secure communication platforms and advanced authentication methods can provide an additional layer of protection.
- Collaborate with experts: Partner with cybersecurity experts to develop and implement strategies for identifying and mitigating deepfake threats. These experts can provide valuable insights and support in navigating the complex landscape of AI-generated content.
- Develop policy: Establish clear guidelines and frameworks covering the use and dissemination of media, how to respond to threats, whom staff members should report suspicious activity to and how those recipients should handle potential threats.
- Foster an aware culture: Encourage a culture where content is routinely questioned and verified, and support employees if any incidents occur.
- Stay informed: Keep abreast of the latest developments in AI and cybersecurity. Regularly review industry reports, attend conferences and participate in professional networks to stay up to date about emerging threats and best practices.
Conclusion
The rise of AI-generated deepfakes represents a significant challenge and threat for law firms. By understanding the nature of this threat and implementing effective strategies for identification and mitigation, legal administrators and lawyers can protect their firms and clients from the potentially devastating impacts of deepfake content. As technology continues to evolve, staying vigilant and proactive is key to safeguarding the integrity and security of legal practices.
Eric Hoffmaster is Innovative Computing Systems’ chief operating officer.