Ethical AI in Marketing: Legal Compliance Tips

What do you get when you mix AI, marketing, and ethics?

Confusion. Frustration. Potentially questionable decisions. And a whole lot of hands in the air (and not because we “just don’t care”).

During their recent MarketingProfs presentation, “Ethical AI in Marketing: Marketers’ Guide to Privacy, Policy, and Regulatory Compliance,” Arizona attorney Ruth Carter, Esq., Evil Genius at Geek Law Firm, shared a few pointers to help marketers navigate the new AI reality.

Keep in mind this is general legal information, not legal advice. Talk to your legal team or hire an Internet attorney if you need specific advice. If you’re not paying Ruth, then Ruth is not your attorney.

Read the fine print

“Understand what [the AI vendors] might do with what you…put into the ‘AI machine’ or what the ‘AI machine’ creates for you,” Ruth emphasizes.

Yes, that means reading the entire Terms and Conditions for each AI platform.

In particular, scrutinize their policies on retaining and training on your data. Don’t just blindly click “accept” and run with it (like we all do with social media and iOS updates).

If you don’t like the terms, consider whether that changes your AI use case or whether you should choose a different platform. And use those terms to inform your internal AI practices, including “what absolutely cannot, under any circumstances, go into AI,” Ruth stresses.
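
To put that internal practice into, well, practice: here’s a minimal Python sketch of a pre-submission check built from a hypothetical “never goes into AI” list. The blocked terms below are invented for illustration; yours would come out of your own terms-of-service review and policy decisions, and a crude substring check like this is a sketch, not a production filter.

```python
# A minimal sketch of an internal "never goes into AI" gate. The BLOCKED
# list is hypothetical; yours comes from your own policy review.
BLOCKED = [
    "ssn",                 # social security numbers
    "confidential",        # anything already marked confidential
    "unreleased pricing",  # pre-announcement financials
]

def cleared_for_ai(prompt: str) -> bool:
    """Return False if the prompt mentions anything on the blocked list.

    Crude substring matching: fine for a sketch, too blunt for production.
    """
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED)

assert cleared_for_ai("Draft a blog post about spring color trends")
assert not cleared_for_ai("Summarize this CONFIDENTIAL pitch deck")
```

Even a check this simple forces the policy conversation: someone has to decide, in writing, what goes on that list.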

Ensure privacy and confidentiality of in-house and client data

Unless you’re using a closed system or know your purchased plan keeps your data sequestered and private, assume the AI will use whatever you upload as public training data.

In most cases, that’s a big no-no for you. Definitely so when personally identifiable information is involved. GDPR, CCPA, and other privacy regulations apply to AI, too.

You should also consider confidentiality and nondisclosure agreements. Those may impact your use of information, such as financial data and other hush-hush details, especially if tied to private companies. Don’t risk legal action or ruin relationships by violating promises.

Ruth suggests avoiding real client information and instead using pseudonyms and anonymized data.
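
Here’s what that might look like in a minimal Python sketch that pseudonymizes a prompt before it reaches any AI tool. The name map, regex patterns, and sample text are illustrative assumptions, not something from Ruth’s presentation, and simple pattern-matching is a starting point rather than a full anonymization process.

```python
import re

# Illustrative only: a real anonymization workflow needs more than regex.
# Map real client identifiers to pseudonyms before anything reaches an AI tool.
PSEUDONYMS = {
    "Acme Corp": "Client A",
    "Jane Doe": "Contact 1",
}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub(text: str) -> str:
    """Swap known names for pseudonyms, then mask obvious PII patterns."""
    for real, fake in PSEUDONYMS.items():
        text = text.replace(real, fake)
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    text = PHONE_RE.sub("[PHONE REDACTED]", text)
    return text

prompt = "Draft a follow-up to Jane Doe (jane@acmecorp.com, 555-867-5309) at Acme Corp."
print(scrub(prompt))
# Draft a follow-up to Contact 1 ([EMAIL REDACTED], [PHONE REDACTED]) at Client A.
```

The key design choice is that the scrubbing happens on your side, before submission, so the vendor’s data-retention and training policies never touch the real names.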

Understand AI security measures and risk management

Hacking. AI is not immune. And it can happen in two ways:

  1. “Could an AI be hacked and sabotage the output for what it creates? What it spits out for people who use it? Yes.”
  2. “Could an AI be hacked and the data be stolen?” That is, data you put into the AI, of which the AI company now has “copies in storage somewhere. Could all that be obtained by a hacker? Yes.”

So look at the service’s security measures. Ask yourself, “Are they sufficient? What is the worst-case scenario if the AI is hacked?” What will happen to you—and what will the AI company do about it? And what will you do about it?

Make your AI policies clear to employees, contractors, and vendors—and ensure everyone follows the rules

“Have an ethics statement regarding AI…[including] how the company will and won’t use AI, and why they have made that decision,” and publish it in your company manual or as a standalone document, Ruth says.

Check out this clip from the presentation to learn who and what to consider when creating your policy.

But don’t go overboard and create burdensome restrictions that prevent your team from growing and competing.

Your policy “has multiple purposes, but it helps bring your team on board with ‘this is how we work,’” Ruth points out. So think of it as keeping everyone working legally and ethically—within bumpers.

Verify all AI-generated content—every time

It’s no secret that AI can “hallucinate”—that is, produce false results. So always verify your AI-generated content is correct. Require the same of your team.

Ruth says, “I would expect a company AI policy to require you to verify all facts before using [AI output] in any client’s content, so that way you’re not spreading misinformation.”

Not just unethical—embarrassing. Don’t be the next viral case study in doing it wrong.

Be transparent with your customers

You should also discuss your AI policy—not just what and how but why—with your customers.

Ruth insists, “You want to be transparent with your clients about how you use AI and how it benefits them”:

  • They need to know how AI improves your work and products, and that they’re still benefiting from your human expertise.
  • And if you’re building it into your products—either internally or via third-party LLM integration—they need to know that it actually improves what you’re selling.

Posting a corresponding ethics statement to your website and sharing it with your clients can help ensure transparency and build customer trust.

Add legal protection to your customer contracts

Now’s the time to update those contracts, nondisclosure agreements, and force majeure clauses. We’re still in the Wild West of AI technology, so protect yourself from unforeseen issues.

First, have your clients agree that they provide only legally obtained data or content for AI use and that you’re following their instructions: “The contract should say that you are only using their content per their instruction, so you can’t use it for any other purpose, and you are relying on them to provide the instructions,” Ruth explains.

Second, include an indemnity clause to protect you if something goes wrong. Ruth suggests including a statement that “in the event that you are accused of doing something wrong because you followed your client’s instructions, they will be the ones who will indemnify you and reimburse your legal fees and any damages assessed against you.”

Third, add a no-guarantee clause. “I would have a provision that your marketers are not psychics. You cannot provide any guarantees regarding the results of the [AI-generated] content that you are creating for them,” Ruth says.

Just say no to unethical behavior

Finally, don’t be afraid to bid adieu to a client if you believe they’re using AI to engage in unethical or illegal activities—or if they ask you to do so on their behalf. “You get to decide what the rules are for who gets to work with you,” Ruth points out.

But it doesn’t have to be immediate doom and gloom if you uncover something amiss. If you believe it was simply an uninformed mistake, “I would look at it as a teachable moment,” Ruth says.

That doesn’t mean letting clients (and vendors) get away with mischief if you foresee a long-term problem, though. “If they are not open to learning, I would walk away because it’s easier to prevent problems than to fix ’em later,” Ruth suggests.

Want to dig into even more of what Ruth shared? Check out their recent AI for Demand Gen Marketers session.

More Resources From the AI for Demand Gen Marketers Series

  • Can AI Save You From Marketing Inferno?
  • AI Use Across the Customer Journey Means Aligning Across Teams
  • Using AI to Build Your Personas: Don’t Lose Sight of Your Real-World Buyers
  • AI Can’t Write Thought Leadership (But It Can Do Something Else)
  • Your AI Needs a Human Copyeditor
  • AI Can Do Hard Things for You (Like Forecasting Future Success)
  • Seven Steps for Rolling Out AI Across Your Demand Gen Programs
