Navigating the Ethical Landscape of AI in Communications and Marketing

As communications and marketing professionals, we’ve watched with fascination as artificial intelligence (AI) has transformed our industry. From chatbots to content generation tools, AI is undeniably powerful in helping streamline processes, boost productivity, enhance creativity, and improve audience engagement. 

The marketplace has been flooded with tools and apps promising MAGICAL results (the symbol most companies use to denote AI is magic sparkles ✨). In many respects, however, using AI is more like shaking a Magic 8 Ball than what you see in the demos.

The good news is that humans are still very much needed, at least for now. And as we’ve experimented with these technologies, we’ve also grappled with the complex ethical questions and challenges they raise for communicators and marketers.

It has become clear that many technology vendors treat ethics as an afterthought as they iterate and expand their products’ features. One glaring example is OpenAI’s new voice assistant sounding eerily similar to actress Scarlett Johansson, even after she declined the company’s offer to voice it.

Indeed, much of the reference material fed into the large language models (LLMs) that AI platforms are built on was used without the permission of the content owners and creators. While those instances are not the primary subject of this post, they leave those of us who use these platforms with a sense of caveat emptor.

With all that in mind, here are several concerns organizations should weigh when considering the ethical use of AI:

  • BIAS & INACCURACIES: AI algorithms are only as good as the data they are trained on, and they can perpetuate biases and inaccuracies if not carefully designed and monitored. AI tools scrape the Internet for content and can’t always distinguish accurate information from false or outdated material. In some cases, AI has shown a tendency to “hallucinate,” the cute term for simply making things up, and this occurs in some categories more than others. For instance, a lawyer was recently sanctioned after using ChatGPT to draft a legal brief that cited cases that don’t exist. You should never simply copy and paste an AI’s first draft, because the generated content could be biased, inaccurate, or misaligned with your brand’s voice and messaging goals.
    • TL;DR: Always proof and verify AI’s outputs.

  • LIMITED CREATIVITY & EMOTIONAL INTELLIGENCE: While AI tools can assist with data analysis, content optimization, and basic design tasks, they cannot replace a human’s strategic thinking, intuition, empathy, and contextual understanding. The quality of the output also depends heavily on the quality of the input: learning effective prompting techniques makes a substantial difference in the generated content’s relevance, coherence, and overall usefulness. We recommend a deep dive into every AI tool’s features, because some display bias when asked about emotional or social issues, while others are tuned for humor.
    • TL;DR: Learn the differences and features of AI tools, and don’t cut corners on the ideation and creativity you put into your prompts.

  • TRANSPARENCY: It is essential to be transparent about using AI and not mislead people into thinking they are interacting with a human (e.g., on a chatbot) or engaging with wholly original content (e.g., art created in Midjourney). Failing to disclose the use of AI can erode trust and damage your reputation. For example, we use Grammarly and Claude AI to proofread and lightly edit our blogs, and we disclose that in each post. Lack of disclosure also risks an “uncanny valley” response, in which the receiver (a reader or viewer, for example) detects the artificiality of the content and feels an instant sense of revulsion.
    • TL;DR: Always disclose the use of AI tools.

  • DATA PRIVACY & SECURITY: AI tools often rely on vast amounts of data to learn and improve, and that data can include sensitive personal information. Communicators and marketers must ensure that they collect, use, and store data ethically and securely, and that they are transparent about how it is used. Organizations are broadly discouraged from uploading private and/or proprietary data to AI applications.
    • TL;DR: Do your homework, and read the fine print to protect your audiences’ data, privacy, and security.

  • ACCOUNTABILITY & RESPONSIBILITY: If an AI tool generates content that is ultimately inaccurate, offensive, or harmful, who is responsible? Is it the creator of the AI tool, the organization using the tool, or the individual who approved the content? For now, the buck stops with humans, and we must manage AI tools (rather than rely on them) to ensure we’re serving our audiences accordingly.
    • TL;DR: Good intentions are not enough; humans are ultimately responsible for the impact of AI tools on their audiences.

The AI revolution is exciting, and we must stay alert and adaptable. Our north star must always be doing right by our audiences. If you need help choosing the right communications and marketing AI tools for your team or project, we’d love to strategize ethical solutions and train your team!

(AI disclaimer: proofed by Grammarly and lightly edited using Claude)
