4 AI Trends Transforming Communications
Are communications professionals ready for the onslaught of change brought on by artificially intelligent machines, tools and apps?
Before you respond, let me ask you another question: Were you ready for social media?
Unfortunately, the answers may be the same.
According to the Artificial Intelligence (AI) and Big Data Readiness Report, written by Anne Gregory and Swati Vermani and conducted by the Chartered Institute of Public Relations’ (CIPR) AIinPR panel, 43% of practitioners have little confidence in their knowledge or understanding of AI.
And only 13.9% feel comfortable with AI tools.
Yet, whether it’s monitoring social feeds, gathering reports from Google Analytics or even conducting a search, PR professionals are using artificial intelligence multiple times each day.
AI isn’t coming; it’s here. If we don’t pay attention and educate ourselves, communicators risk being relegated to the role of town criers in the age of Web3.
Here are four AI trends that are about to transform the way we work, communicate, discover information and build relationships:
- Human-sounding voice AI
- Deepfake videos and photos
- Natural language generation
- Relational artificial intelligence
Let’s examine each.
1. Human-Sounding Voice AI
In 2018, Google announced the launch of its Duplex voice assistant, which makes calls to book restaurant reservations and hair appointments. It sounds like a person, complete with a natural cadence and speech disfluencies, those “ums” and “ahs” we all use when we talk. Microsoft’s Custom Voice Studio lets organizations develop a synthetic brand voice that’s almost indistinguishable from a human one. Companies can use their proprietary voice in marketing, advertising, videos and customer service.
Unlike the more robot-like digital assistants we’re used to, machines that sound like us bring a sense of familiarity that could reshape the tone of an interaction. How will your audience respond if they know they’re talking to a machine?
2. Deepfake Videos and Photos
A recent study from researchers at UC Berkeley and Lancaster University revealed that about half of respondents could not tell who was real and who was fake when shown images of actual people alongside faces generated by AI. More troubling was the finding that the faces rated least trustworthy were human and those rated most trustworthy were fake. As machines get better at creating images and videos of people who look and act real, it will become harder to tell whether you’re dealing with a person or a machine. But will that matter to your customers if the interaction solves a need?
3. Natural Language Generation
With natural language generation (NLG), an AI agent can become your writing partner, creating blog posts, headlines, social media ads and even press releases. Just add a prompt into an application like Copysmith and you’ll be treated to an array of options in a couple of minutes or less.
Some are quite decent, because NLG models, the comedy impressionists of the AI world, are good at stringing together sentences that sound like us. But many outputs will be riddled with errors, misinformation, hateful or racist language, or outright lies. That means sharp editing skills and a human eye remain essential. If you don’t pay attention, your AI output could spark a crisis. And if your organization no longer needs to hire as many junior writers, how will that affect your team’s development, workflow and culture?
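If your team wants to experiment with NLG beyond a point-and-click app, the workflow looks much the same in code: send a prompt, collect the machine’s drafts, then put a human editor between the output and anything you publish. Here’s a minimal sketch in Python that assumes an OpenAI-style chat-completion API as a stand-in; the client, model name and prompt are illustrative placeholders, not Copysmith’s actual interface:

```python
# A rough sketch of an NLG drafting workflow: generate options, then edit by hand.
# Uses the OpenAI Python client as a stand-in for tools like Copysmith; the model
# name and the prompt below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

prompt = (
    "Write three short headline options for a blog post about how "
    "PR teams can prepare for artificial intelligence."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

drafts = response.choices[0].message.content
print(drafts)

# The machine's drafts are raw material, not finished copy: a human editor still
# needs to check facts, tone and bias before anything goes out the door.
```

However the drafts are generated, that final editing step is the part you can’t automate away.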
4. Relational AI
This brings us to relational AI, where AI agents or chatbots become the new intermediaries between your company and the people you’re trying to reach. The human-AI agent relationship was the subject of my master’s thesis. I interviewed computer scientists, journalists, digital communicators, researchers, academics and entrepreneurs, and asked them to share their perspective on what an ideal human-AI agent relationship might be. Without prompting, each mentioned the movie “Her,” in which Scarlett Johansson plays the enticing voice of the AI operating system.
One subject said, “Smartphones will likely be the gateway device, because we’re already in a relationship with them and comfortable using earbuds and talking.” But how will that change the way you engage your customers? Will the conversations be ethical and two-way? If the AI remembers past interactions, will that engender a sense of friendship? How will that affect the way you communicate and ultimately build trust?
Which Brings Us to the Metaverse…
If you’re a gamer, you probably have a good idea of what the metaverse will be. If not, you may be confused by what you read and hear: a mix of jargon-laden potential and extreme hype. The vision for the metaverse is an immersive world where it feels like you’re present rather than simply staring at a screen. You might visit the metaverse to be entertained, learn, socialize or work.
Now, imagine your team includes an embodied chatbot that looks and sounds human, collaborating with you while collecting your data and learning from your behavior. What if the machine is your manager? Or what if you have to manage a machine? Consider the privacy issues involved with so much machine surveillance.
A Strategic Role for PR
This future isn’t as far off as you think, and communications professionals need to prepare now.
You can begin by learning the basics of AI, what it is and what it does. That’s not easy, since most of us don’t have a mathematical or computer science background. Try reading “Naked Statistics” by Charles Wheelan for an easy-to-grasp overview of the main statistical models underpinning narrow AI. Another good resource is Janelle Shane’s “You Look Like a Thing and I Love You,” which uses fun, relatable examples to explain how narrow AI algorithms make predictions and why they often go off the rails.
You should also adopt an ethical framework to assess how artificial intelligence is being integrated across the enterprise and what the consequences might be. The CIPR AIinPR panel has prepared a free guide to ethical considerations for AI and PR that offers a foundational approach and scenarios to consider. Along with that, you need to understand the biases involved and how to reduce and manage them, including biases in the algorithms themselves, in the people who developed the models, in the data you collect and in your own cognition.
Next, you need to develop a protocol to assess the tools you may be using, what type of data they collect and how safe they are. You also need to test them in a systematic way to determine the opportunities, risks and challenges.
Of course, there’s much more to consider and many other questions to pose. The key is to proactively read, learn and think expansively about AI and its implications. Only then can you begin to reimagine and reinvent your role as strategic counselor/relationship builder in the AI future.
See you in the metaverse!