In the world of marketing, brand anthropomorphism can be a powerful mechanism for connecting with consumers. It’s the tactic of giving brand symbols humanlike characteristics: Think of Tony the Tiger and the Michelin Man. Today some companies are taking brand anthropomorphism to a whole new level with sophisticated AI technologies.
Consider advanced chatbots, like Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana. Thanks to the simplicity of their conversational interfaces, customers may well come to spend more time engaged with a company’s AI than with any other interface, including the firm’s own employees. Over time, Siri, Alexa, and Cortana, along with their individual “personalities,” could become even more famous than their parent companies.
The implications are numerous. As chatbots and other AI technologies increasingly become the face of many brands, those companies will need to employ people with new types of expertise to ensure that the brands continue to reflect the firm’s desired qualities and values. Executives should also be wary of how AI increases the dangers of brand disintermediation. As brands assume more and more AI functionality, businesses must proactively manage any potential ethical and legal concerns.
To study those issues and others, we surveyed how AI is being implemented at more than 1,000 global companies. We found that many of those firms are already using (or have been experimenting with) AI to orchestrate the brand experience across a number of business processes. These include customer service (39% of companies), marketing and sales (35%), and even the managing of noncustomer external relationships (28%) where brand power is key, such as in attracting top talent into the organization’s recruiting pipeline. Studying those deployments led to several insights around three new types of decisions executives face at the intersection of technology, personality, and strategy.

Beyond Chatbots
The first is that chatbots are just one type of AI technology being used to establish or reinforce company brands. In fact, there’s a spectrum of intelligent personalities and “form factors” (such as screens, voices, physical “boxes” like Amazon Echo, text, and so on) that companies are using to deliver a brand experience. Cognitive agents like IPsoft’s Amelia are incarnated as virtual people on a user’s computer screen, and future advances may deploy hologram technology to make those agents even more lifelike. In Hong Kong, Hanson Robotics is developing robots with human features. Those robots, which can see and respond to facial expressions and are equipped with natural language processing, could become the literal front-office brand ambassadors for companies.
Whatever the form factor, companies must skillfully manage any future shifts in customer interactions. Remember that each interaction gives a customer an opportunity to judge the performance of the AI system, and by extension the brand and the company behind it. In the same way that people can be delighted or angered by an interaction with a customer service representative, they can also form a lasting impression of a chatbot, physical robot, or other AI system. What’s more, interactions with AI can be more far-reaching than any one-off conversation with a salesperson or customer service rep: A single bot, incarnated on myriad devices, can theoretically interact with tens of thousands of people at once. Because of that, good and bad impressions may have long-term, global reach.

How to Properly Rear Your Brand Ambassador
Executives need to make judicious decisions about their use of an anthropomorphic brand ambassador — its name, voice, personality, and so forth. IBM’s Watson converses in a male voice; Cortana and Alexa use female ones. Siri and the nameless AI of Google Home can use either. And what qualities will best represent the values of the organization? The personalities of all these assistants seem helpful, like a nerdy friend, ready with lots of information or a G-rated joke, yet still a bit stilted — perhaps because they take everything we say so literally. It can also be hard to believe they’re as remorseful as they say when they can’t answer our questions or understand our commands.
And then there are important differences. Alexa comes across as confident and considerate — she doesn’t repeat profanity and doesn’t even use slang very often. Siri, on the other hand, is sassy: Her personality is smart and witty with a slight edge, and she is prone to cheeky responses. When asked about the meaning of life, she might respond, “I find it odd that you would ask this of an inanimate object.” Siri can also become jealous, especially when users confuse her with another voice-search system. When someone makes that mistake, her retort is something along the lines of, “Why don’t you ask Alexa to make that call for you?” All this is very much in keeping with the Apple brand, which has long espoused individuality over conformity. Indeed, Siri seems more persona than product.
It might seem flippant to suggest that AI systems will need to develop specific personalities, but consider how a technology like Siri or Alexa has already become so closely associated with the Apple and Amazon brands. It’s no surprise, then, that “personality training” is becoming such a serious business, and the people who perform that task can come from a variety of backgrounds. Take, for example, Robyn Ewing, who used to develop and pitch TV scripts to film studios in Hollywood. Now Ewing is deploying her creative talents to help engineers develop the personality of Sophie, an AI program in the health care field. As one of its tasks, Sophie reminds consumers to take their medications and regularly checks in with them to see how they’re feeling. At Microsoft, a team that includes a poet, a novelist, and a playwright is responsible for helping to develop Cortana’s personality. In other words, executives may need to think about how best to attract and retain types of talent they never needed before.
In the future, companies might even incorporate sympathy into their AI systems. That may sound far-fetched, but the startup Koko, which sprang from the MIT Media Lab, has developed a machine learning system that can help chatbots like Siri and Alexa respond with sympathy and depth to people’s questions. Humans are now training the Koko algorithm to respond more sympathetically to people who might, for example, be frustrated that their luggage has been lost, that the product they purchased is defective, or that their cable service keeps going on the blink. The goal is for the system to be able to talk people through a problem or difficult situation using the appropriate amount of empathy, compassion, and maybe even humor.

The Curious Incident of Brand Disintermediation
As AI systems increasingly become the anthropomorphic faces of many brands, those brands will evolve from one-way interactions (brand to consumer) into two-way relationships. Furthermore, as those systems become more capable, they could lead to brand disintermediation. Alexa, for example, can already orchestrate a number of interactions on behalf of other companies, allowing people to order pizzas from Domino’s, check their Capital One bank balances, and get status updates on Delta flights. In the past, companies like Domino’s, Capital One, and Delta owned the customer relationship end to end. Now Amazon owns part of that information exchange, controls a fundamental interface between those companies and their customers, and can use the resulting data to improve its own services. That may be one reason why Capital One, which initially built its capability on top of Alexa, recently developed and introduced its own chatbot, Eno.
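The mechanics of that disintermediation are worth making concrete. A voice-assistant “skill” is essentially an intent handler that a third-party brand registers with the platform: the customer speaks to the assistant, the platform parses the utterance into an intent, and only then is the brand’s backend invoked. The sketch below is a toy illustration in plain Python, with hypothetical class, intent, and handler names (it is not Amazon’s actual Alexa Skills Kit API); the point it demonstrates is that the platform owner sees, and can retain, every request before the brand does.

```python
# Toy sketch of platform-mediated intent routing (hypothetical names;
# not the real Alexa Skills Kit). The platform parses the utterance
# and logs the interaction BEFORE the brand's handler ever runs.

from typing import Callable, Dict

class Platform:
    """Stands in for the assistant vendor (e.g., Amazon with Alexa)."""
    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[dict], str]] = {}
        self.interaction_log: list = []  # data the platform keeps for itself

    def register_skill(self, intent: str, handler: Callable[[dict], str]) -> None:
        self.handlers[intent] = handler

    def handle_utterance(self, utterance: str) -> str:
        # Crude keyword "intent parsing," for illustration only.
        if "pizza" in utterance:
            intent, slots = "OrderPizza", {"size": "large"}
        elif "balance" in utterance:
            intent, slots = "CheckBalance", {}
        else:
            return "Sorry, I can't help with that."
        # The platform records the interaction before the brand sees it.
        self.interaction_log.append((intent, utterance))
        return self.handlers[intent](slots)

# The brand's backend only ever receives a parsed intent; it never
# hears the customer directly.
def order_pizza(slots: dict) -> str:
    return f"Ordered a {slots['size']} pizza."

def check_balance(slots: dict) -> str:
    return "Your balance is $1,024."

platform = Platform()
platform.register_skill("OrderPizza", order_pizza)
platform.register_skill("CheckBalance", check_balance)

print(platform.handle_utterance("order me a pizza"))
print(len(platform.interaction_log))  # the platform retains the data
```

In this framing, Capital One’s decision to build its own chatbot, Eno, amounts to reclaiming the utterance-handling layer, and the customer data that flows through it, for itself.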
And then there are ethical challenges. Amazon, for example, recently added a camera to its Alexa/Echo platform so that the company can use its AI technology to offer personality-driven fashion advice. But what are the ethical implications of collecting photos of barely dressed consumers? And as these AI systems become increasingly adept at communicating, they could come to act as trusted friends, ready with sage or calming advice. Have companies adequately considered how such applications should respond to deeply personal questions? What if a person admits to suicidal feelings or recent physical abuse? A 2016 JAMA Internal Medicine study examined how well Siri, Cortana, Google Now, and Samsung’s S Voice responded to various prompts dealing with mental or physical health issues. The researchers found that the bots were inconsistent and incomplete in their ability to recognize a crisis, respond with respectful language, and refer the person to a helpline or other health resource. For companies implementing such AI systems, an in-house ethicist could help navigate the complex moral issues.
As with many innovations, the technology often gets ahead of businesses’ ability to address the various ethical, societal, and legal concerns involved. With AI, those concerns become all the more pressing as these systems increasingly become the face of many company brands. As Amazon CEO Jeff Bezos once remarked, “Your brand is what other people say about you when you’re not in the room.” And that presumably holds true even if your AI system happens to be listening.