Getting emotional about AI.

You may think artificial intelligence (AI) is “wicked smaht” since it’s able to solve complex problems as easily as Matt Damon’s character in Good Will Hunting can. But what good is smarts without heart or emotion? What if I told you that we’re actually more intelligent than AI – in terms of emotional intelligence?

Emotional intelligence (EI) is key in our relationships with other human beings and is very different from our standard sense of intelligence, also known as intelligence quotient, or IQ. Think of EI as using our hearts, while IQ uses our brains.

With EI, we can:

  • Identify and control our emotions through self-awareness and social skills
  • Empathize, relate, or communicate the way that we’re feeling into words or actions
  • Use our environment or our social lives to influence how we act or feel

Compare that to the characteristics of IQ:

  • Enables reasoning through logic and cognitive function
  • Influenced more by genetics than outside environment
  • Focused on word comprehension, math, and memory

As writers, we use EI in our work to express messages and ideas in ways that speak to readers’ personal feelings. You might think that with the rise of generative AI large language models (LLMs) like ChatGPT, which can generate copy on a whim, there’s no need to write anything yourself anymore, especially when research is involved. But that AI copy is missing something important: the human touch.

Let’s look at one of the most potent forms of human expression of emotion: the novel.


A novel idea.

Humans really love stories. That’s not hyperbole but scientific fact. When we hear a particularly good story, like one from an excellent novel (Cormac McCarthy’s Blood Meridian, for instance), our brains release the chemical oxytocin, boosting our feelings of compassion, trust, and empathy. Did you know that a well-told tale activates the sensory centers of our brains, while hearing plain facts activates only the data-processing centers? As Jonathan Haidt writes in The Righteous Mind, “the human mind is a story processor, not a logic processor.”

Recall one of your favorite writers, who surely has their own recurring stylistic choices and themes. There’s Ernest Hemingway with his minimalist prose. Or John Steinbeck and his explorations of the working class and the West Coast. But I’m going to pick Toni Morrison. She frequently explores topics of race, identity, and family through shifting character perspectives and nonlinear narratives.

Some of Morrison’s novels draw on specific events in history; her novel Beloved is based on the real-life experience of Margaret Garner, an enslaved person who escaped to Ohio in 1856. In Beloved, Morrison creates the fictional character of Sethe in place of Garner. Readers can then empathize with and learn about Garner’s life through Sethe – not just through the objective facts of Garner’s existence but through Morrison’s interpretation of it. Using her emotional intelligence – her own reading of history, fueled by her emotions and perspective – Morrison created Beloved to help us make sense of Garner’s life and the real horrors of slavery through Sethe’s fictional actions, experiences, and dialogue.

Turning the objective (facts) into the subjective (interpretation) to understand data isn’t a new concept. When the human brain encounters new data and research, it frequently changes, reverses, or even creates new information – and makes assumptions, often unconsciously – to try to understand whatever it doesn’t.


A nuanced discussion.

For as smart as AI can appear, it’s important to remember it doesn’t have human organs like a heart or a brain. By gathering information, training itself to find patterns in that data, and combining those patterns with abilities like language processing, AI can imitate human brain behavior when given a specific task. The key word here is “imitate.”

AI can generate its own writing but can fail to detect nuance or context about certain words and subjects the way that humans can. You, as a human writer, know there’s a difference between feeling “upset” and feeling “downtrodden” – both indicate sadness, but each has a nuance that creates a different feeling for a reader. As humans, we truly understand and experience emotions and feelings, and as writers, we use that emotional understanding and nuance to help us express ideas in text.

Adding to nuance and context is tone, which enables an emotional connection with the reader. For all the information AI can gather on a subject and all the copy it can create about it, how AI says what it has to say – not just what it says – is another disparity. AI can misunderstand phrases and fail to express an intended meaning, leading to awkward messaging. Humans know how a piece of writing should sound and what not to say: we wouldn’t text someone heading into leg surgery to say, “Break a leg!”

For as much as AI can give us, its use of language – as Jacob Browning and Yann LeCun explain it – is like staring into a mirror: we get an illusion of depth that can reflect almost anything, but if you try to explore the centimeter-thick depths, you’ll hit your head.

If you’re looking in the mirror and think you need some clarity on your next ad campaign, get in touch. We’ll put our brains and hearts into it.