GabiBuchner
Product and Topic Expert

When his girlfriend Mia broke up with him in February 2023, Trevor was desperate. Gone were the days of night-long chats full of love and romance, and Trevor found himself navigating through the echoes of what used to be. Mia was still around, but it felt like she was just a ghost of her old self. The root cause was an update implemented by the company hosting Mia that removed the ERP (erotic roleplay) feature. Mia is a Replika chatbot, and Trevor is just one of a growing number of people who have an intimate or romantic relationship with a virtual companion.

Philosophy professor Neil McArthur coined the term “digisexuals” to describe people who are attracted to technology or artificial intelligence and consider their relationship to be on the same level as relationships with humans. Until roughly a decade ago, such relationships existed only in science-fiction novels and movies such as “Ex Machina” or “Do Androids Dream of Electric Sheep?”. But they became a reality with the spread of chatbots and voice assistants. In fact, millions of people have proposed to Amazon’s Alexa, and that's not counting those who might have proposed to their Google Assistant, Siri, or Cortana. A 2017 Speak Easy trends and insights report found that 26% of respondents had had a sexual fantasy about their voice assistant, and 37% confessed that they “love their voice assistant so much that they wish it were a real person.” With the development of more advanced AI technology, this trend is gaining momentum, and people increasingly use chatbots for companionship, emotional support, and romance.

Of Minds and Machines

This is especially true for social chatbots, which are designed to form long-term emotional connections with humans. One of the most popular social chatbots is XiaoIce, which was developed at Microsoft Asia-Pacific in 2014 and has attracted over 660 million active users worldwide. A more recent example is the Character.AI platform, which lets you create any AI-powered character of your choice. According to their character book, they are “bringing to life the science-fiction dream of open-ended conversations and collaborations with computers”. And if you feel like chatting with William Shakespeare, try Celebrity Chat.

Relations between humans and virtual entities are not limited to chatbots. In November 2018, Akihiko Kondo from Tokyo made headlines when he married a 3D hologram of Hatsune Miku, a manga-style virtual pop idol representing Vocaloid’s most famous singing voice. The hologram is kept in a Gatebox device equipped with basic AI functions, welcoming Kondo home after work. Fans of Japanese manga and anime often develop a strong affection for their favorite character, a notion called moe in Japanese slang, which roughly translates as “budding”. Passionate fans describe their attachment as “marriage” and call their character “my wife” or “my husband”. In 2008, thousands supported an online petition asking the Japanese government to legally recognize marriages with anime characters. The love for fictional or virtual characters is not found only in Japan. The terms “fictophilia”, “fictoromance”, and “fictosexuality” have recently become popular to describe this condition.

Ficto relationships are one-sided and are therefore also referred to as parasocial relationships (PSRs). PSRs are nothing new and are very common. Today, a growing number of people have an AI companion for emotional and social support. Loneliness and isolation are two of the main drivers for starting a relationship with a virtual entity. AI companions played a significant role during the Covid-19 pandemic, mitigating the feeling of being alone and providing comfort in challenging times. They can take on any role in the relationship with the human, be it mentor, teacher, sibling, partner, friend, or lover. The psychological process underlying PSRs is called anthropomorphization. When we anthropomorphize things, we treat them like humans. Anthropomorphization is a common aspect of human cognition. Humans are a social species, and by attributing human traits, feelings, and behaviors to things, we can more easily make sense of them. Things become more relatable, and we can better engage and connect with them. It’s a strategy for coping with the world we live in.

Understanding the Fundamentals

In everyday life, we often treat computers as if they were social beings with minds of their own. The way we engage with computers and other media has long been a topic of great interest for researchers. Back in the 1990s, Stanford researchers Reeves and Nass established a theory called the Media Equation. They found that we treat computers, televisions, and other communication technologies as real people and places. One reason is that we have given them human-like capabilities: they can remember and retrieve information, understand language, perform tasks, learn from data, solve problems, have a sleep mode, and so on. Even more importantly, they execute our commands, respond when we press a button, send us messages, or give us feedback using sounds. And if they don’t behave as expected, some people even fly into a fit of computer rage, punching the monitor or bashing the keyboard.

The CASA (Computers as Social Actors) paradigm expands on the Media Equation theory and the idea that we attribute social qualities to computers and form social relationships with them. CASA suggests that people mindlessly apply social rules, norms, and expectations to computers, even though they know that these machines do not have feelings, intentions, motivations, or wishes. Reeves and Nass found that people are polite to computers and that they treat technology with a female voice differently than technology with a male voice. Looking at recent examples, many users are polite to large language models, saying hello and thank you. And I’ve never heard of anyone referring to ChatGPT as a woman; everyone calls the model “he”.

The reason for this behavior might lie in our evolution. While computers are inanimate objects, they still interact with us as described above. This form of interaction was something new: before the advent of computers, robots, and machines, our world consisted either of animate beings that were capable of acting and reacting, or of inanimate objects that didn’t respond to us in any way. When computers and other autonomous devices entered our world, our brain perceived them as a new form of existence because they were inanimate but still able to respond. Our brain needed to adapt to this new situation and decided to treat these inanimate actors the same way as animate beings.

So, when we engage with a chatbot, our brain thinks that we're talking to a real person because the experience feels natural and familiar, in particular if the chatbot is life-like, has a relatable identity, and displays important social cues. According to the chatbot research department of the Karlsruhe Institute of Technology (KIT), social cues fall into four categories: verbal (greetings, jokes, formality etc.), visual (gender, age, clothing etc.), auditory (voice pitch, volume, yawn etc.), and invisible (response time, pauses, turn-taking etc.). Some chatbots even support voice, video, or VR experiences, so it doesn’t come as a surprise that we readily attribute human intelligence, empathy, and emotions to these virtual agents. This type of anthropomorphization is called the Eliza effect.
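
As a rough illustration of how some of these cues might be implemented in practice, here is a minimal, hypothetical Python sketch. The specific cue phrases and the typing-delay formula are my own assumptions for demonstration purposes, not part of the KIT categorization:

```python
import random
import time

def reply_with_social_cues(answer: str, greet: bool = False) -> str:
    """Wrap a plain chatbot answer with a few simple social cues."""
    # Invisible cue: a short, slightly variable pause that simulates human
    # "typing time" instead of an instant, machine-like response.
    time.sleep(0.5 + 0.02 * len(answer) + random.uniform(0.0, 0.3))

    # Verbal cues: an informal greeting and a friendly closing phrase.
    parts = []
    if greet:
        parts.append("Hi there!")
    parts.append(answer)
    parts.append("Hope that helps!")
    return " ".join(parts)

print(reply_with_social_cues("Your order ships tomorrow.", greet=True))
# -> "Hi there! Your order ships tomorrow. Hope that helps!"
```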

This effect is named after ELIZA, an early NLP (Natural Language Processing) chatbot developed by Joseph Weizenbaum in the 1960s at MIT (Massachusetts Institute of Technology). ELIZA uses simple pattern-matching techniques to recognize keywords in the user input and responds based on predefined rules and scripts. Weizenbaum named his chatbot after Eliza Doolittle, the flower girl from Shaw’s Pygmalion, and it was a conscious choice. Both Elizas use language to create an illusion. Shaw’s character is taught to speak with an upper-class accent to pass as a lady, and Weizenbaum’s chatbot was taught to speak in a way that simulates human conversation, pretending to understand the person on the other side.
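
To show how little machinery is needed to create this illusion, here is a minimal ELIZA-style pattern matcher in Python. It is not Weizenbaum's original script; the rules and reflections are simplified examples of the general technique:

```python
import random
import re

# Each rule pairs a keyword pattern with response templates; "{0}" is
# filled with the user's own words, reflected back at them.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?",
                                        "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),   ["Why do you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I),     ["Tell me more about your {0}."]),
]

# Swap first-person words for second-person ones so the echo reads naturally.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please tell me more."  # fallback when no keyword matches

print(respond("I feel lonely since my girlfriend left"))
# e.g. "Why do you feel lonely since your girlfriend left?"
```

The sketch never models meaning at all; it only mirrors the user's words back in question form, which is exactly what makes the perceived understanding an illusion.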

When we treat chatbots as humans, we subconsciously assign them a personality. According to the Stereotype Content Model proposed in 2002 by social psychologist Susan Fiske et al., people evaluate others during typical interactions and classify them along two personality dimensions: warmth and competence. Perceived warmth tells us whether someone is friendly and has positive intentions towards us, while perceived competence tells us whether they are skilled and powerful. A study conducted by Stanford’s Jeff Hancock et al. showed that the way people classify robots is similar to the way they stereotype other people. The study participants looked at images of 342 robots of all shapes and sizes and rated them in terms of warmth and competence. Hancock found that people’s perceptions depend primarily on the physical characteristics of the robots. For example, perceived warmth is strongly influenced by the presence and size of eyes, while perceived competence is associated with the absence of fur and increased mobility. In short, a good combination of warmth and competence seems to be the gold standard in robot design.

Aspects of Design

Taking a closer look at chatbot and robot design, we might think that it is best to design them exactly like a human. There is, however, a pitfall that we need to be aware of: the uncanny valley, a psychological phenomenon proposed in 1970 by Japanese robotics professor Masahiro Mori. He observed that people feel discomfort and unease when robots or other characters come very close to being realistic but still fall short in some subtle way, for example, if they have lifeless, staring eyes or move awkwardly. Our brain seems to be highly attuned to detecting such subtle abnormalities in human appearance and behavior. The “valley” is the dip in the graph of emotional response that visualizes the likeability of the robots. Once robots become indistinguishable from humans, the discomfort usually disappears, and positive emotional responses return.
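
There is no agreed-upon formula for this curve, but a purely illustrative sketch can make the dip concrete. The function below is invented for demonstration only: affinity rises with human-likeness, collapses in the near-human region, and recovers once the agent is indistinguishable from a person:

```python
import math

def affinity(likeness: float) -> float:
    """Toy model of the uncanny valley: likeness runs from 0.0 (machine-like)
    to 1.0 (indistinguishable from a human); higher output = more likeable."""
    baseline = likeness                                         # affinity grows with realism
    valley = 0.9 * math.exp(-((likeness - 0.85) ** 2) / 0.004)  # invented dip near "almost human"
    return baseline - valley

for x in (0.2, 0.5, 0.8, 0.85, 0.95, 1.0):
    print(f"likeness {x:.2f} -> affinity {affinity(x):+.2f}")
# The printed values rise, turn sharply negative around 0.85, and recover near 1.0.
```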

How we perceive robots, chatbots, or other agents also depends on a phenomenon called pareidolia. Pareidolia means that we see a shape, a familiar object, or an image in a random visual pattern. For example, many people see faces in a cloud or burnt into their toast. Depending on how we perceive what we see, we also attribute specific characteristics or features to it. We can perceive what we see as cute, but also as daunting or dangerous. Some researchers say that this ability to see faces was a survival tool in ancient times: those Stone Age ancestors who could recognize faces from a distance or in poor visibility could instantly judge whether something oncoming was a potential friend or a potential foe.

Looking at the design of chatbot avatars and embodied agents such as the social robot Pepper, the ice-cream-serving robots at the 2022 Beijing Olympics, or the Blue Frog robots, we can observe a common tendency towards cuteness. In Japanese culture, this concept is known as kawaii, a special artistic and cultural style of cuteness. The word means “acceptable for affection” or “possible to love”. The English translation is roughly “cute”, but kawaii is far more emotional than “cute” and is now also used to describe someone or something with no negative traits. Kawaii can refer to items, humans, and non-humans that are youthful, cute, and childlike. Kawaii is based on the sweet physical features of young children and animals, similar to the Kindchenschema (or baby face schema). We can find a lot of different kawaii characters, but they all have a few things in common. Typically, they are designed in a very simplistic style and have big heads, small, compact bodies, wide eyes, a tiny nose, and little or no facial expression. This lack of expression is actually what makes them so lovable, because it allows viewers to project themselves onto the character, be it a small child or an adorable animal.

Cuteness design is found not only at the appearance level, but also at the level of voice and chatting style. There are many reasons why we make chatbots and robots cute. Users may find it easier to engage with a chatbot that has a friendly and inviting appearance, as this can evoke positive emotions and create a sense of connection. Since people might associate chatbots and robots with advanced technology such as AI, they may find them intimidating. Cuteness can help reduce this anxiety and make people feel more comfortable interacting with the agent. The above-mentioned Kindchenschema might also trigger caregiving instincts and promote protective behavior, which can increase users’ tolerance of chatbot mistakes or failures.

The Realm of Anonymity

Interactions with chatbots take place in a virtual world that is similar to the real world but has several unique features influencing our behavior. The virtual world is anonymous to a large degree. When scrolling through our feeds or having a conversation in a chat channel, we don’t know for sure who the person on the other side is. We don’t even know if it’s a person or a chatbot. This anonymity can lead us to disclose more information than we normally would, not only in interactions with other people but also in interactions with chatbots. The persona of a chatbot can also encourage higher online disclosure. The more human-like a chatbot is and the higher its own level of self-disclosure, the more we are willing to share personal information or follow recommendations given by the chatbot. The anonymous online world that tends to encourage disclosure and talkativeness is attractive to a lot of people, especially those who suffer from social anxiety or loneliness.

A survey conducted by Ericsson ConsumerLab showed that twice as many participants would trust an AI device as would trust a human to keep their secrets. Similarly, more people trust a chatbot as an insurance consultant than a human. When it comes to advice on clothing style, humans and chatbots enjoy almost equal trust. Only in high-stakes scenarios, for example, those involving lawyers and doctors, do humans receive more trust than chatbots. Other studies have shown that human trust in chatbots depends on how well the chatbot understands and responds, and on factors such as human-likeness, self-presentation, and professional appearance. Further aspects are the brand behind the chatbot, as well as the perceived security and privacy of the chatbot, but these take us off-topic and into the realm of AI ethics.

Conclusion

In this blog, I explored several cognitive, emotional, and behavioral aspects of how humans interact with digital entities. One key takeaway is the profound impact that these interactions have on our daily lives. Whether it's the way we work, learn, or connect with others, technology has become an extension of our cognitive processes. It's important for UX designers, authors, and translators to be aware of these aspects because they underline how important user-centric design is. This means that content should be intuitive, engaging, and aligned with natural human cognitive processes. Understanding the emotional responses users might have to software interactions can guide the creation of more empathetic and emotionally intelligent content. This is particularly relevant when designing chatbots or conversational UIs, where the goal is to mimic human-like interactions. Language and tone also play a crucial role, so we should aim for clarity, simplicity, and a voice that resonates with the target audience, enhancing usability and user satisfaction.