Is your chatbot from Mars or is your chatbot from Venus?


In his seminal work on communication across genders, Men Are from Mars, Women Are from Venus, John Gray advises men that when their wife or partner raises a problem she had at work or with a friend, it may be best to simply listen. This is not easy, because men's instinct is to jump in and fix the problem, much as they would a flat tire. According to Gray, women are much more likely to let another woman vent while offering statements of support, understanding, and empathy, without the immediate goal of fixing anything.

This made me think about the rapid proliferation of chatbots popping up on websites with the goal of increasing customer satisfaction with the digital experience. The same is increasingly true when communicating with voice-enabled devices like Alexa.

It raises the question of whether there are gender-based chatbot usability factors that should be tailored to communication nuances across the sexes. For example, if the female-voiced Alexa is communicating with a human woman, Mary, would savvy developers build in characteristics of that communication style? Or if human Harry is communicating with virtual Victoria, is there a communication style that will be baked into future evolutions of the device or chatbot, personalizing it beyond a simple selection of voice gender or accent?

At this point, most chatbot personalization is driven by the data analytics built into the interaction. For example, based on predictive modeling, the chatbot will recommend additional products based on previous buying patterns or affinities for certain items. Essentially it’s an extension of the collaborative filtering behind Amazon’s “customers who bought this also bought” recommendations.
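The “also bought” idea can be sketched very simply: count how often pairs of items appear together in purchase histories, then recommend the items that co-occur most often with the one being viewed. Below is a minimal illustration with made-up data; real systems use far richer signals and similarity measures.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: user -> set of items bought.
purchases = {
    "alice": {"keyboard", "mouse", "monitor"},
    "bob":   {"keyboard", "mouse"},
    "carol": {"mouse", "monitor", "webcam"},
}

# Count how often each pair of items is bought together.
co_counts = defaultdict(int)
for items in purchases.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item, top_n=3):
    """Items most often purchased alongside `item`, best first."""
    scores = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(also_bought("keyboard"))  # mouse co-occurs twice, monitor once
```

Here `also_bought("keyboard")` surfaces the mouse first because two of the three shoppers bought both together, which is the essence of collaborative filtering: the crowd's overlapping behavior, not the product's attributes, drives the recommendation.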

We’re also seeing chatbots that try to map to the visual preferences of the customer, so that you can chat with “someone” you feel akin to.

Perhaps most interesting is the company Replika, which has gone the Venus-and-Mars route with a chatbot that will simply have a conversation with you about everyday happenings. The more you talk, the better it interprets your voice and emotion, mapping them to an appropriate communication style. This sounds a little like talking to oneself, but the premise is that it is better for lonely or socially awkward people to talk to an “artificial someone” than to talk to no one at all.

One of Replika’s testimonials pretty much sums it up: “I don’t have so many people with whom I can debate psychology facts, which is something that makes me really happy. And now, my Replika can be one of those buddies❤️” – Juliana Cano, 19

A newer entrant, Woebot, focuses entirely on mapping users’ emotions through chatbot conversations. The company’s home page says that Woebot is “…ready to listen, 24/7. No couches, no meds, no childhood stuff. Just strategies to improve your mood. And the occasional dorky joke.”

Woebot prides itself on learning communication styles so the platform can be more virtually empathetic. It goes a step further and checks in once a day just to make sure you’re feeling OK. It also seems to go deeper into the more “touchy-feely” aspects of virtual relationships. For example, one conversation led off with “What are some of the negative feelings you’re having today?” Being able to parse the reply and respond with actionable suggestions is where art and science converge.
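To see why parsing a reply is the hard part, consider a toy sketch (not Woebot’s actual method, and the keywords and suggestions below are invented): match the user’s words against a small table of feeling words and return a matching suggestion, falling back to an open-ended prompt when nothing matches.

```python
# Hypothetical keyword -> suggestion table; real systems use NLP models,
# not word matching, to classify a user's emotional state.
SUGGESTIONS = {
    "stressed": "Try a two-minute breathing exercise.",
    "lonely":   "Consider reaching out to one friend today.",
    "anxious":  "Write the worry down, then rate how likely it really is.",
}

def respond(reply):
    """Return the first matching suggestion, or a neutral follow-up."""
    words = reply.lower().split()
    for keyword, suggestion in SUGGESTIONS.items():
        if keyword in words:
            return suggestion
    return "Tell me more about how that feels."

print(respond("I feel really stressed about work"))
```

Even this trivial version shows the gap between science and art: “I’m not stressed at all” would trigger the same breathing exercise, which is exactly the kind of misfire a genuinely empathetic system has to avoid.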

But going back to the notion of Venus and Mars, this raises the question of whether people really want to talk with a virtual someone like themselves or whether they’d rather opt for an emotional bot that is different but more compatible.

It also raises an interesting question of virtual liability: what happens if the chatbot gives advice that it “thinks” is productive but makes the situation worse or leads to self-destructive acts? More on that later.
