
Algorithmic psychiatry in an age of ChatGPT

Developers’ somewhat casual approach to next-gen chatbots might have unsettling consequences for mental health and algorithmic interactions.
By admin
May 23, 2024, 1:45 PM

The nonchalance with which developers treat the provocative output of beta versions of next-generation chatbots is horrifying. Not surprisingly, they say, “This is a good thing because it gives us a chance to tweak the emotional aspects of the platform.” These errors are known in the coding business as “hallucinations” — things the chatbot is simply making up based on complicated digital triggers.

Unfortunately, the bots might now be smarter than the people who coded them. Or, in many cases, they may simply be better at dispensing some really dumb information.

In a recent test, a bot confused me with golf champion Ben Crenshaw. When I asked, “Tell me about Frank Cutitta,” it created a wonderful thumbnail bio detailing my $6 million in winnings from Masters and US Open victories!

Something similar happened more recently when Tesla was forced to issue a “remote recall” of its “full self-driving” algorithm due to programming flaws. As with the chatbots, the automotive AI developers will tell you that a car failing to recognize a person of color crossing the street has to happen in order to improve the product. Insurance companies just love that!

The developers often respond with, “We’ll just fix it by re-coding.” Even more concerning is that some chatbot companies prevent users from exploring paths that might lead the bot to say things like, “Your wife doesn’t love you, but I do,” or “I want to be human and not part of Bing anymore.” The solution seems to be emotionally dumbing down the product, making it like Google on low-level steroids.

While they desperately want these platforms to be sentient, they don’t want them to have the empathic characteristics of real humans.  

Those of us who have worked with the programming community know that they are a very special breed. They think differently…in a good way. But we are entering a new world of computer science that I feel should be referred to as “algorithmic psychiatry.”

A “therapy bot” of sorts. Or, more likely, the dawn of the age of algorithmic psychotherapy.

In this setting, the patient is the code and its algorithmic output. I would strongly argue that the de-programming (literally) will be much more difficult than writing the original source code, since the system is dealing not only with complicated, exponentially compounding interactions with the outside world, but also with even more complicated interactions with personalities and “hallucinations” WITHIN the platform itself. This now goes beyond the planned boundaries of GPT-4’s programming, and even the founders of these platforms are issuing stern warnings.

In other words, Sydney, the Bing chatbot that suggested a user leave his wife, is having “relationships” with other non-human algorithmic personas. While programmers might argue otherwise, no one really knows when this is happening, because it takes place in a digital world that is invisible to us. From what we’ve seen this week, Sydney may have been jilted by “Chip,” and that algorithmic emotion might affect the advice it gives me about a relationship.

With the crisis in mental health around the world, much closer attention will need to be given to compassionate and empathic technologies that will be dispensing advice to emotionally fragile people.

Someone raised the point to me that no healthcare provider is going to deploy these next-generation bot technologies until they are fully baked. That’s true, but there’s only one problem.

My interactions with ChatGPT — and with Google, for that matter — occur entirely outside a formal healthcare enterprise. A recent study by eligibility.com found that 89 percent of patients nationwide Google their health symptoms before going to their doctor!

Given that stat, and in a world where health illiteracy in digital settings is at pandemic levels, can we really expect these new chat technologies not to exacerbate the problem?

This will not go away soon. We are clearly at what Gartner calls the “Peak of Inflated Expectations” with these new algorithms, and we have experienced only moments of the Trough of Disillusionment.

