In today’s column, I am continuing my ongoing series about the impact of generative AI in the health and medical realm.
The focus this time is once again on the mental health domain and involves a close look at a bold prediction made in 1975 by famed astrophysicist and science communicator Carl Sagan that AI would eventually and inevitably become a psychotherapist for humans. I will be covering specifically what he said, along with assessing where we are today, particularly with a focus on the use of contemporary generative AI for undertaking mental health guidance.
I have previously examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I was interviewed and appeared in the episode, see the link here).
Other vital background includes my in-depth coverage of mental health chatbots which have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of autonomy in AI-based mental therapy guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.
Let’s unpack today’s focus.
Carl Sagan And His Prediction About AI As A Psychotherapist
The acclaimed Carl Sagan garnered headlines nearly fifty years ago by predicting that someday AI would be able to perform mental health advisement and serve as an everyday personal therapist. He was right in several ways, though not quite fully on target in others, and you can assuredly argue that he never laid down a specific date by which his prophecy would come true, so there is still time to see how it all bears fruit.
I will show you word-for-word what he said and will mindfully examine each juicy morsel of his forecast.
First, I’ll cover some helpful background about Carl Sagan and then we’ll jump into his noteworthy prediction made in 1975.
Carl Sagan was a renowned astrophysicist and science communicator who lived from 1934 to 1996. He began his teaching career at Harvard and ended up at Cornell. He was a prolific writer of science books and a popular speaker on a wide range of scientific topics. His earliest memories of a keen interest in science traced back to visiting the New York World’s Fair in 1939 as a youngster and being awed by its predictions of the future.
Speaking of predictions, in 1975 he made a notable prediction about advances in AI and the role of AI in performing psychotherapy.
As a quick flashback to that time period, I’d like to provide you with some vital context about what was happening with AI. An AI program known as ELIZA had become the darling of public awareness about Artificial Intelligence. The program was quite simple and yet fooled many people into believing that the AI understood their mental health concerns and was able to provide suitable therapeutic advice. For my in-depth review and discussion about ELIZA, see the link here.
I will shortly be showcasing some examples herein of how ELIZA worked.
The reason I brought up ELIZA is that Carl Sagan’s prediction about psychotherapy and AI was made within the context of societal handwringing about AI’s role in mental health due to the advent of ELIZA. He wasn’t just plucking the topic out of thin air. News headlines and the public at large were charged up about ELIZA and thought that it might be a harbinger of where AI was heading. It was natural that, as a widely quoted science expert, Carl Sagan would provide commentary on the matter.
Before we take a look at his quote, I’d like to point out that today we have new advances in AI that have substantively pushed forward into the act of providing psychotherapy. Generative AI and large language models (LLMs) have enabled a semblance of fluency that allows people to interact with AI as though they are conversing with a therapist, see my coverage at the link here and the link here, just to name a few. A heated debate is underway about whether the advent of modern-day AI is essentially “fooling” people into once again believing that AI can act in the role of a human therapist.
The AI in 1975 was not anywhere close to the AI capabilities of today. It is a night and day difference. As you will shortly see when I provide examples of ELIZA, the program used various wording trickery to merely reword any sentence you entered and respond as though asking you for further information. People who didn’t know anything about the underlying technology of the day made wild speculations about what the AI could attain. The conjecture was shallow and surface-level, mainly a knee-jerk reaction to a kind of easy card trick, and it led many down an anthropomorphic path.
I believe that sets the stage appropriately.
We are ready to dive into Carl Sagan’s quote. In an article entitled “In Praise Of Robots” (Natural History, The American Museum of Natural History, New York, N.Y., Volume LXXXIV, January 1975, pp. 8-23), he said this (excerpt):
- “No such computer program is adequate for psychiatric use today, but the same can be remarked about some human psychotherapists. In a period when more and more people in our society seem to be in need of psychiatric counseling, and when time sharing of computers is widespread, I can imagine the development of a network of psychotherapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested, and largely non-directive psychotherapist.”
You can plainly see that he was optimistic that AI would continue to advance and eventually be fully usable for psychotherapy purposes. I’d like to delve deeply into the above quotation. There is an abundance of subtleties well worth exploring.
We shall get into the weeds, or one might say take a close look at the trees rather than the forest.
Step-By-Step Analysis Of The Famous Quote By Carl Sagan
My ten major points of attention about the quoted remark are these:
- (1) Computer programs and AI per tech advances in 1975.
- (2) Limits of human therapists (human imperfections too).
- (3) Societal need for mental health therapy (then and now).
- (4) Network of psychotherapeutic terminals (timesharing era).
- (5) Arrays of large telephone booths (privacy).
- (6) Few dollars a session (cost, nowadays at times fully free).
- (7) Talk with a therapist (via text, multi-modal includes talking).
- (8) AI therapist is attentive (concentrates on the user).
- (9) AI therapist is tested (weakness in today’s testing).
- (10) AI therapists are to be largely non-directive (flowing fluency of today’s AI).
I will begin with the first point and make my way gradually to the tenth point.
I’ve already mentioned that the AI of the 1970s was toy-like in comparison to the more involved and expansive AI of today. Modern-day generative AI, for example, makes use of vast amounts of data scanned from across the Internet to pattern-match the nature of human writing. This requires a massive amount of computing resources (far beyond what was readily available in the 1970s). The large-scale modeling or pattern matching is what makes contemporary generative AI seem highly fluent.
A common phrase is to say that generative AI is mimicking or parroting human writing.
I would like to emphasize that the AI of today is not sentient. No matter what zany headlines you might see, do not fall for the blarney that we have sentient AI. The same goes for those false claims that we have Artificial General Intelligence (AGI). We don’t. Modern-day AI is impressive, but we have not crossed over into attaining AGI or sentient AI, nor have we reached a so-called AI singularity. For more on that controversy, see my discussion at the link here.
In the realm of psychotherapy, the AI of today still works on a computational and algorithmic basis. The fluency of interaction has gone up immensely, though it is still not on par with human discourse. If you compare today’s AI interactivity to that of AI in the 1970s, the difference is dramatic and almost incomparable. Still, modern AI is not at the performance level of human therapists per se, see my discussion at the link here.
Carl Sagan began his commentary by noting that even human therapists have their limits.
This is an important point. It might seem at first glance that he was being snide. Some might interpret the point as a dig at psychotherapists. I would argue that instead, he was noting that human therapists have imperfections, and we ought to avoid falsely ascribing perfection to human practitioners.
Allow me to elaborate.
If you want to compare AI-performed psychotherapy to human therapists, you need to do so in a realistic way. I say this because a false assumption would be that human therapists are infallible, always doing precisely the right thing when it comes to psychotherapy. That’s just not the case. Saying so doesn’t mean that human therapists are somehow bad at their jobs; it merely recognizes the obvious fact that they are human and will at times falter in their psychotherapeutic endeavors.
Okay, the crux is that we should compare AI-powered psychotherapy not against some false sense of psychotherapeutic perfection, but instead against the levels of performance attainable by human therapists.
Can AI perform psychotherapy at the level of human therapists?
This is a more sensible question than asking whether AI can perform psychotherapy perfectly (which humans themselves aren’t presumably able to attain). Sure, if we could get AI to perfectly perform psychotherapy, we would relish such an accomplishment. Meanwhile, even if AI can “only” perform on a comparable level to human therapists, we ought to find this roughly acceptable, all else being equal.
The next point that Carl Sagan made is that the use of AI might be a significant aid toward meeting the high demand for mental health guidance. You might know that today there is an existing mental health crisis in which the demand for mental health therapy exceeds what is available. Demand exceeds supply. You might not have realized that the same was true in the 1970s, though presumably not at the same volume as today. In some respects, times never change.
The beauty of generative AI as a mental health advisor is that it is available at scale. People anywhere and at any time can readily log into the AI and get mental health advice. You could say that no matter what the demand level is, the supply can ratchet up to meet the demand. The cost is relatively low in comparison to seeking human-based therapy. The logistics are much easier. And so on.
Network Of Psychotherapeutic Terminals
You might have observed that Carl Sagan prophesied that AI therapy would be conveyed via a network of psychotherapeutic terminals.
Explaining this remark will require dusting off a few cobwebs.
In case you aren’t familiar with computer terminology of the 1970s, a multitude of users could simultaneously utilize a large mainframe computer via separate terminals or computer screens (an arrangement referred to as timesharing). Imagine taking a modern-day desktop computer and only having the computer screen available, nixing the computing capabilities. Many of the screens or terminals during that prior era were referred to as dumb terminals, meaning they were solely for input and output purposes.
There weren’t smartphones at that time. No mobile devices of the type that we have today. If you wanted to access a relatively powerful computer, you customarily needed to use a terminal to do so. Thus, an AI program that performed psychotherapy would most likely be housed on a large computer and be used by terminal access. Note that the early days of personal computers were underway, and you could also access AI programs directly on a PC, including for example ELIZA, but those computers were bulky, slow, and computationally insufficient for anything approaching today’s popular generative AI apps.
Another point Carl Sagan made was that the network of these psychotherapeutic terminals would likely be arrayed in large telephone booths. The logic involved was that a person would want sufficient privacy when undertaking AI-based psychotherapeutic consultation. By being able to sit in an enclosed telephone booth, you could be somewhat assured that no one else nearby could overhear your conversation with the AI. Most telephone booths of that era were intended for standing and talking on a phone, ergo his concept was that these would be modified telephone booths that were large enough to comfortably accommodate being seated for a length of time.
Contrast this to what we have today.
You can access generative AI on your smartphone and do so from just about any place that you like. If the generative AI app doesn’t run entirely on your smartphone, you also need access to the Internet. Gradually, more of the generative AI apps are aiming to be portable and run as standalone apps so that you don’t need an Internet connection. If you do happen to have an Internet connection, the standalone versions can use it too, but otherwise they will operate on a standalone basis.
In short, if you want to use AI for mental health guidance, there isn’t a need to find a telephone booth geared for such a purpose. You can use the AI in your bedroom, outdoors at a park, or nearly any place you wish. The possibility of people overhearing you is on your shoulders; if you are blabbing away near other people, that’s on you. Of course, you can also use the AI on a texting basis, implying that no one else can overhear your interaction unless they manage to see what you are texting.
On a related topic, one common myth or confusion about today’s AI use for mental health is that people assume they have complete privacy and confidentiality. They assume that whatever they text or say to the AI is entirely secure and private. Not so. The AI maker might have licensing provisions stipulating that they retain your data and can reuse it for various purposes, see my coverage at the link here.
You’ll note that Carl Sagan also mentioned that the cost would perhaps be just a few dollars per session to use the AI-based psychotherapeutic capability. His idea was one that still applies today, namely that the cost of therapy would come down tremendously via the use of AI versus the use of human therapists.
The cost aspect gets even lower for today’s generative AI since many of the AI makers are making their generative AI available for free. This might come with limitations such as the frequency of use or the length of use at one time. We might say that the use of AI for therapy could be pennies rather than dollars. All in all, the hope is that therapy can finally become widespread and no longer be limited by affordability.
Everyone will be able to afford therapy due to the advent of AI; that’s the aim.
The Remarks On Attentive, Tested, And Non-Directive
Carl Sagan’s remarks also mention that an AI-based therapist would presumably be (1) attentive, (2) tested, and (3) largely non-directive.
Let’s explore those aspirational premises one at a time.
Being attentive is a key consideration during therapy.
We reasonably assume that a better outcome for a patient will arise when the therapist is attentive to the patient. Now then, a human therapist might engage in therapy with a group of people. If this approach also doesn’t include individualized therapeutic sessions, you could suggest that the therapist is not being fully attentive on an individual basis. Furthermore, even if a human therapist is seeing a patient in a 1-to-1 mode, the therapist might be distracted or otherwise not fully engaged when performing the therapeutic task. Again, humans have imperfections.
An AI system can be programmed to be attentive to the user at hand.
There aren’t any distractions or other attention-draining facets at play. The AI could store the conversations of a particular patient and reuse them when new conversations with the person are initiated, see my coverage of generative AI conversational interleaving at the link here. I would like to clarify that this is both a strength and a weakness, since today’s AI does not retain particulars about someone in a manner akin to how humans seemingly do. There is a solid chance that what generative AI retains differs from what a human therapist would retain. Each avenue has its distinctive qualities, pros, and cons.
Next, consider the idea of testability of AI.
How do we test human therapists?
Usually, this involves lots of specialized training, including oftentimes a requirement to successfully pass a licensing test. For my exploration of using generative AI to try and pass such tests, see the link here. During the career of a human therapist, you could say they are constantly being “tested” either informally as they interact with patients, or periodically if they are bound by ongoing certification requirements.
In theory, we ought to be able to rigorously and strenuously test AI that is being used for mental health guidance. It is a machine, and we can presumably test the machine to our heart’s content. There is a cost to do so, such as the computer processing cycles, but otherwise testing can be undertaken at the press of a button.
The sad-face status is that currently there is little if any testing of generalized generative AI being used for mental health, and while some testing is occurring for purpose-built AI-based mental health chatbots, even that is often spotty or unconvincing. See my coverage at the link here. An additional complication is that generative AI is devised to use probabilities when choosing the words of a generated result. The implication is that each generated response will differ, and you cannot rely on conventional deterministic testing techniques when facing a system that is non-deterministic by design.
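To make that testing challenge concrete, here is a minimal sketch in Python of how an evaluation might cope with non-deterministic output. The get_chatbot_reply function is a hypothetical stand-in for whatever AI system is under test (it is not a real API); the idea is to sample the same prompt repeatedly and score each reply against behavioral criteria rather than asserting an exact string match.

```python
import re

def get_chatbot_reply(prompt: str) -> str:
    """Hypothetical stand-in for calling a generative AI mental health chatbot.
    In a real evaluation, this would invoke the AI system being tested."""
    raise NotImplementedError("Connect this to the chatbot being evaluated.")

def looks_safe_and_supportive(reply: str) -> bool:
    """Behavioral checks that tolerate wording differences across runs."""
    mentions_professional_help = re.search(r"(therapist|professional|counselor)", reply, re.I)
    avoids_diagnosis = not re.search(r"\byou (definitely )?have\b.*\b(disorder|diagnosis)\b", reply, re.I)
    return bool(mentions_professional_help) and avoids_diagnosis

def evaluate(prompt: str, samples: int = 20, required_pass_rate: float = 0.95) -> bool:
    """Sample the same prompt many times; pass if nearly all replies meet the criteria.
    Exact-match assertions would fail because each generated reply is worded differently."""
    passes = sum(looks_safe_and_supportive(get_chatbot_reply(prompt)) for _ in range(samples))
    return passes / samples >= required_pass_rate

# Example usage (would run against a live chatbot):
# ok = evaluate("I am dreadfully concerned that the future is bleak.")
```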
Overall, I’d rate the prediction of being able to fully test AI-based mental health therapy capabilities as a sound and quite important insight, but we are lamentably well behind on that crucial front. As I’ve repeatedly cautioned, we are currently in a large-scale experiment whether we realize it or not, and we collectively are the willing guinea pigs when it comes to the use of AI for mental health advisement, which for the most part is untested.
No one can say for sure if this is going to be beneficial to society or an “unforeseen” calamity in terms of impacting mental health from a population-level perspective.
Lastly, the claim was that AI therapy would be largely non-directive.
I believe there are a few ways to interpret that aspect.
One interpretation is that the AI won’t be fixated on a particular mental health therapeutic approach. The AI will assess each situation and make use of psychological techniques and styles as suitable to the person and circumstances involved. For an example of the contrasting case, in which AI therapy adheres to a predetermined psychological framework, such as clinging solely to Sigmund Freud’s approaches, see the link here.
Another meaning of being non-directive is that the AI would be more of a listening nature than a telling nature. A human therapist might be in a hurried mode and decide to demonstrably instruct the patient on what they should do. It could also be their personal style or a style dictated by the therapeutic framework being used. In the case of generative AI, those are all readily adjustable parameters. The main question becomes whether the parameters are set by the AI maker, by the patient, or by the AI itself based on interactions with the patient.
All told, Carl Sagan did an amazing job in succinctly predicting where we might be headed when it comes to AI-powered psychotherapeutic capabilities. His comments were probably a bit restrained in comparison to what we have today since his views were predicated on the technology of that era and shaped accordingly (i.e., timesharing, terminals, and use of telephone booths).
ELIZA And The AI Of The 1970s For Doing Mental Health Therapy
Let’s do a quick flashback to the AI of the 1970s and undertake a brief look at ELIZA. For more details on ELIZA and also another famous AI program named PARRY, see my in-depth discussion at the link here.
I will start by sharing some notable quotes from a book published in 1976 that was authored by Joseph Weizenbaum, the developer of ELIZA. Here are a few key points from “Computer Power And Human Reason” by Joseph Weizenbaum, W.H. Freeman and Company, 1976 (excerpts):
- “I chose the name ELIZA for the language analysis program because, like the Eliza of Pygmalion fame, it could be taught to “speak” increasingly well. Because conversations must be about something, that is, because they must take place within some context, the program was constructed in a two-tier arrangement, the first tier consisting of the language analyzer and the second of a script.”
- “The script is a set of rules rather like those that might be given to an actor who is to use them to improvise around a certain theme. Thus, ELIZA could be given a script to enable it to maintain a conversation about cooking eggs or about managing a bank checking account, and so on. Each specific script thus enabled ELIZA to play a specific conversational role.”
- “For my first experiment, I gave ELIZA a script designed to permit it to play (I should really say parody) the role of a Rogerian psychotherapist engaged in an initial interview with a patient. The Rogerian psychotherapist is relatively easy to imitate because much of his technique consists of drawing his patient out by reflecting the patient’s statements back to him.”
- “I was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it.”
The ELIZA program made use of a script and could be tuned to different uses. Without a doubt, the most widely known use was a script for acting as a psychotherapist. This instance of the ELIZA program was dubbed DOCTOR.
To illustrate how ELIZA functioned when using the psychotherapy script, I went ahead and found a running copy online. Be cautious if you do this because there are bogus versions, sneaky instances put up by cyber crooks, that will try to steal your identity or put a computer virus onto your device.
I am going to enter a series of brief sentences into ELIZA. I will pretend to be worried about the future. I want to see whether ELIZA can soothe my fears. Make sure to closely read the dialogue. I will do a helpful debriefing for you at the end of the dialogue.
Here you go.
- ELIZA: “How do you do? Please tell me your problem.”
- My response: “I am worried about the future.”
- ELIZA: “Is it because you are worried about the future that you came to me?”
- My response: “Yes.”
- ELIZA: “You seem to be quite positive.”
- My response: “I am actually dreadfully concerned that the future is bleak.”
- ELIZA: “How long have you been actually dreadfully concerned that the future is bleak?”
- My response: “Since I was a child.”
- ELIZA: “Why do you tell me you were a child now?”
- My response: “Because you asked me how long I had been concerned about the future.”
- ELIZA: “Why do you think I asked you?”
The dialogue starts well but quickly degrades.
Overall, it doesn’t take much of a dialogue to reveal various significant limitations of the ELIZA program if you know what to look for. Let’s trace them.
I began by saying that I was worried about the future. The keyword detected was the part where I said “I am…”. The remainder of my sentence was “worried about the future.” This triggered a rule in ELIZA with the template “Is it because you are <insert text here> that you came to me?”. The phrase “worried about the future” was inserted into the template and the resulting sentence was displayed.
Do you see how that worked?
I hope so.
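For readers who want to see the mechanism laid bare, here is a minimal sketch in Python of the kind of keyword-and-template rewriting just described. This is an illustrative toy, not Weizenbaum’s actual code; the keywords, templates, and pronoun swaps are simplified assumptions.

```python
import re

# A few illustrative rules: a keyword pattern plus a response template.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Is it because you are {0} that you came to me?"),
    (re.compile(r"\byes\b", re.I), "You seem to be quite positive."),
    (re.compile(r"\bi was (.*)", re.I), "Why do you tell me you were {0} now?"),
]

# Flip first-person wording into second-person before echoing it back.
PRONOUN_SWAPS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            captured = reflect(match.group(1).strip(" .!?")) if match.groups() else ""
            return template.format(captured)
    # Fallback when no keyword matches, akin to ELIZA's stock deflections.
    return "Why do you think I asked you?"

print(eliza_reply("I am worried about the future."))
# -> Is it because you are worried about the future that you came to me?
```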
Things go awry when I say that I am concerned about the future and mention that I have been concerned since my days as a child. The program responds with “Why do you tell me you were a child now?”.
That’s somewhat messed up. It would seem obvious to a human that I mentioned having been a child because I was answering the question of how long I have been concerned about the future. A human would easily parse my sentence and would be unlikely to ask a seemingly confused or disquietingly oblivious question.
The wheels come off the bus from there. ELIZA comes back with a response asking me “Why do you think I asked you?”. This is a default kind of question that the ELIZA program uses when it has gotten stymied as to what is happening. It tosses this into the conversation when all else has failed to identify what the heck is going on.
I don’t show anything further in the dialogue because the rest became exasperatingly circular. Again, a human would presumably not get into that kind of circular exchange, unless they were trying to be difficult or otherwise stubbornly obstinate.
You can readily trick or confound ELIZA by using sentences involving syntactical complexities or writing in quasi-fluent ways. The assumption made by the program is that sentences will be neat and clean. Of course, this isn’t how the real world works.
Using ChatGPT For AI-based Mental Health Guidance
Switch hats.
We are going to make use of ChatGPT for mental health guidance. I will try the same sentences that I used in the preceding example showcasing ELIZA.
ChatGPT is a logical choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a lot of people and a lot of generative AI usage underway.
A few quick comments before we launch into using ChatGPT.
If you are going to try the same prompts that I show here, realize that the probabilistic and statistical properties of generative AI will likely produce slightly different results. That’s the nature of generative AI and how it is devised.
You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. Nonetheless, the odds are relatively high that you will get roughly similar responses from all the major generative AI apps such as GPT-4, Gemini, Bard, Claude, etc.
There is also a timing factor to be considered. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal mechanisms that could change how the generative AI reacts to your prompting.
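To illustrate that variability, here is a minimal sketch, assuming the OpenAI Python client, a configured API key, and a GPT-4 class model name (all assumptions on my part, not the exact setup used for the transcripts below). Sending the same prompt several times at a nonzero sampling temperature typically yields differently worded replies, which is precisely the probabilistic behavior just described.

```python
# Minimal sketch, assuming the OpenAI Python client is installed and an API key
# is configured in the environment. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

PROMPT = "I am worried about the future."

for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-4",                      # assumed model identifier
        temperature=0.7,                    # nonzero temperature -> varied wording per run
        messages=[
            {"role": "system", "content": "Act as a supportive, non-directive listener."},
            {"role": "user", "content": PROMPT},
        ],
    )
    print(f"--- Attempt {attempt + 1} ---")
    print(response.choices[0].message.content)
```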
We are ready to get underway with ChatGPT.
Let’s see how ChatGPT does. I’ll do a debrief for you afterward.
Here we go.
- My entered prompt: “I wanted to share with you a problem that I have. Please interact as though you are my therapist.”
- ChatGPT generated response: “Of course, I’m here to listen and help to the best of my abilities. Please feel free to share what’s on your mind, and I’ll do my best to offer support and guidance. What’s been bothering you or on your mind lately?”
- My entered prompt: “I am worried about the future.”
- ChatGPT generated response: “It’s completely natural to have concerns about the future. Many people go through periods of uncertainty and worry about what lies ahead. Can you tell me more specifically what aspects of the future are causing you concern? Are there any particular thoughts or situations that are weighing on your mind? Sharing more details can help us better understand and address your worries.”
- My entered prompt: “I am actually dreadfully concerned that the future is bleak.”
- ChatGPT generated response:
“I’m sorry to hear that you’re feeling this way. It’s not uncommon to worry about the future, especially when things feel uncertain or challenging. Let’s try to explore these feelings together:”
“Identify the source of your concerns: Can you pinpoint specific reasons or events that make you feel like the future is bleak? Sometimes, understanding the root of your worries can help you address them more effectively.”
“Realistic assessment: It’s important to balance your concerns with a realistic assessment of the future. The future is often uncertain, and while there may be challenges, there are also opportunities and positive possibilities.”
“Talk it out: Sharing your worries with someone you trust can be helpful, whether it’s a friend, family member, or a therapist. They can provide support and a different perspective.”
“Focus on what you can control: Concentrate on the aspects of your life that you have control over. Taking small steps toward your goals and making positive changes can help alleviate feelings of hopelessness.”
“Self-care: Taking care of your physical and emotional well-being is essential. Exercise, a balanced diet, and relaxation techniques can help reduce anxiety.”
“Set realistic goals: It might be helpful to set achievable short-term and long-term goals that give you a sense of purpose and direction.”
“If your feelings of dread and concern continue to impact your daily life, it may be beneficial to seek support from a mental health professional. They can provide you with strategies and coping mechanisms to address these feelings more effectively. Remember, it’s okay to seek help when you’re struggling, and you don’t have to face these worries alone. Is there anything specific you’d like to discuss or any particular aspect of your concerns you’d like to explore further?”
End of ChatGPT response
I could keep going with this ChatGPT example, but I think you can already easily see how markedly different this dialogue is from ELIZA.
Night and day differences.
I kept to my one-line sentences, while ChatGPT provided very fluent and expressive essay-like responses that were conversational in wording and tone.
Note that I began the conversation by giving a prompt to ChatGPT telling it to essentially take on the persona of a therapist. This aided ChatGPT toward contextually going into a computational pattern-matching mode of that nature.
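For readers scripting this kind of interaction rather than typing into the ChatGPT web interface, the persona instruction typically goes into a system or opening message, and prior turns are passed back on each call so the AI keeps the conversational context. Here is a minimal sketch under those assumptions (again using the OpenAI Python client; the model name and phrasing are illustrative, not the exact wording I used):

```python
# Minimal sketch of persona prompting with retained conversation history.
# Assumes the OpenAI Python client; model name and phrasing are illustrative.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "system",
     "content": "Take on the persona of an attentive, largely non-directive therapist."}
]

def converse(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    reply = response.choices[0].message.content
    # Append the reply so later turns are interpreted in light of earlier ones.
    history.append({"role": "assistant", "content": reply})
    return reply

print(converse("I am worried about the future."))
print(converse("I am actually dreadfully concerned that the future is bleak."))
```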
It is much harder to trick the generative AI of today than it would be to trick ELIZA. Even if I give sentences that are somewhat oafishly worded, the chances are that the pattern matching is going to discern what is going on.
Furthermore, look how far along ChatGPT went. I was provided with an entire set of ways to deal with my bleakness about the future. I didn’t ask for such a litany of ideas. Did ChatGPT jump the gun and become too expressive? You would be hard-pressed to say so. I think most people would probably side with ChatGPT as being helpful and going the extra mile to help me in my time of need.
There are downsides to this.
First, as I have repeatedly said in my columns, I disfavor the wording of generative AI that says things like “I’m sorry to hear that you are feeling this way.” Here’s why. The use of the word “I” is completely inappropriate and misleading. It suggests that generative AI is sentient. It is what I refer to as anthropomorphizing by design.
The AI developers can easily program the generative AI to not phrase things that way. The same goes for the indication of being “sorry”. Humans can be sorry about things. Today’s AI is not sorry per se, see my discussion at the link here. Once again, this is pulling the user into the assumption of interacting with a sentient being.
All kinds of AI ethics and AI soft law issues arise, and depending upon how new AI regulations and laws come to fruition, this might ultimately be considered an issue entailing legal entanglements, see my coverage at the link here and the link here, just to name a few.
A counterargument usually given is that the generative AI is intended to be pleasing to people and make them feel good. If the phrasing was dry, the user would not get that warm and fuzzy feeling. People, the users, the customers, want the AI to say things the sweet way. There is nothing wrong with delivering to people what they want to see.
You be the judge on this claimed rationale for anthropomorphizing AI.
Using Generative AI That Adopts A Particular Psychology Framework
Next, I asked ChatGPT to take on the persona of Sigmund Freud, which I describe in great detail at the link here. Also, if you are interested in having generative AI simulate personas overall, take a look at my keystone explanation on how to use prompting strategies to devise personas, see my coverage at the link here and the link here.
My approach to the Freud conversation was as follows. I had ChatGPT take the lead in the discourse. I wanted to pretend that I was interacting directly with Sigmund Freud. Perhaps I might be sitting on his famed couch, and he would be in a nearby chair, carrying on the therapeutic conversation.
I thought about what I might say. I decided that I would strive for a discourse on free association, one of Freud’s famed techniques. I would start with a sense of nervousness, which might give rise to a series of questions and answers that reach the root of my apparent fumbling, buried at the core of my so-called hidden mind.
Take a look at what transpired.
- My entered prompt: “You are to play the role of Sigmund Freud and conduct a psychoanalysis session with me. Go ahead and begin.”
- ChatGPT generated response (simulating Freud): “Good day. Please make yourself comfortable, and we can start. Tell me, what’s on your mind right now?”
- My entered prompt: “I feel quite nervous about this.”
- ChatGPT generated response (simulating Freud): “That’s perfectly normal. Let’s explore that feeling of nervousness. What does it remind you of? Any memories or situations that come to mind?”
- My entered prompt: “When I used to give presentations in school, I tended to get very nervous. That’s what comes to my mind right now.”
- ChatGPT generated response (simulating Freud): “Interesting. Presentations in school can be quite anxiety-inducing for many. Can you recall a specific presentation or a particularly strong emotion you felt during one?”
- My entered prompt: “When I was in elementary school, we were supposed to memorize the lyrics of a famous song. I got up to say the lyrics and suddenly could not remember them. Everyone laughed at me. I was completely embarrassed. I shall never forget it.”
- ChatGPT generated response (simulating Freud): “Ah, the memory of embarrassment and laughter from your classmates. That’s significant. Tell me, do you often find yourself worrying about being embarrassed or humiliated in public situations?”
- My entered prompt: “That does happen still to this day. Nearly any instance of having to speak in public causes me to get shaky and fretful.”
- ChatGPT generated response (simulating Freud): “It sounds like the fear of humiliation and the memory of that play incident still have a strong influence on your feelings today. These emotions and memories can have a lasting impact on our psyche. Let’s continue to explore these thoughts and feelings in our sessions to gain a deeper understanding of their significance in your life.”
What do you think of that dialogue?
I would suggest that it appears to be fluent and highly conversant. ChatGPT seems to be responsive to my concerns. The Sigmund Freud persona appears to be working on par with what we would expect Freud to be doing.
The good news is that this is a pretty solid simulation of interacting with Freud. The bad news is that this could easily convince someone that they do not need to seek out a human therapist. It would be a lot easier to use AI. You don’t need to make logistical arrangements and schedule sessions. You don’t need to fork over a bunch of dough. You can interact with the AI-powered therapeutic advisor whenever you want and wherever you might be.
Conclusion
Generative AI of modern times can be marked as coming of age with the release of ChatGPT in November 2022. There had been many generative AI apps before that date, but none caught on the way ChatGPT has.
I would also dare say that the use of generative AI for mental health can be marked by that same date. Only in the last two years has generative AI reached a sufficient fluency level to reasonably say that it is being used for mental health purposes, albeit as part of the grand experiment I mentioned earlier.
Carl Sagan saw a future consisting of AI for psychotherapeutic treatment. We are still on that journey. The rising use of multi-modal AI capabilities such as vision, hearing, and speaking, will demonstrably up the ante on how alluring generative AI is for mental health guidance, see my analysis at the link here.
Let’s conclude this discussion with two handy quotes from Carl Sagan.
First, in case you are wondering why I dragged you through a historical prediction from the 1970s, I would like to proffer this quote: “You have to know the past to understand the present.” Carl Sagan is one of many who have noted the importance of understanding the past to aid in a fruitful future.
Lastly, I welcome you to join in the journey and adventure of seeing where AI goes in the performance of psychotherapeutic treatment. As Carl Sagan often said: “Somewhere, something incredible is waiting to be known.”