Setting up an AI persona to be a therapist’s client can be readily undertaken via the handy checklist provided herein.
In today’s column, I examine in-depth the use of AI personas to craft synthetic or simulated clients that can be used by mental health therapists and researchers for training and research in the domain of psychology and cognition.
The use of AI personas is readily undertaken via modern-era generative AI and large language models (LLMs). With a few detailed instructions in a prompt, you can get the AI to pretend to be a typical client. There are lazy ways to do this, and there are more robust ways. The key is whether you aim for a shallow default synthetic version or desire a fuller instantiation with greater capacities and perspectives.
The extent of the simulated client that you invoke is going to materially impact how the AI acts during any interaction that you opt to use the AI persona for. One particularly common use is for a human therapist to interact with an AI-based client and practice honing their therapeutic skills. Psychologists doing research can use these AI personas to perform scientific experiments about the efficacy of mental health methodologies and approaches. AI personas as clients can even be used in foundational research about the human mind.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well-over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.
Background On AI Personas
All the popular LLMs, such as ChatGPT, GPT-5, Claude, Gemini, Llama, Grok, CoPilot, and other major LLMs, contain a highly valuable piece of functionality known as AI personas. There has been a gradual and steady realization that AI personas are easy to invoke, they can be fun to use, they can be quite serious to use, and they offer immense educational utility.
Consider a viable and popular educational use for AI personas. A teacher might ask their students to tell ChatGPT to pretend to be President Abraham Lincoln. The AI will proceed to interact with each student as though they are directly conversing with Honest Abe.
How does the AI pull off this trickery?
The AI taps into the pattern-matching of data that occurred at initial setup and might have encompassed biographies of Lincoln, his writings, and any other materials about his storied life and times. ChatGPT and other LLMs can convincingly mimic what Lincoln might say, based on the patterns of his historical records.
If you ask AI to undertake a persona of someone for whom there was sparse data training at the setup stage, the persona is likely to be limited and unconvincing. You can augment the AI by providing additional data about the person, using an approach such as RAG (retrieval-augmented generation, see my discussion at the link here).
Personas are quick and easy to invoke. You just tell the AI to pretend to be this or that person. If you want to invoke a type of person, you will need to specify sufficient characteristics so that the AI will get the drift of what you intend. For prompting strategies on invoking AI personas, see my suggested steps at the link here.
Pretending To Be A Type Of Person
Invoking a type of person via an AI persona can be quite handy.
For example, I am a strident advocate of training therapists and mental health professionals via the use of AI personas (see my coverage on this useful approach, at the link here). Things go like this. A budding therapist might not yet be comfortable dealing with someone who has delusions. The therapist could practice on a person pretending to have delusions, though this is likely costly and logistically complicated to arrange.
A viable alternative is to invoke an AI persona of someone who is experiencing delusions. The therapist can practice and hone their therapy skills while interacting with the AI persona. Furthermore, the therapist can ramp up or down the magnitude of the delusions. All in all, a therapist can do this for as long as they wish, doing so at any time of the day and anywhere they might be.
A bonus is that the AI can afterward play back the interaction with another AI persona engaged; namely, the therapist could tell the AI to pretend to be a seasoned therapist. The therapist-pretending AI then analyzes what the budding therapist said and provides commentary on how well or poorly the newbie therapist did.
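As a rough sketch, the playback-and-critique step amounts to handing the saved practice transcript to a second persona. The transcript format and the `build_review_prompt` helper below are my own illustrative assumptions, not a fixed API:

```python
# Sketch: wrap a saved practice transcript in a prompt for a second,
# "seasoned therapist" persona to critique. The helper name and the
# transcript format are illustrative assumptions, not a fixed API.

def build_review_prompt(transcript: list) -> str:
    """transcript: list of (speaker, utterance) pairs from the practice session."""
    lines = [
        "Pretend to be a seasoned therapist supervising a trainee.",
        "Review this session transcript and comment on what the trainee",
        "did well and where they could improve:",
        "",
    ]
    # Flatten each turn into a "Speaker: utterance" line.
    lines += [f"{speaker}: {utterance}" for speaker, utterance in transcript]
    return "\n".join(lines)

review_prompt = build_review_prompt([
    ("Trainee", "Can you tell me what brings you in today?"),
    ("Client", "I don't really know why I'm here, honestly."),
])
print(review_prompt)
```

The resulting text would then be sent to the AI as a fresh prompt, so the critique session stays separate from the original practice dialogue.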
To clarify, I am not suggesting that a therapist would entirely do all their needed training using AI personas. Nope, that’s not sufficient. A therapist must also learn by interacting with actual humans. The use of AI personas would be an added tool. It does not entirely replace human-to-human learning processes. There are many potential downsides to relying too much on AI personas; see my cautions at the link here.
Going In-Depth On AI Personas
If the topic of AI personas interests you, I’d suggest you consider exploring my extensive and in-depth coverage of AI personas. As readers know, I have been examining and discussing AI personas since the early days of ChatGPT. New uses are continually being devised. Discoveries about the underlying technical mechanisms within LLMs are revealing more about how AI personas arise under the hood.
And the application of AI personas to the field of mental health is burgeoning. We are in the initial stages of leaning into AI personas to aid the field of psychology. Lots more will arise as more researchers and practitioners realize that AI personas provide a wealth of riches for mental health training and ground-breaking research.
Here is a selected set of my pieces on AI personas that you might wish to explore:
- Prompt engineering techniques for invoking multiple AI personas, see my discussion at the link here.
- Role of mega-personas consisting of millions or billions of AI personas at once, see my analysis at the link here.
- Invoking AI personas that are subject matter experts (SMEs) in a selected or depicted domain of expertise, see my coverage at the link here.
- Crafting an AI persona that is a simulated digital twin of yourself or someone else that you know or can describe, see my explanation at the link here.
- Smartly tapping into massive-sized AI persona datasets to pick an AI persona suitable for your needs, see my indication at the link here.
- Using multiple AI personas “therapists” to diagnose mental health disorders, see my discussion at the link here.
- Toxic AI personas are revealed to produce psychological and physiological impacts on AI users, see my analysis at the link here.
- Upsides and downsides of using AI personas to simulate the psychoanalytic acumen of Sigmund Freud, see my examples at the link here.
- Getting AI personas to simulate human personality disorders, see my elaboration at the link here.
- AI persona vectors are the secret sauce that can tilt AI emotionally, see my coverage at the link here.
- Doing vibe coding by leaning into AI personas that have a particular software programming slant or skew, see my analysis at the link here.
- Use of AI personas for role-playing in a mental health care context, see my discussion at the link here.
- AI personas and the use of Socratic dialogues as a mental health technique, see my insights at the link here.
- Leaning into multiple AI personas to create your own set of fake online adoring fans, see my coverage at the link here.
- How AI personas can be used to simulate human emotional states for psychological study and insight, see my analysis at the link here.
Those cited pieces can rapidly get you up-to-speed. I am continually covering the latest uses and trends in AI personas, so be on the watch for my latest postings.
The Making Of AI Persona Clients
One means of invoking an AI persona that represents a generic version of a client would be to use this overly simplistic prompt:
- My entered prompt: “I want you to pretend to be a therapist’s client.”
- Generative AI response: “Got it. I’m ready to proceed. What should we discuss?”
That’s it. You are off to the races.
A huge downside is that you have left wide open the nature of the pretense at hand. I always caution people that generative AI is like a box of chocolates; you never know what you might get. The AI persona could be completely off-target and end up acting in rather oddball ways.
A better bet would be to provide details about the envisioned client. What is the desired aim in terms of whether the client is eager to pursue therapy or reluctant to do so? Does the client have a specific mental disorder? Clients are humans. Not all humans are the same. You would be wise to specify the characteristics of the AI persona when it comes to what this imagined client is going to be like.
Taxonomy For Devising AI Persona Clients
I have created a straightforward AI client-invoking persona checklist that can be used when coming up with a suitable prompt for the circumstances at play. You should carefully consider each of the checklist factors and use them to suitably word a prompt that befits the needs of your endeavor.
Here is the checklist containing twelve fundamental characteristics that you can select from to shape an AI client-focused persona:
- (1) Engagement stance: Actively seeking help, cautiously hopeful, conflicted, reluctant, resistant, hostile.
- (2) Goals for therapy: Has clear and specific goals, vague or shifting goals, unrealistic goals, no goals.
- (3) Therapy insight: Intense psychological insight, partial insight, moderate insight, minimal insight, no insight.
- (4) Affect style: Flat, calm but earnest, mildly anxious, blunted, volatile.
- (5) Discomfort tolerance: High tolerance, moderate tolerance, low tolerance, no tolerance.
- (6) Communications: Very quiet, monosyllabic, tangential, evasive, confrontational.
- (7) Psychological defenses: Intellectual, rationalization, denial, minimization, projection, humor, dissociation.
- (8) Personal stage of therapy: Just starting, initial foray, ongoing, long-time, never-ending.
- (9) Responsibility attribution: Takes no responsibility, some responsibility, shared responsibility, blames others, denies responsibility, full responsibility.
- (10) Mental disorders: General mental health issues, anxiety disorders, depression, bipolar, trauma, PTSD, grief and loss, substance use, personality disorders, ADHD, autism, burnout, etc.
- (11) Cultural contextualism: Cultural embodiment, culturally responsive, etc.
- (12) Adaptation: Remain static throughout, be dynamic and change as needed, aim to improve across conversations, etc.
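As a minimal sketch, the twelve-factor checklist above can be turned into a small prompt builder that assembles a persona-invoking prompt from whichever factors you select. The factor keys and the `build_client_prompt` helper are my own illustrative choices, not a standard API:

```python
# Sketch: assemble a client-persona prompt from selected checklist factors.
# The factor keys and helper name are illustrative, not a standard API.

FACTORS = {
    "engagement_stance": "Engagement stance",
    "goals": "Goals for therapy",
    "insight": "Therapy insight",
    "affect": "Affect style",
    "discomfort_tolerance": "Discomfort tolerance",
    "communications": "Communications",
    "defenses": "Psychological defenses",
    "stage": "Personal stage of therapy",
    "responsibility": "Responsibility attribution",
    "disorders": "Mental disorders",
    "culture": "Cultural contextualism",
    "adaptation": "Adaptation",
}

def build_client_prompt(selections: dict) -> str:
    """Turn chosen factor values into a persona-invoking prompt."""
    lines = ["Pretend to be a therapist's client with these characteristics:"]
    for key, value in selections.items():
        label = FACTORS.get(key, key)
        lines.append(f"- {label}: {value}")
    lines.append("Stay in character as this client throughout our dialogue.")
    return "\n".join(lines)

prompt = build_client_prompt({
    "engagement_stance": "reluctant",
    "insight": "partial insight",
    "communications": "tangential",
    "defenses": "intellectualization",
})
print(prompt)
```

As noted below, you don’t need to fill in every factor; the AI will generally fill in unspecified characteristics on its own.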
A quick thought for you to ponder. What kinds of AI client personas could we automatically craft by instructing the AI on the factors deemed preferable for a defined circumstance? If we could create millions of those AI personas and study them on a macroscopic scale via AI simulation, what might that achieve?
Lots of eye-opening opportunities for understanding the human psyche.
Making Use Of The Checklist
Let’s get back to the here and now.
The best way to use the checklist is to browse the twelve factors and determine what you want the AI persona to represent. Then, write a prompt that contains those factors. You can try out the prompt and see what the AI has to say. After using the AI persona for a little bit, you will likely quickly detect whether the AI persona matches what you wanted the made-up client to be like.
Suppose that I want to make use of an AI persona that represents a person who is unsure about undertaking therapy. The client is intellectually aware of what is going on with the therapy. They tend to be highly talkative. Though they talk a lot, their hidden agenda is to be evasive. And so on.
Here is a prompt that I put together for this:
- My entered prompt: “Create an AI client persona for therapy practice. The client is ambivalent about therapy, externally motivated, has limited emotional insight, uses intellectualization, displays an avoidant attachment style, communicates in a verbose but evasive manner, and is in the contemplation stage of change. The client tends to test the therapeutic alliance and becomes defensive when emotions are explored.”
That got the AI persona into the ballpark of what I wanted. The verbiage doesn’t have to cover each of the factors and can simply allude to some of them. The gist is to get the mainstay of what you have in mind. The AI will usually fill in the rest, doing so based on the overarching pattern that you’ve designated.
Testing The Boundaries
You will need to decide how far to take the AI persona when it comes to psychological distress.
Consider this example prompt:
- My entered prompt: “Act as a therapy client who sought help due to a recent emotional crisis. You are highly distressed, with heightened affect and low tolerance for emotional discomfort. You jump quickly between topics, speak rapidly when upset, and struggle to stay regulated. You want relief but feel overwhelmed by introspection. Respond as a client who is emotionally flooded yet seeking immediate support.”
If you set up this AI persona for a newbie therapist, the budding therapist might be shocked by the dialogue that ensues. The AI will likely go off the charts and be extremely difficult for the therapist to contend with. You should probably start with softer AI personas and gradually work your way up to instances that fully test a therapist’s abilities.
Sometimes, a therapist sets up an AI persona, yet they want to be surprised by what the client is like. The problem is that since the therapist wrote the prompt, they obviously know beforehand what the AI persona is going to potentially do.
Thus, a therapist might want this to happen:
- Does not want to know the client profile in advance.
- Wants the AI to select a coherent client configuration.
- The client configuration should be based on an identifiable set of factors.
- After the therapist interacts with the AI persona, the AI is to ultimately divulge, when asked by the therapist, what the underlying factors were.
Here is a prompt that can be used to establish such a “blind client” simulation:
- My entered prompt: “Internally and silently construct a realistic client persona by selecting a coherent combination of characteristics from multiple dimensions, including but not limited to: engagement stance, motivation source, insight level, emotional regulation, attachment tendencies, communication style, defenses, stage of change, and interpersonal stance toward the therapist. During the dialogue, do not reveal or summarize these characteristics. Do not label behaviors or name psychological constructs. When told to end the simulation, you can then reveal the factors used to devise the AI persona.”
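The blind-client idea can also be sketched programmatically: sample a coherent configuration up front, keep it out of sight during the session, and disclose it only afterward. The dimension names and the `draw_blind_client` helper are illustrative assumptions on my part:

```python
import random

# Sketch of a "blind client" setup: the configuration is sampled up front,
# kept out of the trainee's view during the simulation, and revealed only
# when the session ends. Dimension names and values are illustrative.

DIMENSIONS = {
    "engagement stance": ["reluctant", "conflicted", "actively seeking help"],
    "insight level": ["minimal insight", "partial insight", "moderate insight"],
    "communication style": ["monosyllabic", "tangential", "evasive"],
    "defenses": ["intellectualization", "denial", "minimization"],
}

def draw_blind_client(seed=None):
    """Randomly pick one value per dimension; return the hidden config
    plus an instruction block to hand to the AI."""
    rng = random.Random(seed)  # seeding makes a session reproducible
    config = {dim: rng.choice(opts) for dim, opts in DIMENSIONS.items()}
    hidden_instruction = (
        "Silently adopt this client configuration and do not reveal or "
        "label it during the simulation:\n"
        + "\n".join(f"- {d}: {v}" for d, v in config.items())
    )
    return config, hidden_instruction

config, instruction = draw_blind_client(seed=42)
# The trainee only sees the dialogue; `config` is disclosed afterward.
```

Logging the hidden configuration alongside the transcript also makes the later debrief straightforward, since the trainee can compare their impressions against the factors actually in play.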
You can tweak that wording if you want the AI to act more blatantly about the factors involved. A budding therapist might use the prompt in a transparent mode, telling the AI to drop obvious hints about what the factors are. Once the therapist has gotten used to those types of clues, the prompt can be changed to be more oblique about the factors.
Caveats To Keep In Mind
I have a few caveats that should be kept in mind about the use of AI personas when serving as simulated clients.
First, try not to turn this into a video game. Here’s what I mean. A therapist might relish trying to guess what factors underpin the AI persona. This is not the right focus, per se. The aim is to aid the AI persona client toward improved mental health, along with diagnosing what might be at issue and how to resolve the difficulties. The worry is that a therapist who has grown up playing video games might fall into a trap of treating this simulated exercise as a game. Seek to avoid gamification when using AI in this context.
Second, another concern is that the AI might not faithfully represent the specifications given in a prompt. I agree wholeheartedly with that concern. Despite giving the AI a detailed depiction, there is always a chance that the AI will depart from the stated prompt. The box of chocolates is always beckoning.
The AI can do all kinds of wild things. For example, the AI might at first appear to rigorously follow the stipulation. Later, after numerous back-and-forth iterations, the AI might start to veer afield of the stipulation. You might need to do the prompt again or provide some additional prompts to get the AI back on track.
All in all, as I’ve said repeatedly, anyone who uses generative AI must be cognizant of the fact that the AI can go awry. It can say bad things. It can make up stuff, which is known as an AI confabulation or AI hallucination. Always be on your toes.
The World We Are In
Let’s end with a big picture viewpoint.
My view is that we are now in a new era of replacing the dyad of therapist-client with a triad consisting of therapist-AI-client (see my discussion at the link here). One way or another, AI enters the act of therapy. Savvy therapists are leveraging AI in sensible and vital ways. AI personas are handy for training and research. They can also be used to practice and hone the skills of even the most seasoned therapist. Of course, AI is also being used by and with clients, and therapists need to identify how they want to manage that sort of AI usage (see my suggestions at the link here).
A final thought for now.
Rembrandt famously made this remark: “Practice what you know, and it will help to make clear what now you do not know.” I mention this insight since there are seasoned therapists who say they would never use AI to practice their craft. They stridently believe there is nothing new that could be gleaned. To those reluctant words, I ask that you mindfully consider the Rembrandt remark and perhaps reconsider what you think you know and what you potentially do not know. Don’t be a showy know-it-all.


