Office Hours: What Therapists Need to Know About AI with Ted Faneuff
Ted Faneuff, a practicing therapist and Head of Clinical Operations at Upheal, joins Michael Fulwiler to explore how AI is reshaping mental health care and helping therapists reclaim time and reduce burnout.
Ted shares his personal journey, including his late ADHD diagnosis, and explains how AI-powered tools can support therapists in staying fully present with clients while easing documentation burdens.
Listen to this conversation to understand what therapists need to know about AI, data privacy, and ethical adoption (plus why therapist involvement is critical in shaping the future of mental health technology).
In the conversation, they discuss:
- How AI assistants can transform therapy workflows and save hours each week
- The importance of clear consent and privacy standards for AI use in therapy
- Addressing fears about AI replacing therapists and the role of human validation
Connect with the guest:
- Ted on LinkedIn: https://www.linkedin.com/in/ted-faneuff-lisw-s-lmsw-lcsw-mba-b3030350/
- Visit the Upheal website: https://www.upheal.io/
Connect with Michael and Heard:
- Michael’s LinkedIn: https://www.linkedin.com/in/michaelfulwiler/
- Newsletter: https://www.joinheard.com/newsletter
- Book a free consult: joinheard.com/consult
Jump into the conversation:
(00:00) Welcome to Heard Business School
(00:34) Meet Ted Faneuff
(01:20) Exploring How AI Can Support Therapy and Mental Health Care
(02:19) Breaking Down What Artificial Intelligence Really Means
(04:15) A Simple Explanation of Generative AI
(06:58) How Large Language Models Power Today’s AI Tools
(10:07) Addressing the Privacy Concerns That Come with AI
(11:02) What Therapists Need to Know About HIPAA and SOC 2 Security
(13:06) Why Getting Clear Consent Matters When Using AI
(14:45) The Debate on Whether AI Could Ever Replace Therapists
(17:21) Looking at AI’s Ability to Show Empathy and Build Trust
(19:38) How AI Can Lend a Hand and Ease Therapist Workloads
(27:47) Why It’s Important for Therapists to Help Guide AI Development
(35:54) What to Look for When Choosing AI Tools for Your Practice
(44:23) How AI Might Expand Access and Help Therapists Stay in the Field
This episode is to be used for informational purposes only and does not constitute legal, business, or tax advice. Each person should consult their own attorney, business advisor, or tax advisor with respect to matters referenced in this episode.
Guest Bio
Ted Faneuff is a licensed therapist and the Head of Clinical Operations at Upheal, an AI-assisted platform designed to support mental health professionals by reducing administrative burdens and improving clinical documentation. With a deep understanding of therapy practice and a personal journey that includes a late ADHD diagnosis, Ted brings a unique perspective on how technology can help therapists stay present with clients while managing growing demands.
Passionate about the ethical use of AI in mental health, Ted advocates for therapist involvement in shaping technology to ensure it supports clinical care without replacing the human connection. He combines clinical expertise with hands-on experience in health tech to guide therapists in navigating data privacy, AI adoption, and the future of therapy practice in an evolving digital landscape.
Episode Transcript
Ted Faneuff (00:00):
And basically in healthcare and especially in mental health, really the standards should be explicit, opt-in, right? Making sure that clients and therapists who are using the tools need to actively agree with full understanding of what they're agreeing to, what they're getting into and what the data is used for. So really making sure that people are being empowered and being able to have their autonomy respected from a client perspective, from a clinician perspective, and not making them hunt for a way in the process to withdraw from something that they really never clearly chose to begin with.
Michael Fulwiler (00:34):
This is Heard Business School where we sit down with private practice owners and industry experts to learn about the business of therapy together. I'm your host, Michael Fulwiler. For Office Hours this week, I'm joined by Ted Faneuff, Head of Clinical Operations at Upheal, a company developing AI-powered support tools for therapists. There's a lot of discourse and debate online right now about AI and mental health care. So I wanted to bring in someone close to the topic, and Ted was at the top of my list. As a neurodivergent clinician who understands the cognitive demands of client care and documentation, he brings a clinical lens to tech, ensuring therapists' voices shape the tools they use. In our conversation, Ted explains the practical use cases for AI, like note writing and treatment planning, and reflects on bigger ethical questions around client privacy, consent, environmental impact, and what it means to keep therapy human. This is a fun one, so let's get into it. Ted Faneuff, welcome to Office Hours. Thank you, Michael, happy to be here. Very excited for this conversation. I have a lot of questions about AI. I know therapists have a lot of questions, and you're in a very unique position. You are a therapist, you're also head of clinical operations for an AI company, which is Upheal, which we'll talk about. Before we get into all of that, I'd love to start with just some basic definitions. How does that sound?
Ted Faneuff (02:06):
Yeah,
Michael Fulwiler (02:06):
Sounds great. Awesome. So for therapists who are new to AI in this space, how would you explain what AI is?
Ted Faneuff (02:19):
Yeah, I think that's a really important question foundationally, because I think there's a lot of what people think AI is, but if we really unpacked AI for what it is, the core idea of it, especially for today, is that it's technology that can perform tasks that typically require human intelligence. It's not really about sentient robots from the sci-fi sort of definition like Terminator or WALL-E if you're a Pixar fan, but it's more like incredibly sophisticated software that excels at recognizing patterns, learning from vast amounts of data, and then making decisions or predictions based on the data set that it's got. And for today's use or what we're talking about, I think of it as a really highly skilled assistant. It can be trained for specific jobs. That's sort of evident in the way that we see AI being used, even in ways that you don't really believe it's being used today. Netflix is a good example. If you are a Netflix subscriber, it sort of uses an algorithm to understand the sort of material that you watch and then uses AI to sort of predict what it thinks you would like. And even Google Maps is another example too. So it's like a highly skilled assistant. It's trained in a very specific way, and in the context of mental health, which we'll talk about later, it can be used for some pretty interesting use cases.
Michael Fulwiler (03:35):
That makes sense. My understanding is AI isn't necessarily new, but it's just progressed a lot in the last few years. And we're recording this in May. It'll come out sometime in August, September, later this summer or fall. I imagine AI will also be more advanced by then. And so it's been very interesting to watch, and just the pace of it has been, I think, exciting but also can be a little bit alarming as well. And so I want to talk about why and some of the concerns. I'm curious, I've also heard the term generative AI. What is generative AI?
Ted Faneuff (04:15):
So to build on the concept of AI being something that uses data sets to make predictions, generative AI is a specific type of AI that's making a huge wave right now. It's sort of a buzz term, but really the name gives away what it is, which is that it generates new content. So unlike AI that might classify data sets or predict patterns or things like that, generative AI creates original text or images or even code at this point. And it's funny that you mentioned this coming out in August, who knows where generative AI abilities will be at that point, but it's like teaching the AI rules and patterns of, say, language or art so well that it can produce its own sort of novel examples of what it thinks that you want. It's not just regurgitating, it's synthesizing and creating based on its training. And you can see that in some of the use cases today. We will talk about ChatGPT probably in a little bit, but you can create images, you can create customized images based on what you're telling the model you would like to see, or you even just give it some examples. And so generative AI takes that sort of predictive data set idea and goes a step further and creates new content.
Michael Fulwiler (05:26):
I read an article recently by the CEO of Anthropic, they're the ones who started, I think it's Claude, and he was actually warning, I don't know about warning, but he was talking about generative AI specifically. And what really stood out to me from that article was basically they're admitting that they don't really understand how generative AI works, because with typical software there are rules: input this, it spits out that. But with generative AI, my understanding, at least from the article that I read, is that the AI sometimes makes decisions and we don't fully understand how that works yet.
Ted Faneuff (06:06):
I think, at least at this point in my understanding, there's comfort in knowing that even though it makes decisions, it's making those decisions based off of statistical probability, or at least a set of guidelines somewhere. I think that the reason why people get scared about it, and we'll probably talk a little bit about this later, is just that moniker of intelligence being put on it sort of makes people nervous, because it implies that it's got its own sort of consciousness and can fully make decisions on its own. And I still think that there are a lot of smart people, probably way above my pay grade, who understand some of the safety implications of building these models. And there's a lot of safety consideration that's put into this, and into how to build these ethically and safely.
Michael Fulwiler (06:49):
What about LLMs? That's a term that you see thrown out a lot. Is ChatGPT an LLM? And if so, what is an LLM?
Ted Faneuff (06:58):
An LLM is sort of the engine that drives a lot of the text-based generative AI and things of that nature. The "large" in large language model is sort of the key there. These are models that are trained on an absolutely immense volume of text data, books, articles, websites, conversations, just billions of words. And that massive exposure allows them to learn intricate patterns of language, down to grammar, context, nuance, style, even some small reasoning capabilities. They still can't count, which is ironic to me, but
(07:34):
They're becoming incredibly adept at predicting what word should come next in a sentence, and that's the fundamental basis of how they generate content, or really relevant text and images. And ChatGPT is really the superstar example that brought LLMs into everyone's home and onto their phones, really. And really what that is is sort of a user-friendly interface that allows a lay person like you or I to interact with that really powerful LLM machine underneath it. So you type in a question or a prompt and it generates a human-like text response, and it's pretty versatile. Obviously, what we know of it is maybe simplistic, but maybe not for some people, and it can go from simple things like drafting emails to complex things like writing poetry, or a use case that a lot of people use it for is to explain complex topics, things like that. So ChatGPT really showcases the power of LLMs, it's just one of many, and its capability really is to understand and generate language in that way. It's really being used to streamline a lot of clinical documentation, and we'll talk about that in a little bit. It's fascinating what LLMs like ChatGPT or models like that can do.
Michael Fulwiler (08:44):
Yeah, and I mentioned that the speed at which it's improving and progressing has been pretty remarkable. I remember, it feels like last year, maybe it was even two years ago, if you searched on ChatGPT, the responses, the information it was pulling from in order to produce a response only went up to a certain date.
Ted Faneuff (09:07):
It was '22 or
Michael Fulwiler (09:08):
2022. So anything that had come out since then, it wasn't trained on, but now it's much more up to date. It can produce images. And so I think just the speed at which it's improving is pretty wild, and I'm assuming it'll continue to progress at that rate. So definitely something to keep an eye on. So we have that kind of foundational understanding. I want to get into the questions that therapists have about AI, which is a very hot topic right now, I know, on my own social media, in therapist subreddits and therapist Facebook groups, because a lot of companies are coming out with AI solutions. And so therapists have a lot of questions. And the big one that I see is around data privacy. So for therapists who are concerned about how AI is using their data, what do therapists need to know about data privacy when it comes to using AI in their practice?
Ted Faneuff (10:07):
Naturally with any technology that's this powerful, especially when it intersects with something as personal as mental health and sensitive data, a lot of important questions and very valid concerns come up to the surface. And from my perspective, I usually say that talking about, and kind of engaging in, that discourse around the objections to AI is more important than selling the AI solution or talking about the benefits, in some ways, because people generally are concerned, and we do need to continue to hear the objections to this technology to continue to cultivate it in a way that's safe and ethical. As it relates to data privacy, obviously that's top of the list. Like you said, anyone in mental health right now is concerned about data privacy and confidentiality. And I think that that's largely because there are so many high profile breaches that come out of any health company, not just mental health, but anytime that sort of thing comes into the picture, people think more critically about it.
(11:02):
But especially if AI is involved when it's processing session content to help with notes and things like that, things that Upheal is designed to do, that security really has to be an ironclad concept built in. It has to actually be the foundation of the product more than anything. And when you're thinking about security, what you're talking about is basically end-to-end encryption, robust data storage protocols that meet or exceed standards like HIPAA here in the US, and really absolute clarity on who has access to what and for how long. And if a session recording is involved for transcription purposes, having that explicit informed consent, it's not just a nice-to-have, it's sort of an ethical and legal imperative. And really, therapists need to be able to understand and confidently explain those safeguards to their clients if they're thinking about using AI. And so just to the data objections, some things for therapists to consider: with Upheal, and really with a lot of these companies, the audio recording only exists for as long as it takes to create the transcript, and that recording is kept in secure data storage until the transcript and note are generated.
(12:09):
And for other aspects of data and privacy, a lot of companies will say that they're HIPAA compliant, and you can really say you're HIPAA compliant by downloading a risk assessment off of the HHS website, doing it for yourself, and saying, oh yeah, I'm HIPAA compliant. But for therapists who are considering technology like this, you need to take it one step further and you need to look at companies who are using third-party platforms to really validate that. And something you might consider is something like SOC 2 Type 2 compliance. That's really important. And what that means really is that, like I said, there's not really any regulating body in the US to say that somebody is HIPAA compliant. But what SOC 2 means is that an organization has had its security, data availability, processing integrity, confidentiality, and privacy controls not just designed properly, which is more of the SOC 2 Type 1 check, but operating effectively over a period of time within those frameworks.
(13:06):
So using third-party sort of platforms or getting that certification, SOC 2 type certification. Other things that people might be concerned about or think about in relation to data and privacy: really looking for companies that make it explicit that it's an opt-in versus opt-out of data sharing. And basically in healthcare, and especially in mental health, really the standards should be explicit, right? Making sure that clients and therapists who are using the tools need to actively agree with full understanding of what they're agreeing to, what they're getting into and what the data is used for. So really making sure that people are being empowered and being able to have their autonomy respected from a client perspective, from a clinician perspective, and not making them hunt for a way in the process to withdraw from something that they really never clearly chose to begin with.
Michael Fulwiler (14:01):
So it sounds like there's two layers here. One is the therapist opting in for the recording or their data to be stored or not. And then there's also opting in from the client for an AI tool like Upheal to be used during a session. And so it's important to have both of those things, so that we're practicing ethically, we're not recording a session and using AI without our client's consent.
Ted Faneuff (14:28):
And involved in both of these layers really is the opt-in aspect of what the data is being used for. So there's the opt-in of, yes, I'm okay with my therapist doing this, but then what is that data used for? The opt-in really is about the use of the data. And so I think a concern, kind of going back to some of the objections and the things that therapists are concerned about, obviously is how is that data being used, to train maybe AI therapists? And I think that that's sort of a really pivotal point and something that's evolving almost daily, that idea of training AI therapists. For tools like Upheal specifically, the focus is on clinical documentation and progress notes, and that concern is that the data is being used to kind of create the therapist replacement. But for companies like all these transcribing companies, the AI is really being trained primarily to understand and process the language that's specific to administrative and support tasks.
(15:25):
So going back to what I said earlier, I like to think of it like a really advanced assistant versus actually replacing the clinical role. But we need to point out that ambition in other parts of this industry, in some parts of the AI world, does go a little bit further. And there are companies and researchers that are developing these agentic sort of conversational agents, these sophisticated chatbots that are being trained with the goal of providing direct mental health support, like psychoeducation, or even guiding users through some structured therapeutic exercises like in CBT. So I do think that that's taking it a step further from the data and privacy and opting in. This is why therapists are concerned that the data that they're giving is going to be used to make the replacement, basically.
Michael Fulwiler (16:15):
What would you say to that, to a therapist who feels like, well, I don't want to help to train this AI that's going to replace me and take my job. Why would I do that?
Ted Faneuff (16:26):
Well, yeah, I mean, I'm a therapist too. I get that objection wholeheartedly. I approach this sort of response carefully because, like we said earlier, the speed at which this is evolving and being developed is blindingly fast. We don't have a crystal ball to understand where the future goes, and a lot of people are concerned even at the level of insurance companies, and what are they going to regulate or demand in their own processes about how AI is used in the process of delivering care. But I think this is where the conversation gets truly complex and raises some significant ethical questions and practical concerns. And really, I think we have to take a step back and talk about what does therapy mean in the context of AI. Can an algorithm really truly replicate the nuanced understanding and empathy and the alliance that are foundational to human therapy?
(17:21):
And really, if we go back to the basic understanding, like we talked about earlier, of what AI is and what LLMs are currently, Michael, AI has the capability of sophisticated pattern matching and generating human-like text, but it doesn't possess that true genuine consciousness or self-awareness or the lived experience that really informs human therapists. So that's something that's way, way far off from truly being realized. In practice, or in practicality, I think that's what I would say: even though AI has a lot of potential, there are a lot of studies out there showing that clients who might be engaging in some of these studies, where they don't know that they're kind of talking to an agentic agent, they still know, they still kind of have this idea that something's missing there. And there's this concept, I'm trying to remember, the uncanny valley, that's what it is. The term is the uncanny valley. And really what that implies is that there's still something really kind of uncanny about the delivery of AI that is understood consciously by a human, that they're not talking to another human. AI just hasn't gotten that sophisticated yet.
Michael Fulwiler (18:35):
You can see it on social media. It's very clear to me, at least on LinkedIn, when someone is using AI to respond to comments or when they're using AI to post, and the copy just sounds robotic. I think as a marketer, I am somewhat resistant, I would say AI-skeptical, maybe more of a traditionalist, but I'm even coming around to the idea that AI is a tool. It's great for research, for marketing, for example. I can give it information and it can quickly put it into a table for me, things like that. And so I love this reframe of AI as an assistant. It makes me think about when I've been to the doctor, even when I've been to the vet with my dog, sometimes there's a scribe that's in the room or there's a med student and they're taking notes. And so this idea of this third-party assistant that's in the room is not necessarily new, it's just using AI instead of a person.
Ted Faneuff (19:38):
I like that framing because I think, what does AI actually allow us to do? I've spent a lot of time thinking about this. My story is, I didn't find out that I was neurodivergent, ADHD, until much later in my life, even though I kind of felt it all my life. I just never got diagnosed until recently. But where it showed up most for me was in that process of, okay, I'm a therapist, I need to be a hundred percent in tune with what's going on in front of me with my client, and I also need to make sure that I remember what insurance company they have and what I need to make sure is in the documentation. And for me, that created such a disconnect in the therapeutic process and presence. And so for me, that level of using it as an assistant or a scribe, it takes it a little bit further because I know that I don't have to remember every single key detail. I would write copious amounts of notes in therapy sessions before I used a tool like Upheal, because I would be afraid of missing key things. And so now I know that I have the backup to go back to. And that's really what I think of when I think of a big benefit of having something like AI: having that ability to take the administrative, or really the mental burden, the cognitive load, off of me to remember everything, and be fully present.
Michael Fulwiler (21:02):
Being a therapist is about helping people, not crunching numbers. But when you're running your own practice, managing finances can feel like a full-time job, one you never trained for. That's where Heard comes in. Heard is the financial management platform built just for therapists. No more cobbling together spreadsheets, DIY software, or expensive accountants. With Heard, you get bookkeeping, tax support, and financial insights all in one easy-to-use platform. Heard was started by an accountant and a software engineer who understand the challenges you face as a business owner. Our mission is to make it incredibly easy for therapists to manage their practice as a business, build wealth, and stay focused on what matters most: their clients. Join thousands of therapists who trust Heard with their finances. Schedule a free consultation today at joinheard.com/consult. That makes sense. Yeah, it's a tool that helps with accessibility. I mean, we spent our time already talking about all the concerns, but what are some of those other benefits? Sounds like for someone who is neurodivergent, being able to use a tool like AI so they don't have to try to remember everything, but what are some of the other benefits that you see for therapists?
Ted Faneuff (22:21):
Yeah, I mean obviously time, it saves an incredible amount of time. I think for administrative burdens like progress notes, for therapists, it's not just about saving time in an abstract sense. It really is about reclaiming several hours a week. At Upheal, and I know this is sort of common across the industry, our average time savings is approximately six to 10 hours a week, depending on your caseload size. And so if you can imagine what that translates into, that's what we were talking about just a second ago. It means being more present with clients because you're less stressed about the notes piling up. It's really about having more capacity, if you want to see more clients who are waiting for care, or investing in further training. Ten hours is a third of most people's CE requirement to maintain their education in the industry. Or really, simply achieving a better work-life balance to prevent burnout, which is obviously very prevalent in our profession.
(23:14):
And burnout obviously directly impacts quality and sustainability of care. So I think efficiency and time savings is one thing. And I do think, in general, accuracy is often on the risk side, but it's also on the reward side too, a little bit. But I think that the accuracy, and at least more so the consistency, of the notes when AI is properly overseen, and that does not mean putting it into ChatGPT by the way, when it's properly overseen and handled, it really can contribute to note quality and consistency. For example, it can be really good at transcribing faithfully or prompting the inclusion of standard elements that are required in a note. It can help ensure that the documentation that you're doing is consistently capturing essential information per your agency's requirements, your own requirements, or from an insurance perspective. So obviously all that comes into play when you have a platform that actually is working in partnership with the therapist's process. So the AI flags or drafts things or kind of points things out, but the therapist is always going to have to validate and ensure that that clinical nuance piece is captured.
Michael Fulwiler (24:22):
It gets it 90, 95% of the way there, right? It's the same thing with using AI to draft a blog post. I could tell ChatGPT, write me a blog post on this topic using these statistics, but I'm not going to go and publish that blog post that's written by AI. It's producing a draft that's pretty good, but to what we discussed, it's still not great, and you could probably tell it's written by AI. So then you go in and you clean it up. And the counterargument to that I have heard is that, well, I have to spend so much time editing the AI note, I could just do it myself. And so it seems like the accuracy of that initial output is important, so you're not just spending your time fixing what the AI has created.
Ted Faneuff (25:13):
Yeah, I think that my counter to that would be, again, it can save time and it often does. I think it comes back to your why, what you're using it for. It's interesting because most people don't necessarily want to use AI, but then when they do use AI, they're like, well, it's not doing everything exactly a hundred percent. And it's like, well, no, we don't want it to. You're absolutely right. We don't want it to do a hundred percent. We never want to lose sight of that human validation perspective. And again, it comes back to what you use it for. For me, and for most therapists, I'm going to put myself out there and say, for most therapists, documentation requirements, really progress notes in general, started more as an administrative burden from a system versus truly benefiting the therapist. So most people write progress notes, I would say, the way that they do because they need to meet compliance requirements that are put upon them.
(26:09):
And so I think for most therapeutic progress notes, there's SOAP, BIRP, GIRP, EMDR, DAP, dap is a different connotation, totally different concept here, but across all of those particular progress note formats there's so much variation about what goes in what section, right? Yes, going back to does it save time, most of the time, yes, it will. Most of the time it'll get it right and there's minor edits, but I think again, what is it doing for you? It's synthesizing and organizing the information in a way that your brain doesn't have to then worry about, what goes in this section, what do I put here? And for some therapists, great that they can. There are power documenters out there, they're amazing people who can have a session and by the end of the session their note's done, and more power to 'em. But a lot of therapists don't operate in that way. Right.
Michael Fulwiler (27:03):
Yeah. I was going to ask, for therapists who are listening and feeling like, I don't really need AI, this isn't a problem for me, how would you respond to that?
Ted Faneuff (27:14):
I would say that's great, and it's not for everyone. AI really isn't for everyone and it doesn't need to be for everyone. What I would say, though, as sort of a cautionary response to that, is don't demonize the people that do. I see a lot of this in therapist communities now, where the response will be almost tone-deaf, or really not understanding that there are people actually out there who really can use this for its intended purpose.
Michael Fulwiler (27:47):
I see that as well. It can come across as shaming, or you're doing something unethical using AI, and how could you do that? And I think that there's in general a lot of resistance, which is very valid, but something you and I have talked about is that this is happening whether we like it or not. AI is here. People are using ChatGPT as a therapist, and so my sense is that therapists almost have a responsibility to help steer the ship here, so we don't go down this path of AI being harmful. It's like there's an opportunity, instead of resisting and saying, we don't want AI, to say, okay, let's accept that AI is a thing, and then how can we help to make sure that it's going in the right direction? That's why I appreciate the work that you do, really being the clinician in the room and saying, hey guys, this isn't okay, or we can't do this, we can't cut this corner.
Ted Faneuff (28:51):
I like what somebody on LinkedIn said recently: the toothpaste is out of the tube and it's not going back in.
(28:58):
I think that that's actually a pretty accurate depiction of this. It is out there, it is being used, and even for the people who don't want to use it for documentation or anything really, I think that their voices are almost as important as those who do in sort of continuing to create this ethically and responsibly. Don't just sideline yourself from having the conversation because you don't want to use it. Because there are a couple key places, risks for AI really, that are really important for the entire therapist community to get around, one of them being deepening inequality in the notes, or more bias. So a risk being that if these tools get their data sets from one specific population, like white Western individuals, they might not be able to be utilized in all contexts, and I think that there are people out there who could really benefit the process by putting their involvement in it.
(29:59):
So yeah, we need more clinicians in this space to raise their concerns and talk about it, versus just shutting everything down: I'm never going to use it, nobody should ever use it. Well, people are using it, so get behind that. When it comes to any sort of fear, the way we really overcome that fear is by having people at the table. Kind of going back to something we said earlier, a lot of therapists and clinicians are rightfully concerned, but I say continue to approach it with a healthy dose of skepticism, but also a willingness to engage with curiosity, right? Lend your voices to how this continues to progress.
Michael Fulwiler (30:33):
I went to an AI and mental health event here in Rhode Island recently, and one of the speakers shared that when we talk about AI and technology, we tend to compare it to an ideal alternative, whereas with people we don't do that. So the example that he shared, which stuck with me, was that when we look at self-driving cars, if one self-driving car gets in an accident, we're ready to say self-driving cars are dangerous, but people get in car accidents all the time, and so we're not comparing a self-driving car to a human-driven car, we're comparing it to this ideal outcome. And that sort of reframe applies to thinking about AI in mental health, and in therapy in particular. There are also a lot of therapists out there who are not very good therapists, and so when we think about an AI therapist, is it better than a bad therapist, and where is that line? And so that, to me, is helpful for thinking about the success of AI or how we evaluate it. I'm curious your perspective on that.
Ted Faneuff (31:42):
That's actually a really interesting framing. I'll have to use that in future conversations. I'm taking this on a little bit of a different curve, but I think that yes, when people look at AI, they consider what they perceive it should do, how perfect it should be, and then if there's a mistake, like a hallucination in a note, which we get feedback on all the time. That's one of the things in our product I think is very helpful: we get human-in-the-loop sort of validation of the things that are being put out there. And we get feedback all the time where people kind of say, well, this isn't smart enough, or it's not, et cetera, and it's sort of like, we can't really have it all, right? Either you want AI that's going to get you near the finish line with the most efficiency possible, but you always need to maintain your presence in it, or you are holding it to a standard of it will always be perfect a hundred percent of the time.
(32:35):
And I don't think that that's what we necessarily want. I think that we want to always sort of take it from the perspective, like you had mentioned, that if one self-driving car gets into an accident, we don't demonize the whole of self-driving cars. We go, okay, so what went wrong, and let's fix it. I think the concern for therapists, though, rightfully so, is that given the nature of the notes and the things that we're talking about, it's hard when it gets it wrong. But that's where, really, when people are looking for different tools, it's really important to make sure that there's clinical involvement in all aspects of the product delivery. And I know that this isn't necessarily a pitch for Upheal, but I talk about it from my lived experience, and one of the things that I've advocated for is therapist involvement at all levels where we can. I'm a therapist, I'm head of clinical operations, we have a licensed marriage and family therapist who oversees our clinical product, we have a therapist as an account executive to talk to other therapists, and we have therapists or people with psychological training in the background doing the prompt engineering very actively. So recognize that we have people behind the scenes that are trying to drive it and give it the best outcome possible, but we still need input from the people who are using the product to make it better.
Michael Fulwiler (33:55):
Absolutely, and I love that you're doing that at Upheal. For therapists who are listening who want to get involved, who are interested in doing what you do, what advice would you have for them for breaking into the tech space?
Ted Faneuff (34:10):
Yeah. Well, like we said, there are a lot of companies that are looking at this. I would say just reach out. I got this role at Upheal based on the backstory of my own burnout, but I think Uri, who's the CEO, posted on LinkedIn, maybe, that he was beta testing at that time, and I think this goes back to that curiosity and that okay, let's jump into it aspect. I was very skeptical at first too, like most therapists. I was like, I don't know if I want to do this. But I was curious and I reached out and I said, hey, I'd love to beta test, I'll give you some feedback. And once I started beta testing and giving the feedback, I'm like, oh my goodness, this literally changed my life. Then I proceeded in the conversations with him and I became a clinical advisor for Upheal, and then I ultimately came on full time. But for therapists who are really interested in this, reach out to the companies that are doing this, and reach out from the perspective like we've been talking about, which is to say, you guys need boots-on-the-ground therapists to help you.
(35:15):
Knowing your value as a therapist in this space is really important, and so is projecting that value in these conversations, to say, look, I have something to offer. You're creating a product that is designed for care delivery or something like that, and I can help you. So reach out to people in this space and just truly come at it from the place of, I want to help create this ethically and responsibly.
Michael Fulwiler (35:38):
What about for therapists who are evaluating using an AI tool in their practice? We mentioned data privacy, you mentioned clinical involvement. Are there other criteria that therapists should consider when they're looking at, say, Upheal versus alternatives?
Ted Faneuff (35:54):
Really making sure that people are looking at the privacy policies and the terms and conditions. I think that that's sort of an umbrella thing to encompass some of what you just highlighted, but really making sure that the platform never sells any data, down to the de-identified aggregate data. So in some contracts or privacy policies, it's really carefully structured where it says, we don't sell your data, but then when they de-identify and aggregate the data, it's not your data anymore, so they can sell that. So really making sure that people are understanding down to that level. Also user experience and integration into your workflows. Obviously if AI is going to be a time-saving agent for you, the whole user experience of how you plug that into your tech stack, not to use such a techy term, but into your process,
Michael Fulwiler (36:42):
Right
Ted Faneuff (36:42):
Into your process, making sure that that actually works for you, making sure that it has ways to capture your sessions on all levels with ease. And I think that this comes out of a lot of, I don't want to say trial and error, but really thoughtful thinking about it, at least from our product design perspective. We would put something out there and a lot of companies will put out there, okay, well you can capture the audio of a session, whatever, but now we can capture in-person sessions, we can capture if you're using your own or native sort of video platform within your EHR, we can capture that in a lot of different ways. Making sure that it's just seamless for you is one of the most important things too that we're hearing
Michael Fulwiler (37:22):
When it comes to selling data, one of the questions that I've seen come up is, Upheal says they're not going to sell my data, but what if in the future Upheal gets acquired and then the company that buys Upheal sells my data? Is that a valid concern?
Ted Faneuff (37:36):
Yeah, absolutely. Right. I think it goes back to, we don't have a crystal ball. I say this sort of measured: in any of those sort of acquisition conversations, I think that you are putting trust in the teams that are telling you now that they're curating data carefully and ethically, and if that's the case, there is some level of trust that you have to extend to them that would continue if there was any sort of acquisition process, that they're vetting the people who are potentially buying the platform. I look at it this way: Upheal powers Alma's Note Assist in their platform, and the only reason that it even made sense is because both companies sort of said to each other, what do you stand for? And that created a synergy there. And I think that sort of trust has to extend even to that level if a company is acquired. But I don't know. I don't know how to answer that question the best way, but I do know that if the people who are in charge of it are ethical enough and that sort of synergy exists, then contracts can be written with the aspect of, this is what you are going to do with the data and what you're not going to do with the data.
Michael Fulwiler (38:47):
Yeah, I think that that's valid. No one knows what's going to happen in the future, and I appreciate your honesty and transparency. It's something that I've always appreciated about you and about Upheal. One of the other objections that I hear a lot from therapists about AI, and not just from therapists, but about AI in general, is that AI is really bad for the environment, that the servers that are required to run it in the background require a huge amount of electricity. So I'm curious your reaction or take on that.
Ted Faneuff (39:20):
Yeah, that's really important to acknowledge, the environmental impact and footprint, for sure. It's one that's coming up with increasing prevalence in what we hear. I even think about it from the perspective of, we've got a data center that's being built down the road from our house, and it's not even an AI, well, I don't even know if it's an AI data center necessarily, but we hear a lot about it, people are like, oh my gosh, it's going to suck up all the energy and our energy costs are going to go up. And it's an important one, it's not one that we want to negate, because training these very large AI models like we're talking about does need significant computational power, which translates to energy consumption. And there is a growing movement towards developing more energy efficient or green AI models and utilizing more renewable energy for data centers.
(40:09):
It's a conversation in the tech industry, and we as consumers, we need to be having that. And I think to tackle this, there are some companies and researchers working on a few things. They're trying to make the models smaller, more efficient so that they can do the same amount of work without the need for as much power. And they're training models in places that use cleaner energy, solar and wind farms, and there's really a push to reuse parts of models that are already trained instead of starting from scratch every time. So it's really about using smarter designs, cleaner energy, and reusing what's already built. I think one of the things that I like in this too, if you think about just computers in general, back when computers started, they were entire rooms.
Michael Fulwiler (40:51):
I was just going to say that. Yeah. It reminds me of when computers used to be as big as refrigerators
Ted Faneuff (40:57):
And
Michael Fulwiler (40:57):
Now they're on your phone.
Ted Faneuff (40:59):
Yes. I think we'll see that more as the knowledge becomes more sophisticated, the footprint, or how much size it takes for the computational power, will continue to shrink. Again, far above my pay grade, there are all sorts of quantum sort of things that people are doing, I don't even want to try to sound smart there, that are working to shrink the impacts and the power it takes to do this.
Michael Fulwiler (41:24):
There was a quote I saw recently from Sam Altman, who started OpenAI, which owns ChatGPT, and he said that they're spending millions of dollars a year for ChatGPT to respond to please and thank you. Oh, I did read that. Yeah.
Ted Faneuff (41:38):
Yeah,
Michael Fulwiler (41:39):
I thought was great.
Ted Faneuff (41:40):
Well, the sort of funny thing about that though, and I saw some responses to that and I think they're so valid. I'm one of those people, I maybe need to curb it a little bit, but I do say please and thank you. And I think it's sort of funny because people are like, look, if, big if, it ever does become sentient, we want it to remember that we were kind to it.
Michael Fulwiler (41:59):
Yeah, I do the same thing. So what is Upheal and what do you guys do?
Ted Faneuff (42:05):
Yeah, so Upheal is an AI assisted platform built for therapists, and really our mission is to help provide therapists with better balance. And we do that using AI to help create clinical progress notes, treatment plans, and there's a lot of upcoming things that we're going to be doing to help increase compliance throughout the note taking process. Really, it's just a platform designed by clinicians for clinicians to ease administrative burden.
Michael Fulwiler (42:34):
As we wrap up here, I'm wondering, do you have any final thoughts, or is there anything that we haven't touched on that relates to AI for therapists, to close us out?
Ted Faneuff (42:47):
I think we've pretty much covered it all. We probably could spend several episodes talking, I think, in depth about some of these things, but I think that we captured the most important things. I think the one thing I would say in closing is, recognize the benefit that it potentially has for you. And if it doesn't, it's fine. That kind of recaps some of the things that we've been talking about, but I'll close on a story. We go to conferences and we have conversations with a lot of therapists. Some of 'em walk past the booth, they look at it and they're like, nope. And they keep walking.
(43:18):
And then there are other people who walk by and they say, you know what? This changed my life. Not even just sort of happenstance, sort of like, yeah, okay, it changed my life. No, they really look at you and they say, this changed everything. And then there are those in the middle who are sort of like, I'm curious about it. And so the story that I'll share just briefly is, we had a woman come up to us at the Psychotherapy Networker conference in DC and she goes, what's this about? And I started to talk to her about my own story and that resonated with her, and she broke out in tears and she pulled up a seat next to us and she became our best friend for a couple hours, and we really dove into her story and listened to her struggles. And at the end of this, and this is my sort of personal mission in all of this, one I hope that any clinician who's overseeing or helping to oversee these processes shares: if we can help therapists remain in the field, there is a larger problem that we're solving, which is access, right?
(44:23):
If we're keeping clinicians able to focus on the work and do good quality clinical care, we're contributing in ways that are more impactful than just, can it do documentation. And so this story of this woman: she signed up for our platform, I think the following week, and has become quite pleased with it, and she's staying, right? But in those tears, there were a lot of conversations where people are like, I feel like I wanted to leave the field altogether. So that's my sort of final thing. I hope that any clinician who's coming into this understands that it's not just about how much time it saves you in your documentation. It's about really increasing access and keeping therapists in the field.
Michael Fulwiler (45:07):
I love that. Ted, thanks so much for coming on. For folks who want to connect with you and learn more about Upheal, where can they do that?
Ted Faneuff (45:15):
Yeah, they can go to Upheal.io. I'm an open book too, and I put this out there. If people want to email me directly, it's ted@upheal.io. I welcome any sort of conversation on this for sure. Yeah, thanks Michael.
Michael Fulwiler (45:28):
Awesome. Appreciate you. Thanks for listening to this episode of Heard Business School, brought to you by Heard, the financial management platform for therapists. To get the class notes for this week's episode, go to joinheard.com/podcast, and don't forget to subscribe on YouTube, Apple, Spotify, or wherever you listen to podcasts. We'll see you in the next class.