In a world where technology is increasingly intertwined with human emotions, a new frontier is emerging—artificial intelligence that can understand how we feel. Imagine an AI system that doesn’t just analyze data but senses our emotional state through the words we choose, the way we describe our experiences, and even how consistently we check in with ourselves.
This isn't science fiction. Advances in natural language processing, psychology, and machine learning are making it possible for AI to interpret human emotions with surprising accuracy. Researchers are now using AI-powered systems to classify mood states based on self-reported text, combining insights from neuroscience and behavioral psychology to enhance human decision-making.
But how does an AI system detect emotions through words? What psychological theories guide its ability to distinguish between stress, motivation, or disengagement? And could this technology reshape how we interact with machines—perhaps even with one another?
Listen For
9:46 AI Can Accurately Estimate YOUR Emotional State
12:34 AI Detects Emotions Before Humans Do
18:41 Is this Ethical?
22:48 Answer to Last Episode’s Question From Guest Greg Wasserman
Guest: Michelle Baty, Neuropsychologist
Threads | Instagram | Facebook
Rate this podcast with just one click
Stories and Strategies Website
Are you a brand with a podcast that needs support? Book a meeting with Doug Downs to talk about it.
Apply to be a guest on the podcast
Connect with us
LinkedIn | X | Instagram | YouTube | Facebook | Threads | Bluesky | Pinterest
Request a transcript of this episode
Doug Downs (00:18):
A gleaming spacecraft, a crew bound for Jupiter, and the most advanced artificial intelligence ever built: HAL 9000. HAL was a marvel, until it wasn't. It could process data faster than any human, control every system on the ship, and predict actions with precision. It could hear the crew, analyze their words, even anticipate their commands. But when it came to emotions, HAL was blind. It couldn't recognize doubt, fear, or hesitation. So when the astronauts questioned HAL's judgment, it didn't see uncertainty, it saw a threat. But what if HAL had been different? What if AI could recognize stress, uncertainty, and hesitation? What if, instead of just analyzing words, it could sense emotion, detect burnout and disengagement, even predict a crisis before it happened? Today, AI isn't just processing data, it's understanding people. Today on Stories and Strategies. In 2001: A Space Odyssey, HAL 9000 couldn't understand human emotions. In 2025, it can. My name is Doug Downs. My guest this week is Michelle Baty, joining today from Melbourne in Australia. Hi Michelle. G'day!
Michelle Baty (02:05):
G'day, Doug.
Doug Downs (02:06):
How are things in Melbourne? I've heard that just kind of west of you. In Adelaide, they reached 48 degrees last week, which is 118 Fahrenheit. Have you seen hot temps like that?
Michelle Baty (02:18):
We definitely have here in Melbourne, and Doug, I'm also from Canada, very, very close to you. But I have to say I do not miss the cold. I would take the 40 to 44 degrees we've had here in Melbourne lately any day over a Canadian winter, but don't tell anyone back home that.
Doug Downs (02:34):
Oh my gosh. So when it slips down to like 30 degrees, or let's call that 85 or so Fahrenheit, is that like you find yourself putting on a jacket in 30 degrees?
Michelle Baty (02:44):
Not quite. I'd say maybe about 21. Twenty, twenty-one, yeah.
Doug Downs (02:49):
Yeah. Fair enough. Michelle, you are a neuropsych educator and an R&D systems developer. You build neuro-based retention systems for ethical online coaching companies preparing to scale. The goal you help these companies reach is higher retention, higher lifetime value of the client, and reduced churn rates, all using science, and in your case, brain science. So can you explain for me the neuroscience behind how emotions are expressed in language? Crack open my skull and tell me what my brain is doing in those moments.
Michelle Baty (03:30):
Great question, Doug. I know you love the neuroscience, so I'm extra excited for you to be a part of this episode too. Let's start with a little bit of context on how the nervous system is structured. When we look at both the brain and the body, it is a continuous communication feedback loop with no start and no finish. And emotions are one of the few pillars that show up in terms of how human beings express themselves, how they understand themselves, and how they understand the world. So when we look at how we interpret emotions, we also have to take into account how the nervous system takes in information and interprets it. Let's start with that. First, your brain has three different layers, simplifying for our listeners today. On the top we've got our thinking brain. That's where thoughts come up. It's also associated with our language centers, how we might speak or write.
(04:22):
Then in the middle we've got our emotional brain. This is where our memories are, but also our fear centers. Very, very important in terms of how we think and behave and speak to the outside world. And then below that we've got our reflex brain. This is all based on survival. It's also very somatic, which means sensations in the body and also how our external environment causes our body to respond to things. That's going to trigger off different emotions. It's then going to trigger off different thoughts. It's then going to trigger off different behaviors. And in turn, coming back to your question, the words that we choose to share with other people that we might write down and also our thoughts and perceptions about things. So your question, Doug, was how are emotions expressed in language? First, it's going to depend on the state of the nervous system.
(05:13):
There are two states that show up, dependent on whether or not we feel safe or we feel connected to a tribe. The first state that might show up is a state of threat. That's going to cause our emotions to be on the very sharp edges, and we might experience things like frustration, anger, fear, overwhelm, panic. So in a state like that, when we have those really intense emotions, that causes our thoughts to have that same sharp quality. When our thoughts have that same sharp quality, the words that might come out are also going to have that quality. An example of this, Doug, might be: there's someone that you really care about, and you happen to have a little bit of a tension point. You may have their best interest at heart, as do they. And at the same time, if that threat state is online, we may notice that our words take a much sharper quality.
(06:07):
We might say things that we don't intend to say. We might say them a lot quicker than we normally would. So we start to see the interaction: when our nervous system is in a state of threat, that first state, our emotions start to get heightened and a lot sharper. That causes our thoughts to follow that same path, and then what we say and how we communicate and behave follows that domino effect. Now, in the second state there, Doug, when we're feeling a lot safer and our emotional brain, that mid part, is not as sharp, we tend to have a lot more space. Our nervous system is a little bit slower, still incredibly fast, but by comparison a lot slower. And as a result, our thinking brain has time to consider what we might say. We may pause before we speak, we may check in with ourselves, or ask a question of curiosity before we make a statement. It may be that we're able to regulate those emotions and the words that are attached to them, frustration in the moment, but we may be able to bring in perspective and start to see through someone else's eyes. So when we look at how emotion shows up in that state in language, we have less amplification of emotion, less speed and sharpness in language, and therefore when we speak to the outside world, it tends to be more considered, less abrasive, and we tend to be a little bit clearer in how we communicate as well.
Doug Downs (07:34):
Okay. A key piece of this discussion is polyvagal theory. What does it mean for how we understand and participate in human communication?
Michelle Baty (07:46):
Polyvagal theory was founded by Stephen Porges, and the application of it, which is the model that I became very involved in about 10 years ago, is from Deb Dana. Polyvagal theory allows us to separate the nervous system into three major activation states. When we say activation, Doug, what we mean is how much adrenaline is in the system. Is it high, is it low, or is it steady? And when we look at these three layers of activation, we can also start to categorize emotions not just as a whole, but as five different kinds of responses, and we put them under those umbrellas. So what do I mean by this? In polyvagal theory, if we've got very high adrenaline, i.e., high activation, we'll start to see, like I spoke about before, survival reflexes show up, and there are five survival reflexes. You may have heard of one or two.
(08:44):
They are fight, flight, freeze, collapse, and attach. Most people do not know the final two, and they are so important in being able to understand how survival reflexes interact with each other, and also the cycles, the ebbs and flows, that might show up in emotion, in thought, in speech, and in behavior. In my case, that's working with companies with very high volumes of clients, and knowing those interactions as clients go up and down is very, very important to be able to track. So polyvagal is not a therapeutic model, but instead a framework to help us understand and categorize what might be going on in someone's nervous system.
Doug Downs (09:30):
Just for the listener, what this means is that whether it's a recorded conversation or an email exchange, using polyvagal theory you can look at the words that were used and place a fairly accurate estimate on the person's state of mind.
Michelle Baty (09:46):
A hundred percent. Doug, that's spot on. So it is a general model to be able to understand where someone's nervous system is leaning. And then underneath that we're able to categorize the thoughts, emotions, and behaviors where we can start to zoom in and as a result, look at different interventions, different communication rhythms specific to the survival reflex that they might be in at that moment.
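(For listeners curious what estimating a nervous-system state from text might look like in the simplest possible terms, here is a toy sketch. It is purely illustrative: the state names, keyword lists, and scoring are invented for this example, and Michelle's actual system, as she describes it, is far more sophisticated than keyword counting.)

```python
# Toy sketch: guess which polyvagal-inspired "state" a message leans toward
# by counting simple linguistic markers. Purely illustrative; a real system
# would use trained language models, per-client baselines, and richer signals.

# Hypothetical marker lists, one per state (invented for this example)
MARKERS = {
    "threat":    {"angry", "furious", "can't", "never", "hate", "now", "panic"},
    "shutdown":  {"tired", "pointless", "whatever", "numb", "alone", "quit"},
    "regulated": {"curious", "wondering", "perhaps", "notice", "grateful"},
}

def estimate_state(text: str) -> tuple[str, dict[str, int]]:
    """Return the state with the most marker hits, plus the raw counts."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    counts = {state: len(words & markers) for state, markers in MARKERS.items()}
    # Ties (including all-zero counts) fall back to the first state listed
    best = max(counts, key=counts.get)
    return best, counts

state, counts = estimate_state("I'm furious, I can't do this now!")
print(state, counts)
```

Even this crude version shows the core idea from the conversation: the signal is not *what* opinion the words express, but how sharp and amplified the language is.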
Doug Downs (10:10):
So AI can recognize when I'm struggling emotionally, possibly before I even recognize it myself.
Michelle Baty (10:21):
Doug, that's going to depend on your skill level. Now, we know each other a little bit; it seems you've got some pretty high skills. It also depends on the level of interaction that you have with your clients who are engaging with this process, inclusive of your coaching process, or if you're a contractor, however you're supporting them with it. So there are some dependencies there. To take a look at how the client is interacting over a period of time, looking at the first interaction that you have with them, whether that's a phone call, online, or written, we want a base level of information to be able to cross-compare. In addition to that, Doug, there are also stock-standard patterns across nervous systems as a whole. If you have a nervous system and you have a brainstem, there are going to be very specific patterns that show up.
(11:11):
That will be, I'm not going to say guaranteed, because I'm a scientist, but pretty darn close to guaranteed, to let us know which state you're in. So if a client or someone else didn't have the level of skill that I might have with many years of practice, or yourself, Doug, or someone who's quite reputable and able to practice these models, the likelihood is pretty high that we'd be able to pick it up before they'd be able to notice it. The second would be before they're able to intervene for themselves, to regulate for themselves; they might not have the skill level to do that. Or, number three, they might not be able to put words to it. So when we look at picking up different states in the nervous system, Doug, first you have to know where to look, and you have to know what to do with what you find.
(12:01):
The next is, in order to regulate it, you have to be able to notice it, name it, and then bring in resources to support it. And the third is being able to do that in conjunction with a coach or a contractor or someone that you're collaborating with. So your direct question is, can AI predict it before someone is able to pick it up themselves? With the sophistication of layers and skill that's required for people to go through that process, notice it, name it, regulate it, and then interact and learn, it's a pretty high likelihood that we'll be able to pick it up and intervene before they're able to. Yes.
Doug Downs (12:34):
Okay. So AI has a high likelihood of detecting it, maybe detecting that someone is frustrated or angry, or happy, I suppose. Could AI then develop an automated response that guides them more toward where I want them to go?
Michelle Baty (12:52):
Yes, that's exactly the process that we went through. That's the R&D behind this. And it's important to name, Doug, that in the decision to step into this kind of work, the team I worked with and I ensured that it is all above board and entirely ethical. In fact, we work with the top engineer in Australia for the responsible and ethical application of AI, because we know that there are some big risks having these tools and these practices. I put that as a caveat because the power of these tools always makes you a little bit nervous. But in that, when we are able to detect what state a client's nervous system might be in, with the level of ethics that are here and the responsible application, the algorithms and the process that we built on the backend, which was informed by polyvagal theory and internal family systems, allow coaches, or whoever the employees are, to modify their communication to the client, to be able to settle things like frustration but also understand the source from which it came.
(13:59):
Or if a client's not feeling as engaged, feeling demoralized or defeated, it can modify communication to encourage, re-engage, and support without the normal triggers that might occur in human-to-human interaction. An example of that, Doug: often when someone is frustrated at you, your mirror neurons, your nervous system, will have a frustrated response back. That's a defensive response. It's very natural, it's very normal. But in coach-to-client interactions, where there is a difference in authority, our job in the authority position is to ensure that we're creating the safest and most supportive environment possible to allow a client to engage. Which means if a client is feeling very frustrated, the ability to understand where that's coming from, not take it personally, not become defensive ourselves, that's where this application becomes incredibly powerful. That's the R&D piece.
(15:02):
So in that, it allows coaches, or anyone in the online environment engaging with this software and with these algorithms, to respond to the client as if they are number one in the room, while supporting their own system to not become defensive. That, as a result, gives the client a better outcome. You have better coach-to-client relationships. Whoever the coach is and whatever their domain expertise is, they don't have to be a neuropsychologist to pull this off. So if it's in nutrition, if it's in business, if it's in sales, if it's in creative arts, they get to focus in on the product and the practice that they are best at, and this software and these algorithms wrap around that product. They don't have to go to university for as long as I did; they don't have to do all this work. Instead, they have the best possible response, unique to the client in that moment, at that time, to keep engagement high. The client gets the best outcome and feels less threatened by authority, the coach feels way more regulated, and as a result you get a beautiful coach-to-client relationship. Clients have better outcomes, companies have better outcomes, overall.
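(A rough illustration of the "wrap-around" idea Michelle describes: a detected state could feed a simple response-shaping step before a coach's reply goes out. Again, this is a hypothetical sketch, not her actual product; the state labels and tone guidance strings are invented here.)

```python
# Toy sketch: shape a coach's draft reply based on an estimated client state.
# The state labels and tone guidance are invented for illustration; a real
# system would draw on polyvagal-informed interventions, not canned strings.

TONE_GUIDE = {
    "threat":    "Acknowledge the frustration first, slow the pace, avoid debate.",
    "shutdown":  "Use warm, low-demand language; offer one small next step.",
    "regulated": "Engage normally; this is a good moment for new material.",
}

def shape_reply(state: str, draft: str) -> str:
    """Prefix a draft reply with tone guidance for the detected state."""
    # Unknown states fall back to the "regulated" guidance
    guidance = TONE_GUIDE.get(state, TONE_GUIDE["regulated"])
    return f"[tone: {guidance}]\n{draft}"

print(shape_reply("threat", "Let's look at your results from this week."))
```

The point of the design, as described in the episode, is that the coach keeps their domain expertise and the software only adjusts *how* the message lands, not *what* is being coached.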
Doug Downs (16:12):
It's profound. We're not all the same. The algorithm recognizes patterns and produces pattern-like responses. But what if I'm someone whose values are different from someone else's? I won't even pick geographies. Let me make up a scenario where one person is hardcore conservative, they've never voted liberal in their life, and another person is hardcore liberal, they've never voted conservative. Can the algorithm still recognize the linguistic patterns, and would the AI response still have equal effect? That seems impossible to me.
Michelle Baty (16:54):
It might seem impossible. However, when we look at the source from which the algorithms are built, they're built on patterns of the brainstem and the brainstem is a lower level of consciousness, a part of the brain that does not understand politics.
Doug Downs (17:10):
A primitive part of the brain, yeah,
Michelle Baty (17:12):
A primitive part of the brain. It does not have the sophistication to understand values, principles, politics, or any of the things that might
(17:21):
convolute any of these discussions. So in actual fact, we don't really need to know what someone's political preferences are, or any of those things, because the brainstem is the thing that drives how sharp those communications are. Someone can have a political opinion about something, and their brainstem will be driving how intense that communication is. So I'm not fussed at all about what someone's opinion is. I am definitely interested in how much their brainstem is leading the show. If we're able to regulate and support someone's brainstem and nervous system, it's a very high likelihood that they'll be able to communicate and get their point across in a way that is respectful, curious, collaborative, engaging. So it's a bit of a lateral step to your question there, Doug, which is, well, can it pick these things up? And to that I respond, well, it doesn't really matter, because the brainstem doesn't know what those things are. As a result, the software and these algorithms cut all of that noise out of the picture and start to look at the primal brain, the primal responses of a human being that might be driving the amplification of those things. If we support that, if we help and regulate that, regardless of what someone's opinion or background is, we are going to get the best out of that person and the best out of the coach-client interaction.
Doug Downs (18:41):
Where do you see this in the next five to 10 years? And obviously I have to ask you to speak to the human risks that are connected to AI. I know the ethical, I won't say dilemma, but the heart-and-soul look you did into your own ethics before deciding, yes, this is the right way to go. Where is it in five years, and is it heading to an ethical place?
Michelle Baty (19:14):
First, I'll give an altruistic answer, and that is: if we had these models and frameworks interacting with all of the human beings behind AI, it is very likely, Doug, that we'd have more regulation across the board, less amplification, fewer primal responses, less greed, more collaboration, more cooperation, and we'd probably see a trajectory that tends to be a bit more ethical and a bit more supportive. That's my altruistic answer. It's the reason I do what I do. It's also the reason, before engaging in this product and designing this software, that over a number of years I would work with someone for three years plus before I ever taught them this algorithm. The screening process is there to get to know who they are, to teach them how to be in a state that is ethical, that is responsible, that is taking ownership, that does have other people's best interest at heart.
(20:12):
It is very important that these tools are not put in the wrong hands. I think that's very important to say, and I think there's huge risk to it. So little me and our team here, we're doing our very best to engage in the most ethical way, completely above board. Even the experiments that were designed had time limits on them, so we could track and understand what's going on, as opposed to just seeing what happened and taking that huge risk. So, your question about where it's headed: I think that it is moving very, very quickly, the very small corner of it I'm in, Doug. My hope is that we can support and help those who are engaging with it to get the best possible outcomes for all. It's not my sense, Doug, that it needs to be a manipulative tool to be able to support companies and clients.
(21:02):
In fact, we've seen over and over and over again the ROI of this, a return on investment: retention going up, churn going down, client outcomes improving. That's what builds an effective business and in turn gives higher profit. It also supports clients in getting the result that they want and need, and both parties win. So in terms of the very dangerous aspects of it, you are gosh darn right, it makes me nervous to talk about. And also, for those that might be on the fence, these are tools that allow people to engage in ethical ways where all parties get the best possible benefit. So I haven't answered your question there directly, Doug, because it makes me nervous.
Doug Downs (21:45):
I think you answered it fairly well. I think that's fair. There's an old philosophical saying about the brain: if our brains were so simple that we could understand them, we'd be so simple that we couldn't. It's just a wonderful paradox to think about, but it feels like maybe we're actually getting closer.
Michelle Baty (22:08):
Yeah, I'm always mindful of the ego saying we understand. Give it another 10 years, another 20 years. But it has been my experience, Doug, working with over 3,000 people now across nine different industries, with different elements of this software and these products being used, that these frameworks help us organize and make sense of things to be able to give the best possible option on where our client is at, where our coach is at. I am incredibly hopeful that I can continue to move in this direction, and I'm also hoping that we can compete a little bit with some of the other strains of AI.
Doug Downs (22:44):
Extraordinary. Thank you so much for your time today, Michelle.
Michelle Baty (22:48):
Welcome, Doug.
Doug Downs (22:49):
Well, hey, in a previous episode, our guest, Greg Wasserman, he left a question for you.
Greg Wasserman (22:56):
We're seeing it get harder and harder as media institutions have evolved and journalists are fewer. So from a PR perspective, where do you see earned media versus paid media evolving in the next few years?
Michelle Baty (23:19):
The first word that comes to mind on this, Doug, is niche. I am all about niche. When we look at the AI system that's been developed, another interchangeable term for that is niche. So not just niche in terms of how we might describe a demographic of a client; we're looking at the absolute details, the primal details of the psychographic. So Doug, on that question about earned media, my response is: as we move forward, people are moving to build skills and to ensure those skills are applied to very specifically solve the problems of the people they're aiming to help. My prediction, and I've seen it again and again, I am very much in industries that are doing this as well, is that those are the kinds of media that are going to come out on top. People need their problems solved. When you focus right in, when you can use different algorithms like that to genuinely solve problems, not just sell to people, not just get them over the line, but truly help people, that's what's going to come out on top.
Doug Downs (24:22):
Agreed, a hundred percent. Your turn, Michelle: what question would you like to leave behind for our next guest?
Michelle Baty (24:29):
What is one belief you have about your industry that most people would disagree with?
Doug Downs (24:35):
Oh wow. Okay. I usually take a stab at these, and I don't know if I deviate from the majority on anything. Going back to Greg's question, this constant quest for our clients to get them earned media coverage, I do deviate strongly from that. Not that earned media doesn't count, and I do like what you said about finding the niches and the niche media, but I'm less interested in getting press coverage and more interested in building a stage upon which others can display their expertise. So I suppose that makes me a deviant in some way. Great question. That's going to make somebody really think.
Michelle Baty (25:26):
I hope so. It made you think so we bond.
Doug Downs (25:29):
I wish I had more time. I do appreciate your time. Thank you so much.
Michelle Baty (25:34):
Bye, Doug.
Doug Downs (25:35):
Here are the top three things I got from Michelle Baty in this episode today. And I don't mind saying that this kind of software application figuratively blows my mind. I'm amazed that AI is advancing to this point, and there are lots of ethical dilemmas here to talk about. Here are the top three things. Number one, emotions shape language through the nervous system. This is often called the triune brain model: the brain's three layers, thinking, emotional, and reflexive, work together to influence how emotions are expressed in speech and writing. Number two, AI can detect emotional states before we humans do. Using neuroscience-backed algorithms, AI can identify emotional cues from text or speech, often recognizing a person's emotional state before they consciously do so themselves, making it a powerful tool for communication management. And number three, ethical AI can improve human interaction. With responsible application, AI-driven emotional analysis can help businesses and coaches communicate more effectively, reducing conflict and increasing engagement by tailoring responses based on a person's emotional and cognitive state. Wow. If you'd like to send a message to my guest, Michelle Baty, we've got her contact information in the show notes. Stories and Strategies is a co-production of JGR Communications and Stories and Strategies Podcasts. If you liked this episode, please leave a rating, possibly a review; those work great on the primitive podcast algorithms. Thank you as always to our producer, Emily Page. And lastly, do us a favor: forward this episode to one friend. Thanks for listening.