She's That Founder: Business Strategy, Time Management and AI Magic for Impactful Female Leaders
You’re listening to She’s That Founder: the show for ambitious women ready to stop drowning in decisions and start running their businesses like the confident CEO they were born to be.
Here, we blend business strategy, leadership coaching, and a little AI magic to help you scale smarter—not harder.
I’m Dawn Andrews, your executive coach and business strategist. And if your to-do list is longer than a CVS receipt and you’re still the one refilling the printer paper... this episode is for you.
Each week, we talk smarter delegation, systems that don’t collapse when you take a nap, and AI tools that actually lighten your load—not add more tabs to your mental browser.
You’ll get:
- Proven strategies to grow your revenue and your impact
- Executive leadership frameworks that elevate you from manager to visionary
- Tools to build a business that runs without burning you out
So kick off your heels—or your high-performance sneakers—and let’s get to work.
Tuesdays are deep-dive episodes. Thursdays are quick hits and founder rants. All designed to make your business easier, your leadership sharper, and your results undeniable.
If you’re ready to turn your drive into results that don’t just increase sales but change the world, pop in your earbuds and listen to Ep. 10 | Trust Your Gut: Crafting a Career by Being Unapologetically You With Carrie Byalick
104 | AI Ethics and Security with Elizabeth Goede (Part 1)
Is your AI use exposing your business to risks you can’t see coming?
It’s not just about saving time — it’s about protecting your clients, your content, and your credibility.
In this episode, Dawn Andrews sits down with AI strategist Elizabeth Goede to unpack the real (and often ignored) risks of using AI in business. From ChatGPT to Claude, learn what founders must know about security, data privacy, and ethical use — without getting lost in the tech.
“You wouldn’t post your financials on Instagram. So why are you pasting them into AI tools without checking where they’re going?”
Listen in and get equipped to lead smart, safe, and scalable with AI — no fear-mongering, just facts with a side of sass.
Want to stop talking about AI and actually use it safely and strategically?
Join us at the AI in Action Conference, happening March 19–20, 2026 in Grand Rapids, Michigan. Get hands-on with 12 action-packed micro workshops designed to help you apply AI in real time to boost your business, protect your data, and ditch the digital grunt work.
What You’ll Learn:
- How even small service businesses are vulnerable to AI misuse
- The one rule for deciding what data is safe to input into AI tools
- Why AI models like ChatGPT, Claude, and Copilot aren't created equal
- The hidden risks of giving tools access to your drive, emails, or client docs
- What every founder should ask before signing any AI-related agreement
Resources & Links:
- AI in Action Conference – Registration
- Follow Elizabeth Goede socials (LinkedIn, Instagram)
Related episode:
- Episode 93 | The Dirty Secret About AI No Female Executive Wants To Admit—And Why It’s Hurting You
- This episode dives into the real reason female founders hesitate with AI — and the hidden risks of staying on the sidelines. Includes smart insights on the security tradeoffs when you don’t understand where your data is going or how to control it.
Want to increase revenue and impact? Listen to “She's That Founder” for insights on business strategy and female leadership to scale your business. Each episode offers advice on effective communication, team building, and management. Learn to master routines and systems to boost productivity and prevent burnout. Our delegation tips and business consulting will advance your executive leadership skills and presence.
She's That Founder
104 | AI Ethics and Security with Elizabeth Goede (Part 1)
Elizabeth Goede: I meet with lawmakers whenever I can because we absolutely need some policy put in place much more expediently than they did with social media. They took 12 years to figure out how to do that. We don't have that luxury in this space right now.
Dawn Andrews: Hey, hey, hey. Welcome to She's That Founder. I'm bringing you something different today, a two-part special guest series that every single one of you needs to hear, whether you're running a solo practice or scaling to 50 employees or beyond. I'm sitting down with Elizabeth Goede, known as the AI Whisperer, who works with enterprise-level companies, defense contractors, and highly regulated industries on AI implementation.
And before you think, that's not for me, stay with me. Because if Fortune 500 companies and the Department of Defense are worried about AI security and ethics, you should be too.
Here's why you need to listen to both parts of this conversation. You're probably already using AI tools like ChatGPT, Claude, Canva, Notion, or Copilot in your business, but you might accidentally be exposing your client data, your proprietary frameworks, even your own likeness without realizing it. Elizabeth is gonna show you exactly how to protect yourself. So here are the three critical things you're gonna learn in these conversations. The public social media test for AI safety, and it's the one simple rule that will immediately protect your business data when using any AI tool.
The real difference between learning models and language models, and why it matters for everything you put into ChatGPT, Claude, or any other AI tool. And how to create a simple AI policy for your business. Yes, even if you're a solopreneur. That policy protects you, your team, your clients, and your intellectual property.
This isn't about fear or overwhelm, y'all. This is about being strategic, protective, and smart as you integrate AI into your business operations. Elizabeth is here with us bringing 25 years of marketing and brand-building experience.
She's been in the AI space since way before it was cool. She's my big sister in the AI world, and she is here to have our backs. So grab your notebook, maybe pull up your AI tools while you listen, and let's talk about how to do this right.
Dawn Andrews: You guys, I get to have one of my besties on. I'm so excited for you to meet Elizabeth Goede. She is known as the AI Whisperer, and she works with large companies at the enterprise level and even with the defense industry. And today we're talking about AI ethics and security.
And you might have a small business, a medium business, maybe you have an enterprise business. But I know if you're listening to the podcast, you're a female founder leading a business in this age of integrating AI into what it is that you do and what it is that your whole team is doing.
And I wanted to make sure that we had an opportunity to talk about how to do that thoughtfully and responsibly and with some insight and forethought. So Elizabeth, thank you so much for being with me today.
Elizabeth Goede: Thanks so much for having me here. Such an important topic that I think not enough people are really thinking about.
Dawn Andrews: Yeah, AI can kind of be the shiny new toy, and everybody gets caught up in the excitement of trying it and using it and saving some time, but not really understanding the implications of using it. And I don't think any of us fully understand them. So I'm glad that you're here to talk with us about that today.
Elizabeth Goede: I was just speaking yesterday to a room full of HR professionals, and I was sort of talking about this idea of FOMO. There's just a lot of FOMO that's pulling us into places that maybe we're not even ready to go yet. But we feel like if we don't go there, we're gonna be behind.
Dawn Andrews: Yeah.
Elizabeth Goede: And especially when, you know, I mean, obviously you know that my team, this is all we do: we work in artificial intelligence and we work with highly regulated industries, right? So Department of Defense, aerospace, finance, legal, healthcare, those types of spaces, the spaces that really can't afford to make those types of mistakes. And yet I had a patent attorney at another speaking engagement I was doing who had been using DeepSeek.
Dawn Andrews: We're gonna hold the DeepSeek conversation just for a minute. Okay. So let's set it up for everybody with the basics.
So when we're talking about AI ethics and security, what are we really talking about? Like how does it show up for your everyday founder and entrepreneur, maybe not just these larger enterprises?
Elizabeth Goede: When we think about security, even outside of artificial intelligence, it's like with everyday things, right? If you haven't had it happen within your family, you know somebody who has had it happen, which is, you know, someone uses some form of deepfake and gets you to give them money or click on something that you shouldn't click on, and then they've captured things.
No matter what, we all really need to think about, whether it's in our professional life or our personal life, how we are training ourselves to be more skeptical at first blush with new things that are coming out. And even things that have been out that maybe have taken a different toll.
So when we think about artificial intelligence, and we think about language models, right? We think about ChatGPT, Claude, Gemini, Copilot, those types of things. If you are using them where you have just gotten sort of your own subscription, or you're on the free version, you are safest if you treat the information that goes into those tools as anything that you would be willing to put on a public social media profile or a public website.
Dawn Andrews: Good. Okay. So that's the starting point for everybody listening out there. This is where we're starting from, and then we're gonna go deeper into the conversation. So I would love to know, because you are my girl, but you are also the AI Whisperer.
Tell me how that came to be. Like, gimme a little background.
Elizabeth Goede: Yeah, so, you know, I've always been naturally curious, so I've always wanted to sort of be in, like, the latest tech that's out there. In 2012, I co-founded a tech company. We would now call it an AI tech company, but back then we just said, you know, we worked with algorithms, 'cause we did: we matched chemical components of wine to people's flavor profiles.
I got to work with this incredible team of people. We had to actually, like, figure out how to get our own data set. Then we had to figure out how we actually created and developed the algorithms to be able to create the output we needed them to.
And so once you sort of have that experience, right, once I had that experience and that genie was sort of out of the bottle, you're not putting it back in. And so I stayed in this space. Most people don't know, but generative artificial intelligence actually came out in 2017, with much more guardrails around it, because for the most part, you know, you had companies that created it in a way that they were controlling how you used it.
But in late 2022, we had ChatGPT come out, an actual chat model that was sort of open.
For lack of a better way to describe it. I mean, there are still guardrails on all of these tools unless you're getting into API connections, which is a very technical term that maybe we'll get into later. You sort of had a choice at this point in time. The trajectory of artificial intelligence was not climbing at this rate anymore. It was basically starting to go
Dawn Andrews: like a hockey stick.
Elizabeth Goede: It was totally in that hockey stick. And I would say that hockey stick was standing almost completely erect, right? Totally straight up. You know, we sort of looked at it and we said, okay, so we can either sit back and hope somebody else decides that we're gonna behave safely, securely, and ethically, or we can take the lead on it. And we decided we were gonna take the lead on it, because we knew what we thought was right and wrong, and we would advise other people on what we thought was right and wrong. And that didn't always align with everyone else's.
Dawn Andrews: Hey, lovies, I'm jumping in here real quick to ask you if you're tired of being the bottleneck in your business. If you're editing everything at 10:00 PM because your team can't capture your voice, you need to be at AI in Action, March 19th and 20th, 2026, in Grand Rapids, Michigan.
I'm speaking on voice architecture: how to use AI to scale your leadership presence without losing what makes you you. You'll walk out with a working custom GPT trained on your voice, ready to use Monday morning.
Join in person or virtually, but grab that early bird pricing now, girl, because it won't last. The link is in the show notes. If you wanna stop being the bottleneck and start scaling your voice, come join me at AI in action. March 19th and 20th, 2026 in Grand Rapids, Michigan.
Dawn Andrews: Well, I'm super glad that you did. Let's have that real-talk moment, because you obviously experienced it in the very early days of AI being open. Put yourself in the smallish business owner's shoes. What do you feel like are the biggest ethical or security risks that those kinds of business owners face, especially if they're service-based businesses that are handling client data or creative work, things that might have IP concerns around them?
Elizabeth Goede: Yeah, you hit the nail on the head. That was exactly where I was gonna go. I mean, we have to be judicious in understanding where the values are in our business. And when I say value, I don't mean the values that we put on the wall.
I'm talking about the value of our business as far as what actually allows us to monetize our business. We absolutely have to be careful that anything that is considered a trade secret, things that we would consider to be confidential and wouldn't wanna share out there, we don't put in, because most of these models are learning models, and so that information can come back out. Even simple things like us recording this conversation right now. I know we have an agreement on how this is going to be used, but if I recorded with someone else that I don't know, I don't know how they're gonna choose to use my likeness now. So there are different types of agreements that you even need to have in place to protect your own likeness. You also have to think about how much you can open up to your systems now. We have things like Atlas, an agent that came out from ChatGPT, and it wants to have access to anything and everything, including your credit cards and your account information.
The more that you open up to that, the more susceptible it is if they have a data breach, even if it is the tool that you want to give that information to.
Dawn Andrews: Yeah.
Elizabeth Goede: You know, when you think about that from a business standpoint, and then think about, like, all the files that you have that maybe are confidential, that you have with your clients, that you have with yourself.
And so just think about when you decide you wanna give, like, Google Drive access to ChatGPT. Are you okay for ChatGPT to have access to everything that's on
Dawn Andrews: To crawl all of that.
Elizabeth Goede: To crawl all of it, your emails.
Dawn Andrews: Yeah.
Dawn Andrews: I think there's something, so here's a real-world example that we can talk about. So here, Elizabeth and I are recording this podcast, and I have a podcast agreement that my guests sign. Pre-AI, there's a clause in it that says: you give me permission to use our conversation and the video from our conversation to promote this podcast. Meaning Elizabeth and I are both showing up on screen, and I can cut and paste that and put it up on Instagram to drive people to come and listen to this particular episode, et cetera. So pre-AI, the image is locked. What Elizabeth and I said is done. It's said; the picture is what the picture is.
Now, in the world of AI, that can be used to train a model, and I can put Elizabeth and me riding elephants through the desert. And if I've signed an agreement that says that this can be used any way, anyhow, in perpetuity to market this episode, I've opened myself up to somebody using my likeness in an unpreferred way, to be polite about it.
Elizabeth Goede: Frankly, either one of us, our voice could be used to endorse something that we didn't want to, whether that's a product, a service, a person. It's something that I work with. I meet with lawmakers whenever I can, because we absolutely need some policy put in place much more expediently than they did with social media. They took 12 years to figure out how to do that. We don't have that luxury in this space right now.
Dawn Andrews: Yep.
Elizabeth Goede: I think about where you are, Dawn, in Hollywood, and I think about, you know, you have actors that are established. If they decide to sort of sell their likeness, they could command a very large amount for a studio to purchase that. Think about somebody who's just starting out.
Think about people who were on TV shows that didn't get royalties, that went on to make studios hundreds of millions of dollars later, and they got five grand. This is like taking that to an exponential level, because now it's not even just, can you capitalize on the next movie that you go to and get more money, and the next movie that you go to, or TV
Dawn Andrews: Raising your fee. Yep.
Elizabeth Goede: Now it's, they can just make you into what they want you to be. And I think it's a really interesting time.
I look at it sort of from two ways. You know, I'm sorry if I'm going down this Hollywood path. So just
Dawn Andrews: Well, and I'll, there's a little something to talk about with that too, so keep going.
Elizabeth Goede: Yeah. When we look at how we consume content now.
Dawn Andrews: Yep.
Elizabeth Goede: We're blowing through it. Even a 20-episode season, we can blow through that in a weekend.
Dawn Andrews: Yeah.
Elizabeth Goede: Something that used to take us 22 weeks now takes us two days. So on one hand, you look at it and you say, okay, well, this is actually really awesome, because now, as a studio, we can produce as much content as we need to sort of satisfy this need
Dawn Andrews: To keep people on our streaming platforms.
Elizabeth Goede: Keeping people there. But then, you know, reflect back on three minutes ago in this conversation, and you're like, but you know who's really gonna lose here?
Dawn Andrews: Usually the creatives.
Elizabeth Goede: Yeah. It's the talent.
Dawn Andrews: Speaking of, there's an actress named Tilly Norwood that is one of the first AI-generated actresses to be signed by a major agency in Hollywood. This is not an actress who is allowing her likeness to be used by AI; it is a completely fabricated entity. And it was created by Xicoia, I think is the way they say it. They did it as a test, and it was so compelling that now they have been signed as technologists to create the performance of this actress in future endeavors. It is just at an experimental stage. There's not necessarily evidence that it's going to be a big deal, but the bottom line is it's being pursued with vigor and cash and time and attention.
So I think that all creatives everywhere (and this was part of the big strike that Hollywood went through a few years ago) need to be thoughtful about how they engage with AI, not just themselves, but with any partnerships that they're in, with any jobs that they're hired for. And the same would be true for you as a business owner with all of your IP, your frameworks, like what it is that you do.
You have to be thoughtful about how you protect yourself.
Elizabeth Goede: Actually, Dawn, if you don't mind, I'd love to piggyback onto that. So when we think about copyright.
Right, there are sort of like two parts, I guess. One is copyright and one is truth in advertising, right? There's also laws, right? So the last time copyright laws were updated was in the seventies.
So the reason why stuff that's created in artificial intelligence can't be copyrighted is because there's not that tangible, physical attribute to it.
Dawn Andrews: Yep.
Elizabeth Goede: So that's part of it. The other part is because you have models that were created, and those engines were created, by companies that own them.
If you don't actually look at the terms of use of these tools, you might not actually have rights to anything that you decide to put in there, because you've now given your rights to that tool. That goes for your likeness; that goes for content that you put in.
In particular, I think a large part of your audience, Dawn, is in the US.
Dawn Andrews: Yeah, but why? I mean, I do have folks all over the world, but like, let's take that detour and talk about tools. You know, you and I had some behind-the-scenes conversations of like, this tool, yes; this tool, no. Give me a, in fact, you created a tool to assess the tools.
Elizabeth Goede: Yeah.
Dawn Andrews: Like, most of us, at least here in the States, are familiar with ChatGPT, and Claude maybe second. It's sort of the Coke and Pepsi, you know, model.
Elizabeth Goede: Yeah, I would say Copilot is probably more known, also just because so many people use it within, well, within an enterprise setting. It's a Microsoft-based product, and if you're an enterprise, 99% are Microsoft. And so from that standpoint, you sort of got that.
I don't think that people in the early days necessarily thought of Copilot the same way as they thought of ChatGPT. But I think it is important to understand where Anthropic came from.
Dawn Andrews: Cool.
Elizabeth Goede: So Anthropic actually sort of came by way of ChatGPT. And the reason why I say that is because of OpenAI, which was originally a research institution.
Before they decided to monetize it, they were originally that free, research-driven enterprise, right? It was this idea of a bunch of minds coming together and seeing if they could create something. Then ChatGPT came out, and there were some very specific directions that OpenAI was being taken in at that point.
So now you have the head of security for OpenAI leave. You have one of the senior-level people, one of the lead engineers, decide to leave, and they leave for ethical reasons. They first think, ah, you know, maybe I'm not gonna stay in this AI space. But I think everybody could see that that freight train was running and nothing was gonna stop it.
And this is in the early days. When I say that, this is even before ChatGPT really became a speaking point. So they actually decided, no, we're gonna create constitutional AI and we're gonna create a company called Anthropic.
And we're gonna release a chat model called Claude. Now, the difference between those two companies was one was a learning model first and foremost, and the other was a language model. So once Claude took sort of their copy of content, and all of the companies, all the big companies, because you get really greedy for data, they have all had their own billion-dollar payout that they've had to do for taking data they weren't supposed to.
Dawn Andrews: That's already happened.
Elizabeth Goede: Just to be very clear about that: no one's perfect. There is not one perfect one out there. However, the difference is that because Claude wasn't a default learning model, when you put your information in there, they would give you all kinds of disclaimers that said, hey, we're not gonna learn from this.
It is private. If you're gonna discuss something that is really personal, we want you, as soon as you're done getting the information you need, to please delete it. We don't store it; we don't keep a copy of it here. Unlike Google, that was, like, indexing chats. So Claude, since about two months ago, roughly, from the time that we're discussing this now, they did come out and say: we've now released a new component, which is called Claude Code, which is the ability to really get some coding and deep types of information out there, because we don't have enough information on that and we wanted to be able to learn from it. So now we're gonna give you the option to turn it off, because we are gonna start learning. They sent like 50 emails on it, if you're a subscriber.
They spent a lot of money in advertising to tell everyone to do it. People still don't read. I talk about it every single time I can: certainly go in and turn that off if you're putting personal information in there. But I think when we look at the mindset of these two companies, it's learning first versus privacy first, with disclosure still giving you a way to truly have privacy.
There were a lot of things that have come up too: the architecture, the way things were built, how the tools were built, stuff could be publicly indexed. That also happened with ChatGPT earlier this year.
For anything that was shared. Well, for anyone, when you thought you were sharing a link, you thought that it was unique to that link and
Dawn Andrews: Sharing a link to a chat? You mean like something you've created in ChatGPT? I just wanna be, like, clear with that. And also, when Elizabeth is saying learning, she means the computer, if you will, the model, is learning from you. Not that it is a learning model and you are the student that it is teaching.
Elizabeth Goede: Thank you for making that distinction.
Dawn Andrews: Ooh. This conversation's getting deep, and we're just getting started. Elizabeth has given us so much to think about, from understanding the real risks of AI tools to knowing the difference between learning models and privacy-first models like Claude.
But here's what I know about you: you're not here just for the theory and the history. You wanna know what to do about all this. How do you actually protect your business?
What steps can you take right now? And how do you talk to your team about using AI responsibly? Well, that's exactly what we're covering in part two of this conversation, which drops next Tuesday. We're getting tactical, talking about AI policies, protecting your ip, choosing the right tools, and even diving into a lightning round of questions.
So make sure you're subscribed to She's That Founder so you don't miss part two. And if this episode opened your eyes to anything, even just one thing you didn't know before, share it with another founder who needs to hear this. And come join us in the AI for Founders community on LinkedIn, where we talk about this stuff all the time. Until next time: stay strategic, stay protected, and keep being that founder. See you next time, lovies.