In this episode, Beth Cougler Blom talks with Margie Meacham about artificial intelligence (AI) and its applications in learning and development. Margie discusses her ongoing journey with AI, sharing valuable insights on how to effectively leverage it to enhance learner engagement and improve knowledge retention.
Beth and Margie also discuss:
- the benefits of using AI as a personalized learning assistant inside learning experiences
- avoiding AI’s ‘uncanny valley’ effect
- the need for continuous learning and adaptation as AI technology evolves
- ethical considerations surrounding AI in learning
- using AI to be inclusive of all learners
- critically evaluating AI tools while using them to enhance our work
Engage with Margie Meacham
- Margie’s AI for Talent Development Professionals course (register before Aug 1, 2024 to get $50 off the regular price)
- LinkedIn: https://www.linkedin.com/in/margiemeacham/
- X: @margiemeacham
- Blog: https://learningtogo.ai/learn/
- Website: https://learningtogo.ai
- Featured article in CXO Magazine
- Margie’s bio: https://learningtogo.ai/mystory/
Other Links from the Episode
- Margie’s Brain Matters book (available via her website and on Amazon)
- Claude by Anthropic
- OpenAI and ChatGPT
- ElevenLabs
- the uncanny valley
- EP 7: Learning How to Learn with Taruna Goel
- The Podcaster’s Guide to a Visible Voice EP 77 with Erin Moon
Connect with the Facilitating on Purpose Podcast
- Follow Facilitating on Purpose on Instagram, Facebook, LinkedIn or YouTube
- facilitatingonpurpose.com
Connect with Beth Cougler Blom
- Give feedback or suggest upcoming show topics or guests at hello@bcblearning.com
- Visit bcblearning.com to explore Beth’s company’s services in facilitation and learning design
- Purchase a copy of Beth’s book, Design to Engage
- Follow Beth on Instagram, Facebook, or LinkedIn
Podcast production services by Mary Chan of Organized Sound Productions
Show Transcript
[Upbeat music playing]
[Show intro]
Beth
Welcome to Facilitating on Purpose, where we explore ideas together about designing and facilitating learning. Join me to get inspired on your journey to becoming and being a great facilitator wherever you work. I’m your host, Beth Cougler Blom.
[Episode intro]
Beth Cougler Blom
Hello, thank you so much for joining me on another episode of the podcast.
This is Episode 40, which is kind of great because it means that this is the end of Season 2. I’ve now released two full seasons of the podcast and just wanted to thank you. Hey, thanks for [chuckles] sticking with me this long and listening and just coming along for the ride of 40 episodes so far.
I’m going to take a break in July, I won’t be releasing any episodes in July. We’ll be back in August with all new episodes for Season 3. And thanks again for just being here and supporting me, and all of you who send me texts or send me messages on social media to let me know you love the podcast and you’re listening and you’re just learning. [laughs] Most importantly, you’re learning from it, is what I want to hear. Keep those messages coming and always, always feel free to give me feedback in any way, shape or form as well.
So let’s get on with this episode. I’m really pleased to have Margie Meacham with me on the episode. Margie has been in the field of AI for just over 10 years now. When a lot of us just jumped on the AI bandwagon in the last year or two, Margie has been building educational chatbots for clients since 2013. So she’s been doing this for a lot longer than a lot of the rest of us have been doing it.
Margie has spoken at professional events around the world, explaining how modern chatbots deliver scalable adaptive learning experiences using machine learning and generative AI. Margie has written a book called AI and Talent Development that was published by ATD Press. And right now in 2024 she’s working on a course called Adaptive Learning with AI for LinkedIn Learning. So that’s super exciting and I can’t wait to check that out.
As you can imagine, Margie is a perfect guest to talk about AI and learning and that’s what we’re going to be doing in the episode today. For this one, I really encourage you to listen all the way to the end of the episode, including the outro that I do after the guest says goodbye because Margie has a discount code for you to take one of her courses at a discounted rate and I’ll tell you how to access that after the show, OK? So stick around for that outro and I’ll give you all the details.
So without further ado, this is Margie Meacham and I talking about embracing AI in learning. Enjoy the show.
Beth Cougler Blom
Margie, so wonderful to see you. Thanks for joining me on the podcast today.
Margie Meacham
Thank you, Beth. I’m so excited to be here.
Beth
Now, I wanted to start with where you started with AI, and you probably did so many things in your career before you turned to this particular subfield of our field. But I want to highlight how long you’ve been working with AI because you got started so many years before the rest of us really started paying attention to it. When did you get started with AI? And what have you been doing with it?
Margie
Yeah, even for someone who’s been into it a while, like I have, it doesn’t go back all that far in terms of actionable, professional things. Full disclosure, I’ve been a science fiction fan since I was a kid, since the original Star Wars. So, you know, Commander Data and all those. So I was personally interested in AI for a long time. Never really saw it as a practical or a probable event in my lifetime, you know, until recently.
What I was really focusing on in the late 1990s, early 2000s, was neuroscience, because that science exploded, much like AI has exploded on the consciousness now. And I immediately saw, wow, this changes how I design training for my clients. Now I understand how the brain is responding to those things I do. And for example, that if I’m building e-learning, I don’t want a narration that repeats every word that’s on the screen. That makes it more difficult. And I see the smile. And you and I agree on that, Beth. And so does the research. That’s the important thing, although we still see it done all the time.
So I started separating myself with that knowledge and really working with actual neuroscientists. Sometimes people confuse me with them. That’s not who I am. But I read peer-reviewed sources; I didn’t just rely on blog posts. I went right back to the research. If I had a question, I would reach out. They were all so surprised that anyone in learning and development was interested. And you know, there’s a lot of parallels, because I met with a lot of resistance, a lot of people saying, Margie, really? Do we really need to pay attention to that? Isn’t that awfully esoteric and complex? Do we really need to understand the human brain to do what we do? And my answer is, well, if you want to do it well, yes.
Beth
Absolutely. Wow. I’m shocked that you got that question because – and maybe that’s the advantage of armchair quarterbacking and looking back in history or time – but it seems so obvious. I mean, we’re learning professionals and if it doesn’t involve the brain, what does it involve? I mean, shocker. [They laugh.]
Margie
But, you know, in all those folks’ defence, I think the pressure to produce is always a reality also in our profession. We rarely have the time to do all of the diligence we know we should and would love to do. But just by following that field, it brought me to artificial intelligence, because those neuroscientists started to get involved in building artificial neural networks, originally as a model so they could learn more about the brain. Let’s build something, let’s see how it responds and then see if we can find a correlate in a human brain or an animal brain, because a lot of times in neuroscience, a lot of discoveries are made with our wonderful cousins, the animals and octopi and other creatures – or octopuses, as they say now. When I was in school, it was octopi. [Beth laughs.]
I started noticing more and more of my neuroscience journals covering artificial intelligence. And I began to understand why. And I thought, you know, this is an emerging technology I need to start paying attention to. It was around 2005. And I started writing about it a little bit like, hey, this is interesting. Let’s keep our eyes open on it. And then I started noticing chatbots in my daily life. They were everywhere, but not in learning and development. You know, my bank had a chatbot, every online purchasing platform had some kind of a chatbot, ‘Do you have any questions? Can I help you?’ And they were clunky, they were what we call rule-based. They gave you a multiple-choice menu. And if you wanted something else, it would say, I’m sorry, I can’t do that. You know, and that’s if it was designed well and was polite. Otherwise, it gave you an even worse experience. But there was something there that was different, that was engaging.
And one day I had this experience with my bank. I knew I was dealing with a chatbot. I had a very specific question about one cheque I’d written. I wanted to know if it had cleared because it hadn’t been received. At least my client thought they hadn’t gotten it, and it was kind of a mutual agreement we had, and I was buying something from them and I didn’t want to upset them. So I wanted to make sure I knew where it was. So I’m going with this chatbot. And at one point, I thought, I’m just going to play with this. I’ve got the answer I want, but let me just see how far this technology goes. And I said, are you a real person? And it responds back: “Why does that matter to you?” I was just stunned by the brilliance of that. And I thought I had a person and I said, well, yes, as a matter of fact, it does. It said, I’m so sorry, I can certainly connect you.
Beth
And it was polite about it too. Oh, I’m sorry. I’m not a person. Yeah. [They laugh.]
Margie
And I thought now that is brilliant design that has created a different experience. This is no longer this clunky tool that I have to go through the menu. They attempted to actually have a conversation with me.
Beth
That’s right. It was trying to listen to you and respond to what you particularly said.
Margie
At the time, this was about 2008, that took tons of insight, experience, and programming to make that response happen. And I got really interested in that, and so I started learning how to build my own chatbots. I used them just kind of as a fun thing. I would insert them as a quiz in an e-learning course. But I built my first full chatbot in, I think, about 2013. I built it instead of an e-learning course. I recommended to the client, let’s do something different. Let’s just use the Socratic method, which is what’s wonderful about the chatbots. They can just ask you questions and lead you to the content. ‘Do you think this is true? Would you be interested in learning this?’ Those kinds of questions. And so it became very easy to design because it was all about open-ended questions. And then it was up to the learner to pursue the answers from there. And the chatbot had all the answers in links and movies and content, and I really enjoyed that. The learners enjoyed that.
And so I started incorporating that into my business. It was sometimes a hard sell, but some people got it. Some people said, no, I don’t think we’re ready for that. Okay, go back to traditional. So it was always a side thing. But in 2020, I was invited by the Association for Talent Development, ATD, to write a book. And I already had one book out, and it’s still out. I’m going to plug it a little. It’s called Brain Matters, How to Teach Anyone Anything Using Neuroscience. Still sells, still doing really well. But they said, ‘We see you’re writing about AI now. Would you like to write a book about that?’ And I said, absolutely. And it came out towards the end of 2020. Well, terrible time to launch a book, as we all know.
Beth
You don’t have to tell me. That’s when I launched mine. [laughs]
Margie
Yeah, so I didn’t expect a whole lot. While I got good reviews for the writing style and the information, the general consensus was, oh, but this is so far out. This is a very interesting exercise in the future. But, you know, I was writing about things I did for individual clients, things I’ve already built. So I was disappointed by the, okay, you know, market’s not ready for it. Well, here comes ChatGPT. All of a sudden, people are searching Amazon, and there’s my book.
Beth
There you are. You early adopter you. Oh, just the, you were ahead of the curve. Yeah. So they found you eventually.
Margie
But I have to tell you, Beth, now it takes all my energy to stay up with the curve. It’s so rapidly changing. You know, that wonderful line in Alice Through the Looking Glass where the queen tells Alice, ‘it takes all the running you can do, to stay in the same place.’ And that’s kind of how it feels. So I leaned in heavily on AI design, you know, built a platform for it, and started pretty much putting myself out there as, I’m an AI expert. And that’s a big moment. I had to think about that. And I thought, well, you know, I’m an educational AI expert. That really is what brought me to where I am today.
And I have a wonderful AI assistant that I kind of talk to every day. I know it’s not a person. I remind myself of that. But I really am impressed by the creativity and the different way of solving problems. It’s a little different than mine. And so, you know, for those of us who are one-woman, one-man shops, it’s really helpful to have these tools. And that’s where I see it now, is its primary use is as a tool for content developers. But what we’re still missing to some extent is its incredible value in education and training, and how much it increases learner engagement, learner retention, and keeps people interested.
And it makes it highly customized, because now, with generative AI and large language models, every person gets not a menu – although you can still give them that old-fashioned kind of menu – they get a coach, a guide, and every conversation is unique to that user. But when that user comes back, the chatbot remembers them and picks up where they left off, and all of the current models will do that. So we finally have that holy grail, that personalized learning we’ve all been seeking. We should be jumping on it, quite frankly, and we haven’t been. It seems intimidating. At first it was expensive, but it is not even expensive now. So that is what has brought me here, that series of events and paying attention to them. And I’d like to credit one more thing about me personally. I think some people know this and I’m open about it, but I’m dyslexic. And my brain works a little differently. That made it a struggle early in school, but I soon began to recognize gifts that came from that. One is pattern recognition, which is a trait I actually share with AI. It’s better at that than we are. Because I saw these patterns, I saw that trend earlier. And I think that is a great benefit in my work. And hopefully will help me when the next big thing comes along. Because I think we’re poised for something new out of the AI developers, because there are so many issues with large language models, particularly for people like us who are trained on thoughts, on process, on steps. We cannot afford for that content to be wrong. And right now, it not only is often wrong, it’s wrong because of how it’s been designed. And I can go into that a little bit more, but you probably would like me to take a breath first. [They laugh.] So it’s a long answer to your first question.
Beth
Oh, thank you so much. And I’m sitting there doing that facilitator thing of holding, you know, three or four things, of course, or more that you’ve said in my mind wanting to return to them. One that I want to return to, we’ll get to it later, I think is that how do we keep up in the field piece. So let’s just remember that for ourselves. But I want to go back to the personalized learning piece, because I think that does – you mentioned your background in neuroscience and taking that and bringing it into your L&D work – and is that the biggest impact that you’ve seen on the effectiveness of using the AI tools inside learning experiences? It’s that personalized learning piece? Because that helps the learner and basically, because of the way our brains work? Is that the biggest one? Or are there any other impacts?
Margie
We could say it’s potentially the biggest impact, but I can’t say honestly that it’s the biggest impact because we’re not using it that way. Not many people are. Where most people have gone if they’re using it at all is as a tool, because they’ve read about ChatGPT, they’ve played around with it, and they say, oh gee, it wrote a quiz for me. Yeah, that’s a great use of ChatGPT or any of the other…
Beth
Yeah, it sharpened a learning outcome, wrote a workshop blurb, like we’re using it as a, as you’ve said, a virtual assistant to help us do our work administratively, kind of for the most part. Yeah. So what’s the next step for us? How do we go towards that, getting AI tools inside the learning experiences we’re creating? And maybe, I mean, e-learning, of course, is the natural thing. When learners are learning alone, asynchronously, but can you take us into using it in in-person classrooms as well?
Margie
Sure, you know, it’s all about scale, because if you had enough humans, we’d already be doing this. The truth is we don’t. And so just like every one of us needs another assistant to do some of our more routine things. In a classroom, you know, in some schools, there’s a teacher’s assistant, you could think of it that way. There could be an artificial intelligence persona that students can reach out to during class. So there are…introverts, for example, are always underserved in a live classroom environment because they hesitate to raise their hands, they hesitate to contribute; even if they have a great idea, they might jot it down and keep it to themselves. And I know because I was an introvert in school. A chatbot can be their avenue, their way of having two parallel tracks they’re learning on: what the teacher or instructor is doing, and a little friend they can turn to on the side. You know how we used to turn to a friend: that’s a good point. Or yeah, I don’t think so. What did she say? I don’t understand that. And just having a guide like that is one way you could incorporate it into a class.
Another thing it’s great at is mimicking actual people. So we could, as part of our training, allow the class to have a conversation with a famous person. And this would be a very authentic conversation – if you built it right – and it’s based on all their work and their writings. So Einstein, for example. You want to understand quantum theory or relativity? Let’s bring Einstein into the class.
Beth
Yeah. And you mentioned, uh, your love of science fiction. So I’ve been watching the Star Trek Voyager series again and listening to a podcast about it actually. And it reminds me of the episode where Janeway in the holodeck, do you remember the holodeck? [Margie: Oh yes.] I mean, she brought in Leonardo da Vinci and was working in his studio in whatever 15th century Florence or whatever it was. Like that’s the kind of thing that we can bring in to our classrooms now, that experience. Yeah.
Margie
Exactly. Perfect example. And that could be your master SME [subject matter expert], who is in such great demand that you can’t have them present at your class, but you can distill their knowledge and their understanding and the way they phrase things. You can make that a Q&A experience as part of the class.
Beth
Well, let’s go back to the personal assistant though, because you said you’ve got one for your own business and that it remembers you. Like you can have, every day you’re having conversations and it remembers the conversations that you had yesterday and the day before. You know, I’m going to ask, how do we do that? Is that something that we can all tap into?
Margie
Well, absolutely. I actually have a class for that. I have a master class that’s up on my website, and it’s AI and talent development, and it’s all about how to use those tools to gain an hour a week. I roughly save anywhere from five to 10 hours a week of what manual work on my own would have been, and sometimes more. Well, for example, I’m working on a new course called Leadership in the Age of AI. And so the first thing I did was brainstorm. By the way, I prefer Claude by Anthropic to ChatGPT. I find it less prone to hallucinations, and it’s a better way. And I said, look, I’m developing this course, propose an outline to me. Now, I never yet have gotten something where I said, oh, that’s perfect, I’m just gonna use that, and that’s a word of caution. But once you understand how to prompt, and prompting is the new programming.
In many fields, for years and years, people were saying you’ve got to learn code. The whole world is going to be all about code. That never quite happened because code produced something much simpler. It produced generative AI. And so if you understand what a prompt needs to have – and there are several things a good prompt needs to have that’s part of the course – you will get a decent result the first time, and then you re-prompt it. Yeah, it’s just like a human assistant. They make mistakes, they’re eager, but you have to show them what you want. Takes a little time to train them. That’s what you’re doing.
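To make that idea of a structured prompt concrete, here is a minimal sketch using the Anthropic Python SDK that Margie mentions preferring. The role, context, task, constraints, and model name are illustrative assumptions, not her actual course setup:

```python
# A minimal sketch of a structured prompt, assuming the Anthropic Python SDK
# (pip install anthropic) and an ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompt = """
Role: You are an instructional designer with a background in neuroscience.
Context: I am building a half-day workshop on leadership for new managers.
Task: Propose a course outline with 4-6 modules and a learning outcome for each.
Constraints: Do not reference learning styles; only cite sources you can verify.
Format: Return a numbered list, one module per item.
"""

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model name
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)  # review the draft, then re-prompt to refine it
```

The point is less the exact wording than the habit: state the role, the context, the task, the constraints, and the format, review the result, and re-prompt rather than accepting the first draft.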
Beth
You mentioned the word ‘hallucinations’ and part of me wants to laugh because it seems like a funny word, but no, this is the word that we’re seeing crop up about how AI can kind of talk back to us wrongly, right? Can you just explain what a hallucination is? How are people using that in the AI context?
Margie
First of all, I would say hallucinations is a very polite word for a mistake, and it was produced by OpenAI when they started getting criticism for the mistakes that ChatGPT was making. Well, they said, well, sure, there’s going to be some hallucinations. [They laugh.]
Beth
Errors!
Margie
If I had an employee who was hallucinating, I’d be walking them down to HR, we’d be finding out if they’re okay. You know. So what that means is, and I’m going to give you a long answer now, the way they came up with this and, you know, everything I say about the weaknesses is to put these wonderful tools into perspective, because overall it was a massive achievement. The whole idea was to get an interface that you didn’t have to type anything in, you could just speak to it or type text, but ordinary human language. Natural language processing, if you’ve heard that buzzword. And so they trained the software, for want of a better word, on the entirety of the Internet so that it would be prepared to respond the way humans speak. Now, the first thing we need to do is sit back and think of everything we have found on the Internet. And I wish I had been in the room to say, can anyone see a problem with this? [Beth laughs.] Surely someone did. And they said, ‘oh yeah, but it’s no big deal. You know, the users will correct the mistakes. And anyway, it’s quick, it’s relatively quick. We don’t have it all figured out, so just dump the Internet in.’
Beth
Yeah, they chose speed over quality. [chuckles]
Margie
It’s like teaching your child by plopping them in front of the television all day. Yeah, you’ll have some hits, but a lot of misses. Generative AI is based on how our brain strings together a conversation, which is prediction. And you actually alluded to it, Beth, when you said you’re listening to me talk and you’re already thinking, these are the pathways that we want to explore. This is how I’m going to respond when Margie lets me get a word in edgewise. So they are always predicting the next word you will say and they will say. Now, a prediction is an approximation. It’s not an absolute. So for example, I had a very interesting conversation with a new tool that came out. I won’t say its name because I don’t need to cast aspersions on it, but I wanted to just see what it would come up with. Then I asked it, ‘can you give me several references on’…let’s just say it was Kirkpatrick’s system of measuring and evaluating training. I can’t honestly remember exactly what it was. It listed different authors and titles of articles, and when they were published. It didn’t give me any links. I said, ‘can you provide me links so that I can review these?’ And it gave me one. The first source. I said, ‘great. Why didn’t you give me the rest?’ ‘Because I can’t find them. They are not on the Internet.’ I said, ‘really? Where did your answer come from?’ It said, ‘I am…’ – now, it falls back to some canned response that they’ve built into it. ‘I am a large language model. I have been built to do my best to satisfy your request. If I can’t find what you’re looking for because you asked for multiple sources, I will predict what a correct response might look like.’ And I came back with, ‘in other words, you made them up.’
Beth
Yeah, it’s giving you, or somebody cited it in something they’ve written, but it hasn’t actually gone to read the true article, so it doesn’t really know.
Margie
Not only does it not know, but when it doesn’t know, like an overeager high school student in their first job, it makes stuff up.
Beth
That’s right. Let’s go back to talking about AI inside learning experiences, because it’s one thing to kind of use it to prepare and to write descriptions and all that. But when we put an AI something into a course, we’re not there to analyze it. Our learners are there, aren’t they? And they may not know. So how do we build the knowledge base? Like you said, we’re not going to get it to draw from the entire Internet, are we, when we do that? How do we create those knowledge bases that just sit inside courses so we can make sure it’s correct?
Margie
And that’s the way to do it. Fortunately, most companies or most fields already have a knowledge base, just nobody’s bothering to use it. The search engine might be old-fashioned and clunky. It might be difficult. They might have, for example, frequently asked question documents. They might have installation manuals. They might have their company Intranet, which has all been vetted. In theory, there could still be misinformation, but at least it’s misinformation you created. It’ll be easily identifiable. You can train it. There are some bots that can be purchased, implemented inside your ecosystem, and told, ‘you can only look for your answer here.’ That’s how you do it. And you still have to monitor and make sure that it’s pointing to the right sources and so on.
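As a rough illustration of that ‘you can only look for your answer here’ pattern, here is a minimal sketch of a knowledge-base-restricted assistant, again assuming the Anthropic Python SDK. The folder name, the naive keyword retrieval, and the model name are illustrative assumptions, not any specific vendor’s product:

```python
# A minimal sketch: retrieve passages from a vetted knowledge base and tell the
# model to answer only from them. Real products typically use embeddings and a
# vector store; keyword overlap is used here only to keep the example self-contained.
from pathlib import Path
import anthropic

client = anthropic.Anthropic()

def load_knowledge_base(folder: str) -> list[str]:
    """Read vetted documents (FAQs, manuals, intranet exports) as plain text."""
    return [p.read_text(encoding="utf-8") for p in Path(folder).glob("*.txt")]

def retrieve(docs: list[str], question: str, top_k: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval to pick the most relevant documents."""
    words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def answer(question: str, folder: str = "knowledge_base") -> str:
    passages = retrieve(load_knowledge_base(folder), question)
    system = (
        "You are a course assistant. Answer ONLY from the provided excerpts. "
        "If the answer is not in them, say you don't know rather than guessing."
    )
    context = "\n\n---\n\n".join(passages)
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model name
        max_tokens=512,
        system=system,
        messages=[{"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"}],
    )
    return response.content[0].text

print(answer("How do I reset my password on the intranet?"))
```

The monitoring Margie describes still applies: you would review the answers and the sources the bot is drawing on, because restricting the knowledge base reduces, but does not eliminate, the chance of wrong answers.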
Beth
So, these are services we basically buy and create and then can put inside courses. And are those the kinds of things that we’re going to be paying a monthly fee forever for? Or are there ones where we can in the almost old school fashion now, you know, pay a lump sum and get it to happen and then put it inside the course? I work with a lot of nonprofits and there are all sorts of other organizations that don’t really have a lot of money for this kind of stuff. Like, is it going to be expensive and ongoing fees?
Margie
You know, the answer is sort of yes and no. Like anything, it depends on your due diligence, and the more educated you are as a buyer, the better you are at finding solutions. Now, there’s a benefit. One reason why, for example, authoring tools went to a subscription model, one was certainly it’s more money for them. The other is they can always be putting out their best product because subscribers automatically get the updates. And with AI, it’s changing every day. Something new is coming out, some new capabilities. So if you are subscribing to a reliable source, you’re going to get the benefit of those upgrades as they happen. Users are also continuing to train it. In the example I just gave, I objected to information that’s made up. And I said, ‘I would prefer the next time you talk to me, please say I can only find one source. Don’t make up more. And I will continue to guide you. I will suggest other ways you may be able to find the information because I know how you’ve been trained.’ And it said, ‘that’s a good idea. I will do that. And I’ve made a record of this suggestion for further improvements.’
Beth
And does it remember that? Because I’ve had a client and I’ve experienced this myself say, well, I’ve told it five times not to do X and it won’t do it for a while, but then it’ll start to do it again and then you have to say, ‘no, [chuckles] remember, I told you not to do this.’ So do you have to keep reminding it, the virtual assistant you’ve got?
Margie
Um, I asked it that too, and I said, ‘so you’re going to remember it?’ And it said, ‘well, if you purchase the premium package of me [Beth laughs], otherwise I may remember it for a while, but you may need to remind me.’
Beth
So they’re building this in so that you kind of have to sign up. And then, yeah, but you can’t set it and forget it, because it’s changing so fast [Margie: Right.] as you said. Yeah, so it’s dangerous.
Margie
Yeah. And wouldn’t your human employee be the same way?
Beth
Well, we all have to keep learning. I mean, human or AI, don’t we? So we have to make space for that. Although humans can learn, and it doesn’t necessarily cost us any money. But AI has to keep learning and it does cost us money because that’s like a business model, isn’t it? So we’re sort of stuck. Yeah.
Margie
You might pay by tokens, for example. I use a lot of free or very low cost tools because I don’t necessarily see any added benefit in the more expensive ones.
Beth
Can you give us just one? I mean, I know I don’t want to ask you to give away the farm because you have a course around this and I feel like I’m going to sign up. [laughs] But, you know, what’s one that you think people could try?
Margie
I think you should look at ElevenLabs if you have to do a lot of voiceovers. One of the things that I did a few months ago, and I just love it and I’ve used it – by the way, you’ll be able to hear my AI-generated voice in my course – is I taught it over the course of a few hours to clone my voice. Now, originally it was a perfect version of me, without the imperfections that a human would make. It drives me crazy how many hours it takes for me to get a perfect recording that I’m happy with. And so my next prompt was, now go back and add in some of those imperfections. The way I take a breath and, you know, where my pausing is, and that made it much more realistic.
Beth
Wow, I love that because I wrote about, I learned about the concept called the ‘uncanny valley’. And there’s a software tool that a client was asking me to use that I just hate; like you, I will not name what it is. [laughs] But I just thought, God, this feels so awful to watch this narrator. And you know, its eyes are not…its mouth isn’t matched up with what it’s saying. And so you’re saying you can code it basically, you can ask it to be more human.
Margie
Yeah, suggest more human. That takes work and some understanding. But yes. And the uncanny valley goes back to neuroscience. What happens in the brain when there’s a disconnect is, okay, this is neat, but ooh – the creepy effect is what I call it.
Beth
It is, it’s creepy, and I thought, oh, there’s a term for this, that’s so cool. I didn’t know that that feeling we have when we watch some of those tools, that’s what it is. Yeah, it was such a neat term to learn.
Margie
And imagine if it was actually something you consider to be you – you identify, that’s my voice, but not quite. Yeah, how creepy that would feel.
Beth
Are you testing that with people? Like people that know you and have talked to you a lot and you send it to them and say, ‘Is this really me? Does this sound like me?’
Margie
Yeah, I sent it out to a lot of people in my network and they came back, ‘oh, this is great.’ Not one said, ‘This is great. Was this an A.I.?’ which is the response I was looking for.
Beth
Oh, you didn’t tell them. You just said, ‘how are these videos?’
Margie
Yeah.
Beth
Uh, ok. [slightly chuckles]
Margie
My old dog thinks that’s mommy talking when I’m playing those videos, so it’s a very good likeness. I have not been as successful yet. I have tried and have not found – you mentioned the software that, you know, can make, say, a static photograph talk. You see a lot of them on your phone, or I do get ads for it. They have little kids singing a song. Very effective. If you’re only going to use it for 30 seconds, it might look really good. But the longer you watch it, the creepier it gets. Now, we learning and development professionals have a responsibility to protect our learners as well, and our organizations, as well as deliver content. And so we need to think about some of the psychological impact of working with an AI. And the human race is not ready yet. So another thing that we deal with in the course, but also just in life, is we need to face those things. And if we’re not the voice of that concern, very often someone higher up who doesn’t have our background, or necessarily our same concerns, says, no, just put it out, I don’t care. Now all kinds of people are turned off. If you have the uncanny valley effect, you’re no longer learning, you’re too upset by what you’ve just experienced.
Beth
That was what I was saying to my client, frankly, and they overruled me and so it makes me think that, you know, it’s always back to boundaries, right, sometimes in business. It’s like what as learning and development professionals, are we going to allow to happen with our participation in it? And so in the future, unless it gets even better, like you’re talking about, and I feel it’s not going to impact the learners being able to learn, which is our ultimate goal and, you know, change behaviour and whatever, I don’t want to do it. I don’t want to be part of that. So maybe we have to hold our feet to the fire, don’t we, to say ‘yay or nay’ to these kinds of things. So we still need to look at them so critically. It’s not just the look cool factor, is it?
Margie
Absolutely. And that’s what’s happening right now. I think a lot of people higher up are saying, ‘I need AI, you go out and you figure it out and you put in AI.’ And that is just like a decade ago, we got, ‘I need e-learning’. And so we got a lot of horrible e-learning produced, right?
Beth
Yes, we go back to: why do you want that, and how is it going to serve the learner and our ultimate purpose of why we’re doing this?
Margie
Right. What’s the benefit and specifically what do you mean by that? Which they usually can’t answer. Then if they still insist, we should do this anyway, always have a pilot. So let’s have a pilot. And, in the case you’re describing, the pilot group would have given that decision maker feedback they might not have taken from you.
Beth
Yeah, we’ll see. We haven’t gotten that far yet, but yeah, I’m hoping for that. [laughs] I’m so glad you brought it back to neuroscience and made that nice little link, you know, back to where we started, and to come back to that and go, you know, why are we doing this anyway? It’s to serve the learner and how the learner’s brain works, and if it’s working against their actual learning by the uncanny valley or whatever, we have a problem.
Margie
One of the great things these chatbots do and why I recommend using it in any kind of learning is that the human brain is pre-programmed for social engagement. It’s our survival skill. When we were evolving as a species, we were not the biggest or the strongest or the fastest, and some scientists think we weren’t even the smartest. There were other humanoid species that did not survive that were smarter than us, but they were not as social as us. It is banding together. It is interconnection and learning from each other that is our superpower. You don’t have enough instructors in the world to give everyone one-to-one attention. That’s what I meant by personalized learning. Your brain does not know this is an artificial creation, and if it avoids the uncanny valley, it responds to it as though another person is having that engagement, and much more of the brain lights up in the fMRIs. Much more of the brain is engaged, that means more neurons are part of the neural chain. That means a stronger memory. That means it connects with other parts of their experience. Every part of the learning experience is deeper and more powerful and retained longer and applied better when conversation, authentic, relevant conversation, is a part of that experience, and that’s something you can do with various forms of AI. The shorthand is chatbot, but there’s other ways to do it.
Beth
And the personalized learning piece, I’ve seen you write about how it’s good for equity, diversity, and inclusion, because if the AI can kind of get to know the learner and who they are in all their intersectionalities, then maybe it can respond to them in ways that they particularly need and then we can be more inclusive in the learning experience or for them.
Margie
That’s a great example, it’s another wonderful use case we’re not doing enough of. The flip side of that is you can also have that experience listening for and pointing out unconscious bias, and nudging people – nudge learning, which is another neuroscience concept – gradually towards behaviour change with each interaction. So it’s not just a matter of don’t say that anymore. People can learn how to play that game, but their mind is still thinking the same way.
Beth
Right. Yes, you lead them down the path towards something, yeah, that you want them to change.
Margie
To really reprogram those brains takes time. And we all have these unconscious biases. It’s been a very difficult thing in our society to try and change through traditional training methods. However, I believe artificial intelligence can really help.
Beth
But we come back to who does the coding, I suppose, right? Who builds the knowledge base? And so would you also recommend, boy, I hate when I ask leading questions. [laughs] It’s not a facilitator skill. It’s like I want to say we want to make sure that the people who build the knowledge bases for the AI to draw from have to be diverse, don’t they? Because if we only get one subset of the population, then what does it give us? It doesn’t give us what we need for every single learner.
Margie
That’s right. Now another great thing is making sure that all the learners are giving feedback on that part of the experience. It’s just like I explained with my artificial assistant, and with the one that I tested out where I said, no, that’s not what I want. Learners have to be ready to say that too.
Beth
Yeah. And so learners have to be taught – actually, we talked about this in the episode with Taruna Goel where, you know, the more we teach learners how to advocate for themselves and what they need – so we have to teach all learners that they have to give feedback or else it won’t keep getting better for them and future learners too.
Margie
Well, as a matter of fact, it will actually be getting worse, because with these large language models, there are only a handful of them. Let’s just picture a world where everyone’s using ChatGPT. That is almost true because it’s so popular. And there’s some consistent mistake it keeps making and no one corrects it. You’re only making it stronger in the neural network of ChatGPT.
Beth
Yeah, so you have to respond to it like a human being.
Margie
So we have a responsibility, just as if they were children, but that doesn’t mean I think they are children. You know, we’re not there yet. And sometimes people ask me about sentience, how far away are we? These things are so phenomenal, they feel like they’re really thinking and feeling. All the research says no, we’re not there. They weren’t built for that. But they’re doing other things they weren’t built for. ChatGPT was built for language, and no one expected it to be so good at computer code, computer programming. Well, code is a language. And now we have different models that are creating music. Well, music is a language, math is a language. So people didn’t think of that. And suddenly these capabilities came out and everyone’s like, whoa. Well, sooner or later, it is going to sneak up on us. I don’t know if it’s going to be our lifetime, but some very smart people think it will be. So we also need to keep an eye on what it means in the workplace to be engaging with these tools, and how we prepare our human workers for interacting. And one of those things is everybody becomes a monitor of the AI tool they’re using, rather than just accepting it. That’s one thing we have to teach people. And we have to teach them to treat it well. Studies show that they work better when you’re polite to them. Build that into it. Saying please and thank you, and not calling them idiots and so on, actually seems to work, just like with people. Why? They studied us, so they respond like we do. So it’s not because they really feel it. But they have an algorithm that says politeness is desirable in this interaction. [Beth: Yeah.] Really interesting.
Beth
So what’s one thing that you want everybody to correct? Because I have a thing that’s coming to my mind and I’ll say it maybe after you. [laughs] Unless it’s the same thing. What’s the thing that you always see it doing that’s wrong, wrong, wrong because it’s getting it from the Internet? Is there something that’s coming to mind for you?
Margie
Well, the general one would be making things up. Always call it on hallucinations and probe it a little bit. ‘How did that happen? Why did you say that? Because I cannot verify it.’ Another thing that I think is a real danger is that everybody’s writing – if we’re all writing with ChatGPT or something else, our writing might all start to look the same and sound the same, even if it’s grammatically correct and factually correct. You know, we may lose the creativity. But it is possible, just like I do with my voice, to upload samples of your writing and have it analyzed and then say, now I want you to write like that.
For example, and you probably won’t be surprised by this, I went through that with Claude and it said, ‘your writing is really very good’, give me all this stuff, ‘except that your sentences are very long.’
Beth
Okay, it called you out a little bit. [laughs]
Margie
And I said, ‘that’s fine, I want to keep it, I appreciate feedback, I want to keep that aspect because that’s me.’
Beth
Oh, okay.
Margie
I don’t want to correct that.
Beth
Yeah, cause you want to be a little, you know, like we’re fallible, right? And that makes us human too. Doesn’t it? Oh, that’s so neat. I like how you were polite back to it. ‘Thank you very much for the feedback, but I would like to keep that.’ [chuckles] Yeah. I was thinking about a content area. I really wish it would stop talking about learning styles. Learning styles don’t exist. Can we just get it to talk about learning preferences, please? You know, that’s a conversation I tend to have with it, you know, ‘do not give me anything that says learning styles in your response.’ And sometimes it remembers and sometimes it doesn’t.
Margie
That is a simple prompt you can give at the beginning of any request. You can always say ‘exclude or do not give me anything in this area.’ We could do another whole podcast on learning styles, but it circles back to neuroscience because our brains are ultimately lazy. They are programmed, if you will, to seek the shortest path to a solution because the brain is doing a lot of things to keep us alive. So one of the things that your brain will do is it will oversummarize any complex idea. And that’s what happened to learning styles. The original research was just as you said it: learning preferences. Now what that tells us is everybody’s brain has a preferred method of learning. Most of us are visual because there’s so much more information that comes in that way. That’s because it’s easier to learn that way. Now, the inference is: if you really want to force that brain to work, you need to make it more difficult or more challenging, which means rather than, oh, that’s your learning style, here you go, everything you get is in that style, it should be the opposite. It should be, oh, let’s try some audio content. Let’s get up and do something physical. If you can figure out a way to work smell into it, you know, that’s a pretty narrow domain. And you lose all that nuance. Quite frankly, it’s just a money thing. There’s a cottage industry that grew up selling instruments and methodologies, and they’re still doing very well. A lot of people have drunk the Kool-Aid. That doesn’t make it right or correct based on the research. So if you go back to the science, learning styles have been so widely debunked. It is shocking that it is still brought up in a serious conversation about learning in this day and age. Maybe we could build a bot for that: every time someone said it, they got a little zap.
Beth
[Laughs] A little zap! A little buzz on the keyboard. There probably are a lot of GPTs or other, you know, things like that that are about learning styles. So let’s hope they are out there because everybody could try to find one for sure. But thanks for bringing us back to the research because humans do research and they do studies and they do meta-analyses of studies and on and on, right? The research is there in so many ways about learning and development and so we need to work with AI tools to keep pushing it back to what we’ve learned officially and that has been proven – about neuroscience, about, you know, learning how it works, about the brain and so on. So yeah, we do have a big responsibility here, don’t we, as humans working with these tools?
Margie
Well, yes. And that’s what keeps us employed.
Beth
Yeah, exactly. [chuckles]
Margie
You know, there are already companies that have come to the conclusion, ‘maybe we don’t need all these training people because we have these AI tools.’ While that is a failure of logic from the humans who made that decision, it was a failure of imagination for the employees who weren’t finding a way to prove their value as fluent users of those tools. And that’s where our future is. That’s actually where our present is; many of us are just not caught up to it yet. That’s how we stay relevant. Just like when other tools came out, we adapted, we learned them, or we didn’t have what the next employer was looking for. So that’s my big caution to everyone: don’t be afraid of it. It is easier to understand than you think. Take it step by step. Get some good quality training on it, and start using it and just work it into your routine and get the benefits out of it that you want.
Beth
Thank you so much Margie for helping us all stay relevant in the field and I know I will be watching. I’m glad that we’re connected now on LinkedIn and other places and I see you as being this very relevant professional that we all still want to and need to engage with and especially you’ve confirmed that by all the expertise you’ve shared with us today. So you know let’s all keep watching Margie because she started early in this field and you’re going to continue probably for quite some time. So thank you and keep going.
Margie
I hope so. Yeah. I am just, every day is fun. It’s a fun time to be doing what we do.
Beth
Thanks again.
Margie
Thank you, Beth.
[Episode outro]
Beth Cougler Blom
It was fascinating to have this conversation with Margie and I do recognize that this episode is probably one that is not evergreen. AI is going to be changing so much all the time that even a year from now, two years from now, this episode might be really out of date. But here we are in the state of the world right now, and I appreciated learning so many things from Margie around our use of AI both inside and outside of learning experiences.
One of the things that’s still resonating with me about what she said was that we have this responsibility, all of us, to tell AI when it’s being incorrect, when it’s wrong, when it’s giving us incorrect information. I’ll just reiterate what she said. “Let’s just picture a world where everyone’s using ChatGPT and there’s some consistent mistake it keeps making and no one corrects it. We’re making it stronger, that incorrect information, in the neural network of ChatGPT.” So whatever AI tool you’re using, if it gives you incorrect information, please correct it. This is the advice coming straight from Margie – and it can be incorrect in many different ways but particularly in the ways that have embedded bias in it. I think it’s so important for all of us out there who have diverse experiences to just teach those things back to ChatGPT and other AI tools like it to say, ‘No, you’re wrong, that’s not it. And here’s some more correct information’ or whatever you want to do to put information back into the system that is correct and valid.
One other thing I want to mention that has a different perspective around it is what Margie said around how she’s teaching the AI tool of hers how to talk like her in a real natural human way. I actually listened to another episode of another podcast, my friend Mary Chan’s podcast who – Mary is the audio editor of this podcast and she has her own podcast called The Podcaster’s Guide to a Visible Voice – Mary had a voiceover artist on her podcast named Erin Moon and Erin basically said, yeah, AI is great, but let’s be cautious about it because when we talk, we have humanity in our voice because we have a body. Erin says, “I actually believe I have a deep responsibility to keep my humanity in everything I’m doing to make sure that I’m feeling.” She says she actually starts to feel sick in her body because it’s like her body knows that, ‘No, that’s not how people are. That’s not how we work. We’re more variant than that or variable.’ I’m reading exact quotes from Mary’s podcast episode with Erin here of Erin talking and saying these things. I think you should go listen to the episode and I’ll put the link to that particular episode in the show notes.
I’m not saying Margie is incorrect. No, I mean, it is really cool that we can teach AI to sound like us. But it is a bit of a cautionary tale; other people have different perspectives around it. And when we think about learning experiences, we just always have to walk that line of, are people actually going to be learning? Learning is our ultimate purpose. And if we have anybody starting to think ‘What’s going on with this voice?’ that takes them out of the learning and it makes them start thinking about the audio that’s in the course or whatever the learning happens to be. So just be cautious around it. Yes, AI can do all these cool things, but is it the right thing to be doing for the particular learning experience that you’re creating? I’m pretty sure Margie would agree with me on this, the piece about having a purpose and being intentional about the creation of learning experiences. So I hope she doesn’t mind me providing this balanced perspective.
Again, I really want to thank Margie for being on the show and sharing her deep expertise in the world of artificial intelligence with us. I did mention that I would give you more information about how you can take Margie’s course at a discount if you would like. That was the one that she mentioned during the show. Go to the show notes of this episode, Episode 40 on facilitatingonpurpose.com and you will be able to find the specific link you need to use to get a $50 discount on her course when you sign up with that link. Ok? So thanks again to Margie for providing that so that we can all keep learning with her.
The next episode of the podcast will be dropping the second week of August. As I mentioned, I’ll be taking a break from releasing episodes in July.
You can use that time to go back and take a look at any episodes so far in the 40 that we’ve released to see if there’s something that you’ve missed and you want to make sure you listen to that before we start Season 3. Again, have a great July and I’ll see you when we start again in August. Thanks for listening.
[Show outro]
Beth
Thank you for listening to Facilitating on Purpose. If you were inspired by something in this episode, please share it with a friend or a colleague to help them expand their facilitation practice too. To find the show notes, give me feedback, or submit ideas for future episodes visit facilitatingonpurpose.com. Special thanks to Mary Chan at Organized Sound Productions for producing this episode. Happy facilitating!