Change, Technically
Ashley Juavinett, PhD and Cat Hicks, PhD explore technical skills, the science of innovation, STEM pathways, and our beliefs about who gets to be technical—so you can be a better leader and we can all build a better future.
Ashley, a neuroscientist, and Cat, a psychologist for software teams, tell stories of change from classrooms to workplaces.
Also, they're married.
You can learn with AI
Can people learn in the AI era? Ashley and Cat think so. We talk about Ashley's experiences teaching programming and co-designing WITH students (not against them) to create shared classroom norms around AI, and about the metacognitive skills that Cat is sharing with the software teams and developers she works with to bring a "dynamic textbook" approach to using AI to build understanding, not degrade it.
Cat's Learning Opportunities Claude Skill, with a scientific reference list to the effects we also talk about in this episode, can be found here: https://github.com/DrCatHicks/learning-opportunities
Cat also wrote a recent piece about the complexity of measuring the impact of AI in Software Organizations: https://www.fightforthehuman.com/how-not-to-measure-the-roi-from-ai-in-your-software-organization/
Ashley:so it's week two of my programming class this year, and I decided that we needed to have a really frank conversation about the use of AI and you know, and specifically LLMs and in the context of programming. My class is an intro of programming class. I've been teaching it since 2022, but AI has changed quite a bit in the past four years, and I think we need to talk about it really explicitly with students and students feel any number of ways about it, and I was actually just really curious to know what they would think too.
Cat: Mm-hmm.
Ashley: So I walk into my class and I say, look, we're gonna talk about AI, and I wanna ask you what you think is the appropriate use of AI in a few different class contexts: for your own learning, for homework assignments, and for the final project. On the board I put a list of different levels of AI use from one to 10, one being I never touch AI, 10 being I basically give a prompt to AI and it does the whole thing for me. And in between is stuff like it writes code snippets, or it explains code back to me, or debugs things. And I ask students, for the three different course aspects, what level they think is appropriate, and then I tell them what I think is appropriate.
Cat: Hmm. I think that's key. You didn't tell them first, right?
Ashley: I didn't tell them first, yeah. And on the first day of class I said, we're gonna talk about this at some point. You know, I certainly mentioned AI and how that's changing things, and sort of my broad motivation for the class and what I wanted them to learn this quarter. But I didn't tell them exactly how I felt about it and what constituted, in my mind, inappropriate use in the class. And I thought this was really interesting, one, because I invited the students to talk to each other first, which is such a teaching move. It's called think-pair-share: you think about the thing, you pair up with someone, you talk about it, and then you share out, and we all talk about it as a class.
Cat: Classic you, not giving them the answers but forcing them to have opinions.
Ashley: 100%. Yeah. Let's air it out, let's put out what you think first, and then we'll pull it all together. And, um, you know, we were largely on the same page. I mean, there's a lot of variability in students' opinions, and I didn't ask them to share back out to me, 'cause this is a class with a hundred students in it, right? Sharing out to the room about their personal opinions about AI is kind of a lot to ask in this context; this is only something I would do in a smaller class. But I had an anonymous survey on the board where they could tell me what they thought. And so I have this distribution of values from one through 10, and I can see where students sit, and it's anonymous, so there's no reason they should tell me something just to try to make me feel good. I loved doing this because it made me feel like at least we had talked about it, and as their instructor, I could stand in front of the room and say, look, I'm not gonna tell you never to use it, because that just doesn't make sense. There are actually a lot of ways in which it could be useful for you, but here are the ways that I think it could get in the way of your learning, or here are the ways that it could help. And as a professor, right, that's always the question in my mind: what is the best thing for student learning?
Cat: Yeah. I love that you're breaking it down and breaking it apart and thinking, well, this is a complex situation, and can we ask about different moments where it might be appropriate or more appropriate. And especially because you're talking about teaching people to code, they're looking out into the world. You know, all of the people I work with are now getting licenses from their workplaces and being told, you must upskill in this immediately. It is just kind of the height of hypocrisy to look at students and say, you can't think about this, you can't worry about this. And I know something you are worried about too is that students could be all over the place. Some students might be really scared of doing the wrong thing, whereas other students are just face-planting into AI, using it for everything, sometimes with some negative consequences for their own learning, right? But it suddenly means you as a teacher have this equity situation where you have to make sure everyone's on the same page as much as possible. Blanket bans don't tend to do that; they don't tend to be very effective with students. So I love that you had this experience. How did it feel in the class? Did it feel like everyone came together and could commit to a shared norm about this?
Ashley: Yeah, that's a good question. I mean, you know as a psychologist that self-reported behavior and actual behavior are different things, right? But personally speaking, first of all, it felt really awkward in the very beginning. I've read so many essays from other educators being like, this is how I talk to my class about AI, and I literally was like, probably every instructor's doing this in their class, and probably students are like, I'm getting another soapbox talk about AI or something. So I had this self-consciousness about, oh, am I just another professor being like, okay, don't you use it, kids, it's gonna get in the way?
Cat: But that's not what you were doing.
Ashley: But that's not what I was doing, and I felt like the moment I asked students to weigh in changed that entirely, right? It was like, oh, I actually want your opinion, because I know you're not all in the same place. Some of you might use it for a lot of stuff, and some of you might rarely touch it or be afraid to touch it, but I wanna know what you think. That, for me, changed things quite a bit. And then to see that the students, in their self-reported use of it, or what they thought was appropriate, were actually in line with me, I was like, cool, okay, this is really interesting, and great, and here's what I think, and here's how I think it could be useful in X, Y, and Z ways. I felt like the students felt a sense of relief that it had been aired out, you know? And the fear and a lot of the frustration in education right now around AI comes from the fact that people maybe aren't being super explicit about it. We talk a lot about transparency in education. There's this whole framework called TILT that's about how transparent your educational processes are in your classroom: how much do students know about what it takes to succeed in the classroom? And basically, bad classrooms are opaque, right? So I felt like this was a process of making this very transparent, and taking the elephant in the room and really getting a good look at it.
Cat: Yeah, I love that. I remember when I was teaching for the first time, I got some great advice from a mentor to never ask students to have telepathy. Never ask them to read your mind. Don't stand in front of them and say, I want you to guess what the right answer is, and I'm gonna judge you if you don't have it. That's not creating space that's truly supportive and educational; it just makes people feel like you're testing them and they're messing it up, you know? And I think in a world of lots of shifting expectations, lots of shifting technology, you're telling them: I trust you enough to say, you're grownups, I'm a grownup. I'm here to teach you programming and tell you what I think about what's good for your learning, but I'm also not here to be authoritarian in the classroom, right? And the co-design of it makes me think about how you teach science as well. You're like, let's get in: you're gonna do the experiment, you're gonna make a mistake, we're gonna have immediate feedback, we're gonna see if it's working. That's all you can do, right, when things are changing really fast.
Ashley: Yeah. And just to be honest, this is the role that I wanna have. I didn't become a scientist and a professor so that I could also police people's behavior. I have no interest in that, you know? I wanna empower people to be their best selves and to learn, and that's a different thing than just becoming a police person. I don't want that. And I think the process of asking students to think about their own learning, and what it is that they stand to gain in this class, is really important too. And there's so much evidence, and you know this, of course, there's so much evidence around the role of metacognition, right? The same could be true of our use of AI: the thing that makes a really effective AI user is the person who has metacognition about what they're doing with AI. And same thing in the classroom. The best learner is the person who has metacognition about their own learning.
Cat: I would say metacognition matters across all of our tool use, all of our use of any technology, our use of computers, our use of code at all. There's a ton of elements of metacognition. I've been getting a lot more into it since I wrote my book, and I wrote a chapter in my book about what it is that you should care about if you really care about your organization having high-quality problem solving. And I tried to myth-bust some elements. In this chapter I talk about how, if you fixate on nothing but individual production, that is gonna be a very brittle model of success. As soon as the world changes, and there's a robot that can outproduce you all the time, no matter what, suddenly the quantity of code you can churn out does not signal the same thing it used to signal. So what can help people move forward in this? I've been having so many conversations about AI since December, January, right? There was kind of the Claude Code, you know, revolution explosion, which, um, I love to feel like we were slightly ahead of, in our metacognition models. That was a really cool feeling. But I've been having so many conversations with developers, and they're all so worried about their minds right now. So one of the reasons you and I wanted to do this podcast was to talk about metacognition and whether people can learn while they're using assistance, while they're using automation. I want to put out there, like plant a flag on this: I think that not only can you learn, there is a possibility for these to be incredibly effective learning tools. Now, I don't think they're necessarily built that way all the time. I don't think the way that we design tech products is always ideal for this. But I want to be in the business of figuring out what helps build technology that works for people's minds, and how you can be an empowered user of it, you know?
And it's certainly present in my world. I cannot just act like it's gonna go away, and I'm not sure that I want it to go away. I find it really, really exciting, actually. I have always loved what technology can bring in terms of supercharging people's cognition and supercharging our problem solving, and getting rid of some of the toil and drudgery that has built up around software development. And I know it's really messy right now. Believe me, every day people are in my inbox telling me how terrible their code bases are. But I do not think people's minds are terrible. I do not think people's minds are going to melt if they use assistance. And so I've been really enjoying trying to get smaller in this moment, and getting in person with people in this moment, and talking about what makes for good problem solving with AI, because it's in everybody's life right now.
Ashley: Yeah. And so I love this point about technology not always being oriented towards learning, and that has been the main rub with AI and education: the general-use AI tools will just give students answers, right? And that's not necessarily what you want for learning. So if we're gonna take a tool like a chatbot and we're gonna turn it into a tool for learning, like, you're a learning scientist, what are the principles that we need to build?
Cat: Yeah. I love it. Okay. First of all, you could not get me out of the chat fast enough. I get so bored, so fast, chatting back and forth with a chatbot. But then when it came to working with something like Claude Code and the idea of agents, I think the fact that it can put you in a much more interactive space, where you are delegating tasks, going and doing your own work, manipulating and traversing files, thinking about the different pieces of work that need to come together, it just felt a lot more like doing computational work to me. And I think a lot of people experienced that shift kind of emotionally: they felt a little trapped in the chat interface, and we need something more powerful. But there's something so incredible about the fact that we do have all of this preexisting information and context, and now we have these technologies that can take in many, many forms of information and kind of synthesize with them. Obviously there's jitter and hallucination and all kinds of things to be aware of. But I started using Claude Code on my own research projects that were all wrapped and finished and publicly available, so there was a full project with all the code and everything, and I was just curious to learn: if I were a novice trying to understand Cat's big repo of our code and serial mediation models, could I do it with Claude Code? I quickly decided that the same principles that had always been the things that worked in educational technology were the things that would work here. So I built a Claude skill to encode some of these principles: to say, I want you to stop and have me answer questions about the files in this code base. I want to slow myself down sometimes and not assume I understand every connection. I want to do things over time, take advantage of what we call the spacing effect, where you come back to a problem again and again.
I think working with AI can sometimes push you into these massive cramming sessions. So there are all kinds of interaction modes that I realized I could inject into how I was using these tools, and I found those very good for my metacognition. It was really interesting. I did it for a couple days, and then I built this skill based on what I had done, and then I had you use it, right? So you tell me: did you think it worked?
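Cat's actual skill is linked in the show notes above. As a rough, hypothetical illustration of the shape a skill like this can take (a SKILL.md file with YAML frontmatter followed by plain-language instructions), it might look something like the sketch below; all names and wording here are invented for illustration, not quoted from her repo:

```markdown
---
name: learning-checkpoints
description: Turn code-base exploration into spaced, active-recall learning
  moments. Use when the user is studying an unfamiliar repository.
---

# Learning checkpoints

When helping the user explore this repository:

1. Before explaining a file, ask the user to predict what it does.
2. After any substantial explanation, pose one short retrieval question
   and wait for the user's answer before moving on.
3. Encourage spacing: suggest stopping points, and keep a list of topics
   to revisit in the next session instead of covering everything at once.
4. Ask the user to justify key decisions in their own words (for example,
   an argument passed to a stats function) before confirming or correcting.
```

The design choice worth noticing is that every rule shifts effort back to the learner: prediction, retrieval, and justification rather than one-way explanation.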
Ashley: Yeah, so I used the skill this morning, and kind of similarly, I used it on a project that has a pretty big, well, I would say medium-sized code base. It's for a paper that I'm working on and some data that I'm analyzing, and I had just started implementing some slightly different statistics in it. So I pulled in the skill that you wrote, and after re-analyzing some of the data, Claude prompted me and was like, do you wanna do a learning exercise? And I was like, yeah, sure do, let's go. And it was really interesting. It asked me some questions about the choices for the statistics that we were running. And then you said, let's prompt it to ask a few more questions about the code itself. So then it prompted me to open one of the scripts that we had worked on, look at a few lines, and justify back why we had given a particular argument for one of the stats functions. And it was great, 'cause I was like, okay, this is something that a reviewer would ask me, right? Like, oh, why did you choose this kind of multiple-comparisons correction versus another one? And it made sure that I was knowledgeable about where things were in the code, which I think is really important for just going back in later for small tweaks and stuff that I don't really wanna rely on Claude for. Yeah, it was really, really interesting. I think your point about it slowing you down a little bit, like, obviously I had to opt into this, right? I could have just kept, like, go, go, go
Cat: Yeah. I even wanna challenge that; you kept saying, it made me, but you were in charge. You were making yourself slow down. You were
Ashley: Hmm.
Cat: go read the thing. You were building your knowledge, okay? So it's not doing anything, you know what I mean? You're playing with the probabilistic Play-Doh of, like, the latent space of your project, which has these decisions in it, and the decisions are documented across these ways. And we have this technology now that, yes, with some jitter and unpredictability, lets you play with this. I keep thinking of it as Play-Doh. And you have this meaning-making mind, and that has always been what makes us good problem solvers, you know, what makes human cognition amazing at dealing with problems. And so you are using it. I called it a dynamic textbook: how could I get Claude to, as it were, set up this design where it would generate small dynamic-textbook moments for myself? I think one of the things that's really, really cool about this is that there's this possibility for people to personalize their tools to their own minds, and in doing so, learn how their minds actually work. Because we have had tools created by big companies to take advantage of these patterns that kind of work for the most people at the most profit. That's not what my Claude skill is about, right? My Claude skill is about me learning my project. My stuff would never be profitable, probably, but boy, will it work for me.
Ashley: Yeah, that's really interesting. And I actually saw in the documentation for the skill that you had written that it's possible to orient this towards a particular thing you wanna work on. And the immediate thought I had was, okay, I am a proficient Python user. I don't need tips about Python, or to be asked about the sort of basic Python stuff in this code. But actually, I'm new to R, so for the R scripts that we wrote together, I might actually want a little bit more handholding, sort of like an invitation around how the R code is written. I actually think I might modify it in that way.
Cat: You know what you should try, babe? In my CLAUDE.md files, when I'm experimenting and looking at a project, when I set myself up to try to learn, I set up: I am good at R. I taught myself R. I don't know Python, and I've struggled to learn it.
Ashley: Hmm.
Cat: And the way that you can pit your own domain expertise against the sort of more fluid generative stuff, and play one off the other, and play the vetting and the verification off of the stuff you don't know, now you're thinking like an expert, because experts are really good at doing this lateral transfer, checking across different domains. This is core to the schemas that experts form. They form this kind of higher-level understanding, right? And so they're able to learn the lower-level skills a lot more efficiently, because they understand how to quickly say, oh, this is the schema, that's the schema, or check it or refine it, you know? So these could be schema-building tools. I really think that. I think we've barely scratched the surface on that.
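A CLAUDE.md fragment along the lines Cat describes might look like the sketch below; the wording is invented for illustration, not quoted from her files:

```markdown
## Learner profile
- I am good at R; I taught myself R. Don't explain basic R syntax to me.
- I don't know Python and have struggled to learn it. Go slower there,
  explain idioms, and let me translate concepts from R.
- When I verify something in the domain I know (R), prompt me to run the
  same check against the domain I don't (Python), so I can play my
  expertise off the generated code.
```

The point of the last rule is the lateral transfer Cat mentions: using a well-formed schema in one domain as the checklist for vetting the other.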
Ashley: Yeah. And I just love this too. You're basically taking the time to think about what it is that you need. Again, it's metacognition: what do I need to get out of this?
Cat: Thinking about thinking. Yeah.
Ashley: Yeah, you're thinking about thinking, and that does take a little bit more time in the beginning. I'm totally guilty of this: I will just jump head first into a thing and start coding without really thinking about it first. And I actually think this invitation to create a project, create context, think about the steps that are needed, all of that is exactly what I would tell my students to do. I'm not the best practitioner of this, but all of that is the stuff of setting yourself up for success. It takes a little bit longer in the beginning, but then it ultimately probably pays off later.
Cat: Yeah. And all this realm of learning science is so helpful here, because there are also very common patterns of misconceptions that people have, and you're tapping into some of them, actually. People always rely on the feeling that they're moving fast rather than on actual explicit checks of their own learning. And so you'll see students, even professionals, right? I'll see developers that I work with sometimes think, fluency is my guide that things are working. That's not necessarily the case. Sometimes good learning feels more full of friction, and it feels like you're making more mistakes. On the other hand, some people internalize the opposite misconception: everything should feel brutal, everything should feel awful, and if it doesn't feel awful, I'm not learning. And then you get the people who say, I need to stay up all night, because that's the only thing I've ever internalized as working hard, it's the only way I've gotten myself to work. And they don't realize it's actually an inefficient strategy. Cramming for a test is really inefficient compared to small, targeted, frequent learning interspersed with breaks. And those people will often just not believe you. There are these studies where they show people the difference for themselves, on their own performance, and people will still not believe it at all. So I think that you have to think about learning about yourself, learning about your own little mental traps, and doing the experiment. This is a moment for people to do their own experiments about what works for them. And I am worried about some of the defaults in these tools to always make things feel very fluent and always make everything feel very complete.
And sometimes we learn more from doing incomplete steps of things. When I wrote this skill, it was also just a demonstration to myself that I could actually break those defaults pretty easily, and then make a much more comfortable home for myself and what I'm good at doing.
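The spacing-versus-cramming contrast Cat draws has a classic mechanical form in the Leitner system: items you answer correctly move to boxes with longer review intervals, while misses go back to the start, so review effort concentrates on what you actually don't know yet. A minimal sketch in Python (class names and interval values are illustrative, not taken from any tool discussed here):

```python
from datetime import date, timedelta

# Review interval (in days) for each Leitner box: items answered
# correctly move to later boxes and come back less often.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

class Card:
    def __init__(self, prompt):
        self.prompt = prompt
        self.box = 1                 # every item starts in box 1
        self.due = date.today()      # due immediately

    def review(self, correct, today=None):
        """Promote or demote the card and schedule its next review."""
        today = today or date.today()
        if correct:
            self.box = min(self.box + 1, 5)  # promote, capped at the last box
        else:
            self.box = 1                     # misses restart from box 1
        self.due = today + timedelta(days=INTERVALS[self.box])

def due_cards(cards, today=None):
    """Return only the cards whose scheduled review date has arrived."""
    today = today or date.today()
    return [c for c in cards if c.due <= today]
```

A learning-oriented skill could apply the same scheduling idea to retrieval questions about a code base, resurfacing a topic days later instead of re-explaining it in one long cramming session.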
Ashley: I totally agree. I mean, taking away the element of Claude, or whatever LLM you're using, being the absolute source of truth, having it be in dialogue with you and you giving it information, I like the breaking of that kind of norm in it. The only norm that I kind of wish we could break a little bit more is, I wish Claude could tell you how certain it was. Right? This is a whole thing: these models are not built to be able to assess their own certainty. It's just
Cat: You see a lot of tech-product default thinking in that, yeah. However, I'll tell you some strategies I use. A really good example is that a lot of people are using an AI tool to say, what does the science say about X? It'll do a lot of common things that you and I both try to correct for people, right? It'll over-extrapolate from a single study. It will not differentiate between weak and strong evidence. It'll sort of say, oh, well, there's a statistically significant finding, so there's this relationship. But you know what was really interesting, and again, I don't think we're anywhere near the actual scaffolding-building that we need to do for these tools, but just as a glimmer of what could be built here: I added, again to my CLAUDE.md, a statistical research rigor section, and I set some things like, I want you to assess the strength of evidence. And I go read every study that it talks about and assess it myself, which is just a learning journey for me. But it's really quite remarkably good at getting nudged towards defaults of meta-science if you do put those in; that's just not the default in the world. When I say, I want you to consider alternate hypotheses, which is something I have in there, or, I want you to consider effect sizes, not just statistical significance, these are the trigger words that I know to probe the latent cultural space of my particular discipline. They might be different words for different people, but those are my words, the ones I know: if you have a research paper and they're thinking in this way, they're gonna be a step forward, they're gonna be more sophisticated. And I get way richer results. I've seen how some of my friends in my patient health groups are asking health questions of AI.
I think I am getting such better results because I understand how to bring in the language and the roles that I know. Yeah, it's really
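The rigor section Cat describes adding to her CLAUDE.md might look roughly like the sketch below; the bullets use the trigger phrases she names in the episode, but the exact wording is invented for illustration:

```markdown
## Statistical and research rigor
When summarizing research for me:
- Assess the strength of evidence, not just whether a finding exists.
- Report effect sizes, not just statistical significance.
- Consider alternate hypotheses for any reported relationship.
- Do not over-extrapolate from a single study; say when evidence is weak,
  mixed, or based on small samples.
- Distinguish meta-analyses and replications from one-off findings.
- Cite the specific studies so I can go read and assess them myself.
```

The last bullet matters most for the learning journey she describes: the summary is a starting point for reading the sources, not a replacement for it.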
Ashley: great. And now I think that we should write a skill that's, like, for
Cat: Okay, let's do
Ashley: research in, no, seriously, research in biology or health, right?
Cat: Yeah, I love
Ashley: For my book research, I've done something similar: I want more meta-analyses, I want things that are well cited. And because I found a paper that I didn't know was really controversial, I also ask it to look for controversies about papers now,
Cat: Yes.
Ashley: to tell me if there's a controversy that I may not know about.
Cat: I'll tell you the funniest thing I put in my CLAUDE.md files: I want it to remind me to look at the diversity of authors if at all possible, which is obviously not always possible. But now I'll be looking at something and I'll just get the snarkiest, like, well, this is a highly male area, you know? Because I'm looking at software research, and they're like, well, you've done a good job of looking at the research, but I gotta tell you, every paper you're reading is written by men. But again, it goes back to the personalization possibilities, and I feel like we've just barely scratched the surface on shaping this to our own pro-social desires. And I would love to think more about the strength-of-evidence side of it, because if this is gonna become an interface through which people try to access the information of the world, and scientific information, we can do a better job of scaffolding people in that
Ashley: Yeah. Yeah,
Cat: yeah.
Ashley: Yeah. And people have built some research-specific tools with it, but I actually like the process of defining the rules for yourself, because that in itself is the learning. Yeah, exactly this question: if you're in a patient group and the patient group is like, I'd like to know if this medicine is safe, the question is not only, is the medicine safe, but, what questions do you need to ask to know if the medicine is safe, right? And that's teach-a-person-to-fish, right? That's the stuff of what is gonna level up humanity. That's the thinking, that actually is the critical thinking, and those are the tools that you need to ask that question in another context entirely.
Cat: The health stuff is near and dear to my heart, as you know, because I'm in these patient chats and people are using it. Everybody is using AI, and I know that there are people who will listen to this and maybe just say, don't use it, it's wrong to use it. Well, I don't know. I'm in groups with people who have been waiting five years with no answers from any doctor, and they're just trying to survive, and I feel really, really passionately about that. They're so cut off from scientific information, and there's not enough of me.
Ashley: Hmm.
Cat: There's not enough of me. I can sit in the WhatsApp groups I'm in right now, which I do, and tell people why they're getting hoodwinked by some horrible white paper that's just a piece of marketing bullshit put out by some private-practice doctor who's trying to sell some stem-cell-injection thing that isn't even remotely well evidenced. I mean, the world is full of that. And frankly, what they're getting out of AI when I help them prompt it better is far higher quality than what they were getting before from their Google searching. So yeah, sorry to soapbox about that for a second, but when I see people just say, just don't use it, just ask a doctor, I'm like, what doctor? Where? Are you there paying for the doctor? People don't have access.
Ashley: It's the same thing in education too. I mean, on my campus, class sizes keep getting bigger without the same instructional support. In my ideal world, I teach really small classes, I know everybody intimately, I can talk to them one-on-one and sit side by side with them and code, right? But that's just not happening. I'll do what I can to try to improve things at the university at a systemic level, but that's limited, right? What I can do is say: this is an amazing tool. As you've built your skill around, you can customize it to your own learning. You could tell it, look, I've got functions down, no problem, I'm good with functions; you need to quiz me on all of this other stuff. You could customize your own tutor. And I think it's amazing. It's incredible. It's everything we've wanted. It's the Star Trek of education in a lot of ways,
Cat:At any rate, who's gonna imagine the Star Trek of education, if not, you know, people like us?
Ashley:Well, totally like, and I, I,
Cat:Yeah.
Ashley:I want us to be involved in this process, right? Like, at the forefront of it, because the way the tool is built is, you know, not necessarily for educational purposes, but there's a way to use it, and a scaffolding that can be done, to get it there and use it in that way. And it's a cool all-purpose tool that we can hone for learning. And I do think it's possible.
Cat:If nothing else, we have our own communities in which we can help people protect their minds and be empowered and take action and make choices. You know, I think about this. I just went to San Francisco with a dear friend and it was a really interesting moment to force myself to get in person with people
Ashley:Hmm.
Cat:I'll be the first to admit that sometimes I live a life of intellectualizing and theory and, you know, big data sets and all this stuff, and it was a really intentional choice to go to the city I used to live in, to feel what it felt like to let myself be a full human, you know, and try to just be in dialogue with people. And I met with all kinds of people. I had lunch and dinner every day with different people all over the map on their position about this technology and how they were using it and who they were. But the thing that cut across all of it was this shared sense of community, and, you know, that we want to figure out something better for tech than the calcified, dinosaur, hostile place. We're not those kinds of people, and we don't want to build that. And how can we build something better together? We had this thing that we called a Claude Crafternoon, and we posted an event for it, and we said, if you're new to this and you feel a little bit left out, come sit at this cafe with us and we're gonna just talk about it. We're gonna get you ramped up, but we're gonna get you feeling at least like this is a world you can have access to. And I did not know what was gonna happen. And people showed up that were complete strangers, and we had a lovely craft afternoon, and we had an engineering leader who was wondering about how to lead a team in this moment, and we had someone who had never programmed before who was thinking about, can I use this to teach myself? And the number one thing that I saw across these people, we closed down the cafe, we sat out there until it ended, you know, and the number one thing I saw was that every single person who showed up wanted to learn. This was the opposite of the social media brain rot, everybody's awful, everybody's lazy. This was the complete opposite.
Ashley:Hmm. They wanted to learn and they wanted to connect. It's not that they wanted to do this in a dark room all alone. They wanted to learn in community with other people. And I think about the same thing with education. There's so much dark storytelling about education right now, and for sure there's stuff that sucks, right? And there's ways in which education is changing, sure, beyond the scope of this particular topic. But to come back to the classroom: on that week-two day, you know, whatever, after we talked about AI, I gave them some functions or something to work out, and people had their laptops side by side, and they were coding. They didn't have AI open. They were asking each other questions. I saw high fives when they had completed the task, right? It's like, we're not replacing everything that we do. And in your case, we're not replacing communities at work and the ability to be together and learn. We're just not replacing that. We're augmenting it with something that is a really cool, personalizable tool.