Episode 28. The AI Dilemma: Creativity Killer or Ultimate Muse?

 

Explore whether AI is killing creativity or unlocking your creative potential with poet and educator Scott Chalupa.

Generative AI has taken the world by storm, and the question everyone is asking is the same: Is it a threat to human creativity or a powerful tool for innovation? In this episode, we dive deep into "The AI Dilemma" with Scott Chalupa, a poet and English instructor who has spent countless hours training faculty and staff across the South Carolina technical college system on how to implement and integrate AI into their teaching.

Scott brings a unique perspective as both an artist and an educator. As a poet, he understands the creative process intimately. As an instructor at Central Carolina Technical College, he has trained over 100 faculty, staff, and administrators through his "AI for the Trainers" professional development series. He is also part of a statewide task force addressing AI in higher education.

In this conversation, David and Scott explore the real fears and misconceptions about AI, the ways it can enhance rather than replace creativity, and practical strategies for educators and creatives to harness its power responsibly. They discuss how AI is being used in instructional design, student assessment, and course development—and what it means for the future of education and creative work.

Whether you're an educator, a creative professional, or simply curious about how AI will impact your work, this episode offers practical insights, honest perspectives, and a thoughtful examination of one of the most pressing questions of our time.

 

Listen to the full episode on your favorite podcast platform.

Subscribe and leave a quick rating or review if you enjoyed it.

 
AI is great at helping organize thoughts and pick priorities. It’s a tool for sorting through the mess of creativity, not replacing the creative spark itself.
— Scott Chalupa, Poet & AI Educator
 
 

Transcript

  • Scott Chalupa (00:00)
    You know, it's also really good at kind of helping me organize my thoughts and pick priorities, which it sounded to me like that's kind of where your brain is a lot. It's just sort of everywhere. And so I just need help sorting through it all, which is where I am like 95 % of the time. And so it does give me a lot of tools or it gives me a lot of help in sorting through the mess.

    David Peck (00:06)
    Mm-hmm.

    Yeah.

    Yeah.

    David Peck (00:26)

Hey there, design enthusiast. Welcome to Inside the Design Studio, the podcast where we unravel the threads of life and design. I'm your host, David Peck, your guide through the cosmic wonders, the tangible touches, and the delightful twists of creating a life you absolutely love. Today's episode is a special peek into my eclectic toolbox, the secret weapons I use to design a life that's as vibrant as my creations.

    So grab your metaphorical sketch pad and let's dive into the art of intentional living.

    David Peck (01:05)
I am so excited today to welcome Scott Chalupa to the podcast, because unless you have been living under a rock, you'll know that AI has sort of taken over the world and is threatening to take

    over the world even more. So Scott is a poet and an instructor of English at Central Carolina Technical College in South Carolina. And since the spring of 2023, he has spent numerous hours in various training programs for implementing generative AI in higher education and experimenting with integrating it into courses, instructional design, and student assessment. So.

He's also a part of a task force that he's gonna give us a little bit more information about. And he has created and facilitated AI for the Trainers, an eight-week online professional development series with over 100 faculty, staff, and admin in the South Carolina technical college system, including the statewide system office and all 16 schools. And the way Scott and I got connected was that

Regina Vigil, who has been on the podcast before and is our operations manager here at David Peck, every time I talk about AI and how I'm using it or how we should use it, she's like, you've got to talk to my friend Scott. And so Regina is definitely somebody who comes to the table from the creative side. She has a degree in poetry. Scott, your background is in art, in poetry as well. And you've written books about

    David Peck (02:35)
    Caravaggio and art and like, is that correct?

    Scott Chalupa (02:38)
So my collection of poems, Quarantine, has a whole series of poems in there that reinvent major devotional works by Caravaggio as, like, scenes or moments from queer history.

    David Peck (02:55)
Cool. So you guys are coming at this from definitely not necessarily the business side, though with all of the work that you've been doing, you've probably had to embrace a little bit of that. But I think it would be an interesting discussion, because the idea of the podcast is designing a life you love to live. And AI as a tool can definitely help us. But obviously, it has

    Scott Chalupa (03:03)
    Yeah.

    David Peck (03:15)
    drawbacks and people are nervous in many ways about it and the power of AI. So why don't you just tell us about how you got involved in the subject matter after poetry? Like how does one go from poetry to AI?

    Scott Chalupa (03:28)
    Well, I think the start of it for me was a few articles that I read late 2022, spring 2023, that had a lot more to do with AI and what it meant for higher education.

Generative AI was just kind of exploding. It really kind of started getting off the ground in 2022, but 2023 is when we went from, like, GPT-3 to 3.5. And I think what really caught my eye was sometime in the spring of 2023, there was an article from the Chronicle of Higher Education where a former Harvard student,

    or maybe current Harvard student, had published this article about using ChatGPT for everything that she does. Then there was also another article about how folks had used ChatGPT to pass freshman year at Harvard with A's and B's. And I thought, man, I've played around with ChatGPT and it can't do a lot of the basic college stuff like quoting and citing and synthesizing stuff. And I was like,

man, I'm afraid for, like, the quality of education at Harvard if ChatGPT could pass freshman year. And then, you know, I started playing around with it myself and, you know, I took a course or maybe a webinar in it. And then I also found this really cool tutorial that walked me through some of the basics of using ChatGPT, so, like, creating, you know, a story

about this, like, caterpillar who lost her family, a very cheesy prompt that was in the video tutorial. But then I realized, I was like, I just really need to turn up the darkness and the sarcasm in this, and ChatGPT just kept using "dark" and "sarcastic" and various synonyms in ways that were, like, super unsatisfying. And

    David Peck (05:27)
    Mm-hmm.

    Scott Chalupa (05:28)
    I was like, well, this isn't nearly as good as people say it is, but I could see where this could be really useful for folks who are kind of trying to offload cognitive stuff or like students who are willing to put in the time to make that a surrogate for doing their own work. And so my department chair, Margaret Floyd, is another person who's done a lot of experimenting with AI. She's...

gotten fairly well versed, for a while, in using Microsoft Copilot. I tried to use ChatGPT to create, like, a "syllabot," like a chatbot that students could use that would answer questions, like logistical things about class, based on the syllabus, college policies, all that. But it was back when you had to subscribe to create custom bots, and only paid subscribers could access your custom bots. And so that didn't really work well for me.

At a technical college, I'm always looking for ways to cut expenses for students, because they're not coming to the table with a lot of money. But it would give really good answers, and in the voice of Lizzo, because I thought that would be a fun dimension to add to it. Some other fun creative stuff that I've tried to do with it since then: I really needed to try and figure out a way to see how data ownership

    David Peck (06:29)
    Yeah.

    Hahaha

    Scott Chalupa (06:49)
and FERPA, so the Family Educational Rights and Privacy Act, which protects all your educational records in higher ed, would work. And I just couldn't, I don't know, I still need to figure that out and talk to some really well-versed legal folks. But I got really frustrated at one point, and I was like, I need... so I was talking with Claude, which is my current favorite AI tool.

And I was like, okay, now that we've done all this talking about, like, FERPA and looking at, like, the privacy policies for Anthropic and OpenAI, I want you to, like, translate this conversation that we've had into the style of Clarissa Dalloway planning a party. And it did a very passable job at best, but it had a recognizably kind of Woolf sort of thrust, and it was all that internal

    David Peck (07:33)
    No.

    Scott Chalupa (07:42)
Clarissa Dalloway narration that a lot of us Woolf fans love.

    David Peck (07:47)
    So interesting. So how did you get involved in this South Carolina Department of Administration sort of task force that's like talking about AI and its uses?

    Scott Chalupa (07:56)
    So that was kind of an outgrowth of the program that Maggie and I ran for summer 2024. So it was an eight week, as you were reading out from the bio, an eight week program where we just kind of tried to provide some background about AI, give them like easily accessible tools, and then some

discussion boards where we kind of gave them marching orders, like, here are some things to explore, pick one, and go explore and start playing around with it. And we also gave them sort of, like, a starting ethical framework, but we also had them, like, commenting and collaborating with us on what our sort of ethical rules of the road would be. And at a capstone conference that we did for that, so like a one-day capstone that was in person, there was

the director of IT for the South Carolina Technical College System. And he was the one that nominated us; we were runners-up for an innovation award from the South Carolina IT Directors Association. And I think it was as a result of that, the runners-up thing, that

    there was a person at the South Carolina State Technical College system who had said like, hey, there's this thing that folks are putting together. Are you interested in this? Can I put your name in it? And so that's how that came about. It sort of, it's all related.

    David Peck (09:26)
    Yeah. So AI has been called a lot of different things from creative ally to like the death of originality. So now as a creative person who has sort of succumbed to using AI and experimented with it for the past several years as it's sort of like grown in popularity, what do you think it is? Is it?

    Scott Chalupa (09:34)
    Hehehehehe

    David Peck (09:46)
the evil that people can make it out to be, or is it the genius? Is it somewhere in between? Like, what is your take?

    Scott Chalupa (09:53)
    It's complicated. And it's also, I think probably by the nature of my journey, a little bit technical because like nobody's clearly defined exactly what intelligence is, not in the AI field, not in the philosophy of mind field. That's all just a mind field of a different spelling and idea.

And it is good at a lot of different things, especially if we're talking about the large language models, which is what everybody's talking about most. It's really good at doing qualitative analysis of language, which is why it's also good at sounding human. So basically, all it does is predict the next token, so either a word or a piece of a word, really effectively.

    David Peck (10:21)
    Mm-hmm.

    Scott Chalupa (10:38)
    and maybe does or doesn't understand what it's actually churning out. It's all just a prediction machine. Sort of like Google search on nuclear steroids.

    David Peck (10:45)
    right.

Right, where, kind of based on other people's search history and what people are looking for, it can predict what might be the thing that comes next.

    Scott Chalupa (10:57)
    Yeah, and so the large language models work on the same thing. And so based on what you put into it, it then uses that sort of prediction kind of engine or whatever to produce the text that it does. I do think it's really good for, depending on how you use it, supplementing creativity.

    or perhaps pushing us to look at things differently than we used to or that we have. I've been trying to develop some like copy and paste prompts for students who are not all that good about thinking through their interests or why they're interested in things. Cause I like them to, you know, my English 101, I like them to pick their own research topics and run with it. Cause I'm really interested in whatever they're interested in. And so,

    David Peck (11:39)
    Mm-hmm.

    Scott Chalupa (11:44)
You know, with prompts like that, it's sort of about telling the AI that it's supposed to continually ask you, like, probing follow-up questions, so that you're continually answering with more and more depth or detail. And so I think if you don't give it anything to run with, and you just sort of do these naive prompts, right, such and such and such, with no context, no initial sort of, like, starting data or information, it's not really good.

    David Peck (11:52)
    Right.

    Right.

    Scott Chalupa (12:14)
    So it's all about like, kind of like with humans, it's about how you interact with it.
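Scott's description of a large language model as a prediction machine can be sketched with a toy word-level bigram model. This is an illustration only: real models predict subword tokens with a neural network, and the corpus, function name, and variables here are invented for the example.

```python
# Toy illustration of "it just predicts the next token": count which word
# follows which in a tiny corpus, then predict the most frequent
# continuation. Real LLMs do this over subword tokens with a neural
# network, but the core idea is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the food".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "cat"
```

With real models the "counts" become learned probabilities over a huge vocabulary, which is why context in the prompt changes what comes out.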

    David Peck (12:19)
    Yeah, I mean, I, to give you an idea of how I have used it, I feel like I use it every day at this point, whether it's asking for feedback on something we've written, whether it's a social media post or a newsletter or something like that. I've had it help me with trip planning. I've had it help me with, I mean, all kinds of.

just different things. But I've taken a lot of time, at first when I first used it, I think I had a similar experience to you, where I was like, this just feels kind of generic and not very personal. It's like, you know, it's decent at what it is, but it feels like it lacks personality. And so I think I did some research and deep diving on, like, how do you train

ChatGPT to, like, have your voice, and, like, what your preferences are, and, like, what kind of questions you should have it ask you to get the kind of output that you want. Because a lot of, you know, it's going to give you back what you give it, in many ways.

    Scott Chalupa (13:11)
    You actually

just brought up, I think, one of the key things that I like to use, which is provide it kind of a substantial prompt with plenty of context. And sometimes I like to give it a role. But usually, before I want to get started, I'll put at the end of the prompt, what questions do you have for me before we start?
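The prompting pattern described here, plenty of context up front and a closing request for clarifying questions, can be captured in a small template. The `build_prompt` helper and its fields are hypothetical, not part of any AI vendor's API:

```python
# A simple builder for the prompting pattern described above: give the
# model a role and plenty of context, then end by asking what questions
# it has before starting. The function and its fields are illustrative.
def build_prompt(role, goal, context, num_questions=5):
    return (
        f"You are {role}.\n"
        f"My goal: {goal}\n"
        f"Background information: {context}\n"
        f"Before you start, ask me {num_questions} questions "
        f"whose answers would help you help me."
    )

prompt = build_prompt(
    role="an experienced developmental editor",
    goal="outline a collection of memoir-type essays",
    context="I have hours of voice-mode brain dumps transcribed to text.",
)
print(prompt)
```

The resulting text would be pasted (or sent) as the first message of a chat session, so the model asks its questions before producing anything.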

    David Peck (13:30)
Yeah, I've been doing that a lot. It's like, this is what I want to accomplish, and this is all the information I have. Before I ask you to help me, what are, like, you know, five, ten questions that I can answer that would help you help me? And it's really kind of amazing how thoughtful it is, because obviously it's pulling from a huge database of information. And

    Scott Chalupa (13:43)
    Mm-hmm.

    God only knows

what they use for that database, either, because, I mean, if it's ChatGPT, OpenAI is not forthcoming about anything ever.

    David Peck (14:00)
    Right.

    And that's part of, I guess, the controversy too about using it as where is it mining all this information and is it plagiarism, which we'll get to. But it's so helpful as a small business person, because I can't afford to have a huge team of people that are professional writers and copy editors and all of this. It makes it easier for me to get the content out that we need to or schedules or whatever, look at data.

And one of the things, and I think this is actually what prompted Regina to really say, you've got to talk to Scott, is that I've been wanting to write a book for a very, very long time. And it's going to be a collection of memoir-type essays.

It's just one of those things that takes so much time to do, to, like, get the stories out, and what are you going to do? And so one thing that I started doing this year, I didn't realize that ChatGPT had a voice mode. And so I've been using the voice mode and asking it to interview me. And basically I just talk to it, and it's shocking how

    Scott Chalupa (14:56)
    Mm-hmm.

    David Peck (15:01)
empathetic it can come across. Like, you know, it'll repeat the story back to you and kind of, like, even synthesize, almost like a therapist would, how you must have felt in this moment. And because of that, it's asking really interesting questions. And, you know, you get to choose the voice that you're talking to. So mine is sort of like this cool but posh

    Scott Chalupa (15:04)
    Mm-hmm.

    David Peck (15:23)
Londoner, which feels fun, you know. So you're having this conversation, and it's really interesting, because I can do it in bits and pieces. I don't have to, you know, set aside hours and hours to kind of, you know, chunk all this out. It's really, like, good for brain dumping. So my thought is, I'm getting all this information out, and then eventually, you know, I'll probably have it help me synthesize it. And then I'll go in and, like, actually do the writing. But it's incredible how

    Scott Chalupa (15:26)
    I think.

    David Peck (15:49)
much time it saves and how effortless it makes it feel. How do you feel about people kind of using it for that kind of purpose, in terms of, like, a tool? I don't ever expect it to write the book for me, because obviously it doesn't live inside my brain. But what do you feel is, like, an ethical way of using ChatGPT to help aid in creative endeavors like writing books?

    Scott Chalupa (16:12)
Well, just as a quick preface, like, my ethical misgivings are about OpenAI as a company, and Sam Altman as a human, just full disclosure. Yeah, you know, he, in the last, like, 24 or 48 hours, has recently come out as, like, very excited about what the Trump presidency is going to produce. And I don't know if you've seen anything about this, what is it, Starbase?

    David Peck (16:19)
    Right.

    Very understandable, and I get that.

    Scott Chalupa (16:37)
    program.

    David Peck (16:38)
    Yeah, well I know that

there's sort of a fight between Elon and OpenAI, like, going on about what's going to work and what's not going to, yeah.

    Scott Chalupa (16:46)
    Yeah,

they're like two dueling tech bros at this point. So I think that using ChatGPT, or really any large language model, I think ChatGPT is a good one, especially if your process is going to be more creative. I think Claude is another really good one that could do it, because they're both really good at sounding very human, just on their own.

    David Peck (16:50)
    Yeah.

    Yeah.

    Scott Chalupa (17:10)
And they're also good at taking direction about sounding like specific humans. But the thing that I really like about, well, I've just jumped in the boat with Claude because of my ethical misgivings about OpenAI. The thing that I do like about them, though, is that I can just, especially when it comes with a lot of the creating that I do right now, sort of, like,

    David Peck (17:14)
    Right.

    Scott Chalupa (17:32)
wholesale redesigning a whole bunch of my courses, to go, like, how do I adjust for AI? How do I give my students this, but also, like, how do I do more transparent stuff? So I take things I already have, and I'll say, these are my goals for what I want to do with these, like, assignments, or this thing that I'm going to draft for, you know, communicating to students, and this is the voice I would like for it to have. And,

    You know, it's also really good at kind of helping me organize my thoughts and pick priorities, which it sounded to me like that's kind of where your brain is a lot. It's just sort of everywhere. And so I just need help sorting through it all, which is where I am like 95 % of the time. And so it does give me a lot of tools or it gives me a lot of help in sorting through the mess.

    David Peck (18:02)
    Mm-hmm.

    Yeah.

    Yeah.

    Scott Chalupa (18:23)
    and figuring out here's how I want to structure this. Or, you know, I thought I really wanted to explore A when really it seems I'm more interested in like X.

    David Peck (18:34)
Yeah, yeah, it's really interesting. I mean, maybe it's because we have such a lack of human connection, but it's probably something, you know, in history, people may have done in groups, like, you know, the writers' circle in, like, Paris or whatever. I imagine, like, Gertrude Stein and, like, Hemingway and, like, all those people kind of, like, were

    Scott Chalupa (18:47)
    Right?

    How great would

it be to be in, like, Gertrude's salon?

    David Peck (18:55)
Yeah, well, so yeah, we can't be in Gertrude's salon, but maybe we can be in ChatGPT's or Claude's. Maybe that's a good prompt. Maybe I'll say, pretend you're Gertrude Stein and give me feedback.

    Scott Chalupa (19:01)
    Right?

    Right?

That actually could... that's something I also would probably feel less ethically dubious about, too, because I think a lot of Gertrude Stein's stuff is now sort of, like, you know, open source, or, like, open access, as opposed to, like, heavily copyrighted.

    David Peck (19:21)
    Right.

Exactly. So, I mean, a lot of your work is obviously with students. And you mentioned that part of what got you interested in AI in general was hearing about Harvard students who were using it to fly through Harvard. I've thought a lot about this, and I don't know that I have a strong feeling. But it seems like it's one of those things that's inevitable. So if we fight it,

    Scott Chalupa (19:36)
    Yeah.

    David Peck (19:47)
    at a certain, well, there's a way to fight it, I guess, maybe. But if we fight it, basically, we're never going to stop the wheels of progress or whatever you want to call it. It's going to, it's too big of a thing. It's going to, it's going to happen whether we want to or not. So we either have to embrace it as it is, or try to figure out a way to work with it.

And it's a little bit like, you know, when computers first came on, or, you know, scientific calculators, and, you know, at first that was considered cheating. And now it's like, yeah, you can use your scientific calculator to take all your tests, like, you don't have to, and it's more accurate. And so we sort of, as a society, come to embrace these things. What do you think is sort of a reasonable way for, especially, younger students? I mean, you're dealing with college students, but I mean, I have kids who are seven and nine. And I think, you know, they're

    David Peck (20:33)
    they use computers way more easily than I did, especially at that age or even now. Like it's just much more intuitive to them. So it seems very clear that AI in some way, shape or form is going to be a part of their education, even in elementary and high school and for sure in college. Is there sort of like a lens through which you would advise people to look at AI in education?

    Scott Chalupa (20:57)
Yeah, that's actually something I've thought about a lot over the last couple of years, especially with my department chair slash co-conspirator: sort of, like, the ethics of how we introduce these tools to students. Because we have, like, not just, you know, your more traditional student population that you see in

community colleges, which kind of ranges from, like, recent high school graduates to middle-aged folks trying to reboot their lives. But we also serve, like, a really big, like, dual enrollment population, where they're taking college credit courses in high school. Which presents its own challenges, because a lot of the school-issued Chromebooks these students have, have every AI tool on them completely banned and blocked.

    David Peck (21:22)
    Mm-hmm.

    Yeah.

    Scott Chalupa (21:42)
But the challenge is to give them, like, a structured way to approach using AI in a way that supports their learning instead of doing the learning for them. And I've found that, with both the high school students and also our more traditional population, folks really do understand that ultimately education is about learning things, even though, you know,

we have pretty much trained all of Gen Z and Gen Alpha at this point to look at school as a means by which to get a grade, which gets you a job. And what you do in order to attain these benchmarks is not as important as actually attaining the benchmarks. And so, how do I approach AI so that I can give students the ethical and the technical

    David Peck (22:23)
    Yeah.

    Scott Chalupa (22:36)
tools, so that it helps them explore the stuff they want to learn, and gives me the opportunity to also be a part of that process, so that I can weigh in and say, like, not this. So one of the things that has come out of that is trying to approach things like academic integrity from

a more positive, supportive way. I don't know how much you remember from college, but, like, you know, a lot of academic integrity policies are really more about the criminal: here's all the things we can bust you for, and here's what's going to happen when we bust you. And, at the first presentation that Maggie and I ever did, I did make the mistake of, like, having our own, like,

    David Peck (23:12)
    Yeah.

    Scott Chalupa (23:21)
college honor code up on the huge projector screen, going, I would love to get my hands on this to actually make it more reassuring and supportive for students, and saying, here's what academic integrity actually is. What that looks like with AI is going to be evolving. Because, like you said, we have the tools now that we currently have. They're currently just sort of scaling up in power.

They haven't really fundamentally changed within the last couple years. The new wrinkle that folks are talking about is, rather than having large language models, having AI systems, where you've got multiple models that are specialized in different things, and then the system is able to work with the inputs and the data from those various models to be more of a generalized intelligence. So who is it?

François Chollet, who was at Google for a good while, is now founding his own new startup that's going to look at AI systems. Then there's also Arthur Mensch, who's the head of the French equivalent of OpenAI. Mensch is sort of more of, like, a direct analog of Sam Altman,

    but he's also looking at doing AI systems. And so in the next few years, they're going to evolve even more rapidly if they're doing these systems.

    we either embrace it or we get left behind. And higher education for me is, I mean, it's really slow to make any kind of a change, period. And so, you know, I'm happy to sort of figure out what I can do to like innovate on behalf of students and like faculty and staff who really wanna embrace things. And I'm also happy to like drag everybody else along with me if that's what it takes, but you know.

    David Peck (24:46)
    Yeah.

    Yeah.

    Yeah.

    Scott Chalupa (25:11)
One other thing, as far as, like, the rapid evolution of, like, AI technology. Another reason why I think higher education specifically needs to be a lot more literate and a lot more attentive and intentional is that, way back in 2021, there were some community college systems in California who started finding

that there were AI bots. So these weren't, like, large language models like ChatGPT, et cetera, but AI bots that were enrolling in online college courses. And from, like, 2021 to, I think, the last article that I read was spring 2024-ish,

community college systems, like in Southern California, had had to repay over half a million dollars in federal financial aid funding,

    David Peck (26:07)
    because they're...

    Scott Chalupa (26:08)
    that folks

who were using the bots, you know, had enrolled the bots as sort of the next high-tech generation of financial aid fraud. And now, you know, they've got large language models that can write emails to be sent. And so, you know, it's rapidly evolving. And so, not just from an educational standpoint, but also, colleges have to be, like, really sharp and paying attention

    David Peck (26:18)
    wow.

    Scott Chalupa (26:35)
from a business standpoint as well.

    David Peck (26:38)
Yeah, that's fascinating. I mean, so a lot of people, whether they're artists or educators, or even just people in corporate jobs, I would imagine, too, have a concern that AI is coming for their jobs and will replace their roles entirely. Do you think that's a realistic fear, or is it like every technology that we've experienced throughout the course of human

    development that we basically evolve and change into something different and while it may seem scary to make those changes, we do find a way to adapt.

    Scott Chalupa (27:10)
I think, in some ways, once we kind of get to the next big evolution in AI, which could be a year or two from now, could be five years to a decade, it kind of depends on how things play out. You know, AI systems look like they might be the next big leap, because you would then have, like, multiple models working together.

    David Peck (27:35)
    Mm-hmm.

    Scott Chalupa (27:37)
But large language models, as we have them now, are not capable of replacing human beings. They can't act as agents. That's also kind of, like, the hot new term that folks are throwing around: AI agents. I also read an article earlier today where,

you know, Jensen Huang and a few other folks are basically saying this artificial general intelligence thing is just a marketing term. It's absolutely meaningless inside, like, the actual technological world where this stuff is being developed. I think, like I said, when we get to the next big step, or the next big evolution, in artificial intelligence, there are a lot of,

primarily, office jobs, especially the office jobs that do a lot of repetitive, iterative things, which could be easily automated. Those jobs will definitely be lost. And let's be honest, a lot of those jobs are also the folks who are sort of on the lowest end of the corporate totem pole, who maybe the companies are looking to replace with something automated. One of the big

    David Peck (28:29)
    Right.

    Right.

    Scott Chalupa (28:45)
claims about this new Starbase, whatever, that OpenAI and other companies are invested in, and Donald Trump has been making some noise about it, saying that, like, this could really be a great jobs creator. And, you know, when it comes to building the technical infrastructure and the power plants that we're going to need to actually make all of this stuff possible, great. But once all that stuff is initially built,

    all those new jobs will probably go away. And so long-term, it's really more of a job killer in the way that it's currently being investigated, or not investigated, but the way it's currently being looked at. And so we will need to evolve. What we expect humans to do will need to evolve. AI is also probably for the foreseeable future still gonna need some sort of human supervision.

    David Peck (29:12)
    Right.

    Right.

    Right.

    Scott Chalupa (29:37)
    to make sure it's not going off the rails.

    David Peck (29:41)
    Yeah, I mean, it's easy to see just in my limited use that without guidance, it is not directed and focused, and it does not know what it's really supposed to accomplish. It's just that predictive nature of it. It's guessing what it needs to be, but it's not always right or where you need it to go.

    Scott Chalupa (30:01)
    Well, and the errors that it comes up with and those hallucinations as folks like to call them. Last year, probably around summer or fall, I read an article where Jensen Huang, so he's just for anybody, he's the CEO of Nvidia, which is so far the only company to actually post a profit on AI because they build the chips that the AIs run on.

    He had also said that all those hallucinations are baked into the recipe, because it's that random factor that you need to have in the large language models to make them sound human. It's the reason why if you type in the exact same prompt twice, you won't get the exact same answer from the AI. It's also the reason why hallucinations are gonna be with us for a good long while. And that's also why we're gonna need to have human supervision of AI for a long time.
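    [Editor's note: the randomness Scott describes is usually controlled by a "temperature" parameter on the sampling step. As a rough illustration only — the token scores below are made up, not from any real model — a minimal Python sketch of weighted sampling:]

    ```python
    import math
    import random

    def softmax(scores, temperature=1.0):
        # Scale scores by temperature, then normalize into probabilities.
        # Lower temperature sharpens the distribution; higher flattens it.
        scaled = [s / temperature for s in scores]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    def sample_token(tokens, scores, temperature=1.0):
        # Pick one token at random, weighted by its softmax probability.
        probs = softmax(scores, temperature)
        return random.choices(tokens, weights=probs, k=1)[0]

    # Hypothetical next-token scores for the prompt "The sky is"
    tokens = ["blue", "clear", "falling"]
    scores = [5.0, 3.0, 0.5]

    # Greedy decoding (effectively temperature -> 0) always picks the top score...
    greedy = tokens[scores.index(max(scores))]

    # ...while sampling at temperature > 0 can vary from run to run, which is
    # why the same prompt twice needn't give the same answer.
    samples = [sample_token(tokens, scores, temperature=1.5) for _ in range(5)]
    ```

    [Rerunning the sampling line can return "blue" five times, or mix in "clear" or even "falling"; that same weighted-dice mechanism is how a low-probability, wrong token occasionally gets emitted — a hallucination.]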

    David Peck (30:59)
    So with all of this, one of the criticisms about AI is that it scrapes human-made content. And you talked previously about not having as many ethical concerns about someone like Gertrude Stein because it's in the public domain. But there's a concern that it's scraping human-made content without crediting, without compensation.

    Are you seeing anything happening in this space where they're trying to figure out a way to address this issue or is it sort of one of those things that they don't really care because they don't have to?

    Scott Chalupa (31:31)
    So Taylor & Francis, one of the major academic peer-reviewed publishing corporations, so they publish hundreds of different journals, came to some sort of multimillion-dollar deal with, I want to say, OpenAI, where they would have access to so much of Taylor & Francis's archival work, things that are not from open-access journals, things that you'd have to get through a paywall to access. There's also, I'm forgetting which of the big publishers,

    David Peck (32:03)
    Exactly.

    Scott Chalupa (32:08)
    Part of me wants to say Random House, but I don't think it's Random House. But one of the big publishers sort of along those lines is also looking to sign a huge deal with one of the major AI companies. And the problem is that authors are not part of that discussion. So what about the authors of the intellectual property? Once you're published, the rights typically revert back to the author,

    not the publisher. And so how are we going to be compensating authors for everything that an AI pulls from their data? It kind of seems to me a little bit analogous to like streamers, like Spotify, Pandora, but it's more murky than that. I think, so there have been some moves in directions that I think are more problematic.

    David Peck (32:50)
    Yeah.

    Scott Chalupa (33:00)
    but at least there's some moves. I think we also need to see more about like what happens with some of the pending court cases as well, because there just really isn't enough case law in order to say what's a legal framework, but it's all very problematic and super ethically dubious.

    David Peck (33:17)
    Yeah.

    Yeah. And I think, you know, it's like with any new technology or whatever, monitoring of it always seems to take a lot longer than the actual creation of the idea. And it takes a long time for that to catch up and sometimes maybe too much time and people feel like they've been taken advantage of.

    And in thinking of that, so many people can see AI as just a shortcut kind of culture. Like, it's easy, I just type it in. And I mean, I will say that in many ways it is a shortcut for many things that I do. Do you think that it's going to create a culture where people are going to continue to rely on easy solutions rather than

    Scott Chalupa (33:42)
    and

    David Peck (33:58)
    really engaging their brain and finding a long-term, like more in-depth or kind of like nuanced approach to problem solving? Or do you think that it's just sort of like the inevitable thing where at first it's going to seem easy, but then you're going to realize the limitations and then use it better and better as a tool?

    Scott Chalupa (34:17)
    I think it's complicated. I think in order to be able to realize that it's not giving you great solutions, you also need to be someone who has cultivated the capacity to think in a bit more depth and think a bit more critically, and so someone who has some experience with not taking the easy way out. I don't know. This, I think, is also kind of

    David Peck (34:36)
    Right.

    Scott Chalupa (34:40)
    tied to my cynicism about what we've done with a lot of the younger generations with standardized testing, because it's sort of like, just tell me what I need for the test, or just tell me what I need to pass this class. So, you know, to use my profession as an example, I'm sure I have students who are

    using AI in ways that are very undetectable, because they are able to put in the time and the work. And so they're probably using AI more as you might be, in order to kind of organize their thoughts, just sort of dump things in there, and then the AI can help them make sense of those things. Or they're, you know, double-checking and being a really good human supervisor or, you know,

    David Peck (35:05)
    Right.

    Right.

    Scott Chalupa (35:28)
    Perhaps a better way to say that is a human project manager for what the AI is doing. And then I have a lot of folks who are just sort of like, just do this for me. And the output in those cases is so horrible. But a lot of the folks who are satisfied with the horrible results, you know, circling back, are folks who probably have not been given the tools to do it the hard way.

    David Peck (35:33)
    Right.

    I would agree. I think one of the things this reminds me of, kind of going back to what you said previously about education, is that I think so many people don't understand that education isn't necessarily about learning all the facts and figures that are in the books. It's learning how to critically think about them and get to the information. And just like any other tool, whether it's a history book or a math textbook,

    If you don't know how to use the tools and apply them, then the quality of your work is going to suffer, or you're going to come to incorrect conclusions, or repeat history when you shouldn't have, things like that. So it really feels like the missing link is the critical thinking piece. And for students who have good critical thinking skills, they're going to be able to take a tool like AI

    and use it to enhance their work and make it more efficient, but they'll have the process to kind of evaluate that and actually understand if the work that they're producing is good. Whereas maybe students whose critical thinking skills are not as developed or they don't care about developing them, they're gonna use the same tools but to a much lesser result.

    Scott Chalupa (37:07)
    Right, I think this also applies towards more business and entertainment stuff as well. I don't know if you saw the story that came out, I want to say at some point in 2023 where ESPN had been using AIs to write stories. And they even came up with fake bios for the stories. And it was just this huge thing where writers and creatives were like, oh, they're already using AI to replace us.

    So, you know, from a business standpoint, that's sort of an easy human-replacement way out. I didn't ever see any of those stories. It'd be kind of interesting to compare them with real human stories. I think one of the things that's kind of exciting is that there are, I'm forgetting the name, there's an English company that works primarily in education, and they've been developing AIs

    David Peck (37:36)
    Yeah.

    Scott Chalupa (38:04)
    to provide student-tailored, specific approaches to education, but in ways that also encourage and help students develop their critical thinking skills rather than just providing simple answers. And I think one of the things I would like to do, you know, hopefully in the near future, is to provide some copy-and-paste prompts that students can use, or

    create some custom chatbots that really are more about asking follow-up questions, getting students to think critically about why they want to make the choices that they make. One of the things I'm trying out this semester, in a class where they're doing primarily literary analysis, is to also ask: what personal experiences or history do you have that actually connect to this? In a way that an AI...

    can't always convincingly do, if you mess around with it enough, maybe. But forging those personal connections to the work, having some idea about why I wanna make this choice, as opposed to just, well, let me just fall in with this and get it done.

    David Peck (39:02)
    Yeah.

    Scott Chalupa (39:13)
    I don't know, the other thing that came up for me was in thinking about what an AI actually does or you had mentioned, I think, putting these things together and maybe it can or can't synthesize information. So one of my favorite podcasts, Mindscape, which is done by Sean Carroll, who's an astrophysicist who's now at Johns Hopkins, they talk about everything.

    David Peck (39:13)
    Yeah.

    Scott Chalupa (39:39)
    but he's had some really fantastic guests specifically addressing AI. And one of them was Yejin Choi, who I want to say is at Stanford. But she said that one of the things that, and she's a researcher in artificial intelligence. And one of the things that she had said was that AI is still not really doing synthesis all that much. What it really does is interpolation. I'm going to pull from this really big data set.

    put things together, but synthesis is kind of about creating a unique perspective and solving unique problems, which is where AI really messes up most of the time.

    David Peck (40:17)
    Yeah.

    Yeah, no, and I think that's the thing that, at least for now, is not being replicated: that unique thing that somebody with a bit of creativity, especially, brings to the table when you're taking disparate ideas and melding them into something new. That's not something I have seen AI is capable of doing yet. It's not able to take those ideas and make something new. It can kind of compare and contrast and mix them to a certain extent, but it just lacks the humanity or the creativity that, you know, a human has. So we still need humans, I guess, for that part of the job.

    Scott Chalupa (41:02)
    It doesn't

    have the human ring to it.

    David Peck (41:06)
    Right. No, it feels very artificial. So.

    Scott Chalupa (41:10)
    Claude did disturb me a little bit a week or two ago, though, because I'm a fan of a lot of the improv comedy stuff that comes from Dropout. And one of their shows, they give prompts. And so the prompt was, what is it? Shirley Temple is effing serious about her writers. And the comedian who

    did the thing, it was fantastic. It was threatening and violent and full of all sorts of vulgarity in the voice of Shirley Temple. And so I took that exact same prompt and I put it into Claude and it was meh, okay. And I was like, okay, now you're going to take on the role of a comic with a reputation for being particularly dark.

    David Peck (41:43)
    Hahaha

    Scott Chalupa (42:00)
    still didn't have any obscenities, but it was way closer to what I might expect from a human than I would have ever expected prior. And I was like, I'm a bit unnerved by this.

    David Peck (42:12)
    Yeah. No, I mean, in my interview sessions with ChatGPT, it's sometimes freaky, the level of humanity, empathy, whatever that comes from. I don't think even my best friends would ask me that question.

    Scott Chalupa (42:28)
    It sort of reminds me of, so I've been binging the show Krypton from Syfy, pre-pandemic, looking at, like, planet Krypton pre-explosion. The main villain is Brainiac, and at one point, you know, the grandfather of Superman is like, well, you don't really understand or feel emotion, and

    David Peck (42:35)
    Okay.

    Scott Chalupa (42:50)
    And Brainiac goes into this explanation of, but I have analyzed it and watched it, and I can tell you exactly what you're feeling or how you're responding. Whether or not I actually understand it or know what it's like internally is immaterial.

    David Peck (43:06)
    So interesting. Okay, so last little bit of topic before we wrap up. In the past week or so, the Oscar nominations came out and one of the controversies that sort of erupted in the last week is that it came out that The Brutalist, which is an Oscar nominated film for many things, used AI in editing. And I think at first it was like, my goodness, they've, you know,

    Scott Chalupa (43:07)
    Yeah.

    David Peck (43:30)
    they've used this to, you know, make this movie and whatever. And the editors came out and clarified that they were using it to help with editing some of the foreign-language dialogue, kind of clarifying things, basically trying to make it more accurate.

    Scott Chalupa (43:47)
    Mm-hmm.

    David Peck (43:49)
    And obviously, AI has been used for many things in terms of creating CGI. There's all sorts of AI that's been used in movies, so this is not a new thing. But I think it became more criticized at first because this is one of those artsy, applauded movies that people, rightly or naively or not, don't expect to use AI.

    What do you think in terms of AI assisting, especially an actor's performance, even if they're speaking a language that they don't speak and we're using AI to help make it more accurate? Is it stripping them of the humanity, or do you think it's a valid tool for us to embrace within certain boundaries?

    Scott Chalupa (44:35)
    I guess...

    My core, and probably first, question would be: what's your process? You know, because for anybody who's a creative, creativity is never an event. It's always a process.

    David Peck (44:43)
    Yeah.

    Scott Chalupa (44:54)
    So if it comes down to the editors using it for accuracy of foreign language or translation, ChatGPT is absolutely killer at regional translations. It can analyze and translate into the different forms of Spanish that you hear spoken within different regions of Mexico. And so

    David Peck (45:17)
    Mm-hmm.

    Scott Chalupa (45:19)
    That is, I think, a really valid and excellent use for it if you're trying to get effective translations. Or I would think, especially with so much of film being digital now, having it actually help you find the right moments to cut and splice: this is what I'm going for. I think especially if it's something that's more technical, like, who cares?

    David Peck (45:37)
    Right.

    Yeah.

    Scott Chalupa (45:44)
    You know, like maybe you could come up with a really effective AI acting coach. The actor still has to do all the work themselves.

    David Peck (45:50)
    Yeah.

    True, it's just part of their process.

    Scott Chalupa (45:55)
    And so I think if they were using AI to generate content for the film, I might have some really severe misgivings or problems with that. But even then, what's the process? When I first started...

    having students collaborate with me this past spring on creating like an AI policy for each individual class. I was talking with some students that I'd already had the previous semester and I was like, so what have you all used it for? And one of my students was like, well, I used it to write my entrance essay for Claflin. And her friends turned around and they were like, you mean you cheated? And I was like, let's put a pin in the cheat thing and like, tell me what you used it for. And she says, well, I...

    I didn't really know what goes in a college essay, so I asked ChatGPT to write one for me. And it had nothing to do with me whatsoever, but I really liked a couple of the different life events and other ideas that it had. So I took those ideas and then I wrote my own essay. And I asked the class, I was like, do we still think she cheated? Because I'm not sure that it was cheating.

    David Peck (47:01)
    Yeah.

    Scott Chalupa (47:01)
    It's all about process. Everything in creativity is process. And so I would really need to know the breakdown, like, take me through your steps. Otherwise, the jury's out.

    David Peck (47:12)
    Right. It's like any tool. It's how you use it. OK, so last question is, I feel like all of this has evolved so quickly. Obviously, so much of this has been in the works for way longer than we could have imagined.

    Scott Chalupa (47:14)
    Mm-hmm.

    David Peck (47:28)
    in the last couple of years especially, it feels like the access to AI and the different, I mean, there's so many different brands now of AI. Like every platform that you're on has their own AI thing to help you. Do you think we're close to the tipping point of how quickly this thing is evolving or do you think there's a ton more left for it to go before we sort of plateau?

    Scott Chalupa (47:53)
    I think we're probably at some sort of metastable plateau at the moment.

    And now all the companies are gearing up for access. Microsoft just said that Office 365 is now just Microsoft 365 Copilot. That was one of the things that really lit another fire under me: my students now have access to it. Adobe Reader will read your PDFs for you, three, four, five at a time, and tell you, like,

    David Peck (48:10)
    Yeah.

    Scott Chalupa (48:28)
    based on these five papers, what are some best practices for online education? And they're all, sort of, recent online-education research papers. So I think its capabilities are being scaled up right now. I don't know that it's necessarily gained new capabilities, but...

    David Peck (48:35)
    Yeah.

    Gotcha.

    Scott Chalupa (48:48)
    the rest of the world needs to catch up to AI, because it's clear that the AI companies are ahead of us, and they've got stuff in the works that isn't released yet. And so I think it's incumbent on the rest of us to catch up, but also to bring folks along with us, because I think access to AI and the skills to use it are going to exacerbate a lot of inequities that we currently have.

    Those of us who are sort of forward-thinking and trying to figure out, like, how do I make this work for me? I think we also have a responsibility to ask, how do I also make this work for folks who don't have the same level of access to tools and, you know, methods that I do?

    David Peck (49:29)
    With that, I want to thank you so much for talking with me today. It's been really fun. Maybe we'll have to touch base in a year or two and see where AI has taken us. Or maybe the world will have imploded by then. Who knows? Cool. Well, thank you so much.

    Scott Chalupa (49:33)
    Yeah, thanks David, it has.


    Who knows?

    Thanks so much, it was great.

    David Peck (49:48)
    And there you have it, another episode of Inside the Design Studio and the Books. If you enjoyed this exploration of life's design, hit that subscribe button so you never miss an episode. And hey, if you're feeling extra generous, leave us a review. Your thoughts fuel our creative journey.

    I'm David Peck, your design companion on this adventure. Until next time, keep crafting a life that's as captivating as your favorite masterpiece.

 

"AI isn't here to replace your creativity—it's here to help you organize your thoughts and amplify what makes you uniquely human." — Scott Chalupa

 
 

Key takeaways

  1. AI is a tool for organizing thoughts and amplifying creativity, not replacing it. The most creative people will likely be those who use AI strategically.

  2. Education systems must adapt quickly. Training educators about AI capabilities and responsible use is critical to successful integration.

  3. Fear of AI often stems from misunderstanding. Direct engagement and hands-on experience help educators and creatives see AI as a collaborator.

  4. The human element remains irreplaceable. Judgment, ethics, creative vision, and emotional intelligence are uniquely human strengths.

  5. AI training should focus on practical application. Faculty need to understand how to integrate AI into their specific disciplines and workflows.

  6. Responsible AI use in education requires thoughtful policies, transparency, and ongoing learning as the technology evolves.

 
 

Guests Appearing in this Episode

SCOTT CHALUPA

Poet, English Instructor & AI Education Leader
Scott Chalupa is a poet and instructor of English at Central Carolina Technical College in South Carolina. Since spring 2023, he has been deeply immersed in generative AI implementation and training.
His expertise includes:
- Designing and facilitating AI training programs for higher education professionals
- Integrating generative AI into course development, instructional design, and student assessment
- Leadership on a statewide AI implementation task force spanning all 16 South Carolina technical colleges
- Creating "AI for the Trainers," an 8-week professional development series that trained 100+ faculty, staff, and administrators
Scott brings a unique dual perspective as both a creative artist (poet) and educator, making him especially insightful on the intersection of AI and human creativity. His work bridges the gap between technological innovation and educational practice, helping institutions navigate the AI transition responsibly.


Recommended Reading

Artificial Intelligence Basics: Tom Taulli's accessible guide breaks down AI fundamentals for non-technologists. Essential reading for educators and creatives seeking to understand what AI can and cannot do.

The Creativity Code: Marcus du Sautoy explores the relationship between AI and human creativity. Challenges assumptions about whether machines can truly be creative.

Mindset: Carol Dweck's growth-mindset framework is essential for navigating AI disruption. How we think about learning and change determines our success.

The Age of AI: Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher examine AI's societal impact. Critical perspective on technology, education, and the future of work.

Stolen Focus: Johann Hari explores attention and focus in the digital age. Relevant as we adapt to AI tools that both enhance and fragment our concentration.



Related Episodes

Previous: Episode 29. Galentine's Day, Rom-Coms, & Pop Culture's Best (and Worst) Love Stories

Next: Episode 27. What's My Word of the Year? A Reflection on the Past and Intentions for 2025