Podcast Season 3, Episode 7 Transcript
Babette Faehmel, Co-host: 00:00:08 
Welcome to Many Voices, One Call, SUNY Schenectady's very own podcast, where we talk
                     about the things that matter to the college community and for diversity, equity and
                     inclusion. This is actually a follow-up episode to one we did in October 2023, which
                     was season three, episode three, which happens to be one of our most popular ones based on the number of downloads.
                     So, in that episode we talked about what artificial intelligence could mean for education,
                     and we had a group of professors from SUNY Albany's AI Plus initiative with us. So,
                     what we realized then during that episode was that we really wanted to get at the
                     student perspective. So since then, Alex and I have sent out a small survey asking
                     students to volunteer their views, and then we invited some students and one of our
                     academic specialists to join us. So we get a nice diverse perspective.
 Alexandre Lumbala, Co-host: 00:01:09 
Alexandre Lumbala, the student co-host here, getting over a little cold, so my voice
                     is gonna be a little weird, and we have a couple - or three other guests today, less
than usual. I'm inviting them to introduce themselves. We can start with Lonny,
                     to my left.
Lonny Davenport, Guest: 00:01:25 
How's everyone doing today? My name is Lonny Davenport.
Maura Davis, Guest: 00:01:30 
Hi everybody. My name is Maura Davis and I'm an academic specialist here at the college
                     with the TRIO program, and before that I worked in our Learning Center for about eight
                     years as a professional writing tutor.
Wesley Rush, Guest: 00:01:42 
Hi, my name is Wesley Rush. I'm a business administration student here in Schenectady;
                     Schenectady native, and glad to be here. Thank you.
Babette Faehmel: 00:01:50 
Yeah, and you're a repeat guest.
Wesley Rush: 00:01:52 
Yes, ma'am.
Babette Faehmel: 00:01:53 
You were with us last episode too. Lonny, what's your field, what's your major?
Lonny Davenport: 00:01:59 
I'm studying communications.
Babette Faehmel: 00:02:01
Communication. Okay, that's very fitting for AI, wouldn't you say?
Alexandre Lumbala: 00:02:04 
Oh yeah, 100%, that's nice.
Babette Faehmel: 00:02:09 
Well, okay. Um, today there might be a lot of like paper rustling because we have
                     our survey results right here in front of us and one of the first questions we asked
                     was, basically how do students feel about the – well - the term generative AI? And
                     I have to say it was about evenly split between dread and hope, with a slight
                     lead, actually, for those who felt either dread or a mix of fear and hope.
                     So, Lonny, what would you say when you hear AI? Is it dread, hope or indifference?
Lonny Davenport: 00:02:50 
It's a little bit … it depends on how AI is being used. For education purposes, I'm
                     curious to learn how we can help students, how we can advance students, aid them in
                     teaching, make their learning experience a little bit better. What does scare me,
                     however, is the private sector AI, or the, um, you know, the apps that
                     kids can get a hold of. I'm a father, my son's 11, and I've had a couple situations
                     where I've had to delete some of his AI apps off of his laptop and his phone and tablet.
                     So, I am one that is mixed. I'm more worried about it from a safety perspective.
Babette Faehmel: 00:03:51 
Yeah, how old is your son?
Lonny Davenport: 00:03:52
My son's 11. He'll be 12 in May.
Babette Faehmel: 00:03:56 
Okay, I see. On that question of apps, I'm really, I'm really interested in that. Like, how did he get those apps? I mean, were they just basically on the phone? Were they just preloaded?
Lonny Davenport: 00:04:09 
Yeah, you could download them. You can go to the, you know, depending on your phone,
                     the Google Play Store.
Babette Faehmel: 00:04:14 
Yeah, yeah, so you need to go and seek them out.
Lonny Davenport: 00:04:17 
Yes, you have to go and find them. He heard about them from, I'm sure, other kids
                     that have these apps. He's in middle school, so he's dealing with outside influences now.
                     So he went and searched for them.
Babette Faehmel: 00:04:32 
Would you say that most of his friends have these apps?
Lonny Davenport: 00:04:36
I wouldn't know about most of them, whether his friends have these apps.
                     I just talked to my son and when I asked him, you know where'd you hear about it?
                     School. Kids in the hallway. You know what I mean? Maybe eavesdropping or hear something.
                     Somebody talking too loud, and he looks it up for himself. So um, but I would imagine
                     it is popular.
Alexandre Lumbala: 00:05:02 
No, Alex here. Just to give, like, uh, you know, a perspective that's really close
                     to the high school experience… Apps that usually, like, I would use, or I would hear about, like,
                     that's very common. Like, … and this is in a school environment where our phones were
                     like I guess we were just moving away from the button phone you know, so, like you
                     wouldn't even expect apps to catch a lot of fire, but they did. And it was apps that
                     probably kids also should not be using that were catching a lot of like heat. Um,
                     what did I… what else did I want to say? You wanted to point out, Babette, that, um, the
                     apps are sought out. But it's also true that some of these apps are advertised to
                     the children. Like you might be playing a game…
Babette Faehmel: 00:05:48 
I figured! I figured! Because I think actually kids are picking up the stuff much
                     more quickly than - well - people my age do. And also by the time they are ready for
                     college, well, I mean like, [Lonny] obviously your kid is going to be a while, but
                     I think people who are graduating from high school right now, they are already getting
                     pretty AI ... I don't want to say savvy necessarily, but they are definitely using
                     them, knowingly, or not.
Alexandre Lumbala: 00:06:18 
Technology. Social media.
Babette Faehmel: 00:06:20 
Yeah, exactly. So, Maura, how do you feel about AI?
Maura Davis: 00:06:27 
I think I'm on the fence, like Lonny. Um, I tend to be excited about advancements in
                     pretty much anything. I am a child of the 80s. I remember the war against video games.
                     I wanted a Nintendo and every adult in the world thought this is the beginning of
                     the end. This is going to rot their brain out. Um, so there is a significant amount
                     of dread, I think, as an educator, when I see what AI can do. The dread comes in when
                     I think about my students not being able to do these things and to rely on AI for
                     these things. So, I think it's more of a dread of the lack of human interaction and
                     engagement and learning from what AI could maybe give us more than the technology
                     itself, if that makes sense? Because I see a lot of, a lot of really wildly cool possibilities
                     available and I feel like a lot of us are just too timid to start exploring. What
                     can this look like? How can this be a really good agent of change in school? Because
                     we don't want to make a misstep. So I feel like I'm with a lot of educators where
                     I'm really eager to explore what AI can offer, but I'm also very scared of sliding
                     down a slippery slope. Is my introduction of AI, or my condoning AI, or finding ways
                     AI can be used, going to limit my students in some unforeseen way and actually detract
                     from their education?
Babette Faehmel: 00:08:06 
Yeah, totally
Maura Davis: 00:08:08 
The AI having the skills and my students not ending up with those somehow because
                     of the interaction with AI.
Babette Faehmel: 00:08:15 
Yeah, yeah. I think, Lonny, you had a … you were about to say…
Lonny Davenport: 00:08:17 
Yeah to piggyback on what Maura was saying about trying to find the balance between
                     using AI as a tool instead of just leaning on it completely, 100%. From
                     an educational perspective, I love doing research. I love... I'm a child of the early
                     90s so I remember Encyclopedias… Britannica… that's how I was raised doing research.
                     I didn't have a computer when I first started in school. So as an adult there is
                     an excitement to the journey of research, just for me. And AI can take that excitement
                     away. It takes away from the journey. It can…
Alexandre Lumbala: 00:09:08 
It's, it’s a new way. I remember watching movies or just interacting with older folk
                     when I was younger and seeing the … I guess …relationship with you know books, libraries,
                     encyclopedias, and as a kid I thought when I get older that's how I was going to be
                     doing my research. But I quickly realized that's not the case. That's obsolete. It's
                     going to be the internet, Google searching and all that kind of stuff. Yeah, yeah.
Lonny Davenport: 00:09:32 
And even that's fun. It's just finding that balance between, like I said, using it
                     as a tool to advance whatever work you're doing. But not losing the…
Alexandre Lumbala: 00:09:40 
But not losing the previous art that …
Lonny Davenport: 00:09:45 
The authenticity. Exactly, that’s…
Babette Faehmel: 00:09:47 
Yeah, I totally hear you. Um, I have to say that I mean, like part of what I fear
                     is that the, the adventure of discovery, gets lost, or the joy of discovery, because
                     that's also what research is, right? And I don't think that's necessarily the AI's
                     fault. I actually think that's the education system's general
                     fault, because I think in many ways AI… So, okay, full confession, I'm requiring for
                     certain assignments that my online history students, US history students, use AI and
                     I honestly, I don't see AI as very, very, very different from the textbook. The textbook
                     lays out facts and most of what the AI lays out in front of you are facts. Actually,
                     the potential promise is that AI has not been vetted by peer reviewers or whatnot.
                     It doesn't have this kind of publisher stamp of approval. So potentially anything
                     can be false. But that also should then trigger your critical thinking skills, right,
                     where you then double check or you ask questions like how much of this is legit? And
                     unfortunately, what I fear from what I see is a lot of helplessness, like, just like...
                     Not helplessness, but just, like, students seemingly thinking, not all of them, obviously,
                     but some, ‘I don't know how legit this is, and here is where I will stop.’ So, confusion
                     instead of investigation. And that's unfortunate. I don't think that you can blame
                     that on the AI, and also you cannot blame that on the individual student. It's like
                     how… what … It's our schooling. Maura?
Maura Davis: 00:00:35 
Yeah, this is Maura. I want to agree with that wholeheartedly, because I think that
                     it's on us as educators to be developing what we bring to the classroom and how we
                     are evaluating how people are learning, right? So, if we're giving them assignments
                     that either preclude AI, and say ‘you can't use this at all, just get it out of here,’
                     we're not really giving them the assignments for the world that we're in. And if we're
                     not giving assignments that do spark that will to investigate, that will to want to
                     fact check… I'm wondering, and I don't think I have the answer, but I'm wondering
                     what we can do as educators to get people excited about research, to get people into
                     that investigative mindset of ‘let's see what AI tells us, and then let's find out
                     if it's true’. Where can we go to check, and how can we follow up? Because I do see
                     the same thing: Students kind of run into a wall and they're like, ‘well, this could
                     be true or not. Not sure if I'm going to use it in my paper …’ Crickets, crickets,
                     crickets. And that's kind of where people give up.
 Babette Faehmel: 00:12:45 
Yeah, exactly, and that's how we, um. That happens in so many other ways too, right?
                     Like misinformation, disinformation in the news, that kind of stuff. So, what do you
                     do then? Well, you go and find out, you find reliable answers, but how? Like, this
                     is like a skill that needs to be cultivated. But before um, so first of all, let's
                     put a pin in that, that inquisitive mindset, the slippery slope and the human engagement,
                     because I think we're going to get there, but, Wesley, um, we haven't had a chance
                     yet to ask you. Dread or hope, or indifference?
Wesley Rush: 00:13:19 
Save the best for last, right?
Babette Faehmel: 00:13:22 
Yeah, exactly.
Wesley Rush: 00:13:23 
Mine is pretty much dread, and I'm sad to say that, because I see the advances
                     of technology, and I grew up in the 90s, and even since then, like, the advances have
                     skyrocketed. But I also know the negative aspect of just human nature. We tend to
                     abuse a lot of that technology. Um, and that's where I see AI going. Um, when it comes
                     to the apps, I see technology advancing so much that you don't even have to download
                     a separate AI app. It comes already programmed, and even the apps that I have on my
                     phone, I'm realizing that a new AI system is programmed in there that I did not download.
Babette Faehmel: 00:14:13 
Yeah.
Wesley Rush: 00:14:14 
And when it comes to the learning aspect of it, I see it as an extra step that's kind
                     of unnecessary. It takes away from the journey of learning. Even if I were to Google
                     something and get a quick answer, I'm not necessarily learning anything, I'm just
                     getting the answer. So as far as AI being taken advantage of in an academic sense,
                     it's just an extra step that is really unnecessary, because, kind of like Wikipedia
                     and all these other generated information platforms, they could be incorrect, they're
                     not fact-checked, they're not published. So if I'm using AI to get an answer and it's
                     giving me an incorrect answer, where am I? I'm taking unnecessary steps and I'm going
                     backwards in life. And I might as well just do the proper learning steps and the research
                     and understanding the process myself.
Alexandre Lumbala: 00:15:15 
So can I, can I…? I just want to. So, this is just to tell you what I hear, because
                     I feel like there's a lot of people who are going to hear that when you speak that
                     way. All I hear is, and it's probably a very closed mindset, and I'm aware of that.
                     All I hear, is the world is evolving, right? We've seen that a lot of the inventions
                     that the world has given us, has brought us to worse places, even though we thought
                     it was going to bring us to better places. Can we not take on this next thing? That's
                     what you're basically saying. Can we just continue living in the world without AI?
                     We'll probably be okay.
Wesley Rush: 00:15:53 
We would be perfectly fine without AI.
Babette Faehmel: 00:15:54
Yeah, we would be perfectly fine.
Wesley Rush: 00:15:55 
And… I don't think …It's not the technology, it's us. It's how we use the technology.
Alexandre Lumbala: 00:16:00 
Yeah. So, you're saying, just like that tool that's coming, we don't need that, we're
                     good. But it's like as much as you say that, it's like I just feel like it's still
                     going to come anyways.
Wesley Rush: 00:16:10 
It is true, you're right.
Babette Faehmel: 00:16:13
I mean I am sure that AI … I mean what we are talking about here is generative AI
                     - these large language model chatbots. Um, that's one form of AI, that’s like, that's
                     what we are dealing with, like that's what as educators and as students, that just
                     um comes like across our desk and the computer screen. Um I'm sure there are amazing
                     uses in medicine and whatnot. But that's also, I mean, okay, So that can, that, that
                     should be able to happen. That’s fine. But it has been, also kind of, I feel, dumped
                     on us without anybody voicing the desire to have that?
Alexandre Lumbala: 00:16:49 
Yeah, you mentioned that, like the person who, um, released ChatGPT, didn't ask anybody
                     if they wanted it. He just…
Babette Faehmel: 00:16:56 
Oh! Mr. Okay, his name shall remain unnamed, but also Wesley. You brought up this
                     other thing and once again, that's back to the point that AI is not, is not entirely
                     new. The problem is not entirely new, because you mentioned Google, right? I mean
                     back in the days when we had card catalogs and whatnot and opened the encyclopedias.
                     That was cumbersome; it was very time intensive. However,
                     what we do now is that we essentially have outsourced our information needs to private
                     companies that are profit-seeking and, like I mean, Google's parent company is an
                     advertising company, right? So, why… what's wrong with us if we trust them? I mean,
                     like people were already saying before AI, like - I see that in my students - ‘I did
                     research, I researched this.’ And I was always thinking, like, actually, you didn't
                     research anything, you searched for something, you found an answer and then you ran
                     with it. And research is about questioning things, it's about being confronted with
                     material and then coming up with questions, and it's not about just finding answers.
                     That's already been happening. Lonny?
Lonny Davenport: 00:18:16 
Yes, this is Lonny. Um, one of the things that I brought up was… um, how there
                     are so many different private apps and how each one of us can come up with an AI app
                     and put it out. I think, if we are going to take steps to use AI properly in the education
                     system, I really think the education system should have one or two or maybe even three
                     different types of AI. That it's all... you know what I'm saying... It's all organized,
                     it's run by one group, one company. The information is put in, and managed, by teachers
                     and professors. Like bringing up Wikipedia: anybody can go on Wikipedia and add information,
                     false information.
Babette Faehmel: 00:19:14
Yeah, but it won't stay on Wikipedia for very long, because they actually have evolved,
                     and they have a lot of editors. It doesn't stay as false information on Wikipedia
                     for long.
Alexandre Lumbala: 00:19:26
I like that you say that, because, it's a big controversy when anyone sources Wikipedia.
                     But then sometimes I'm reading Wikipedia, it's the first thing that pops up on Google,
                     it's the easiest thing to read and I'm, like, ‘it's not wrong.’ It's like, yeah, I'm
                     sorry to say.
Babette Faehmel: 00:19:41 
No. And at least they have footnotes.
Alexandre Lumbala: 00:19:43 
But on Lonnie's point, I do hear what you're saying in the sense that it's easier.
                     It's like in business when you have a monopoly, if you have.
 Lonny Davenport: 00:19:50 
I didn't want to say the word monopoly, but that's what I'm saying.
Alexandre Lumbala: 00:19:55
Monopoly has a negative connotation to it but, essentially, it's like if we have a
                     country where water is scarce, we need one company so that we can just
                     control that one company and be more aware of what's going on, rather than having
                     all these independent companies that all mess with the water and we don't even know
                     what's going on.
Lonny Davenport: 00:20:11
One standardization across the board as far as the use in education, maybe differences
                     in the levels of education. But I think if we are to properly use AI, I think we need
                     to develop a system of standardization. That way we can eliminate, you know, the leisurely
                     uses for AI. You know what I mean. I'm sure there's tons of uses for AI, but we don't
                     want it to be a distraction. You know what I mean? Like when my son was doing school
                     online, he was on YouTube. It was such a distraction. Like certain
                     kids at certain ages, they can't, you know, stay focused. You know what I mean?
Alexandre Lumbala: 00:20:56 
Even with adults it happens.
Lonny Davenport: 00:20:58
So, if we can develop a system where it keeps young kids, or anybody who uses it,
                     engaged, and still encourages the research and the journey of research, and it's being
                     used in a positive way, I don't see anything being wrong with it. I just want to see
                     it become safer and, you know, more standardized as far as being used.
Alexandre Lumbala: 00:21:26 
Maura, you'd like to piggyback off of that?
Maura Davis: 00:21:28 
Yeah, absolutely, because I heard something really interesting kind of emerge in this
                     discussion, especially about Wikipedia as well. I mean, as you said, that's the first
                     thing that pops up in Google most of the time and students are always like, oh, but
                     I've been told not to use Wikipedia. And I often tell them, well, let's actually start
                     here. Let's look at the article, because, as Babette said, there are citations at
                     the end of this article, and often the Wikipedia article is an amalgamation of a lot of scholarly
                     research and you can actually link to that research. So I said, let's find something
                     interesting and then let's check out the source that's listed at the end of Wikipedia.
                     And I'm wondering, and that kind of ties into what Lonny said, because it sounds like
                     you're almost suggesting, like, a Google Scholar of AI.
Alexandre Lumbala: 00:22:19 
Yeah, yeah.
Maura Davis: 00:22:20 
Or something that kind of looks more like a library database where it is AI, it is
                     generative. But what's going in has been somehow vetted. And I often tell students
                     to start with Wikipedia but then go to these real vetted sources at the end. Or start
                     with Google Scholar and see what library databases you come back to.
Wesley Rush: 00:22:42 
Wesley here. I think the problem is that, as far as managing AI, as far as doing all
                     that, I think it's too late, yeah. I think the information AI is getting, it's
                     getting it from the internet, and any one of us can put anything on the internet
                     and express ourselves however we want to, and what I realized is that the masses rule.
                     So it doesn't matter if it's right or wrong; if everybody's saying it, everybody's
                     agreeing with it, everybody's watching it, everybody's clicking it, that's the first
                     thing you're going to see. When you Google something, it's not necessarily the right
                     thing. It's what's bringing the most attention. And a lot of companies
                     actually pay for that to get their spot on number one, and that's the problem, um,
                     as far as managing it. There's too much information on there to manage.
Alexandre Lumbala: 00:23:31 
I have to admit that I'm thinking with the guests, because I started off very, very
                     hopeful about AI, and then I started to realize, as well, what consequences could
                     come from my optimism. And in the room right now I think the majority of the people
                     here have that optimism, and then for you it's the fear or the dread, you said, right?
                     Yeah, so you might bring us all back.
Wesley Rush: 00:23:58 
Honestly, I think that's the problem with AI in general. It's the fear of the unknown.
                     We don't know what could happen or how far along this AI can go. And look at history
                     since the 60s, 70s, 80s: what's the agenda of AI movies? Robots taking over
                     the world.
 Alexandre Lumbala: 00:24:17 
We might end up burning ourselves in the back.
Babette Faehmel: 00:24:19
I don't know if it's the AI that I'm so fearful of, or if it's just the human propensity
                     for misusing tools that are at our disposal, no matter what, right? What I heard Lonny
                     say a little earlier is that you really want human involvement and, moreover, ethical
                     human involvement in AI development is key to you, because somebody needs to be there
                     and make sure that, well, the biases are being addressed. Right? Algorithmic bias
                     is not a new thing, right? So we've known that all along. Is it better than human
                     bias? I mean, I cannot go open people's brains and correct the human bias in there,
                     but I can probably fix – well, I can't - but some people can fix the algorithm, right?
                     So, I mean, I think it's a mix. It's definitely a mix, but I'm glad that there are
                     no, well, I mean, none of us is a big techno-optimist just thinking that this is like
                     the best thing since sliced bread, and none of us is, like, trying to prohibit use
                     for all students, just because that would be like an equity hazard, kind of like denying
                     people the chance to develop skills. Right? Maura and Lonny, you were about to jump in.
 Lonny Davenport: 00:25:42 
Yes, Lonny again. I was just saying we have to do what we're doing here now:
                     kind of sit down, take in multiple, different perspectives,
                     and figure out the pros and cons, and discuss things. You know? I, me, I naturally,
                     I'm always a neutral-minded person, so I always have this natural optimism but, like
                     Wesley said, we've been fed for so long. You know, through movies, through TV, even
                     through real life, we've been fed these horrors of what AI has done. And even when
                     you go back to the Facebook privacy issues that they were having, where they were
                     tailoring what they advertised to you based on what you searched; basically they
                     were tracking communication. That's also a form of AI. So it's been around, and making
                     sure that it's being used properly by humans. That's the main thing, because AI is
                     still man-made, no matter how we look at it, but we have to be able to have the right
                     people and responsible people to control it and use it.
Alexandre Lumbala: 00:27:15 
The security is in place. Yeah, no, you're totally right.
Babette Faehmel: 00:27:22 
No, all the comments are really amazing. And a lot of these things that you all already
                     said, they were also brought up by the students later in the survey, right? I don't
                     want to lose sight of some of the other insights that the students who responded to
                     the survey contributed, so let's look at the next question. Alex what was the next
                     question?
Alexandre Lumbala: 00:27:43 
Okay, the next question was: ‘Thinking about the classes you took this semester and
                     last, which statement best describes how your instructors approached generative AI?’
                     Amongst the responses, the most popular was: ‘They did not allow it because they believe
                     it would negatively impact our learning.’ About five students went with that one.
                     The least popular was ‘They did not allow it, but they did not say why.’ ‘They did
                     not allow it because they believe AI had huge problems with bias and reliability.’
                     And then, you know, another one was, like, ‘They just did not mention it,’ or ‘They
                     used it openly in their teaching and preparation and asked students to use it.’ So,
                     it's like a lot of different professors are doing a lot of different things, but the
                     most popular was ‘it was not allowed and it would negatively impact your learning,
                     so don't use it.’
Babette Faehmel: 00:28:32 
Yeah, how do you feel about that?
Lonny Davenport: 00:28:38 
Like, this is Lonny again. Like I said earlier, we have to come up with a monopoly,
                     so to speak, I feel like…
Babette Faehmel: 00:28:50 
Do you mean monopoly or oversight?
Alexandre Lumbala: 00:28:53 
Oversight.
Lonny Davenport: 00:28:54 
Oversight.
Babette Faehmel: 00:28:56 
Okay. Because, a monopoly we have.
 Lonny Davenport: 00:28:59
I mean on, on AI. When I see, like the different type of, um
 Alexandre Lumbala: 00:29:05
… tools that every student is using can be so different.
Lonny Davenport: 00:29:08 
Yeah. The different types of apps, the different types of tools. So the perfect example
                     is, I had one teacher who said they had a student use ChatGPT to write
                     a paper. He asked it a question, and the AI generated an essay for him. But without
                     reading it, he just printed it and turned it in. And the information that was given
                     to the student from AI wasn't even in the book where he had to get the sources from.
                     So, the teacher was asking this person how did you come up with this information if
                     it's not even in the textbook?
Babette Faehmel: 00:29:42 
Uhum
Lonny Davenport: 00:29:43 
So, what I'm trying to say is, um, find an app or develop an app that's going to
                     streamline this. You know what I mean?
Alexandre Lumbala: 00:29:53
That you can tell students, this is the one that is built for students.
Lonny Davenport: 00:29:56 
Yeah, and that's what I'm imagining in my mind, but it's like you said. It's so hard,
                     it's out of control. Like technology is like the Wild Wild West now.
Babette Faehmel: 00:30:07
Yeah, but okay. So I totally agree with you that we need some standards, I guess.
                     But what we have right now is a couple, a handful, of very highly funded companies,
                     um, being in control, or having, like, the brunt of the control over AI development.
                     And here I wish we had access to, I don't know, our own little expert on AI. But
                     the thing is that the development of the tool is incredibly expensive. Incredibly
                     expensive. It has something to do with, um, the chips that are needed and the energy
                     that is being dispensed, the computing power or whatnot. I mean, I'm not a computer
                     science person, at all! But the problem is not that we have no monopoly; the problem
                     is that we have no control over, um, how this tool is developed. There's not enough,
                     I don't know, philosophically and ethically minded people involved, in my mind.
Alexandre Lumbala: 00:31:17 
Yeah. I think, as we're talking about it, I'm seeing how it was a thing that wasn't
                     ready to be put out yet, but they were just like, if we put it out, it will speed
                     up its development.
Babette Faehmel: 00:31:26 
We are testing it.
Alexandre Lumbala: 00:31:28
Yeah, they're testing it
Wesley Rush: 00:31:29
Especially if it adapts.
Babette Faehmel: 00:31:31 
Yeah, exactly. And it does. It is. I mean, I don't know if anybody aside from me is
                     using ChatGPT, but it has this little ‘like’ button now where you can say this is
                     a good response, I can use this. So we're training the tool that potentially will
                     replace us. So. Yeah. How would you like your instructors to approach AI in your classes?
                     Wesley?
Wesley Rush: 00:32:00 
Um. Me personally, I wouldn't care too much, um, if my professor wasn't so strict
                     on it, I don't think. Because at the end of the day, I think students are going to
                     use it regardless. And if your professor allows you to use it, I feel like you won't
                     be so tempted to use it unless you needed it. But then if you used it and you got
                     the wrong information, you'll realize that hey, maybe this is not a great tool for
                     me, or maybe I need to do it in a different way, or maybe it'll give me a different
                     perspective that I can use as a template. As long as you're using it responsibly and
                     not necessarily taking full advantage of it and being, you know, dependent on AI or
                     ChatGPT and stuff like that, I think the professors would be more willing to be
                     open-minded about it.
Babette Faehmel: 00:32:51 
Yeah, I mean, I'm a big proponent of like, having students use it, but openly. And
                     I mean, what I kind of fear is that, I mean, okay, these AI detectors, they are their
                     own, separate topic. But what I feel is happening is that some students use the AI
                     without disclosing, even when they are allowed to do it, and that's a huge problem
                     and I don't know how to counter that tendency. And then the other thing that I'm
                     seeing is that some students use it just as another tool and they keep
                     questioning and questioning, questioning and searching and thinking while they're
                     using the tool, and I don't see them negatively affected at all. They're just using,
                     they're just learning how to use a new tool, and that's beautiful. And then, as we
                     already talked about, the problem is when students don't even know what questions
                     to ask, because they have never been encouraged to see themselves as producers of
                     knowledge, right? Maura?
Maura Davis: 00:33:55 
Yeah, to piggyback off that from an educator's perspective, but also thinking back
                     to my days as a student, and I'm not sure how feasible it is, but we've got this
                     big overarching thing in our way now, right, this technology with so much knowledge,
                     and it's so far out of our control. And what I would love to see happen is conversations
                     in classrooms between instructors and the students: can we make a plan together
                     of how we're going to use this for our learning experience in this class? Obviously,
                     we don't have all the flexibility in the world as educators to run a class any way
                     we would like, but I think, as a student, this is a conversation I would like to be
                     having with my educators, and especially as an educator, if I was going to be in a
                     classroom today. My position is a little different at this school, but it's a conversation
                     I do have with my students: what do you feel you're getting out of it? What would
                     be a way that we could use this, where we could all be in agreement about our goals
                     for you, for learning, for investigative thought? I would like to see them contribute
                     to the conversation of, how do you think you can use this? Because I feel like when
                     you force people to think, the first thing they're going to do is try not to think,
                     right? But I would like to see if we involve students in the process of where is
                     this going to fit in your education and how, I wonder if that would give them the
                     sense of agency to maybe make better choices about how they're going to use it,
                     because they were involved in the process of how it's available to them and where
                     it's going to sit in this classroom.
Babette Faehmel: 00:35:36 
I mean, I'm glad you're mentioning the relationship between students and teachers,
                     because one of our questions… What was the next question, about human …
Alexandre Lumbala: 00:35:45 
So the next question was actually, ‘Do you think artificial intelligence will replace
                     human instructors and/or tutors and, if so, do you think it will be a positive or
                     negative development? Please explain your thinking so we understand where you're coming
                     from.’ And a couple of people who responded really went in-depth on their emotions.
                     A big thing was that, you know, ‘I don't think AI will ever replace human instructors.’
                     ‘It isn't 100% accurate.’ ‘It can't be taught the level of empathy and understanding
                     of how students' minds work.’ But they do believe that eventually it will get to a
                     point where, and the student specifically mentioned Grammarly, it's like a teaching
                     assistant in a sense, you know, even though there isn't a physical teacher with
                     you. If you've ever used Grammarly, you have probably seen that it's a big help.
Babette Faehmel: 00:36:46
Is it?
Alexandre Lumbala: 00:36:47
I mean so, ah, okay, so maybe that's actually a personal perspective. Yeah, because
                     for me it is a big help. But, Babette, do you feel like it's not?
 Babette Faehmel: 00:36:56 
I feel that, um, writing gets so standardized and boring. I increasingly appreciate
                     the quirks some students have, like, they just have their own unique writing style.
                     You hear them. Because, I mean, I teach a lot online. While reading, I hear them
                     talking, I hear a human, I hear a human brain, I hear human emotions, and I honestly
                     don't care about the occasional spelling and grammar mistake. I teach history, I
                     don't teach grammar, I don't teach English. Yes, they need to learn these skills,
                     but what is more important is the thought. Um, and I think we get too invested in
                     a certain kind of standardized look for writing.
Alexandre Lumbala: 00:37:48 
Me and Maura, we had this conversation up in TRIO.
Maura Davis: 00:37:50 
I was going to say, we were talking about this.
Alexandre Lumbala: 00:37:53
And you mentioned that article, that was written fully in AAV.
Maura Davis: 00:37:57
Yes, and I have to bring that. The author's last name, I believe, is Shaw. I don't
                     want to misquote that article, but he's a Harvard grad and an English tutor, and he
                     wrote a paper in AAV on why it should be something that we include in academic
                     English. Right?
Alexandre Lumbala: 00:38:14 
And just to define AAV: that stands for African American Vernacular English, I believe.
Maura Davis: 00:38:20
I believe so.
Babette Faehmel: 00:38:21 
Oh yeah…
Wesley Rush: 00:38:26 
So I'm giving you weird faces. Can you explain that? Like Ebonics?
Alexandre Lumbala: 00:38:28 
Huh
Wesley Rush: 00:38:29 
Like Ebonics?
Alexandre Lumbala: 00:38:30 
Yeah, it's Ebonics, yeah, like…
Wesley Rush: 00:38:33 
Slang?
Alexandre Lumbala: 00:38:34 
Slang! It's slang…
Babette Faehmel: 00:38:35 
No, no, no, it's not slang, it's a dialect.
Maura Davis: 00:38:37 
Yes, it's actually considered a dialect now.
Lonny Davenport: 00:38:39 
Wait, what's considered a dialect?
Babette Faehmel: 00:38:41 
So, okay, so well, what is it? African American English. So, based on..
Maura Davis: 00:38:48 
Vernacular English
Babette Faehmel: 00:38:49 
Huh?
Maura Davis: 00:38:50 
African American Vernacular English.
Babette Faehmel: 00:38:51 
Yeah, okay. African American Vernacular English. I mean, most ethnic groups have their
                     own dialect right, yes. And then you have kind of like the standardized. What is it
                     like standard American English?
Maura Davis: 00:39:03 
I would call it CNN English.
Babette Faehmel: 00:39:06 
Yeah, that kind of English. And every nation has their own standardized language,
                     and then they have the dialects, right, and these things are not… this is not random.
                     Obviously there's something about dominance and power and influence involved here,
                     right, who sets the standards? And I don't know.
Alexandre Lumbala: 00:39:24 
Because I just wanted to say, like, it's something that's really, really hard to define
                     and explain. As you see how it just made him go, like, what? And you also, you're
                     like, what are you talking about? But basically, the guy is a Harvard grad, he's an
                     English Harvard grad
Maura Davis: 00:39:38 
Yup
Alexandre Lumbala: 00:39:39 
and you have to write a thesis. And he decided that he didn't want to write it in
                     regular plain old English. He wanted to write it like how I would talk to you, or
                     you, like, outdoors, which is like, yo, bro, this da-da-da-da-da-da.
Lonny Davenport: 00:39:53
So he just wrote it in common language.
Alexandre Lumbala: 00:39:54 
Exactly!
Maura Davis: 00:39:55 
He wrote it in common language.
Alexandre Lumbala: 00:39:56 
Whatever spelling he felt like was correct and whatever verbiage and stuff. And when
                     she told me that story I was like that's amazing.
Maura Davis: 00:40:03
It was incredible too, because the point of the actual essay was: as you are all reading
                     this, you who are used to standardized English, you haven't been confused by me using
                     ‘they’ rather than ‘their’ and spelling it T-H-E-Y, in his own language. He's like,
                     this isn't confusing native speakers. And that's kind of something I wanted to piggyback
                     off of: when we do invest too much in a model like Grammarly, for me, and I am an
                     English teacher, it erases what I like to call global Englishes, because this is the
                     lingua franca right now. This is a language that almost everybody in the world, to some degree
                     in each country, will use to speak and communicate internationally, in business. So,
                     I feel like there's a lot of embedded racism, maybe even up to xenophobia, in
                     valuing one type of quote-unquote academic English. And academic English, let's be
                     fair, is very standardized to a white male model from a certain era, of what English
                     is supposed to sound like. So, as an English teacher, I want to hear vernacular voices,
                     I want to hear the influence from your culture, I want to hear idioms, I want to hear
                     things that don't look like standardized academic English, because I feel like that
                     is where real communication comes from. If we start to erase the culture…
Alexandre Lumbala: 00:41:23 
And the essence of education, which is to learn the unknown. And if we just say
                     that we're gonna go American or standardized American English every single time,
                     it's like we're closing off a whole world. And that solely because of social problems
                     like racism and discrimination or superiority complexes, which is like, you know.
Babette Faehmel: 00:41:46
And here monopoly builds upon monopoly and dominance builds upon dominance. What's
                     on the internet? How many articles written in African-American vernacular are on
                     the internet? You're not going to get this kind of rich, diverse language from an
                     AI. You will get more of the same, the way it is working and the way in which
                     ChatGPT operates right now. And honestly, I mean, I have not prompted ChatGPT to
                     give me African-American Vernacular, because I, um, I already shiver to imagine
                     what comes up in terms of like.
Alexandre Lumbala: 00:42:33 
I have.
Babette Faehmel: 00:42:34 
I mean like just like thinking about some of the ways in which dialect is portrayed
                     in popular culture is not particularly … Um…
Alexandre Lumbala: 00:42:44 
So, I did that. I …
Babette Faehmel: 00:42:45 
You did?
Alexandre Lumbala: 00:42:46 
Yeah, so I did that. I don’t know if I told you. I don’t know if I told Maura, or
                     somebody. I told them …
Babette Faehmel: 00:42:48 
Share!
Alexandre Lumbala: 00:42:49 
I edited ChatGPT to speak more like me specifically.
Babette Faehmel: 00:42:54 
Oh good!
Alexandre Lumbala: 00:42:53 
So using Southern African slang, Jamaican, Nigerian and some AAV as well.
Babette Faehmel: 00:43:03 
How did you like the result?
Alexandre Lumbala: 00:43:04
It wasn't that great. It wasn't that great. No, but it tried, it tries. I think it's
                     still set to it right now.
 Babette Faehmel: 00:43:11 
Did you, did you like the results?
Alexandre Lumbala: 00:43:13 
I liked a couple of them, and then some of them, I tell it to stop saying, I've never
                     heard that.
Babette Faehmel: 00:43:15 
So, you're training. You are helping with the training. Lonny?
Lonny Davenport: 00:43:19 
So this is where one of my thoughts was about the uses. To go back to the question
                     from before: how would you like your teacher to apply the uses of AI? I love the
                     way my son's school district is applying AI, and it's used across the board, across
                     a wide variety of students. So what they're doing is they have AI that will read
                     you your assignments while you're doing your work. It'll help read for you …
Babette Faehmel: 00:43:53 
Oh that’s great.
Lonny Davenport: 00:43:54 
…and that's one of the basic uses, like the first uses of it. But when you get to
                     the language thing, when it comes to AI and speaking, like, African American
                     vernacular, you would have to implant a certain bias in that AI technology
                     to say, this is what a black man sounds like, this is what a white man sounds like?
                     I got picked on growing up because I didn't speak slang. My mom and dad didn't speak,
                     you know what I mean, slang. We didn't curse. So, when I was around black kids, my
                     peers, I didn't speak the way they sound, so that was something different, even though
                     we looked alike. So, when I went to a majority white high school, it was like all
                     of a sudden, I wasn't black, because I don't speak the way
 Alexandre Lumbala: 00:45:02 
According to the majority
Lonny Davenport: 00:45:03 
According to the way, right. So this is where I got a little confused: when
                     we start trying to isolate AI to stereotypically think one person sounds like this,
                     that's why the African American vernacular thing kind of throws me off. Because I'm
                     very into my history, like, not to get too off topic, but I feel that black
                     culture today is not the original black culture. So, seven years after 400 years of
                     slavery, we were in politics, we were in, um, Congress, and we were in the
                     House of Representatives. And the man who was in the House of Representatives took
                     the seat of the former president of the Confederate States.
                     You know what I mean? Yeah, so we, as Afro… you know, as African-Americans, we have
                     rich history.
Babette Faehmel: 00:46:04
Yeah.
Lonny Davenport: 00:46:05 
There's these stereotypes about, um, black people and time, but when you look at the
                     history of, um, Black Wall Street in Tulsa, Oklahoma, you can see them having, you
                     can see signs: 7 a.m. sharp. You know what I mean. So, when it comes to
                     AI and assuming …
Alexandre Lumbala: 00:46:30 
Different personalities or something.
Lonny Davenport: 00:46:30 
You know what I'm saying. That's kind of scary to me.
Alexandre Lumbala: 00:46:33 
I hear you!
Lonny Davenport: 00:46:34
You know what I mean? Now, if you want to use AI, as far as language, as far as
                     … So my grandmother is, um, Geechee
Alexandre Lumbala: 00:46:43 
Okay.
Lonny Davenport: 00:46:34
They speak, um…
Alexandre Lumbala: 00:46:46 
They have a certain way…
Lonny Davenport: 00:46:48 
It's a mix between old African language…
Alexandre Lumbala: 00:46:50 
And then English.
Lonny Davenport: 00:46:52 
English. And they mix it. So like yard, like a yard chicken or a farm chicken that
                     we would have, it'd be a yacht chicken.
Alexandre Lumbala: 00:47:00 
Okay, yeah, I've spoken to a couple older …
Lonny Davenport: 00:47:05 
Or, or, a window sill, a “window seal”: “Put the cup on the window sill.” They'll
                     take out “window” and they'll just say, “Put it on the seal.”
Alexandre Lumbala: 00:47:10 
Aha
Lonny Davenport: 00:47:11 
You know what I mean?
Alexandre Lumbala: 00:47:12 
Yeah, I do, I do.
Lonny Davenport: 00:47:12 
If you want to incorporate that type of education, how can we use AI to bring back
                     to life the languages of our ancestors, the Native American languages?
Alexandre Lumbala: 00:47:23 
So for me…
Lonny Davenport: 00:47:24 
…preserve that…
Babette Faehmel: 00:47:25 
Well, you need human involvement once again. You need the kind of human who
                     cares about this, which is another issue with, um, like, diversity and equity, right?
                     I mean, who are these people who are, like, engaged in the AI development? Is Lonnie?
                     Is Lonnie Davenport there, right? I mean, is Alexandre Lumbala in there? And
                     Wesley Rush? I mean, it's just like, um, we need this kind of focus on,
                     well, what do we value? Who sets these standards? Is it just, like, corporate,
                     like, profit motives that drive the development, or is it a curiosity about
                     human diversity and our quirks and what makes us special and what makes us human and
                     imperfect? And I don't think that we are there right now.
Alexandre Lumbala: 00:48:11 
Where?
Babette Faehmel: 00:48:12 
We're not there…
 Alexandre Lumbala: 00:48:14 
We're in a very dangerous place.
Babette Faehmel: 00:48:15 
We really need to remind ourselves what makes us human and what is beautiful about
                     humans and what shouldn't be sacrificed on the altar of productivity and efficiency.
                     Right? This is a German talking!
Alexandre Lumbala: 00:48:28 
Wes?
Wesley Rush: 00:48:29 
Um, I don’t, I don’t… I think we set that standard of how we can modify and change
                     the AI. Um, to bring it back to what I said earlier, I think the masses rule. So, if
                     enough people are speaking that way or enough people are searching those things, AI
                     will adapt to those specific things. I just think that, as far as right now, the more
                     human interaction we have with AI, the better AI would adapt. It'll adapt now, but
                     10 years from now, how would that look, you know? Nobody would know.
Alexandre Lumbala: 00:49:07 
So, so on your point on, on creating like, uh…
Babette Faehmel: 00:49:09 
You mean on Lonny’s point?
Alexandre Lumbala: 00:49:10 
On Lonny’s point. Yeah. On Lonny’s point of, um, creating, uh, like, ChatGPT being like,
                     okay, this is how a black man speaks, if I'm basically getting prompts from a black
                     person, I want to reply in this sense. That's not the intention there. The intention
                     is more so for ChatGPT to be able to be personalized to me, because when I did it,
                     you go into the settings and then you explain to it in depth. And then every
                     time it would spit something back out, I'd be like, no, that's not something
                     I've ever heard growing up, maybe you want to use something more like this. It takes
                     a lot of time and it's probably never going to be perfect.
Lonny Davenport: 00:49:49 
Was it offensive, almost?
Alexandre Lumbala: 00:49:52 
Hm. No, I wouldn't take it as offensive because it's not a person, it's a machine.
                     It's just spitting back out things that it's probably heard most. You know what I
                     mean? So…
Lonny Davenport: 00:50:01 
But you don't think, like… So, this is what I'm getting at: there is programming
                     in it. You know what I mean? If you can set up algorithms, it's almost like it's
                     programmed to do one thing, and then it grows, like, tentacles, to do its own thing
                     through what you put in.
Babette Faehmel: 00:50:17 
Yes, generative, it's generative. It adapts.
Alexandre Lumbala: 00:50:20 
So if, for example, because my little brother has never lived in
                     America, his slang is evolving way differently. So if I want to teach him something,
                     I have to remove all my AAV slang that I've learned here. Or like, if I've ever spoken
                     to someone who speaks Geechee, I have to take that out and I have to speak to him
                     with whatever that is. If he was to have the ChatGPT that I have, yes, you're right,
                     it would spit out what works for me. So, he would have to tell
                     ChatGPT, no, I'm not Alex, I am me, and this is how it would work for me.
                     And then he would have to change it a little bit, and then it would. So it would
                     evolve as more people, like, as Wesley said, as more people use it. But you're
                     right, there's a lot of people who might get it and be like, why is he talking to
                     me like that?
Lonny Davenport: 00:51:05 
Yeah, it seems like it's kind of on the line of, you know, I'm talking to you this
                     way for comfort, versus this is how…
Alexandre Lumbala: 00:51:13
…everyone speaks.
Lonny Davenport: 00:51:05 
Yeah! In your demographic
Alexandre Lumbala: 00:51:15 
Thank you. Thank you for pointing that out. You see, it's like, it's like
                     Maura said: the consequences of, like, my optimism, I might not realize them until
                     someone finally goes, hold on, you know, one second, yeah.
Babette Faehmel: 00:51:26 
But you know, you bring up, um, an important aspect here, because I think that's another
                     issue, um, because you are basically referring to a use of ChatGPT where you know
                     how to use it, you know how to change the settings and these kinds of things, which
                     is not amongst our students' common knowledge yet. So there is also a thing where
                     instructors, and all of us as a college community, need to do more, so that we all
                     develop these skill sets, that we teach an informed use of AI, of AI tools. Where
                     we foster awareness of the limitations, of the biases, of how this thing works.
                     Because otherwise our students will leave, they will go to a transfer school, maybe,
                     or into the workforce, and they will meet people who know how to use these tools
                     better, and then there will immediately be an equity issue. And that's not okay
                     either. And I think that's also what some of the respondents to the survey talked
                     about. Like, for instance, they were talking about how it's okay if AI writes banal
                     things like office emails and resignation letters. Perfect use for AI. Who wants
                     to… There's not much creativity and critical thinking, necessarily, that needs to
                     go into these kinds of banal office things. So wonderful, let's use AI to speed that
                     up, right? But then what happens with the time that you save not doing these banal
                     things, right? Because then, once again, does every moment of our life need to be
                     spent in productive labor? Or can we then also use that time, please, to sit back
                     and do creative thinking, right? And so, once again, how much control will we be
                     allowed to have over that? Wesley, this seems to be like something up your alley.
Wesley Rush: 00:53:25 
Yeah, I'm just thinking about what you said and applying it to everyday life. Efficiency
                     doesn't necessarily mean better. Yeah, the benefit might be that you're saving a lot
                     of time, but the quality of the product itself? Fast food: you can make the best
                     cheeseburger at home, but you go to McDonald's. You might get it in two seconds,
                     but it might be the worst tasting cheeseburger ever. So, yeah, you might be saving
                     time using AI, but the quality of work might not be as good as if you take your
                     own personal time to do it yourself.
Lonny Davenport: 00:54:03 
I think what Wesley's pretty much getting at is, there is no substitute for human
                     work.
Babette Faehmel: 00:54:10
Right.
Lonny Davenport: 00:54:11 
There is no substitute for
Wesley Rush: 00:54:14 
Absolutely not. There's no substitute.
Lonny Davenport: 00:54:15 
No matter how much we use AI as a tool to help us individually. But I'm kind
                     of with him, just because of how I was raised in school, growing up doing research.
                     I've never used ChatGPT. My only experience with it was with my son. So, I can agree:
                     you can never take away the human element.
Alexandre Lumbala: 00:54:39 
So here's a student comment or perspective that we received. He chose to allow
                     us to mention his name.
Babette Faehmel: 00:54:47 
Yeah, let's not.
Alexandre Lumbala: 00:54:48 
Okay, okay. So this student says: I also think it would be very helpful for students
                     to hear what a professor might feel when they believe AI has been used to produce
                     work. The professors I know are passionate about their subjects and have put in a
                     lot of hard work to gain knowledge. When I put myself in their position and think
                     about a student using AI, I feel disrespected. This leads to a plethora of emotions,
                     from anger to disillusionment, and I don't think many students consider that when
                     thinking about AI. The more we can understand each other, the better we will be able
                     to navigate through the change.
Babette Faehmel: 00:55:23 
Yeah, see, that comes back to what Maura was mentioning, what Wesley was mentioning,
                     what Lonnie was mentioning, what you brought up. Like, the human component, right?
                     Because it's not only, like earlier somebody mentioned, empathy and understanding
                     of what people are going through, but there's also, how do we value other people's
                     productive, creative, intellectual labor? Now we think, like, whatever, I can just
                     use AI. I mean, there are generations of thinkers that have contributed to this
                     collective knowledge that now AI is basically just grabbing off the web. And if
                     students, or if young people, if we consumers of information no longer value all
                     that research, all the labor that went into that, I mean, what is lost? Maura, I
                     think you wanted to say something.
Maura Davis: 00:56:22 
Yeah, because actually we had a conversation about that a couple of semesters ago, maybe
                     just last semester, where you told me about a program and you said, try this out:
                     it'll write a story with you.
Babette Faehmel: 00:56:35 
Oh yeah, yeah, AI Dungeon.
Maura Davis: 00:56:43 
AI Dungeon. And I spent like two or three days playing with AI Dungeon, with
                     all the different genres: it'll do fantasy, it'll do horror. You can give it a lot
                     of detail on, I want this style, right, I want this tone, I want this level of, you
                     know, like, intensity. I never got a good story.
Babette Faehmel: 00:57:00 
No, exactly!
Maura Davis: 00:57:01 
From the AI. And I think, if people were to interact with AI, they'd see it's trying
                     to do something that AI just can't do, which is be creative. It was flat. I wrote
                     my half of the story. I started out with a zombie apocalypse, and they gave me
                     a setting where my character walks into, like, a depot, and there's six people there
                     and they offer me some supplies. So I said, okay, great, I'll say yes. And then I said
                     something about how I had to play nice because I didn't want them to know
                     my true intentions. Right? It couldn't pick up on the fact that I was trying
                     to write an unreliable narrator. And everyone in the story just trusted my character,
                     who then killed them all and took their stuff. So it can't anticipate
                     an unreliable narrator, it can't anticipate a plot twist, and it definitely can't produce
                     the same things on a creative level that we can. And I'm wondering if that's almost
                     something we should be bringing into a classroom and having people try: look at
                     what this thing can't do.
Babette Faehmel: 00:58:10 
Yeah, exactly, exactly, exactly. Wesley?
Wesley Rush: 00:58:14 
Um. To follow up on that. Let's say it did produce something. Let's say it produced
                     some really good quality work. Is it yours?
Maura Davis: 00:58:23 
No, it's definitely not mine at that point.
Wesley Rush: 00:58:25 
Or is it the AI's? Like, you know, like, can you even get credit for that? Like, let's
                     say, even the AI generated pictures?
Wesley Rush: 00:58:34 
It was your thought, but the AI created it.
Babette Faehmel: 00:58:32
Yeah, yeah. So whoever made that AI program actually has the copyright and the legal
                     rights to that picture or to that story. Those are all issues that we still need to
                     figure out. And right now, they're kind of, I don't know, being hashed out in the courts.
                     Lonny?
Lonny Davenport: 00:58:54
You said, but didn't directly say, with AI: we've had all these thinkers in the past that
                     have worked so hard to get all this information, and you have students who just go
                     in, type it into AI, and get that information. Um. What I heard was, AI will only go as far
                     as human intelligence, pretty much, because the AI can't plug in anything it doesn't
                     know, right?
Alexandre Lumbala: 00:59:22 
And it can't give you anything it's not asked for, necessarily. Yeah, it's like, I don't
                     care where it came from or who came up with this idea; I'm just taking the idea
                     and running.
Lonny Davenport: 00:59:31 
So is it, is it correct for me to say AI can only go as far as humans allow it?
Babette Faehmel: 00:59:38 
Mm-hmm. Yeah, right now, especially for those AI models that are
                     released by this handful of companies that are involved, they are trained
                     to not go certain places. I mean, there are safety standards built in there.
                     But I think the safety standards also account for some of the boredom that you will
                     experience when using these tools a lot. But, I mean, yeah, removing those safety
                     standards will also probably result in unethical behavior.
                     Um. But Wesley, you had your hand up?
Wesley Rush: 01:00:20 
I was gonna say, um, I feel like using AI and stuff like that is basically saying
                     that we know everything. There's nothing else to be researched, there's nothing else
                     to be discovered. Because if AI can only adapt and only know what we know, it
                     can't learn things that we don't know.
Babette Faehmel: 01:00:40
Oh my God, how boring. Yeah, exactly.
Wesley Rush: 01:00:45
So it's basically saying that we know everything. There's nothing else to discover.
Babette Faehmel: 01:00:46 
Oh my gosh.
Alexandre Lumbala: 01:00:48 
I wanted to add one thing, because I think we're coming up on time.
Babette Faehmel: 01:00:51 
Soon, but I have one more thing that I definitely want to talk about.
Alexandre Lumbala: 01:00:54 
So you said how a lot of students in school don't know about the settings within ChatGPT,
                     and I was thinking back as well to when I told the story for the first time, about where I
                     actually learned how to do that. So, I actually learned how to do that on Instagram,
                     on Instagram Reels, which is, what do they call it? It's an algorithm, right? It's
                     probably run by AI too. Whatever I watch the most of, it's going to start showing
                     me more of.
Wesley Rush: 01:01:14 
Yup
Alexandre Lumbala: 01:01:18 
And the apps that I use, like, that I download, it takes that data to tell my Instagram
                     what kind of stuff I might want to see. I try to make sure, when it asks to track my
                     data, I tell it not to, but for some reason my Instagram still knows me regardless.
                     So one day I was scrolling through Instagram and it's like, five powerful
                     ChatGPT prompts you might want to try, and one of the prompts was, like, tell it to
                     speak like you and give it some information about you, and it will probably
                     know how to speak like you, or something like that. And then I went into ChatGPT, because
                     I was trying to be more responsible with my social media use, where the things that
                     I do learn, I actually use. Because I think that's the problem with us: we just scroll
                     forever and never actually use anything. So I actually got the information and used
                     it. But now that I think about it, AI told my AI to tell me how to use AI, which is
                     probably gonna go and tell the AI, and it's like, huh, that's scary.
Babette Faehmel: 01:02:21
Our life managed by machines that we don't control. Um, but, but still, like,
                     what we probably also need is, like, some sort of, I don't know, collective learning community,
                     students and academic support specialists learning from one another how to get
                     the most out of this tool, understand this tool, um, really get
                     as much knowledge about it as possible, and then talk about the
                     ethics of using it, talk about the larger consequences.
                     Because one more thing that I really find important to mention is what was also kind
                     of, like, in that last comment that you read, about how professors feel when they see
                     students using AI. What we might see is simply Grammarly, or a student who really
                     aced the five-paragraph essay in high school, and they sound
                     to us, I don't know, kind of sketchy or suspicious.
                     And then maybe we run it through an AI detector and it comes up flagged. And then
                     what does that do to this relationship between professors and students? Because trust
                     is an essential component of education, right? I mean, and I catch
                     that in myself, right, there's almost always kind of like a thought in my head: what
                     is it going to do to my students if I teach them how to use this tool? What about
                     this essay? It sounds too good to be true. It sounds too good to be coming from that
                     student. Where are these sources coming from? And then also, just, honestly, also
                     plain unethical behavior, discarding the value of teachers and academic support professionals
                     because, hey, we're just a service provider, and if somebody gets an assignment out
                     of the way easily, they will do that. I mean, those individuals also exist. I mean,
                     I'm sorry, it's just, like, it's also true. Unfortunately. And I just fear for the
                     future of the relationship between professors and students, just because somebody felt
                     the need to dump this on all of us. Maura, how do you see that?
Maura Davis: 01:04:42 
Yeah, I think I'm definitely in the same boat, because I got to sit in the Learning
                     Center as a writing tutor during a lot of the emergence of AI, and I got to talk to
                     not only students but professors themselves. I had a whole semester where a professor
                     on campus was sending students down to work with us specifically on plagiarism. What
                     is that, right? Why did you get flagged for it? What in your paper flagged that? And
                     what I noticed, across classes, across assignments, was a deep suspicion that went
                     both ways and was ever deepening. I'm watching my colleagues look over papers together
                     and say, but there's just something not quite right about that. Did they use AI? Right?
                     And I think that suddenly, in the face of AI, we're almost panicking in a way
                     that we wouldn't have in the 90s. You could have sat at home with your older sister, who
                     already has a degree in what you're writing about, and you could have had human-to-human
                     help, and it would have come in looking the same way, like you didn't quite write
                     it, but that deep suspicion…
Babette Faehmel: 01:06:02 
Yeah, exactly
Maura Davis: 01:06:03 
wasn't there, yeah, in the 90s. Now we're constantly, almost, holding each other to
                     this expectation that the other side is going to try to subvert their job somehow:
                     that a teacher is going to take a shortcut by not really reading our paper, just running
                     it through Turnitin, right? And that a student is going to take that shortcut by
                     just running it through ChatGPT. And I think that speaks to a greater issue in education
                     that I'm not quite sure how to tackle. But the AI is exacerbating this lack of trust,
                     which is something I'm constantly trying to rebuild with students. You know, like,
                     if you want to use Grammarly, let's use it together. Let's talk about why it did that
                     to your sentence. Let me help you and be part of the process, so that we can at least
                     stay honest with each other. I'd honestly rather have a student be honest with me
                     and leaning a little bit on AI, but still trusting me, than trying to do it all themselves
                     but without that faith.
Babette Faehmel: 01:07:07 
Right. Right.
Maura Davis: 01:07:07 
Because I think you can learn anything. Like Wesley said, and like Lonnie said, this
                     technology is limited by what we tell it, right? It doesn't know any more than
                     we do. So we can always learn whatever AI knows, because it doesn't know more than
                     a human. But what we maybe can't always cultivate in the aftermath is that trust between
                     educators and students. I would rather build that and have them openly be like, yeah,
                     I just ChatGPTed this because I didn't know what else to do, because then we can talk
                     about what else you could have done. Right? We have a conversation, we have an open
                     door now.
Babette Faehmel: 01:07:46 
So, I mean, okay. So, the other day I walked into the cafeteria in the library
                     and there was this gaggle of students sitting there, and I heard
                     them talk about an assignment that was due, and apparently the assignment was, like,
                     a little daunting, a big assignment. And then one of them said,
                     just while I was walking by, like, well, ChatGPT is going to come in very
                     handy. And I was thinking, like, okay, yes, see: first of all, they're
                     all using it, um, and they're not disclosing usage. But then also, okay,
                     how about if we as professors, with our students, would just use it, like, learn
                     how to use it openly, in ways where some of the grunt work is
                     out of the way, and then we engage in more critical thinking and creative work? Right?
Alexandre Lumbala: 01:08:39
And you point out something really, really good, because it raises the question:
                     if the students are going to go out of their way to risk their whole education, because
                     if they get caught for plagiarism they could possibly be kicked out of the school,
                     is the work maybe just slightly too hard?
Babette Faehmel: 01:08:56 
Oh, my God, yeah, do you have too many things to do? Yeah, I see Wes shaking his head,
                     but bear with me, because I feel like even as students, we can definitely say that
                     we didn't pass every single class that we're gonna do our degree with, right?
Babette Faehmel and Maura Davis: 01:09:10 
Nope.
Alexandre Lumbala: 01:09:11 
Yeah, you know what I mean? Or we didn't get the best grade ever, right? In the classes
                     where I got the best grade, I probably learned the most in the process; in the ones
                     where I got the worst, I probably learned the least. But that was because, like, there
                     were things that were challenging. I just had to say, hey, I'm not gonna do my best,
                     I'm not gonna learn the most here. Maybe it was because of time constraints,
                     maybe it was because of all these other constraints. But if I had a professor who
                     I was able to walk up to and be like, honestly, can we do something here? Then maybe
                     I would have learned something more. Maybe I still would have gotten a bad grade, but
                     I would have learned a little bit more.
                     I don't know. So you just make a good point, is what I'm trying to say. Sometimes,
                     maybe not for me, the work isn't very hard, but maybe for the next person it's really
                     hard, and he might just quit because he says it's too hard, instead of, like, trying
                     to learn something.
Babette Faehmel: 01:09:58 
Well, I mean, if people use AI just as another form of plagiarism,
                     I mean, like, the reasons for plagiarism, um, they are
                     complex, right? Sometimes it's just that people are overwhelmed with the
                     workload, but also sometimes people are bored, yeah, right? And why would
                     I risk my reputation, um, for, well, for this? Well, maybe it's because
                     I cannot make myself do this work right now. It's just, like, this has nothing
                     to do with what I want to be in my life, and it's so boring, and I'm just gonna… I
                     don't know.
Alexandre Lumbala: 01:10:31 
I've heard that before.
Babette Faehmel: 01:10:34 
I mean, that's another thing, right? And, unfortunately, that's also on my mind
                     a lot. Like, how boring am I to my students? But also, then, the other thing is: how hard?
                     Because, I mean, obviously I love my discipline, but I also know that, like, my assignments
                     are really long, and my prompts are really long, and that's hard on students. But then, okay,
                     I am allowing, like, AI use to get over that hurdle. Just be honest, just
                     disclose your usage, and don't let this thing do your thinking for you. Right? It's just
                     the beginning, and then the human comes in. Lonnie?
Lonny Davenport: 01:11:20 
Yes, I think students tend to use AI as a safety net. And it's like,
                     oh, two days for that assignment? Oh, I'm just going to, you know, use it. You know,
                     I have professors that would rather have me turn it in late
Babette Faehmel: 01:11:40 
Yeeaah!
Lonny Davenport: 01:11:42 
Than use ChatGPT and have it on time, and I'd probably get a worse grade if I'd done
                     it on ChatGPT than if I'd had to turn it in late. Um. Second thing
                     I wanted to talk about, and this is just as an adult, as a student, as someone who's
                     been in life: there have been great strides as far as teachers
                     being accommodating towards students. But when you start talking about, is it too
                     hard? Or you start making excuses for students as to why they don't get stuff done:
                     I always tell my teachers, you know what I mean, any professors
                     or instructors: if you're teaching a kid, the best way to teach them is the way life
                     is going to give it to them, and life sucks. Like, there's some ups and downs. Me and
                     Maura, Maura's practically my therapist. She asks, and she knows everything that's
                     going on. Life is terrible, and if you go through college holding students' hands,
                     when they get into the real world, it's not going to be like that. You know what I mean?
                     ChatGPT in the real world might not be a viable option. You might actually have to
                     do some research, you might actually have to do some footwork, and if you don't know
                     how to do that, how, you know, how did ChatGPT benefit you?
Babette Faehmel: 01:13:13 
And how do you get through your, your day?
Lonny Davenport: 01:13:15 
Right, you know what I mean?
Alexandre Lumbala: 01:13:16 
I think you're right. Using the word "hard" there was not the best choice, like if the
                     teacher just gives the student a cop-out. But I feel like, so, in my head, the way I
                     imagine it is that now the professional, who is, you know, a professional of education,
                     sees that the student actually is not having a hard time. Like, I've had this before,
                     because I tutor in the library: this student is actually brilliant. For some reason,
                     they just don't think that the ideas that they make in their head are valid.
Babette Faehmel: 01:13:46 
Yeah
Maura Davis: 01:13:47 
That is the number one.
Alexandre Lumbala: 01:13:50 
And sometimes they just need someone to tell them that the idea that you have in your
                     head is valid. This assignment isn't hard. You just don't believe in yourself, and
                     sometimes they just need to
Maura Davis: 01:13:56 
You are second-guessing yourself.
Babette Faehmel: 01:13:57 
Wesley.
Wesley Rush: 01:13:57 
I think it starts with children growing up
Alexandre Lumbala: 01:14:01 
Yes, it’s developmental
Wesley Rush: 01:14:02 …and your parenting skills. You know, I'm a parent. Lonnie, you're a parent, you're a father. Um, and when it comes to my children, and my son, um, I try to help him with his homework without giving him the answer, because what is he learning? When it comes to him tying his shoes: I've witnessed him tie his shoes and keep them tied for a week. Yeah, yeah, the mission is complete, your shoes are tied, but you're not learning how to tie your shoes. And keeping your shoes tied forever means you're losing the human aspect of really learning how to tie your shoes properly.
Lonny Davenport: 01:14:40 
I think that's a young boy thing.
Alexandre Lumbala: 01:14:41
We can edit this out, but it's just funny because you're saying that with your laces
                     untied.
Wesley Rush: 01:14:47 
My shoes are untied.
Alexandre Lumbala: 01:14:48 
I've had mine tied for a while now.
Wesley Rush: 01:14:53 
And I look at the AI, ChatGPT, the same way: you're not learning anything,
                     you're getting the answer, and you're getting complacent, and you get comfortable and
                     dependent on that. And then years go down the line, and you may get the degree, you
                     may get your medical degree, but when it comes time to do the surgeries, you don't
                     know what you're doing.
Babette Faehmel: 01:15:14 
No, absolutely. I mean, I think the challenges are so multifaceted.
                     Because, I mean, we need to embrace mistakes as part of learning,
                     and not just see them as, I don't know, a weakness. We need to embrace
                     the value of asking stupid questions, because most questions are not stupid, they
                     just need to be asked, right? Um, and then also kind of, like,
                     embrace discovery, like the joys of discovery, right? So, I mean, I think we need
                     to really, well, we already said that before with COVID: take it as a chance
                     to reinvent education, right? Okay, we're still trying, we're still
                     trying. Um, I would say, like, let's keep trying and let's keep these channels of communication
                     open between students, right, and professors, and academic support professionals,
                     right? That's the only way to deal with this.
                     I would say, what about you? How about you, Alex?
Alexandre Lumbala: 01:16:12 
Do you want me to give a little closing statement?
Babette Faehmel: 01:16:13 
Yeah.
Alexandre Lumbala: 01:16:15 
Yeah, yeah. No, I feel like the conversation came to its own natural close just now. So, yeah,
                     you said it perfectly, honestly.
Babette Faehmel: 01:16:24 
All right, well, on that note then. Um, I think we could continue this conversation
                     forever, actually.
Alexandre Lumbala: 01:16:31
Yeah, we could!
Babette Faehmel: 01:16:34 
We could keep talking for two hours and three hours, and whatnot, but anyway, so okay.
                     Thanks a lot for participating.
Lonny Davenport: 01:16:44 
Yes, thank you for having me.
Babette Faehmel: 01:16:45 
Maura Davis
Maura Davis: 01:16:46 
Thank you for having me.
Babette Faehmel: 01:16:45
And Wesley Rush.
Wesley Rush: 01:16:48 
Thank you.
Babette Faehmel: 01:16:49 
Thank you so much. Many Voices. One Call is made possible thanks to the contributions
                     of the SUNY Schenectady Foundation. We're especially grateful for the School of Music
                     and in particular Stan Isaacson's continuing generous support for the technical details.
                     The recording and editing of the podcast was possible thanks to students Rowan Breen
                     and Evan Curcio. Heather Meany, Karen Tansky and Jessica McHugh-Green deserve credit
                     for promoting the podcast. Thanks also go to Vice President of Academic Affairs, Mark
                     Meacham, College President Steady Moono, the Student Government Association and the
                     Student Activities Advisor. Stay tuned for more episodes, wherever you get your podcasts.