MANY VOICES, ONE CALL

Many Voices, One Call: Season Three, Episode Three: Artificial Intelligence (AI): Why we need to talk about it!

What is AI? What is a large language model? How does it matter for teaching and learning? And will AI inhibit or help the acquisition of knowledge? These are essential questions that suddenly came to the fore in the winter of 2022, when OpenAI released its seemingly magical "ChatGPT" to the public for beta testing.

On this episode, new (!) student co-host Alexandre Lumbala and Dr. Babette Faehmel discuss the significance of AI for education with an impressive panel of guests! From SUNY Albany's AI+ Initiative, we are joined by Dr. George Berg, Associate Professor of Emergency Preparedness, Homeland Security and Cybersecurity; Dr. Justin Curry, Associate Professor of Mathematics and Statistics; Dr. Alessandra Buccella, Assistant Professor of Philosophy; and Dr. Rukhsana Ahmed, Associate Professor of Communication. In addition, we will hear from SUNY Schenectady's very own Professor of Cybersecurity Keion Clinton and Director of Library Services Jacquie Keleher.

The recording of this episode was possible thanks to the School of Music's, and in particular Sten Isachsen's, continuing generous support with the technical details. Music students Luke Bremer, Jacob DeVoe, Jean-Vierre Williams-Burpee, Rowan Breen, and Evan Curcio helped with editing, mixing, and recording. Heather Meaney, Karen Tanski, and Jessica McHugh Green deserve credit for promoting the podcast, and the SUNY Schenectady Foundation for its financial support. Last but not least, we want to thank Vice President of Academic Affairs Mark Meachem and College President Steady Moono for supporting our work.

The views voiced on this episode reflect the lived experiences and uncensored opinions of the guests; they do not necessarily capture the full diversity of attitudes within a larger community, nor do they express an official view of SUNY Schenectady.

Babette Faehmel (Co-host): [00:05]

Welcome to this episode of Many Voices, One Call, SUNY Schenectady's diversity, equity, inclusion, and social justice podcast. We are here today for a special episode on artificial intelligence and teaching and learning. I'm your host, Babette Faehmel, History Professor, and we have a new student co-host. Alexandre, do you want to introduce yourself?

Alexandre Lumbala (Co-host): [00:30]

Yes, my name is Alexandre Lumbala. I'm an international student here at SUNY Schenectady. I am a second-year student, and I will be graduating—or transferring—next spring. I'm happy to be Babette's new podcast co-host, and I'm happy to be a part of more episodes.

Babette Faehmel: [00:47]

Awesome. So, we are joined today by a very impressive group of guests who are all, in some form or fashion, affiliated with SUNY Albany's AI Plus initiative. Could we maybe just go around the room and everybody introduces themselves? Rukhsana, do you want to start? 

Dr. Rukhsana Ahmed (Guest): [01:06]

Yes. Good afternoon, everyone. I'm Rukhsana Ahmed, Communication Professor at SUNY Albany. 

Dr. George Berg (Guest): [01:14]

Hello everybody, I'm George Berg, I'm from the Cybersecurity faculty at the University at Albany. 

Dr. Alessandra Buccella (Guest): [01:21]

Hi, I'm Alessandra Buccella. I am a newly appointed Assistant Professor—part of the UAlbany AI cluster, I guess—in the Philosophy Department. I'm very excited to be here. Thanks for the invitation, Babette.

Babette Faehmel: [01:35]

Sure. 

Dr. Justin Curry (Guest): [01:36]

Great. And I'm Justin Curry. I'm in the Department of Mathematics and Statistics and just recently promoted to Associate Professor. 

Babette Faehmel: [01:43]

Congratulations.

Dr. Justin Curry: [01:44]

Thank you, thank you. 

Alexandre Lumbala: [01:45]

We are also joined by SUNY Schenectady's very own Jacquie Keleher, Director of Library Services, and Keion Clinton, Associate Professor of Computer Science and Cybersecurity. 

Babette Faehmel: [01:54]

Let's maybe start with some basic definitions and clarifications. I mentioned AI—so, AI is, of course, a pretty big umbrella term. So, what are we talking about when we are talking about the use of AI in teaching and learning? 

Dr. George Berg: [02:10]

I'll give it a shot. Artificial intelligence is generally thought of as getting computers to do activities that we would think of as being intelligent, sort of a hallmark of human beings or maybe some of the higher animals. So, that's typically what artificial intelligence is thought to be. 

Babette Faehmel: [02:27]

And when it comes to teaching and learning I would presume that one of the most common forms of AI that we are talking about are these large language models like ChatGPT?

Dr. George Berg: [02:38]

I think that's exactly why we're here, right? Because...

Babette Faehmel: [02:39]

Okay, yeah—well, right? (laughs)

Dr. George Berg: [02:43]

I mean, last November it burst onto the scene when OpenAI made ChatGPT available, and everyone—students, faculty, everybody—started thinking, 'What can I do with this?'

Babette Faehmel: [02:53]

Absolutely, absolutely. Well, I don't know. I mean, from my experience, actually, whenever I approached the topic with my students, most of them pretended not to know what I was talking about, and they might not be lying to my face, even though I suspect at least some of them are. What about SUNY Albany students? How informed are they about these developments?

Dr. Justin Curry: [03:18]

Yeah, so Justin Curry here. So, I teach one of the intro to Python classes. And this is an area where I think ChatGPT's influence is really nefarious, because it can, like, basically solve coding exercises for you without, like, any real prompting. And my impression, again, is that students aren't always forthcoming about, like, what they know or what they're acquainted with, and this semester is the first semester where I've really, like, addressed it from the get-go. 

[03:45]

It's like, here's my policy on using ChatGPT, and, like, also trying to, like, incorporate it into my live lectures so that students know what the pitfalls are—especially when it makes mistakes—and I think that's usually enough to get students a little scared of using it without questioning it.

Babette Faehmel: [04:04]

So, just on the off chance that a listener doesn't know what ChatGPT is: what is it? How does it work?

Dr. Alessandra Buccella: [04:14]

Maybe I can take this? (chuckles)

Babette Faehmel: [04:16]

Sure.

Dr. Alessandra Buccella: [04:17]

Alessandra. So, very high-level explanation: ChatGPT is, like you said, a large language model, and it basically works through a technology called generative AI, which means essentially it's a series of algorithms whose job is to predict—make up, essentially—the next most likely token or, in the case of ChatGPT, word or term that comes in a sequence. So, these models are trained on a large body of data—usually text data, in the specific case of ChatGPT—and during their training they learn patterns of, basically, continuation of sentences, so that once you feed in new inputs after the training has ended, they can respond by spitting out—again—the likely predictions that they've learned during their training. So, they're essentially text-production devices that are trained on large bodies of data, and they just use whatever patterns or regularities they've learned to produce new text in response to prompts.
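
To make the idea concrete for readers of the transcript, here is a minimal sketch, in Python, of the predict-the-next-token loop Alessandra describes. It is a toy bigram counter, nothing like OpenAI's actual system: real large language models use neural networks trained on enormous corpora. The tiny training text and all names are invented for illustration; only the shape of the process (learn which tokens tend to follow which, then generate by repeatedly predicting a likely next token) carries over.

```python
from collections import Counter, defaultdict
import random

# Tiny invented "training corpus"; a real model trains on vast amounts of text.
training_text = (
    "students use chatgpt to write essays . "
    "students use calculators to do math . "
    "teachers read essays to grade students ."
).split()

# "Training": count how often each word follows each word (a bigram model).
follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def generate(prompt_word, length=8):
    """Generate by repeatedly sampling a likely next word, the way an
    LLM repeatedly predicts a likely next token."""
    words = [prompt_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no continuation was ever seen for this word
        choices, counts = zip(*candidates.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("students"))  # e.g., "students use calculators to do math ."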

Babette Faehmel: [05:35]

That's a great explanation. To be perfectly honest, that's very clear. So, it seems—and as George already said—this thing was released in November last year, and since then it seems almost that every week there's a new development, some new thing that it can do. Can it, like, generate citations? What is the likelihood that it will produce hallucinations, fake results, all that kind of stuff? So, as SUNY AI Plus experts, or members of this initiative, what would you say? What do students need to know? What do faculty need to know about this new tool?

Dr. Justin Curry: [06:28]

Yeah. So, Justin Curry here, I'll just say that the comparison I keep kind of going back to is, like, ChatGPT is kind of, like, the graphing calculator for the, you know, arts and humanities. Like, it definitely reached a point when I was going through school where, you know, we started learning how to use graphing calculators and it luckily had, like, enough limits that it was really a help rather than a hindrance. 

[06:53]

But I also ran into a situation where I tried to take, like, an advanced math class and I had, like, the better TI-86 calculator at the time, and I found that it could basically do all my calculus problems for me, and that semester I essentially learned nothing. And it wasn't until I went to college, where my first, like, calculus class was, like, all calculators are prohibited, that it really caused me to be like, 'oh, I was really leaning too much on this as a crutch.' (Babette agrees) And so, I think trying to communicate those sorts of, like, words of caution, and thinking about, like, what are the kind of stopgaps we're going to put on this technology, is going to be an important discussion we need to have as a university—starting first as individual faculty, unfortunately.

Babette Faehmel: [07:38]

Right, right. Are you having these conversations already as a collective at SUNY Albany? 

Dr. Justin Curry: [07:45]

Well, the AI initiative just started, so we are still kind of herding ourselves together, but I'm hoping in the next couple of months we'll start working together more as a collective. I don't know if you guys have any...?

Dr. Alessandra Buccella: [08:00]

I know—. I mean, I'm not teaching an undergraduate class right now, and for graduate students it's a slightly different set of problems when it comes to using technology—especially in the subject matter that I teach, which is philosophy. But I know of colleagues—not just at SUNY Albany but more generally in academia and other institutions—who have been taking a very proactive approach: like, explicitly mentioning ChatGPT; not creating any sort of, like, mystery or, you know, fear or some sort of sense of secrecy around this technology, because it's freely available—everybody can just use it on their own computers. So, there's really no point in pretending that this is not around. In fact, it would be, I think, a good strategy to make it explicit, and maybe even encourage students to experiment with it. Be pretty explicit about it: 'take a run at it, see how it works. You'll notice pretty soon that it has some limitations.'

[09:03]

And like Justin was mentioning earlier, students get a little scared when they see that this technology is really not as perfect as maybe it was advertised, and they might just take—spontaneously—a step back from using it in the first place. But yeah, more generally, I think it's really good to face the thing directly and maybe come up with, like, some collaborative policies in a course—involve students in the decisions about how they decide to rely on this technology in their own education. I think it's a good opportunity to have conversations about, like, what it means to be in charge of your own education. (Babette agrees.) If you're a college student, you should be here because you want to learn and not just to, you know, get grades and a piece of paper at the end of it. You should, you know, value learning for the sake of it.

Babette Faehmel: [09:58]

That is definitely what it should be, absolutely. Alex, are you aware of conversations that students have or how they, like, approach this new tool? 

Alexandre Lumbala: [10:10]

Yeah, ChatGPT is very interesting. I also started incorporating it into some of the work I do in class—some revisions, and understanding problem sets kind of further—especially if I don't have a tutor available to me, you know what I mean? It does help (Babette agrees) in getting deeper explanations sometimes. But I was also hesitant about using ChatGPT and other AI models because I was told—or I heard—that they can spit out false information, and I was afraid that if I were to apply information I got specifically from ChatGPT to my homework, I would get it completely wrong and wouldn't even be aware. So, it's definitely interesting. And I actually have a question for—I think it's Professor George and Keion, maybe—because I believe both of you have taught computer science classes at a certain point, and anyone else who has taught a computer science class. How do you—because I'm taking a C++ class—how do you feel about students using ChatGPT to help them with coding and to help them understand better?

Keion Clinton (Guest): [11:11]

So, this is Keion. It's 100% cheating. (laughter) To the extent that I tried to break students' mental models just last—yesterday, I believe it was. For my Programming Fundamentals class—we're now using C# for Programming Fundamentals—we just started IF statements. So, since we have a lot of computer science slash programming-for-gaming students, I brought up AI in the form of video game AI. And students got to talking about how AI characters can be horrible, and that quickly changed the conversation to ChatGPT, and then I actually had a lot of students just—for lack of a better word—rat on themselves. It was like, 'oh yeah, I use ChatGPT. Blah, blah, blah.' Like, is it helpful for you, is it useful? And they ran into the same issues as with the video games: the AI is not as smart as it could be. And when you dive deeper into it and you realize that it's just a bunch of individual IF statements saying that if this condition is true, print out this data, I feel like it opened a lot of students' eyes and allowed them—going back to your statement—to stop using it as a crutch and to be able to think about ideas on their own.
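
As an aside for readers, the "bunch of individual IF statements" style of video game AI that Keion describes really is this simple. Below is a minimal sketch, in Python rather than the C# his class uses; the function, conditions, and thresholds are all invented for illustration.

```python
def enemy_action(player_distance, enemy_health):
    """Rule-based game 'AI': a chain of IF statements mapping
    conditions to actions, with no learning involved."""
    if enemy_health < 20:
        return "flee"      # badly hurt: run away
    if player_distance < 2:
        return "attack"    # player is in reach: attack
    if player_distance < 10:
        return "chase"     # player is visible: give chase
    return "patrol"        # otherwise: walk a preset route

print(enemy_action(player_distance=1, enemy_health=50))   # attack
print(enemy_action(player_distance=30, enemy_health=50))  # patrol
```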

Babette Faehmel: [12:32]

Yeah. So, I mean—I totally second you saying it's 100% cheating, because, I mean, I'm a historian, and you can put pretty much every kind of history exam question into ChatGPT and get perfectly fine answers. They just are, like, terribly boring to read—it's like reading Wikipedia for me—and for that reason they are fairly easy to spot. But I've also found that when I bring up the issue of, like, suspicion of cheating through AI and have a conversation with my students, unless they are, as I mentioned earlier, lying to my face, it seems that oftentimes they don't quite know what they are doing, because I've noticed that many of them actually don't know what ChatGPT refers to. Instead, they have these new apps on their phone that are ChatGPT-powered, but they are not particularly conscious of how they work and that it is a form of AI.

[13:35]

And so, I think some of the confusion is genuine. But it also really, really concerns me, because—I mean, this is a diversity, equity, inclusion, and social justice podcast—and I'm concerned that unless we as faculty really discuss what these tools are and what they do, our students are ill-served, because this is a new thing and it won't go away. And Alessandra, I notice you are very interested in the social justice aspects of AI, and, I mean—let's just get this out—I'm so fascinated by your job title of Philosopher of AI. I think that is so cool. So, what is your take on social justice and inclusion when it comes to AI?

Dr. Alessandra Buccella: [14:21]

So, great question. I'm not sure I have a completely worked-out answer to this, because as a philosopher I'm really trained to see so many different facets of a problem—and this is specifically a problem, or an issue (not even talking about a problem necessarily, because that itself has a negative connotation), with so many different aspects to it; there are great risks and great opportunities all at the same time, and they're all pretty entangled with each other. I'm thinking...having a tool that basically knows everything that's been written on the internet—up until a certain point, obviously, because it can't update in real time. This is another aspect that we don't talk about enough. ChatGPT spits out these answers so quickly and so smoothly that it almost seems like it's actually an agent that is thinking and delivering an answer in real time, but in fact it only knows what it was trained on, so it can't just learn in real time, despite, you know, some new updates that are coming out.

[15:37]

So, this tool has a lot of knowledge, a lot of potential to democratize this knowledge. It's easily accessible. So, there is something to be said about the fact that, you know, not everybody has access to a well-stocked library, or even has, you know, fast enough computers at home or an internet connection stable enough to use internet encyclopedias, etc. This is potentially a much quicker and easier way to access knowledge and information. So, I think there is something to be said about the potential to increase equality in access to knowledge. And—like Alex was mentioning—some people don't have tutors, some schools might be underserved, and some communities might be underserved in terms of, like, getting second-order help with their knowledge-gathering practices, and this is really a tool that can be helpful to a lot of people.

[16:49]

On the other hand, you know, it's still something that is made by companies whose main goal is to make a profit. And whenever profit is involved, there are social justice issues and potential risks—you know...who ends up benefiting from these types of technologies? (Babette agrees.) It's obviously a question...but people are talking about it, right? That's probably one of the biggest parts: like, the conversation is happening. These tools are now giving people the opportunity to face these issues...very directly and very quickly, because we need to act quickly given how fast these technologies are developing.

Babette Faehmel: [17:34]

Yeah, that scares me. In higher education, we are not the most agile, like, institutions that pivot—even though we did that during COVID. I mean, all of a sudden, we pivoted, like, very quickly. So, I hope, like, maybe we can use that model again. Rukhsana, you were—I think you were, like, nodding or shaking your head?

Dr. Rukhsana Ahmed: [18:02]

Yeah, I—. (crosstalk) Yes, I wanted to—I was itching. So, this is Rukhsana Ahmed. So, I'm going to touch on a few other points—if I may—that were brought up in previous discussions, if my memory serves me well. Yes, we are expecting a lot from students, but I think we also need to admit that we, as faculty members, don't know much. (Babette agrees.) So, it's the—. I always mess up this analogy. It's, like, is it the elephant in the room? (laughter)

Babette Faehmel: [18:32]

Yes.

Dr. Rukhsana Ahmed: [18:33]

So, we've got to really talk about it. So (clears throat)...excuse me. In my discipline—and in other sister disciplines—there's this, you know, idea of digital literacy. (pause) So, if you think about it from the point of view of digital literacy...yes, you know, this could be a really accessible platform to use. But what about folks who are part of our groups and who don't have that technical background? So, when, you know, digital technologies started to make their advances, we talked about, you know, the generations. So, the baby boomers, the gen—now I...I lost track.

Babette Faehmel: [19:32]

I always lose track of that.

Dr. Rukhsana Ahmed: [19:34]

Yeah, exactly. All the years and all that—Gen X, Y, Z and all that. So, we were considered—. Like, my age group would be, like, digital immigrants, because we didn't grow up with technology, as opposed to my daughter, who grew up with technology—so they're digital natives. So, what about AI? Would any one of you say that you are a digi—AI native? (crosstalk) I don't know. I don't think so. I didn't think so.

[20:03]

So, there is something to consider. So, we are all AI immigrants, which means we really need to take a hard look at ourselves (Babette agrees) and actually do sort of a knowledge check. Have these conversations, dialogues; there are disciplinary differences. So, I think I'm the only one in the room in a discipline that does not require AI, or ChatGPT, to be part of the curricula, the topics we teach. But it's not that we are not using it.

[20:41]

We are utilizing it in different forms, but I don't think we are having that conversation. So, at least I know that we are really eager to have that conversation. So, part of, like, democratizing it is that all parties need to be involved. You know, the learners—and I think as faculty members, we are also in the learner's seat with our students. So, as soon as we acknowledge that—what I would say in summary is that we need to think about the competencies, you know, the different competencies that are important. And if we want to build those competencies—those skills—that would also need to accommodate the technological advances. Because, like you said, you know, it's still learning, and when you said that I was like, 'oh, do you really want it to learn in real time?'

[21:42]

I can't—like, I can't fathom that. But it's still learning, so there will be developments. And then also research findings, because we need to really find out, you know, what's good about it and what's not so good about it. You know, because we are about evidence-based practices, right? And also changing social norms, because we are evolving with it. I know that I'm still learning with it. Yeah, so, I'll stop there for now.

Babette Faehmel: [22:09]

Oh, that I can only—I can only agree. Like, this is just—. First of all, I find it actually kind of scary how I sometimes catch myself trying to hoard knowledge because I will step into the classroom and ‘hey, I'm the teacher.’ And so, this really, I mean, in many ways it does—it takes away the need for a textbook. And, like, I mean, I have been using open educational resource textbooks for years now, so they are free, so I will keep using them. But like, basically, this is a Wikipedia on demand, right? But it's also the kind of like know-how. Like, I mean, we really, right now I think, need to massively invest in our own professional development at a time when nobody has the time. And I mean I don't know. I think I was expecting SUNY Albany to say, ‘oh, we have it all figured out, professional development, we offer this and that, and then we do this.’ So, I can bring that to my superiors and say, ‘see,’ but apparently, we're all in that early learning stage. And yeah, I mean, I think I totally agree, we need to open ourselves up to the possibility that we learn a lot from our students and with our students. Alex, have you had that experience that professors are using it in class a lot?

Alexandre Lumbala: [23:32]

No, I haven't, actually. And it also dawned on me that I was underusing it sometimes. There was one time, when I took my first English class, I had a classmate that would use—it's called Grammarly.

Babette Faehmel: [23:47]

Oh, yeah. 

Alexandre Lumbala: [23:50]

Now it's more popular. But she brought it up and she was like, you know, 'just throw it in there and it will help you find your mistakes and help you correct them and change words and whatnot for your papers,' and I was like, 'I don't feel like I'm going to be learning that way.' And I went through the class, struggled with it, because it was my first, like, American English class, and yeah, but—. I went into other classes where the cheating rules weren't, like, the same as the first English class, and I was like, 'oh, I could use Grammarly to change my word forms and sentences and all that,' and I actually used it. And I was so surprised at how good it was and how I was actually learning—my writing was improving. And, like, no one really talks about it in class necessarily, especially classes that don't use AI typically. So yeah, I would be happy to see the school actually take an initiative to kind of, like, address AI use in classes.

Babette Faehmel: [24:43]

Totally, totally. Like, yeah, absolutely. Like with students—like with students in the room. 

Alexandre Lumbala: [24:48]

Yes, exactly. With students in the room, explaining to them how they could use it, how they might not want to use it, and how they might also be able to use it to improve their knowledge within the class basically. 

Babette Faehmel: [24:58]

Absolutely.

Dr. Alessandra Buccella: [24:59]

And, if I may piggyback on that, actually, I think this is a perfect example of how you can use these tools in a, you know, in a truly pedagogically valuable way. Because it sort of brings up some features of learning that are not usually as emphasized, especially in college classes, where it seems like most classes just have the professor—the authority figure in front who lectures and delivers the content—and the students are supposed to passively absorb it and take notes and learn whatever the professor has said.

[25:38]

I mean, this was especially true growing up in Italy—college there very much had this structure and this format. So, having space to experiment with technology and tinker with it and have these back-and-forth exchanges with Grammarly—even just writing an email and getting this real-time feedback on how to improve your sentence structure or the tone of your email, these kinds of things—it really emphasizes how human learning is not just the absorption of content (Babette agrees) but is about making connections with this content and knowing how to do new things with it.

[26:17]

And I think these technologies are really an amazing opportunity to dig deeper into, you know, becoming more mindful and more aware of how we learn and how, you know...education in general is supposed to be essentially much more of a cooperative enterprise than it's often made out to be.

Babette Faehmel: [26:39]

I just really wish it hadn't come right after the COVID pivot. First of all, everybody's exhausted and, like...there are all these disruptions already, and now there's another one. But it also seems that we are dealing with the COVID learning loss, and now we actually have a lot of students, I think, for whom...well, the value of the content—like, in our community college classes—is not clear to them. Like, how does it apply to, like, my future and my career goals and whatnot, or my desire to have a meaningful life? It all seems to be, like, dead letters, and just—as you said, Alessandra—like the sage-on-the-stage kind of teaching. But also, I think our students don't always know what information is and what it should be, and how to then basically process information and use it to create something new, or, like, to look at it critically. And I know, Jacquie, this has been, like, your thing for quite a while: information literacy and, like, how to integrate and involve all faculty in information literacy...like, professional development and such things. Do you want to talk about that for a sec?

Jacquie Keleher (Guest): [28:08]

Sure. Thanks, Babette. Information literacy is incredibly important to librarians, but it became so much more important in the recent past with digital literacy, and we have had to expand our scope. But we need faculty to understand what that is, and our importance in that role, because we need our students to really understand it—there's so much information out there that they are consuming. I have family members who remind me regularly—we're in a book club together—of how much the younger generation consumes information and the things that they know, because it's always there. And the question is: are they consuming it in a way that they're getting the correct information, and how do we teach them what 'correct information'—I'm air quoting—is, or whether it's from a reputable source?

[29:01]

But I think a couple of things about AI and information literacy and digital literacy: Alexandre, you brought up a great point with Grammarly, right? We've been using AI for such a long time; we just didn't name it that, right? We didn't publicly name it that, and we've been asking our students to use it—or our students have found it on their own—in ways that went unexamined until ChatGPT really put it on the map. But there are some benefits; I mean, if you use ChatGPT—or Bard or whatever you're going to use—and you start your research, what it can help you do is form a really good research question, because you have to be able to ask ChatGPT the right question to get the information you're looking for. And I think if we talk about that, and we have some professional development with faculty—and then we can get into a classroom with the students—that would be important. You know, Babette, you are always a champion of librarians and information literacy, and I thank you for that. We need to do better.

Babette Faehmel: [30:03]

If we assume that 100% of our students will very soon be using ChatGPT, for instance—or Bing or whatever—how are they using it? Because what I see right now is that the students who I—let's just, in air quotes—'catch' using it, use it, pardon the frankness, the dumb way. They just put my questions into ChatGPT and then give me exactly what came out of it, instead of using it like a tutor, instead of using it like a tool, instead of using it to dig deeper. And this is a big concern of mine, because it drives me crazy when people are so uninterested in information—when it does not trigger any further thinking or questions. So, that's a big concern: that there will be a division between the smart users and the not-so-smart users. And then the other thing that really concerns me is—just what Alex was earlier referring to—students who use Grammarly and are not aware that Grammarly is now, I mean, like, AI-powered, or whatever the term is.

[31:10]

So, if a professor uses plagiarism detection tools like Turnitin, that thing will come up flagged. And then who do we suspect of cheating? It's maybe a student whose grammar was not perfect before and now, all of a sudden, it's pretty good, right? And who might they be? There can easily be a diversity, equity, and inclusion problem here, and so—once again—we need to make sure that, as professors, we have these conversations in a public forum with our students, but also that we—as Rukhsana mentioned—educate ourselves, right? (agreement)

Alexandre Lumbala: [31:56]

Alexandre here. Actually, Babette—on that note—I've heard about the platform, or the tool, that professors use to grade and to catch plagiarism issues: Turnitin. And I don't know a lot about it, and I don't think a lot of students know a lot about it, but I just remember one specific class where a professor did mention that a lot of people were caught on a homework assignment using Turnitin. But my question basically is: what is Turnitin? Like, what is that? How—? Yeah...

Babette Faehmel: [32:26]

Yeah, okay. So, I assume—and correct me if I'm wrong—it's AI used to catch AI. 

Alexandre Lumbala: [32:33]

Okay.

Babette Faehmel: [32:34]

It basically—. I would assume that—. First of all, it used to be a plagiarism detector that basically can tell you which parts of your paper are not original because they already exist on the web, right? So, Justin is nodding; I'm relieved to see that. But now it also catches...Well, these AI detectors are problematic in their own right because they go by statistics, right? So, human language is kind of like...there's a degree of randomness, a degree of unpredictability, to how we talk, right? And when it's AI-written—where it's basically just based on a statistical prediction model, this is the next word that should now come after this word and this word and this word—there is less randomness to the writing. And so that's why it gets flagged: there's a probability of—I don't know—60% that this is human- or AI-created. Or, I mean, I've seen some results where it basically said there's a high likelihood that this was 100% AI-created, (Babette laughs) and that's how Turnitin now operates. Would you agree? Jacquie?
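
For readers, here is a toy sketch of the statistical idea Babette is describing. It is emphatically not how Turnitin's detector works (that system is proprietary); it only illustrates the principle that text whose every next word is highly predictable under a language model looks more machine-like. The miniature probability table and the function are invented for illustration.

```python
import math

# Invented next-word probabilities that a language model might assign.
model = {
    ("the", "cat"): 0.20, ("cat", "sat"): 0.30, ("sat", "on"): 0.60,
    ("on", "the"): 0.50, ("the", "mat"): 0.10,
}

def average_surprise(text, unseen=1e-4):
    """Average negative log-probability per word pair: the lower the
    score, the more predictable (machine-like) the text looks."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    scores = [-math.log(model.get(pair, unseen)) for pair in pairs]
    return sum(scores) / len(scores)

# The predictable sentence scores much lower than the scrambled one; a
# detector built on this logic would flag very low scores as likely AI.
print(average_surprise("the cat sat on the mat"))
print(average_surprise("mat the sat cat on zephyr"))
```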

Jacquie Keleher: [33:49]

I believe so. (Babette laughs.) But, I mean, correct me—again—if I'm wrong. Turnitin was something that students would actually run their paper through, and then Turnitin kept all of the student information, and then a professor would use it from a professor's point of view, and they would say, 'yes, the student cheated because they got their information from somewhere else.' Now they've also added an element of 'if you're using AI.' So, there's some ethics there, right? You're using Turnitin as a student, for whatever the reason is, but now they have your paper and they're using it against other people. So, you know, it goes to the larger ethics question of, you know, what data these artificial intelligence tools are keeping about you, and what they know about you, and what you're sharing. But—my apologies if I got distracted or off topic—that's what Turnitin does...so.

Dr. Justin Curry: [34:44]

Yeah, just—. One thing that seems to be popping up in a lot of these conversations is: like, how do we deal with education in the face of...basically cheating, plagiarism—things like this—and, like, the extent to which we are essentially weaponizing AI to do that. And I think this actually fits into a theme—Babette and Alessandra, I know you both have been pushing towards it—which is, like, what are the dangers in terms of, like, increasing inequality? Because the one thing which I keep thinking about, again and again, is COVID did push us towards online education. And one thing which is really, really hard to protect yourself from—especially, I think, whenever there are long written assignments or long coding assignments or what have you—is, if you're not having any, like, real-time, in-the-same-room opportunities with students, you're not really getting a real sense of what they understand or can do, if they can essentially always turn their homework in through ChatGPT.

[35:43]

And I think that's a problem because we are—I mean, I hate if I'm generalizing here—but, I mean, the people who tend to be really drawn to online education are the people who have, like, non-traditional sorts of learning arrangements...maybe they're working a job, they can't get to campus. And the only way I can see us really safeguarding ourselves from ChatGPT is by turning back to a lot of in-person education. Of course, it doesn't need to be 100%, but I do feel like there needs to be some sense of, like, co-creation of knowledge amongst humans. And whether that happens in a discussion section in a philosophy class—so you can get to see what people are actually thinking—or, you know, in the case of math, it's usually having an in-person, like, timed exam. As much as exams suck, that's kind of the only way we can really see: 'do you really know how to do this on your own?' Like, that's the conversation we should be having.

Babette Faehmel: [36:45]

Yeah, yeah, exactly. Because, I mean, there are all these, like, inherent dangers of that too. I mean, one thing that we were all reminded of again and again during COVID is that if you time your exams, then that leaves behind all these kids that have spotty Wi-Fi connections. (Dr. Justin Curry agrees.) Those kinds of things. So, it's almost like, are we always—almost always—trying to, I don't know, trick the technology and always be one step ahead? And it seems impossible. (laughs)

Dr. Justin Curry: [37:13]

Yeah. And I think the danger is, like—so I teach in the data science program—I'm really worried about students, like, getting ChatGPT to code for them up until they're ready to get a job. And then an employer says, like, 'here, do this task which has never been done before.' I hate to break it to you, but ChatGPT doesn't really know what to do in a situation that it's never been trained on. I mean, again, we could get a little more philosophical about originality as a byproduct of blending existing things. But there's a real danger that, like, people are going to come out, spend all this money on a master's degree, and then, like, not be able to do the thing they were supposed to be able to do. And we're gonna create a gap where the people who did get the in-person training, like, are going to be able to do the job and get the high-paying, you know, things, but...

Babette Faehmel: [38:01]

Exactly. Students who now, I don't know, in addition to their LSAT tutors have a tutor in prompt engineering—they will be, like, using the new tools so creatively. It's going to be astonishing. But, I mean, on the other hand, have we really prepared our students to, like, deal with unheard-of, completely new problems before AI? Or have we just basically expected them to, I don't know, reproduce knowledge that we put into their heads? Like, have we really used—how should we be using education to empower? And I don't know. So, I struggle with that. I am trying. I mean, I want to believe that I'm trying to do that, but am I? (laughs) So...?

Dr. Rukhsana Ahmed: [38:58]

I...

Dr. Justin Curry: [38:58]

Yeah—go ahead.

Dr. Rukhsana Ahmed: [38:58]

Yes, thank you. Sorry, Justin.

Dr. Justin Curry: [39:01]

No, no, no.

Dr. Rukhsana Ahmed: [39:02]

So, I wanted to actually respond to something Justin just brought up—but also just quickly going back to what you just said—because I was also actually thinking about this: I think we need to take not just a step back, but a few steps back—and I'll come to that. But in terms of education: I think as faculty members, oftentimes the onus...shouldn't just be on us. Because, you know—think back to the time when you wrote your teaching philosophy, very passionately probably, I'd like to think: the first time you were applying for your job, right? Not when you're changing jobs or applying for a promotion and whatnot...down the road, that passion sometimes would be shaped, reshaped, and I don't know where it will go, because of the constraints of what we work with—resource constraints and whatnot. So, I think, you know, there's this dialogue that needs to happen, where administration needs to be involved too. (Babette agrees.)

[40:07]

Because in a class—like, I'm, you know, very passionate about that kind of personal, one-on-one time with my students. But when I am in a class with a lot of students, where I know that I have another class and all that, and I have to manage everything—research—I guess, you know, I have to also adapt to that circumstance. (sharp exhale) And the assignments, sometimes, you know, they just end up being quizzes. So...and that, you know, feedback. But going back to what you were saying, here is what I have tried to do, because I've been teaching asynchronous online classes for some time...and thinking about ChatGPT. Because, you know, there was a discussion where a few of us were there, and one of my colleagues did say that they ran the test through ChatGPT and it actually answered everything perfectly.

[41:07]

So, okay, what do you do? So, I use focused comments. Focused comments are—. So, you know, even if they write code using ChatGPT, you can just do even a one-minute paper, where you actually ask students very pointed questions—just reflection questions. So, they're going to say, like: what was the one thing they were surprised about, what stood out to them, what they thought could be done better? So, where there is actually a personal reflection on what they have done—through that you can actually get a sense of their learning. And it's very personal. So, I don't know if ChatGPT is smart enough now to really...unless they are gathering all the data about us and creating a profile. (laughter) That's like...you know, what am I thinking about, or what could this person be thinking about? Now, going back to my third and last point...the three or four steps back.

[42:01]

I was just thinking about literacy. Because a lot of times, you know, I heard ourselves talking about 'oh, the right information.' Yes, you want to make sure that you are able to gauge source credibility. When, you know, the internet was coming up, and Wikipedia, we would, like, put in our students' assignments that Wikipedia cannot be a source, because of this reason and that reason. Or, you found a source and, you know, if it's ending with dot gov or dot org, then it's credible, and all that. So, with AI now, it's just like, you know, what do you do? So...we are asking a lot of us, about others and whatnot. And if you think about the US—so, health communication is my main research area. Health literacy—you'll be surprised. Like, my stats are a little dated, from almost a decade back, I think, but something like 90% of the people in the US didn't have the required level of health literacy, which would be to find health information, access it, and understand it, to make an informed health decision.

[43:11]

So, it goes back to literacy—basic literacy levels, because people are not reading above a basic level: like 3rd grade, 4th grade, 5th grade, 7th grade, 8th grade. That's the general public you're talking about. How do you expect us to be so far—? And that kind of literacy would also be involved, because 90% of our health information we are getting from online. So now you have to have that digital literacy. And it's also numerous skills: science literacy, computational literacy, and all that. So, with AI, like, what are we expecting of us? We are not even there; we are really lagging behind. And now it's like this race we have with AI. Sorry, I got so excited. (laughter)

Babette Faehmel: [43:56]

No, absolutely, I hear you. I mean, in history it's a different kind of literacy problem, I think. It's the search for the real truth. Like—it's, like, my students oftentimes, when they confront new information, they throw out all the old information because they realize, 'oh, this was wrong, so this is now right.' But that's not the point. The point is to keep searching for the best information, to look at it from different perspectives, and to realize—to understand—that there is not just one perspective that is the true one. And where does knowledge come from? How do we know what we know? It's kind of, like, this epistemology issue, and in many ways, I think we are now faced with the consequences of an educational philosophy—or education system—that has really not empowered people for decades. Like, it's educating a workforce, but not critical thinkers. And now this hits us.

Dr. Justin Curry: [45:00]

So, yeah—. Again, the conversation we seem to still be having is like: how do we encourage students to create? And I think you just said the magic word, which is: we really need to focus on thinking skills. And in my mind, like, the way to kind of incorporate what Rukhsana just said to get students to, like, create their own answers to things, is we need to give them both tools for thinking, right? Logical principles for how do you go from, like, point A to point B, but then, secondly, also be very clear about, like, the landmarks they're supposed to be kind of navigating between. So, like, all right, we introduced this concept from history. Like, how did this, like, impact these other things? 

[45:45]

And I think the one way not to, like, make all this conversation about cheating, but to really...Because, like, ChatGPT is going to pull in the world's information, whereas, like, what we need to be encouraging our students to do is, like, given this limited knowledge which we covered in class, like, how do you, like, think about this problem? And I think in general—I had the same thought when I was going to college: like, 'oh, Google's a new thing. Like, if you have instantaneous access to knowledge, like, why does memorization matter? Like, you can just look it up, right?' You want to be able to, like, think logically, and we really need to focus on that. Like, and I think as a university, we need to, like, ask ourselves, like, what are, like, the principles for thinking that we want to, you know, give to students? And that's going to be different for every discipline (Babette agrees) and for history, maybe it's Marxism, I don't know. (laughter) Like, no—but go on...

Dr. Alessandra Buccella: [46:36]

Yeah, and to add to that briefly, Justin: I would also add that these technologies are...in combination with the increase in online education and asynchronous options—which is all, you know...diversity is good in all sorts of ways, and diversity of methods for learning is also very important. We might arrive at a situation in which—I mean, I was already reading statistics where people are starting to question the very importance of going to college and whether this is really something that makes a difference in your long-term goals and what you're going to end up doing with your life and the opportunities that will be presented to you. So, diversification of options is very good, but it's only good if it doesn't imply or lead to fragmentation. Because another big risk of these technologies—and not just ChatGPT; I'm thinking more broadly about also the way social media have been evolving in the past several years—is that one of the reasons why the new generations might be having a harder time reading is because there's TikTok: basically just short videos, audiovisual stimulation that doesn't really give you time to reflect and incorporate that content into something else that you already know. So, it doesn't encourage a critical approach. And so, there is a risk that people will be more and more isolated with their own content, each one with their own content bubbles. (Babette agrees.) And that really should push us to interrogate ourselves about, yeah, the creative process of coming up with knowledge, and knowledge as a network-building exercise: connections among pieces of information, not just a sum of information where the bigger your bucket, the more things you can put in and the more you know. It's really more about seeing connections, and these connections can only be brought up through collaboration, through humans working together and building some common ground that cannot just be found at this point. Back in the day, there was only one source of knowledge, or two sources of knowledge: everybody would read the same books and watch the same movies. Now, there's so much personalization going on that it really has to be a conscious effort—and a willingness—to construct this common ground that is not just given to us anymore.

Babette Faehmel: [49:26]

I totally agree—also with the importance of competencies and really conveying to students: what is thinking? How does it show up in our daily life, and how is it important? Because, really, I don't remember ever having had a conversation—. We have these Faculty Institute Weeks at the beginning of every semester. There was never a Faculty Institute Week that was about the ethics of information, or thinking and critical thinking across the curriculum, and maybe it's time. (laughs)

Jacquie Keleher: [50:09]

It is time. May I ask a question, Babette? How important do you all think it is for us to teach students about using artificial intelligence, though? Like, you've developed a course at UAlbany, right, for every freshman?

Dr. George Berg:

Right.

Jacquie Keleher: [50:25]

So, I mean you must—UAlbany must think it's incredibly important?

Dr. George Berg: [50:27]

Well, I can't speak for the whole university: I'd get in a lot of trouble. This is George. My take on this is—and I think Justin alluded to this—at some point, students are going to walk outside of our halls, and the places they're thinking of working are going to want them to—are going to assume that the students have these skills...because these are fantastic business tools, for better or for worse. And one of our responsibilities—again, subjectively—is that the students know how to use these tools intelligently, and how to catch the tools when they get things wrong. And there's actually a critical thinking aspect in there: if we can convey to the students the fundamentals of their disciplines of knowledge, once they've got that, these tools are actually great for critical thinking. 'What did ChatGPT miss? What did it get wrong? How can I make it better?' It essentially becomes kind of a raw material that they can manipulate. So, everyone needs to—. You know, we talked about a lot of different kinds of literacies, and AI literacy is going to be out there. Because a lot of folks at the places they're going to find themselves interning at, wanting to get jobs at, are going to ask them; they're going to assume that they know this. So, I think every student coming out of higher ed needs to at least know what it is and be a little bit facile with it.

Jacquie Keleher: [51:47]

Thank you. That was really a selfish question, because Babette and I just had this conversation the other day. I was in a webinar, and the person said, 'AI is not going to replace your job—who's going to replace you is somebody who uses AI, if you don't,' right? And we were talking about how we need to make a concerted effort on this campus to expose our students to those things. So, it's great to know that you all are doing it, to help us...here.

Babette Faehmel: [52:16]

I think it would be an ethical problem if we don't. Because we are an institution for education, and especially one that is open access and that is supposed to really democratize, like, access to higher education. Our students just need to be, like, informed and trained and educated about these new tools. Because, I mean, I only have to open, like, an issue of WIRED and look at an article about AI, and I see the same white faces looking back at me, right? It's just not a diversified, like, group of people. It's male. It's predominantly white. And I mean, there are plenty of articles out there about the biases that are encoded in code, or in the algorithm, and that's not new, right? So, definitely, it would be so neglectful, absolutely. But, well, we are a community college. Our professors have a very high teaching load. And—well, enrollment is down across the board in higher education, but still, like, there's not a lot of time. So, once again, you, as the AI experts: what would you recommend? Where would you say we should put our emphasis and our focus first when it comes to professional development and open forums with students and such things? (pause) Anybody? (laughter)

Dr. George Berg: [53:52]

Two things come to mind when you say that. I think there are two things. One is, in general, if you're a student, definitely engage with your teacher. It's where the rubber meets the road. It's the difference between Alexandre's learning from Grammarly and your not learning from your graphing calculator. It's that engagement. Engagement is the magic word. So, a lot of it—and this doesn't have much to do with AI; it was true in October of 2022—is: engage with your professors. And Rukhsana made the point that it's the one-on-one, the small-group time. Those are critical, no matter what your environment is. So, I think that's a big part of the answer. Now, in state institutions, are we resourced to be able to readily do that? That's an entirely different question. And the second thing, with ChatGPT and all these large language models, is a little more problematic, because, as you're pointing out, there are 24 hours in a day and we're expected to have 27 hours' worth of things to do. We've just got to find a way to prioritize it.

Babette Faehmel: [54:54]

Yeah. Well, I mean, there's also a lot of discussion about using AI to get rid of routine tasks such as assessment—which has never been my favorite anyhow. So, yeah—but, once again, it takes skills to do that...to learn how to use the tool smartly. And it's been a while since this thing came out, and by now I feel like my—our students are...But, I mean, that's the thing that I keep thinking about. My students right now are learning about all of this on TikTok or, somehow, from one another. It always reminds me of, like, reading about what sex ed is like when sex ed is not being taught in class: students teach each other, and it's never a good idea. (laughs) So, here we have it again, right? There's a disconnect here. And I hope that we will close that soon through professional development and involving students in the discussion about what to do next and how to use these tools.

Dr. Rukhsana Ahmed: [56:03]

Babette, may I offer...an icebreaker for those kind of discussions?

Babette Faehmel: [56:08]

Please.

Dr. Rukhsana Ahmed: [56:09]

Okay, unsolicited. So...Carl Rogers was a humanist psychologist known for his person-centered approach to care. So, the idea is, you know, you give a lot of—and one example would be the parent-child kind of relationship. You know, we want our kids—we want our students—to do really well, and oftentimes—going back to the idea of assessment—we may not be very open to giving, like, positive regard. So, the idea is: give a lot of positive regard. We teach our students also to be good evaluators, to provide peer feedback. Don't say, 'oh, your paper sucked,' and then not give any more constructive, you know, like, feedback, right? So, thinking about those ideas—let's say, in an ideal world, I would like to think we're having this discussion: we find ourselves, you know, having an open forum about AI, where we have the administration, we have faculty, we have students, we have staff and everyone. And, you know, we are all concerned about, I think, if I may say, critical thinking—the idea of critical thinking. And a lot of times we say, 'do I actually engage in critical thinking, or does, you know, like, he or she or they?' And you can say, like, 'yeah, we all engage in critical thinking on a daily basis.'

Babette Faehmel: [57:55]

I mean, everybody has this urge to produce something or create something; I mean, like, isn't that kind of human? Like, whether you paint, like, a buffalo on a cave wall, or you are, like, creating something through code, right? And if we can do that more, maybe, really—. I mean, so, every crisis could be an opportunity, just like COVID was a crisis that became—could have become—a bigger opportunity. But, like, here, once again, take three steps back from what we are doing. Because, like, I—. A student said this to me the other day—a student who I really, really, like, love having in my class as a very, like, productive contributor to everything—and he basically said, like, you know, 'it's week five, and by now we all, like, realize that the class we felt so excited about is a lot of routine stuff and it can be boring,' and he is so absolutely right. And, I mean, why are we doing that? (laughs) Like, why do we have to make it so...I don't know, unplayful? And maybe this is an opportunity for us—but we all went through grad school, where they took that out of us. (laughs)

Dr. Justin Curry: [59:20]

Absolutely. Well, I think something that both of you are touching on is that ChatGPT—or these large language models—they're not embodied in the world. Us humans, we exist in the world, and we each have unique experiences. And something that Rukhsana touched on really nicely was this idea of getting people to do self-reflection, because that's the only place where novelty is really going to be brought in here. So I think, in general, we're going to be pivoting our education. If you look historically, education was determined by questions like: can you recite lemma three of Euclid's Elements? And nobody cares, right? Instead, you want to ask: how does math show up in my day-to-day life? You want to be getting at these sorts of questions, and you also want to be pivoting your assessment to be, I think, more individually based.

[01:00:07]

But to your point: how do we do that when we're resource-strained as instructors? My hope is that we do less of the boring stuff when it comes to assessment and put more time into fewer, more qualitative assessments of people's understanding. So, I hate to say it, but oral exams probably should be coming back. As many biases as that format has, it's one of the few times where I get to really ask follow-up questions to see what people are understanding. And just as one last point: I want everyone to know that ChatGPT doesn't actually understand anything. It's a stochastic parrot that knows how to associate things. So getting people to really reflect on how they make meaning out of their lives and their experiences, and how that connects to the things they're learning: that's going to be, I think, more of a focal point. And we are going to have to redesign our curricula.
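(Editor's note: Justin's "stochastic parrot" point, association without understanding, can be made concrete with a toy bigram model. The sketch below is illustrative only, not how ChatGPT is built: the tiny corpus and the function name are made up, and a real LLM replaces these lookup tables with a neural network trained on vastly more text. But the core move is the same: predict the next word from what tends to follow, with no grasp of meaning.)

```python
import random
from collections import defaultdict

# A tiny, made-up corpus. Swapping in a different text changes the "voice"
# of the parrot without giving it any understanding of what it says.
corpus = (
    "the model predicts the next word the model has seen "
    "the next word depends on the previous word"
).split()

# Record which words have followed each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def parrot(start, length=8):
    """Generate text by repeatedly sampling a word that followed the current one."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(parrot("the"))  # e.g. "the previous word the model has seen the next"
```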

Babette Faehmel: [01:01:01]

We do.

Dr. Justin Curry: [01:01:03]

Which—yeah, is more work. (Babette agrees.)

Dr. Alessandra Buccella: [01:01:06]

Which brings back, very nicely, the issue of diversity, equity, and inclusion: because it's going to force us to face the way in which higher education very often took the individuality and the lived experience of the humans involved in it out of the equation, when that is the bread and butter of what education should be. You should be learning to be yourself and to carry yourself into the world. That often got pushed under the rug. But I'm actually welcoming the challenge from AI in this sense, because...they've tried to make ChatGPT come up with, for example, personal essays for college admissions. Things that...

Babette Faehmel: [01:01:58]

Oh, they’re horrible.

Dr. Alessandra Buccella: [01:02:00]

...they're trying to make it impersonate different categories of people, and what comes out is really just very boring and, you know, uninteresting stereotypes—most of the time. So the richness of human experience is really not being lost. If anything, these tools are forcing us to bring it back and value it even more.

Babette Faehmel: [01:02:24]

I mean, I totally agree that right now, this is what you get, right? The writing is bland: it's not rich, it's not human, it's a machine. However, I don't know where we'll be one year from now. Now they can imitate characters, right? I don't know how that works, Keion, you'll have to explain it to me one day. But there's a change every week. So—and I know we're past the one-hour slot that we usually get in here, but as a few last points: where do you see this going in the next two years? And then—let's say, in a completely hypothetical scenario, you have some faculty who just don't think any of this applies to their professional area. What would you tell them? How can anybody afford to sit this one out?

Dr. George Berg: [01:03:34]

George here. I think we already have those faculty. It's not just ChatGPT. You talked about the sage-on-the-stage, old-school pedagogy that just doesn't engage or interest students, that has no relevance for them. I think those folks are out there, and I think they're just going to be further and further removed from serving their students well...and I think that's an administrator's problem. (laughs)

Babette Faehmel: [01:04:01]

Okay, thank god. 

Dr. George Berg: [01:04:03]

Since I'm not a chair anymore. (laughter)

Dr. Alessandra Buccella: [01:04:05]

Computer science people, where do you see these technologies going a few years from now? That is really an interesting question that, honestly, I have no idea about. As a philosopher, I really don't know.

Babette Faehmel: [01:04:18]

(laughs) Me neither.

Keion Clinton: [01:04:19]

Well, I've always been the neighborhood pessimist, so I'll probably never have anything nice to say about anything. Wikipedia was being used to cheat in a lot of English classes even before ChatGPT, and—I feel like both of you can attest to this—a lot of my homework was submitted from Dream In Code. So it comes down to being able to pick up on the students' own patterns; I got used to being able to recognize how my students code. And again, that goes back to the one-on-one: if you actually know your students, it's easier for you to pick up on the cheating. Do I think it's going to stop? No, not at all. (Babette agrees.) For my cybersecurity classes, I always use the statement: 'true, you're going to learn cybersecurity in this class. Are we going to stop all cybercrime? No. There have been police officers since the beginning of time: there's still crime.' So...we're literally fighting a non-winning battle, for lack of a better word. Such is life. (laughter)

Dr. Justin Curry: [01:05:43]

I mean, my feeling is that this is a really exciting time to be alive, right? And it's exciting for everyone—for computer scientists, because now we're more like mechanics, right? Someone has just built the first car, and we're like, 'okay, well, I guess that problem's solved.' No, no: you need to know how to work on it; you need to know how to improve it; you need to know how to add the turbo booster. For large language models it's going to be: how do you adjust the temperature on these settings so you get more predictable outcomes? How do you connect them to these sorts of multimodal aspects? If that's something you really find interesting, then now's a great time to study data science, computer science, math. But I think—also to Alessandra's point—the liberal arts... And I'm hoping some administrators and the people who are in charge of budgets are listening to this: there has never been a more important time for us to reinvest in the humanities, because we really, really need to teach students what the principles, the ideas, of being human are, and how do you, like...

[01:06:49]

Yeah, how do we reclaim our humanity, instead of just having computers parrot whatever was written over the past 2,000 years, right? And to that end, you asked what kinds of new technologies we might be seeing in the next couple of years. I'm actually hoping that we're going to get more of what I'm going to call small language models: basically, stripped-back LLMs that are then trained on specific corpora of text, so that you get, you know, chat Aristotle, or chat Plato, or maybe chat Sartre. (Babette laughs.) And then you can see what they would say in response to these questions, and sort of have this idea of almost a personality, trained very specifically on their writing. And, of course, a spooky idea which I really like: if you think about our loved ones—getting them to talk and describe their life experiences, and having all of that recorded into, essentially, a bot. Then you can basically have virtual grandpa after grandpa has died. It's going to be interesting to see these kinds of technologies and, of course, how they affect us. Eventually there are going to be little models trained on everything I've ever written, and then we could see what a virtual Justin would say in these kinds of settings. So, again, it's an exciting time to be alive, but you're going to need to educate yourself to participate in this conversation. Those are my thoughts.
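(Editor's note: a minimal sketch of what "adjusting the temperature" means, using only plain Python and NumPy. The logits below are made-up scores for three hypothetical tokens; in a real LLM they come from the network itself. Temperature rescales those scores before sampling: low values make the most likely token dominate, giving Justin's "more predictable outcomes," while high values flatten the distribution.)

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=1.0):
    """Sample a token index after rescaling logits by temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                         # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax over rescaled scores
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
print(sample_next_token(logits, temperature=0.2))  # almost always token 0
print(sample_next_token(logits, temperature=2.0))  # noticeably more varied
```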
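(Editor's note: one common recipe for the "small language model" idea Justin describes, though not necessarily the one he has in mind, is to fine-tune a compact pretrained model on a single author's corpus using the Hugging Face transformers library. The file "aristotle.txt", the output directory, and the epoch count below are hypothetical placeholders.)

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("distilgpt2")
tok.pad_token = tok.eos_token                  # GPT-2 models have no pad token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Load one author's writings as the entire training corpus.
raw = load_dataset("text", data_files={"train": "aristotle.txt"})
train = raw["train"].map(
    lambda batch: tok(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chat-aristotle", num_train_epochs=3),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()  # the result: a toy "chat Aristotle"
```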

Babette Faehmel: [01:08:19]

It's exciting and exhausting. (laughter)

Keion Clinton: [01:08:23]

The one thing that I feel like most people always tend to forget: successful AI is literally Skynet. If we keep pushing for AI to get better and better, that's when AI realizes that humans are worthless, and it's Age of Ultron, Skynet, I, Robot. There have been so many warning signs saying that AI is probably not the best idea for humanity.

Babette Faehmel: [01:08:51]

Well, somebody ignored that. (laughter)

Alexandre Lumbala: [01:08:55]

Alexandre here. On your idea about virtual versions of people who have passed away: there's a Netflix series called Black Mirror. I watched it a while ago, and it's about exactly that. Most of the episodes are centered on technology and AI use and how it can affect everything from, like, one person's life to a whole society's, and it's kind of interesting. Also, as a student, we wonder: all the things that we're studying right now, all the memorization techniques that we might try to incorporate—are we just going to end up in a job where we have to know how to use an AI model, get all the answers from it, and then try to solve problems? Or do we still have to value the acquisition of knowledge? Or should we go for the critical-thinking kind of method? It's kind of scary to see where it will go. But hopefully all this thinking about AI—and how it's going to realize that humans are not worth it—will make us take a step back from AI and value human skills more. So...

Dr. Alessandra Buccella: [01:10:03]

And if I may suggest a topic for the next AI podcast—because talking about this now would really require a whole different episode—we've been focusing on the intellectual consequences of AI. What about the emotional ones? What about the moral ones? Having a virtual grandpa that will live forever in our computer is going to challenge a lot of our affective responses to other humans, and that is a whole new set of challenges. Emotional education, not just critical thinking but emotional intelligence, is going to become more and more important, along with more critical awareness of where our ethical and moral principles, rules, and heuristics come from, and why they are the way they are. That will have to become part of this new information-literacy model going forward.

Babette Faehmel: [01:11:10]

So, I think I just heard you say that you would totally come back (laughter) for an episode or an open forum with students. (pause) Now is where you say ‘sure!’ (laughter)

Dr. Alessandra Buccella: [01:11:20]

I'm a big advocate for higher education institutions involving students in these conversations. (Babette agrees.) So, absolutely.

Babette Faehmel: [01:11:30]

Absolutely. Okay. So, who wants to have the last word? (pause) It's all been said? Alex? 

Alexandre Lumbala: [01:11:40]

Do you want me to take the last word?

Babette Faehmel: [01:11:42]

Yeah.

Alexandre Lumbala: [01:11:44]

This is also my first podcast episode. I was very excited to be a part of this. It was nice to see a group of professionals actually engage in conversation about things that students and younger people are interacting with every day. I hope we do a follow-up episode, because there are a lot of things within this one topic that could still be touched on. But yes, that is my final word.

Babette Faehmel: [01:12:10]

Perfect, perfect. Absolutely. Well, thank you so much everyone: Justin, Alessandra, George, Rukhsana, Jackie, and Keion. This was awesome and very educational and challenging, and this is exactly what we want our podcasts to be like. Thank you, thank you.

Dr. Rukhsana Ahmed: [01:12:31]

Thank you very much. 

Alexandre Lumbala: [01:12:35]

Many Voices, One Call is made possible thanks to the contributions of the SUNY Schenectady Foundation. We're especially grateful for the School of Music's—and in particular Sten Isachsen's—continuing generous support with the technical details. The recording and editing of the podcast were possible thanks to music students Luke Bremer, Jacob DeVoe, Jean-Vierre Williams-Burpee, Rowan Breen, and Evan Curcio.

Babette Faehmel: [01:12:59]

Heather Meaney, Karen Tanski, and Jessica McHugh Green deserve credit for promoting the podcast. Thanks also go to Vice President of Academic Affairs Mark Meachem; College President Steady Moono; the Student Government Association; and the Student Activities Advisor.