Learning Technology Coach Podcast

S3E4. A Faculty’s Perspective: The Impact of AI on Teaching in Higher Education

November 07, 2023 Centre for Innovation in Teaching and Learning (CITL), Memorial University Season 3 Episode 4

Featuring Julie Pitcher-Giles - Assistant Professor, Business Administration, Grenfell Campus, Memorial University

 Dr. Pitcher-Giles has extensive teaching experience and focuses her research on strategic social responsibility. Other interests lie in the scholarship of in-person and online teaching and learning, university learning, and curriculum development.

In this episode, hear what an instructor says about artificial intelligence in higher education. Dr. Pitcher-Giles talks about how faculty members perceive the rise of AI, their concerns, and what benefits they see for the learning environment. She thinks that AI will be essential to personalizing education and that it can help make teaching materials more accessible and inclusive.

The Learning Technology Coach Podcast is a CITL production.

Speaker Key:
TO              Timilehin Oguntuyaki
VK              Verena Kalter
JP               Julie Pitcher-Giles
AN              Announcer

 
00:00:08
TO | Hello, everyone. My name is Timilehin.

VK | And I am Verena.

TO | And welcome to the Learning Technology Coach podcast.

VK | In series three, we delve into the world of artificial intelligence.

TO | Its role in post-secondary education.

VK | How it’s being implemented into the learning space.

TO | Plus a whole lot more.

Hello, everyone. Welcome back to another episode of the Learning Technology Coach podcast, brought to you by the Media Services production team of the Centre for Innovation in Teaching and Learning. With me here in the studio is, of course, my co-host, Verena. Hi, Verena. How are you today?

00:00:48
VK | Hi, Timilehin. Thank you. I’m so good. I am really excited for this episode today. I think I say that every time, but that’s okay. Timilehin, we’re both learning technology coaches and also teaching assistants. Have you ever thought about how artificial intelligence might impact your teaching?

TO | I’m fortunate to be teaching in laboratory sessions of science courses, which usually require students either to observe specimens in the lab or to go out into the field to conduct experimental studies. So, most of these activities require students to be observational and then to reflect on what they observed, and it depends on what I give them most of the time. It’s not just about going out there and getting abstract content. It’s about taking what they see and bringing it back to us.

So, I would say I’m pretty fortunate. So, most of these things do not really impact the learning process for most of my students, and by extension, it doesn’t impact my teaching. But I feel so sorry for many educators out there who might be panicking or concerned about how their students use AI to flesh out their discussions and bring content to life.

00:02:06
VK | Yes, I think you really have a point there, Timilehin. I think it’s an interesting time for many teachers, especially those whose courses focus on writing and critical reflection. We know that AI is rapidly evolving, and many teachers might be concerned about how it will change the teaching and learning environment. So, this is why in this episode today, we will talk with somebody who has extensive teaching experience and is willing to share some thoughts on this topic.

TO | Yes, we have this wonderful opportunity to interview Dr Julie Pitcher-Giles of the Memorial University’s Grenfell campus. This episode will be particularly interesting for all teachers and instructors out there, but surely students will also learn a thing or two, or even three, about how teachers might approach artificial intelligence in the classroom. So, whether you are a teacher or you are a student, get in, sit back, and enjoy this wonderful episode.

Welcome back to the studio. We are still on this highly significant and evolving topic, the use of generative artificial intelligence in higher education. Most of the time, we focus on the moderation of how students use these technologies, but it is important to know what professors and instructors think about it.

VK | You are correct, Timilehin. We want to hear from the professors themselves, and we would like to know things such as, do professors use these tools, as well? Do they have any concerns that their students are too dependent on AI? Or do they have to revamp their curricula to mitigate academic integrity issues?

00:03:58
TO | These and many other questions will be answered by our guest today. We have the pleasure to sit down with an accomplished academic all the way from Grenfell campus, Memorial University, Dr Julie Pitcher-Giles, who is willing to share her experience with us. Verena, would you please introduce our guest?

VK | It’s my pleasure, Timilehin. So, Julie Pitcher-Giles is an assistant professor in business administration at Memorial University’s Grenfell campus. She has received an MBA from Memorial University and a PhD from the University of Leicester and has more than 20 years of experience in university teaching. Her PhD research focused on rural small business and community engagement. 

The broad focus of her research is strategic social responsibility, and within this area she teaches in fields related to social responsibility and strategic planning, implementation, and management. Julie is also interested in the scholarship of in-person and online teaching and learning. She explores the university learning and curriculum development and likes to focus specifically on the online learning environment.

That is quite the profile. There’s a lot of topics in there that I don’t know what research would be about, but we’ll talk a lot about it today. Timilehin, would you like to get us started with the first question, please?

TO | Yes, and I have to comment that that’s an absolutely great profile, Julie. And I want you to talk to us about what you do at Grenfell campus as a professor, what you teach, and how you use all of these new evolving technologies.

00:05:35
JP | Sure. So, thank you very much for that stellar introduction. I feel fantastic. So, I am the chair of business administration, the business programme at Grenfell campus, and I do also teach in our programme. So, this semester, I’m teaching primarily in the areas of strategic management and organisational theory, both at the undergrad level and the graduate level. And the topic of AI is front and centre for me and for all of my colleagues on campus, not just in business, this semester. 

We got certainly an introduction to it last winter term, but this is the first full term that we’re really trying to start from day one with an eye to how we can potentially use it, what we have to be on the lookout for. And by use it, I mean, how do we use it ourselves as researchers? How do we use it in our classrooms? How do our students use it? And how do we want them to use it? 

And so, we have way more questions than we have answers, I think, is the appropriate starting point. A colleague of mine described it as being terrified and thrilled all at once at the potential of AI.

VK | Yes, I think that’s how everybody really feels about it. What are your personal feelings about AI in teaching?

JP | I would say terrified and thrilled is accurate, but maybe a little more interested or more thrilled than the terrified piece. I think my first reaction when I got wind of ChatGPT, which was the first kind of AI large language model that I was introduced to, my immediate thought was, oh my god, I’m not going to be able to catch any plagiarism anymore. Because when you think about what we spend a lot of our time doing beyond the teaching and the lecturing and the engaging with students, it's evaluating work.

00:07:40
And so, a big part of that in academic work is citation and referencing for your sources and your evidence, and that’s always been a challenging area. And of course, that ties to a very important topic of academic integrity which really grounds everyone’s work in the university. Students, staff, faculty, we’re all held to that standard to uphold a very high degree of academic integrity. 

And so, for a lot of us, when we got wind that there was a program out there that could not just generate content that was maybe better written than some of us can write for ourselves, but could also generate references and, if you asked, insert citations, my reaction was pure terror. I thought, that’s it, game over, we are done. There’s no way we can catch this. We can do what we can to encourage students not to use it, but it’s just too tempting. 

But in the winter semester, I guess I took some time to play around with it myself. Colleagues started to chat about it. I started to speak with my students about it. And I also found it in my courses in terms of submissions from students. But what I very quickly learned was that AI will generate plausible responses to questions, but it doesn’t mean they’re accurate. 

And so, the validity of anything that AI produces or generates for anyone, you can’t afford to put away that critical eye. You really need to look at that and reread it, and make sure that what’s actually been generated is accurate. And I also discovered, as did some students, that the references that are generated, in ChatGPT in this instance, they looked really good, they were perfectly formatted, they came from journal titles that exist from authors who’ve actually published in those journals, but those papers did not exist.

00:09:52
So, what we learn and what we actually spoke about in our class was that AI does a really great job of collating all the data points that exist. So, when we research, you might look through the first page of results that are generated. Students and researchers, we have such a limited capacity to process that information, but AI can comb through everything that’s available. And so, you get these huge outputs.

But what we learned was that AI just pulls things, like the most popular authors from a particular journal, for instance, and the most cited journal title, for argument’s sake. Or it might cite a textbook that’s 18 to 20 years old, and it might reference a concept that’s not even in that textbook. So, we can’t trust it, is, I guess, what the biggest takeaway for me was when I was first exploring. We can’t trust it, but boy is it a great way to get some research started and some conversation started. So, that’s a long way to answer that question, but I don’t know if that’s insightful.

TO | That’s profound. Thank you. And from your discoveries and your observations, especially from the winter semester now, you’ve learned a lot of things. But what kinds of conversation do you have with your students based on your discoveries?

JP | So, every year, of course, you’re going to have an introduction to your course with your new classes, whether that’s in person or online. And that was certainly a part of my opening discussions this semester. So, we always talk about academic integrity, and that relates to conducting ourselves to the highest ethical standards. And for a student, that might mean giving credit where credit is due when you’re using other people’s work or producing your own original thinking.

00:11:52
But AI has changed that game. And not just a little bit. It’s flipped it on its head. Because the question, and no one has the answer, is, if you use AI to help generate or develop your own thinking, is that still your own thinking? How do you give credit? Where do you end and the AI begin? It’s like a really bad Hollywood movie.

But the conversation was really about trying to understand what is appropriate use, and that’s been my kind of go-to statement. I don’t mind exploring AI. I encourage my students to explore AI, but appropriately. And I think appropriately is going to differ in every course, maybe for every assignment in every course, possibly for every lecture. 

So, there could be instances when, just like you would use Google or you would search to find an answer, maybe you can start using AI to search for answers in particular circumstances. But if you are expecting original work, how do you then use AI? Can you use it to proofread a paper? Can you use it to develop, generate an outline for a paper or some research question?

00:13:13
So, I think there are a lot more questions than answers, as I’ve said. But for me, it’s been this semester trying to understand, what’s appropriate use? And I think every student needs to take an active role in asking their instructors, what is appropriate use in your course for this assignment? Unfortunately, I think a lot of people are maybe still in the terror mode, because I know that I have colleagues who say, well, I just say it’s not permitted in any way, shape, or form. 

And I think in this environment, that’s pretty tricky, because I don’t know how we can say no entirely. AI is everywhere around us. It’s already ubiquitous everywhere we go, whether it’s our credit monitoring system that uses AI to look for unusual purchases. Some people have smart fridges now. I’m not entirely sure why, but there’s AI there. So, it’s everywhere around us. And to say don’t use it, I think, is probably just delaying the inevitable, and maybe forcing people to explore using it in ways that we don’t necessarily want.

VK | Yes, I completely agree with you. And I also think that you have to be realistic, and you can’t say no to the use of AI, because whether you like it or not, students are going to use it, that’s for sure. So, I think it’s awesome that you’re having a conversation with your students about the ethical and appropriate use of AI in the courses. 

That being said, do you think that you’ll take AI into your syllabus? Do you think you’ll use it in your courses themselves? Do you think it could be beneficial for the topics that you’re teaching?

00:15:03
JP | Yes. 100%, yes. So, I have already incorporated AI into some of my coursework. Even something as simple as case studies, which are one of the tools that we often use in business. We write a case study about a situation in an organisation, and then we provide that to students and have them problem solve their way through to a solution. I have already used AI to help me write some case studies. 

Because it’s not always about the fact that it had to be a real incident. Sometimes it’s just about pulling together the right parameters and putting together the right story so that it can be used then as a pedagogical tool. So, I find that very exciting, because it has made me more efficient in developing some testing or developing some class tools. So, that, to me, is very exciting.

Also, I do have statements in my course outlines explaining what AI is, explaining what a large language model is, and explaining that students need to understand the appropriate use in this course. So, for every deliverable that I have, I will state whether AI is encouraged or should be avoided. I try not to say it’s not permitted, because I really can’t manage that in any case.

So, I have used it in my own courses this semester. One of my colleagues and myself, we’ve spoken at length about how we can incorporate it. And one of the things that he’s actually trying, which is really cool, he’s requiring students to use AI for some of his assignments. So, for instance, if he assigns a particular framework or concept or model for a student, he will give them the prompt. So, using whatever AI program you’re using, have AI or have your LLM explain to you this concept or framework. And so, ChatGPT or Bing AI, it will generate a response.

00:17:19
The student’s job is to then go through and confirm whether or not that response is accurate by using more traditional, more established methods of verification, of research, of sourcing. And then, they have to write a reflection on the use of AI, so the experience of using that program and what their assessment is. How accurate is it? Is it spot on? Are there gaping holes?

So, in a way, are we teaching students to use this to… I hate using the word cheat, because I don’t really see it as cheating, but to get an edge or to have an unfair advantage, or maybe to misrepresent what their own work is. But I prefer to see it as a tool that we can use to maybe improve and strengthen research and critical thinking skills. 

We don’t have to start from the same base of knowledge that we have right now. AI may enable us to understand a lot more, and maybe dig more deeply in that. So, I think those types of assignments will be really interesting for students, and hopefully help them to become more critical thinkers, as well.

TO | Interesting. So, do you consider AI potentially beneficial to students? And if you consider it beneficial, do you think it could also be doing something really bad to students? So, from the two extreme ends, do you have opinions?

00:18:52
JP | I always have opinions. So, I think there’s huge potential there. I think about it, and even from an instructor’s perspective, one of my early thoughts was, okay, so, we have ChatGPT, so does that mean I’m out of a job? Does AI mean that we’re all replaceable? Does it mean we’re all going to become super lazy because we have machines to think for us across the board, not just in university, but also in industry? There are these concerns.

And for students, does that mean that students become less motivated to work? So, there are definitely those concerns. There are concerns that there will be students who will get an assignment in a course and simply copy and paste that assignment over to a large language model. That’s not learning, but it’s kind of a different form of cheating. 

So, that still happens now. That happened pre-AI in the classroom. So, I think it’s just shifted that type of behaviour. We’re still going to see that. But the vast majority of students aren’t engaging in that extreme behaviour. So, I think for those students, if we can start to explore and maybe strengthen the skills in how we better use AI, then we do become more creative. 

We become better at problem solving. Maybe we are much more able to comb through huge amounts of data and identify trends, and then see, how can we apply that, so we become stronger critical thinkers? There are concerns, of course, that we know there are biases that are inherent in AI. There are biases in any of our research. But I think we have to be very critical. 

00:20:45
So, there’s lots of advantages. We can personalise learning. So, I have a son who has Down syndrome, and he learns very differently than my other two sons, or than anyone else in our family. But to me, I’ve already looked at opportunities to use AI to think about, how do we teach skills or concepts differently for different learners?

So, there’s this huge potential to personalise our learning to a much greater degree than we’re able to now. One of the challenges in the school system, whether that’s secondary school or university, is that we have such a huge range of learners. And one example, or one illustration, doesn’t necessarily resonate for everyone in that room. 

So, with AI, are we able to very quickly and efficiently populate different scenarios or different examples that can be more meaningful, I guess, to different learners? So, I see that our ability to learn is going to be hugely impacted in a positive way, potentially, I hope. 

But I do also acknowledge that there are going to be more temptations to just simply pop over to ChatGPT or Bing AI and say, here’s my assignment due this afternoon. What do you have for me? That is also likely going to happen, too. I think we’re in a phase right now where we’re just trying to figure out what it is and what it does.

00:22:24
One of the professors who researches in this area from Wharton, Ethan Mollick is his name, and he recently wrote about… He’s always writing about AI and it’s fascinating. But he recently wrote about the fact that we don’t yet know the shape, or we don’t yet know exactly what AI is, but we have the shape of the shadow of the thing. So, I really like that, because I thought, we really know very little, but we can see this shadow which gives us a sense of just how big and interesting and fascinating and terrifying, maybe, that all of this is.

VK | Okay, wonderful. I think it’s a great example of how you could use AI to make education more accessible. And I’d like to ask one final question. If I were a young university teacher just starting out, do you have any advice for me on how I could talk with my students about the ethical use of AI? What are the key points that I would have to talk with them about?

JP | So, I think that core actually stays pretty consistent in the university. When it comes to academic integrity and the standard that we have to hold ourselves to, to develop our own original work and thoughts, and submit our own work, and engage in respectful ways with all our communities, all of those things still apply to AI. I think AI just gives us another tool where we have to make sure we keep that mindset front and centre.

So, I think a discussion on academic integrity should lead everything that we do, whether that’s in a numeracy course, or a quantitative reasoning course, or if it’s a research degree. Academic integrity really means that you have to show up, and you have to do the work, and you have to acknowledge the supports that you’ve gotten along the way. And I think it applies as well when you’re using AI as it does when you’re using our databases at our library or doing primary research.

00:24:41
So, for me, I don’t think that actually changes. That might be the one stable thing we have, is that we still have this standard, and that might help guide us in figuring out what is appropriate use of AI. So, that would be my two cents.

TO | Wow, that’s really good. It’s a shame we have to put this to one side now. We want to thank you again for your time, because if we kept going, you’d be here all day. You are really interesting to listen to. You have a lot of wisdom to share. Thank you very much for your time. Have a good day.

JP | Thanks so much for asking.

VK | Thank you, Julie.

JP | Take care, guys.

VK | Welcome back to the studio. That was so interesting and insightful. Timilehin, tell me what you think. Did you have a big learning moment today?

TO | Of course I did, and Julie had a lot to say. One thing that stood out for me is that, with AI, we can personalise learning. We can modify learning materials to make sure that every person, regardless of their learning requirements, can still have adequate access to information. That’s a very big one for me.

VK | Yes, I completely agree with you. That was super helpful. And I think that’s it for today’s episode. Until next time, everybody. Thank you for listening. I’m Verena.

TO | And I am Timilehin.

VK | Ciao.

AN | The Learning Technology Coach podcast is a CITL production.


Chapter Markers:

Episode Introduction
Guest Introduction
Academic Integrity and AI
Conversations with Students and the Use of AI
Incorporating AI into Coursework
Plagiarism and AI
Using AI to Personalize Learning
Including AI Tools as Part of Academic Integrity Standards
Summary - Big Learning Moment!