Check in with Your Pedagogical Values When It Comes to AI

Hey professors,

What are your values when it comes to AI? And following up on that question, what are your pedagogical values when it comes to AI usage in the classroom?

Since ChatGPT was first introduced in late 2022, advancements in AI technology have revved up considerably. It’s no longer possible to ignore the fact that generative AI is fundamentally changing the ways students are learning and engaging with your courses.

And perhaps you have noticed that AI has already been integrated into many of the platforms that your university uses. Even if your university didn’t explicitly sign new contracts with vendors, AI assistants and chatbots have crept into the learning management systems and other platforms that you use on a regular basis.

And these tools come by way of updates that you likely never knew about or had the opportunity to opt out of.

If you’re confused about how to navigate this new reality, it’s completely understandable! It’s also reasonable if you are feeling conflicted about whether you are even Team AI or not.

So, in this 45th episode of the Rise with Clarity Podcast, I wanted to share with you 10 reflective questions that can help you to better assess what your pedagogical values are when it comes to AI. And I also want to point you in the direction of some books, podcasts, and articles that can help you to clarify where you stand in terms of AI in general. 

My Reflections on Takeaways and Learning Objectives

I have to admit that I am pretty relieved that I am not in the classroom right now. Frankly, I’m not sure that I would be able to manage this moment very gracefully. As professors in the US academy right now, you are all dealing with some incredibly challenging issues, in so many different respects.

Recently, I was thinking about a course that I used to teach on a regular basis at my first job—an undergraduate ethnomusicology survey course called Musics of the World. This was mostly a course that non-music majors took in order to fulfill their diversity or world cultures requirement. By the way, it’s unfortunately a very distressing sign of the times that this kind of requirement is being eliminated at some US universities.

Anyway, in that course, one of the assignments was for students to attend a live music performance—ideally in connection with one of the units covered in the course. For many of the students, this would likely be the first time attending a concert of non-Western music.

I would coordinate with some of the affiliated performance faculty in our department, as well as the Arts Center on campus, to highlight certain concerts and even arrange for discounted or free tickets for the students. And oftentimes I would be able to integrate performances by guest musicians directly into the course.

After attending a performance of their own choosing, students would have to submit a concert report (along with a photo of their ticket or the program) and reflect on their observations.

In my mind, the learning objectives of this assignment were to: 1) have students engage in a performance event that they might not have been exposed to, and that might have been a little out of their comfort zone. And 2) use the tools that were introduced in the course to analyze the concert in terms of the musical sounds they heard, the setting of the event, and to do some light research on the cultural significance of the music.

How would this kind of assignment fare today? If students wanted to use AI to fast-track this assignment, it would be very easy.

A student could ask generative AI to create a concert report that would hit all of the marks for an A grade. If a musical group has a regular program that can be found online, along with program notes—it would be all the easier for ChatGPT to create a concert report with a lot of specific details.

So knowing this, would an iron-clad rule blocking the use of AI work in this case? That might have worked 2 years ago, but it would probably be pretty hard to enforce today. I know that some instructors have integrated in-class writing activities to get around AI usage in homework assignments. But this kind of activity would be quite difficult to implement in a large course of over 100 students, and especially with the particular concert report assignment that I just described.

At the end of the day, my main pedagogical intention for this exercise was to have the students engage with a musical culture that is likely different from their own. And to go in with a sense of curiosity and respect for what they were about to hear and observe. So, if I were to teach this class today, I’d make sure to articulate this clearly to the students. And to be honest, I’m not sure right now how I would modify this assignment in light of the ubiquity of AI.

I wanted to share this example with you because I sense that many of you are faced with similar dilemmas, where the usage of AI by students is really challenging the kinds of assignments and assessments that you have spent years designing.

Besides all of this, let’s also acknowledge that there are some serious ethical dimensions to consider as well. Like the environmental destruction that massive data centers create. And big tech’s exploitative practices in places like Kenya, where extremely low-paid data workers are made to engage with disturbing content in order to train AI. Moreover, there is the training of AI on copyrighted books—maybe books that you yourselves have published—without your consent.

There are plenty more issues that we could talk about today—like surveillance by big tech and the government as well as what scholar Ruha Benjamin has called the New Jim Code—where racist habits and logics are built into technological systems. But for now, I’m just going to go ahead and put links in the written transcript to relevant articles and books that you may want to check out.

On the flip side, there is a lot of pressure being put on you to uncritically adopt this new technology by big tech, some of your own institutions, and by AI evangelists. You may start to feel a sense of FOMO (fear of missing out) if you don’t acquiesce right now.

For one, there’s the argument that professors need to be engaging with AI literacy because today’s students are already using it and need to develop critical engagement skills for their own futures. And then there’s another kind of argument that says that certain aspects of AI can have a democratizing effect by lowering barriers to access—especially for students from marginalized backgrounds.

I think it’s worth it to pause for a moment and to get some clarity on what your own values are in relation to AI. And then branching out from there, it can be helpful to check in with your pedagogical values as they relate to AI usage in the classroom.

Check in with Your Pedagogical Values When It Comes to AI: 10 Reflective Questions

Here then, are 10 reflective questions for you.

  1. How do your core values align (or not align) when it comes to AI?
  2. What are your core pedagogical values that remain consistent over time?
  3. What are your concerns when it comes to the usage of AI?
  4. What are the possibilities that you see when it comes to the usage of AI in your teaching?
  5. When it comes to AI, what are you open to? 
  6. When it comes to AI, what is non-negotiable for you?
  7. In your course on _____, what do you want the ultimate takeaway for your students to be?
  8. Related to the previous question, how does the usage of AI by students challenge that takeaway?
  9. Related to question 7, how does the usage of AI by students enhance this takeaway?
  10. If you were to revisit your teaching philosophy statement right now, what would stay the same and what would you revise?

I hope that some of these questions can be helpful for you as you think through how you’re going to navigate this tricky time in the classroom. I have a feeling that I’ll be returning to this topic in a future podcast episode, so feel free to reach out to me if you have thoughts, insights, or suggestions. I’d love to hear from you. You can e-mail me at Katherine at RisewithClarity.com.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Additional Resources 

(Note: I will try and update this list occasionally!)

Books

Bender, Emily M. and Alex Hanna. 2025. The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. (New York: HarperCollins).

Benjamin, Ruha. 2019. Race After Technology. (Cambridge, UK and Medford, MA: Polity Press).

Bowen, José Antonio and C. Edward Watson. 2024. Teaching with AI: A Practical Guide to a New Era of Human Learning. (Baltimore: Johns Hopkins University Press).

Hao, Karen. 2025. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. (New York: Penguin).

Articles

“A.I. is on the Rise, and So is the Environmental Impact of the Data Centers that Drive It” by Amber X. Chen.

“How are Instructors Talking About AI in Their Syllabi?” by Sarah Huddleston

“How to Grapple with the AI Already on Your Campus” by Marc Watkins

“Search LibGen, the Pirated-Books Database that Meta Used to Train AI” by Alex Reisner

“The Professors are Using ChatGPT and Students Aren’t Happy About It” by Kashmir Hill

“The Student Brain on AI” by Beth McMurtrie

Reports

“Artificial Intelligence and Academic Professions” issued by AAUP

Podcasts

“A Different Way to Think about AI and Assessment” (featuring host Bonnie Stachowiak and Danny Liu) on the Teaching in Higher Ed Podcast

“The New Jim Code? Race and Discriminatory Design” (featuring host Rebecca Koenig and Ruha Benjamin)

“Organizing, Mobilizing…and AI” (featuring host Ethel Tungohan and Elisha Lim) on the Academic Aunties Podcast

Blogs

“Want to Engage Students and Strengthen Your Teaching in the Age of AI? Start with this Simple Strategy” by Brielle Harbin

Audiovisual

AlphaGo – The Movie (Google DeepMind)

“Dark Sides of Artificial Intelligence” (60 Minutes Episode)

“‘Empire of AI’: Karen Hao on How AI is Threatening Democracy & Creating a New Colonial World.” (featuring host Amy Goodman with Karen Hao on Democracy Now)

“How to Make AI Systems More Just” (featuring Hilary Pennington and Dr. Timnit Gebru)

“Surveillance, Technology, and AI” (featuring Stéphan-Éloïse Gras and Rachel Donadio with Meredith Whittaker)