Two Responses to AI and An Innovative Approach to Teaching Ethics

By Cynthia Rutz, Director of Faculty Development, CITAL

In this blog I will share with you three more sessions from the recent Lilly Conference that could be helpful for your own teaching. The first two sessions have very different approaches to AI. The first provides some practical classroom tips, while the second suggests that faculty take a more skeptical approach to AI. The third session details an innovative collaboration with the theater department to teach ethics to engineering students.

Practical Pedagogical Responses to Generative AI

This speaker began by listing five reasons why faculty are understandably concerned about generative AI:

  1. Academic integrity.
  2. Students using AI unquestioningly, as if it were an oracle.
  3. Students are not in a position to judge when AI is right or wrong.
  4. AI hallucinates sources. The speaker thought that a better way to put this is that AI “bullshits,” i.e., generates sources with a reckless disregard for the truth. The speaker cited the case of a lawyer who was fired for using ChatGPT for legal briefs in which it hallucinated law cases. (https://www.washingtonpost.com/technology/2023/11/16/chatgpt-lawyer-fired-ai/)
  5. AI takes away your teachable moment. Faculty know that homework is not about getting the answer, but about learning the process. When students use AI to get the answer, they are bypassing the process. We need to ask ourselves which parts of the process are essential and what parts are appropriate to use AI for. 

The speaker then brought up three possible academic responses to AI and strategies for each. (These three are reminiscent of Derek Bruff’s red, yellow, and green light approach.)

1. FIGHT IT

  • Consider weighting grading based on what you can actually watch them do.
  • Put your assignment prompt into AI so you can see what an AI answer looks like. 
  • Write assessments for which AI is most likely to hallucinate sources. Note that AI often does not have good data on recent events.
  • Do NOT use AI detectors. Not only are they inaccurate, but they disadvantage non-native English speakers. 

2. EMBRACE IT

  • Give assignments that require students both to use AI as a source and to cite it.
  • Consider using AI yourself, and encourage students to use AI for such incidental tasks as citation cleanup, editing, and first drafts of presentation slides.

3. TEACH A DISCRIMINATING VIEW OF IT

  • Teach students both to cite AI and to evaluate it as a source.
  • Give students assignments where they evaluate and correct generative AI output.

For his upper-level classes he does not allow the use of AI; he tells students that at this level AI is worse than the worst student in the class. For lower-level classes he is developing various techniques for detecting AI usage. If he finds that a student has used AI when instructed not to, he meets with them individually before assigning a penalty.

The speaker had lots of ideas about course design that addresses AI. He suggested emphasizing purpose and engagement in assignments, for example by giving students options. Consider designing writing assignments around the writing process, not just the final result, and talking with students about their process. For example, students could use a feature like “track changes” from their very first draft so that you and they can see how a paper evolves.

Presenter: Jim Huggins, Computer Science, Kettering University

______________________________________________________________________________________________________________________________

Friend, Frenemy, Foe, or Faust: Education’s Dance with AI

This speaker is an instructional designer who has experienced peer pressure to say that Generative AI is just great. But so far he himself remains a skeptic who believes that the narrative around it is driven by hype. He has four pleas to the Academy: 

1. Resist the hype. 

The hype is coming from outside the academy, from those who would most benefit. These outside forces are imposing urgency, telling us we must train students to work with AI right now. In fact, his own dean recently mandated it. However, the love fest with AI is starting to diminish, even in the business world.

2. Be Informed

The metaphors we are using can be harmful here; AI is not “intelligence.” It should instead be called something like “maths maths,” because it does not think. It is a large language model (LLM), and LLMs do not provide truth, only the illusion of knowledge. It is really just fancy autocompletion, i.e., it finds patterns. Note that reinforcement learning from human feedback actually trains AI to be less racist and to be apologetic when it gets things wrong. We also need to beware of the promise of efficiency, which is rarely met: it takes time to rework prompts several times, to check for hallucinations, etc.

3. Be Leaders, not Followers

As teachers we need to be like the Roman god Janus, looking both backward to the past and forward to the future. We need to start doing with AI what higher education does best: instead of adopting it haphazardly, we should use it in controlled ways and then evaluate its usefulness for good pedagogy, i.e., the Scholarship of Teaching and Learning (SoTL).

Research shows that making mistakes is key to neuroplasticity; our brains need to struggle with new ideas for at least twenty minutes. So be careful using AI even for idea generation, because research shows that you get attached to those early ideas and therefore will not struggle to find better ones.

As faculty, we also need to beware of overusing AI and skipping this useful stage of struggle. For example, Blackboard now embeds AI throughout its interface, offering faculty ideas for classroom activities. We also need to disclose to students when we are using AI, modeling that practice for them.

4. Have Smart Conversations about AI

Talk with students and colleagues about the importance of expertise and skills, the areas where AI can’t help you. Be sure to include those skills in learning objectives and to discuss them with students. Such discussions will help students realize what skills they need to master, so they will be less likely to outsource important activities to AI.

AI’s Challenge To Us:

The speaker ended by talking about what we can learn about our own teaching from the challenge presented by generative AI. In the past, we may have relied too much on writing as a way to gauge student knowledge. Insofar as AI can do our writing assignments well, we can see that there is no direct connection between language and knowledge. So a student handing in a well-written paper does not necessarily mean they have learned the material. Our challenge will be to either reimagine writing or to find other ways for students to demonstrate knowledge and skills gained.

Presenter: Matthew Roberts, Grand Valley State University

______________________________________________________________________________________________________________________________

Advancing Engineering Ethics Using Active Learning

This talk outlined an innovative program for engineering ethics that involved partnering with the theater department. The speakers began by discussing the limitations of the traditional approaches to engineering ethics, which have been either theory-based or built on case studies. The problems with these approaches were that they did not engage students, written exams could be unfair to non-native speakers, and the case studies did not involve current-day issues.

Their new approach was to have engineering students create videos on current-day ethical dilemmas in the field. They had a grant that allowed them to “hire” four theater students who helped with costuming, scripts, and acting. The project began with six current-day issues, then the students ultimately chose three of them to dramatize. 

  1. The Texas Power-Grid Crisis: How Texas’ power grid failed in 2021
  2. The Pacific Gas & Electric Wildfires: PG&E confesses to killing 84 people in 2018 fire
  3. The Toyota Fake Emissions Scandal: Toyota falsified emissions data from at least 2003 

The first two videos were made primarily by the theater students, but by the third video, the engineering students took over. Each video involved role playing, recognizing stakeholders, identifying ethics violations, and envisioning an alternate ending. A senior design class of twenty-four students created the videos, which were then shown to freshman engineering students.

Benefits to students included: 

  • Increased Engagement 
  • Practical understanding of ethics in real-world contexts 
  • Skill development: critical thinking, communication, and teamwork
  • In the case of the theater students: paid experience in the field for their resume

Benefits to faculty included:

  • No more ethics essays to read!
  • Efficient feedback in real time as the video is developed
  • More time for direct engagement with students

The idea for this project came from a faculty learning community (FLC) on career readiness. That FLC studied the National Association of Colleges & Employers’ Career Readiness Competencies. This project hit all eight of those competencies.

The project was funded by a grant from Dow Chemical. The grant supports 4-5 faculty each year for innovative teaching projects. No money goes directly to faculty; instead it is used for resources and materials, conference attendance, and student pay at work-study rates.

For faculty who would like to try this approach, the speakers suggested two things:

  1. Identify the Need: Find a teaching situation where hands-on learning can replace traditional methods like writing or testing.
  2. Collaborate Across Disciplines: Partner with a colleague who can offer creative approaches to creating immersive, interdisciplinary learning experiences.

Presenters: Rajani Muraleedharan (ECE), Tommy Wedge (Theatre), Erik Trump (Director, Center for Excellence in Teaching and Learning) from Saginaw Valley State University

______________________________________________________________________________________________________________________________

FINAL NOTE:  If you are interested in learning more about the ethics of teaching with AI, be sure to register for the upcoming Faculty Workshop on the Big Questions in AI on November 12, 3:00-4:30 p.m. To register CLICK HERE. I also highly recommend the Lilly Conferences as great places to get evidence-based ideas for improving your teaching. They hold annual conferences at five locations, two of which are within driving distance of Valparaiso.