Tag Archives: ai

Five areas of concern regarding AI in classrooms

When I was preparing to teach my Fall course, I was concerned about AI cheaters, and about whether my lazy approach to getting students to do the reading would be totally outdated.  I came up with an “AI statement” for my syllabus saying that students can use AI, but they have to tell me how they used it, and they have to take responsibility for the text they turn in, even if they used an AI in the process of generating it.

Now that the fall quarter has come and gone, it seems like a good time to reflect on things.  One third of the UW School of Public Health courses last fall had AI statements, with 15 saying “do not use” and 30 allowing use in some way (such as “use with permission” or “use with disclosure”).

In hindsight, AI cheating was not the thing I should have been worrying about.  Here are five areas of concern that I learned about from my students and colleagues that I will be paying more attention to next time around:

1. Access and equity – there is a risk in the “pay to play” state of the technology right now.  How shall we guard against a new digital divide between those who have access to state-of-the-art AI and those who do not?  IHME has ChatGPT-4 for all staff, but only the Health Metrics Sciences students who have an IHME Research Assistantship can use it, and as far as I can tell, the Epi Department students all have to buy access.  It sounds like the University of Michigan is solving this; are other schools?

“When I speak in front of groups and ask them to raise their hands if they used the free version of ChatGPT, almost every hand goes up. When I ask the same group how many use GPT-4, almost no one raises their hand. I increasingly think the decision of OpenAI to make the “bad” AI free is causing people to miss why AI seems like such a huge deal to a minority of people that use advanced systems and elicits a shrug from everyone else.” —Ethan Mollick

2. Interfering with the “novice-to-expert” progression – will we no longer have expert disease modelers, because novice disease modelers who rely on AI do not progress beyond novice level modeling?

3. Environmental impact – what does running a language model cost in terms of energy consumption? Is it worth the impact?

4. Implicit bias – language models repeat and reinforce systems of oppression present in training data.  How can we guard against this harming society?

5. Privacy and confidentiality – everything we type into an online system might be used as “training data” for future systems.  What are the risks of this practice, and how can we act responsibly?


Filed under education

AI and Intro Epidemic Models: Navigating the New Frontier of Education

Last June, I happened to attend an ACM Tech Talk about LLMs in Intro Programming, which left me very optimistic about the prospects of AI-assisted programming for my Introduction to Epidemic Modeling course.

I read the book that the tech talk speakers were writing and decided that it was not really what my epi students needed. But it left me hopeful that someone is working on the book they do need, too.

In case no one writes it soon, I’ve also been trying to teach myself how to use AI to do disease modeling and data science tasks. I just wrapped up my disease modeling course for the quarter, though, and I did not figure it out in time to teach my students anything useful.

In my copious spare time since I finished lecturing, I’ve been using ChatGPT to solve Advent of Code challenges, and it has been a good education. I have a mental model of the output of a language model as the Platonic ideal of Bullshit (in the philosophical sense), and using it to solve carefully crafted coding challenges is a bit like trying to get an eager high school intern to help with my research.

Here is an example chat from my effort to solve the challenge from Day 2, which is pretty typical for how things have gone for me:

The text it generates is easy to read and well formatted. Unfortunately, it includes code that usually doesn’t work:

It might not work, it might be BS (in the philosophical sense), but it might still be useful! I left Zingaro and Porter’s talk convinced that AI-assisted programmers are going to need to build super skills in testing and debugging, and this last week of self-study has reinforced my belief.
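What do those testing super skills look like in practice? A toy sketch, in Python: before trusting AI-generated code, encode the puzzle statement’s worked sample as an assertion and run it. (The puzzle, the `sum_of_valid_ids` function, and the threshold below are all invented for illustration; this is not the actual Day 2 challenge.)

```python
# Toy illustration of testing AI-generated code before trusting it.
# The puzzle and function are invented for this example, not AoC Day 2.

def sum_of_valid_ids(lines):
    """Sum the IDs of records whose largest value is at most 10.

    (Stands in for a hypothetical AI-generated helper; the point
    is the assertion below, not the function itself.)
    """
    total = 0
    for line in lines:
        header, body = line.split(":")
        record_id = int(header.split()[1])
        values = [int(v) for v in body.split(",")]
        if max(values) <= 10:
            total += record_id
    return total

# Encode the sample input and expected answer from the puzzle
# statement as an assertion -- if the AI's code is subtly wrong,
# this fails immediately instead of after you submit.
sample = [
    "Record 1: 3, 7, 9",
    "Record 2: 4, 12",
    "Record 3: 10, 1",
]
assert sum_of_valid_ids(sample) == 4  # records 1 and 3 qualify
```

The habit is the same one I push on interns: the sample input in the problem statement is a free test case, so turn it into a runnable check before reading the generated code too charitably.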

As luck would have it, I was able to attend another (somewhat) relevant ACM Talk this week, titled “Unpredictable Black Boxes are Terrible Interfaces”. It was not as optimistic as the intro programming one, but it did get me thinking about how useful dialog is when working with eager interns. It is very important that humans feel comfortable saying they don’t understand and asking clarifying questions. I have trouble getting interns to contribute to my research when they are afraid to ask questions. If I understand correctly, Agrawala’s preferred interface for Generative AIs would be a system that asked clarifying questions before generating an image from his prompt. It turns out that I have seen a recipe for that:

I am going to try the next week of AoC with this Flipped Interaction Pattern. Here is my prompt, which is a work in progress, and here is my GPT, if you want to give it a try, too.
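In case the links rot, the general shape of a flipped-interaction instruction is roughly this. (This is a generic paraphrase of the pattern expressed as a Python string, not my actual linked prompt.)

```python
# A generic flipped-interaction system prompt (a paraphrase of the
# pattern, not the exact prompt linked above). The model is told to
# interview the user before attempting a solution.
FLIPPED_INTERACTION_PROMPT = (
    "I want you to help me solve a coding challenge. Before writing "
    "any code, ask me clarifying questions, one at a time, until you "
    "are confident you understand the problem, the input format, and "
    "the expected output. Only then propose a solution, and include "
    "a test against the sample input."
)
```

The key design choice is forcing one question at a time, so the dialog stays a dialog instead of the model dumping a checklist and then guessing anyway.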


Filed under disease modeling, education