Tag Archives: ai

AI for Interview Prep

I’m back on social media — https://bsky.app/profile/healthyalgo.bsky.social

It is fun to get a social feed again, but I’ve got some curating to do — there are plenty of interesting people to follow, but not many science peers to interact with yet. I guess that network took me about 15 years to build on Twitter.

One thing I notice at this point is that my feed contains a lot of AI-skeptic content. I’m always interested, but it does not quite match my experience. I will be the first to say that Generative AI is bullshit (in the philosophical sense), but BS can be useful sometimes. And preparing for interviews is one of those times.

If you are a scientist who has to talk to the media, I hope you already have some strategies for this. If you want to practice for such an interview and you don’t have someone to practice with, try asking your chatbot to play the interviewer. Here is a prompt I’ve used in my own preparations.

“Please help me prepare to provide an interview to a journalist about X. You play the role of the journalist and ask me questions, which I will provide answers to for you. Some example questions are: Y, Z, W”

I’ve also used this to prepare for hiring interviews. It seems that some child in my family has been using this approach to prepare for a job interview to be a lion tamer as well.

Got improvements? Let me know.


Filed under Uncategorized

AI in Epi 554 (parts 4 and 5 of 5)

Coding and testing when you know what you want the computer to do

Reminder of Intro and Debugging 1 & 2: AI is BS, but not useless. You can prompt it with an error message and get suggestions, or you can give it an example of code that is not doing what you think it should. Tell it who you want it to be, and keep asking questions if necessary.

Next idea: if you have not written the code but you know what you want it to do, you can ask a chatbot to write it for you.

This works particularly well with functions. For example, for a function implementing the SEIR model in a closed cohort, start with:

  fx_seir <- function(t, y, params) {
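
To make this concrete, here is a minimal sketch of the kind of function a chatbot might hand back, assuming a standard closed-cohort SEIR model with parameters beta (transmission rate), sigma (rate of leaving the latent stage), and gamma (recovery rate). This is my illustration, not the one right answer:

  fx_seir <- function(t, y, params) {
    with(as.list(c(y, params)), {
      N <- S + E + I + R                   # closed cohort: the total never changes
      dS <- -beta * S * I / N              # susceptibles become exposed
      dE <-  beta * S * I / N - sigma * E  # exposed become infectious
      dI <-  sigma * E - gamma * I         # infectious recover
      dR <-  gamma * I
      list(c(dS, dE, dI, dR))              # deSolve expects the derivatives in a list
    })
  }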

You are responsible for what comes out.  That means you must test it.  Link to an example to explore.

Don’t know just how to test? A chatbot can help with that, too, e.g. with a prompt like “How can I test fx_seir to make sure it is doing what I want?” One direction such a conversation might converge on is sketched below.
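
For instance, assuming the fx_seir sketch above and the deSolve package, the checks might look like this (an illustration, with made-up parameter values):

  library(deSolve)

  init   <- c(S = 999, E = 0, I = 1, R = 0)
  params <- c(beta = 0.5, sigma = 1/3, gamma = 1/7)
  times  <- seq(0, 100, by = 1)
  out    <- as.data.frame(lsoda(y = init, times = times, func = fx_seir, parms = params))

  # In a closed cohort, the compartments should sum to the starting total at every time step...
  stopifnot(all(abs(rowSums(out[, c("S", "E", "I", "R")]) - 1000) < 1e-6))
  # ...and no compartment should ever go negative.
  stopifnot(all(out[, c("S", "E", "I", "R")] > -1e-8))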

You can write the tests first (“test-driven development” is the name for this in software engineering); this turns the process of writing code into a game (and if you like this game, you might be a software engineer!)

Lauren Wilner (Epi 560 TA) says: Try to use AI to help you build on things you already know. For example, if you are told to write a loop but you already know how to do what you want to do by copying and pasting 10 times, write out the code you would write (in 10 lines with copy and pasting and changing little things) and then ask ChatGPT to give you advice on putting it into a loop. That way, you can run your code both using what you wrote and what ChatGPT gave you.
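
To make Lauren’s suggestion concrete, here is a made-up example (not from the labs) of the before and after:

  # The copy-paste version you might write first:
  cases <- rpois(70, lambda = 15)  # ten weeks of simulated daily case counts
  mean_week1 <- mean(cases[1:7])
  mean_week2 <- mean(cases[8:14])
  # ... eight more nearly identical lines ...

  # The loop a chatbot might suggest, which you can check against your version:
  weekly_means <- sapply(1:10, function(w) mean(cases[((w - 1) * 7 + 1):(w * 7)]))
  weekly_means[1] == mean_week1    # should be TRUE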

Lauren Wilner (Epi 560 TA) says: for testing code I have found that ChatGPT is very good at simulating data. I will often simulate my own data, write a function, and at the point I think that I’m done, I will give my function to ChatGPT and tell it to simulate data with whatever parameters I am using and see if it is able to replicate my work. This isn’t relevant for our students necessarily, but I do think it is very helpful – it acts sort of like a second pair of eyes.
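
Here is a small sketch of that simulate-then-check idea (my own toy example, not Lauren’s code): plant a known relationship in simulated data, run your analysis, and see whether you recover what you planted.

  set.seed(554)
  sim <- data.frame(age = sample(20:80, size = 500, replace = TRUE))
  sim$p_case <- plogis(-4 + 0.05 * sim$age)   # the "truth" we planted
  sim$case   <- rbinom(500, size = 1, prob = sim$p_case)
  fit <- glm(case ~ age, data = sim, family = binomial)
  coef(fit)["age"]   # should land near the planted 0.05 if the pipeline is right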

Understanding, explaining, and documenting code (aka “what is this doing?”)

Reminder of Intro and Debugging 1 & 2: AI is BS, but not useless. You can prompt it with an error message and get suggestions, or you can give it an example of code that is not doing what you think it should. Tell it who you want it to be, and keep asking questions if necessary.

Final idea: How to ask an AI to explain code. You can use this to help yourself understand code in the assignments, and you can even use it to document your code to help others, including your future self, understand your code.

For example, “Explain this code:

# Call the lsoda function with initial conditions, times, SIR function, and parameters defined above.
output <- as.data.frame(lsoda(y = init, times = t, func = fx_sir_bd, parms = params))

Lauren Wilner (Epi 560 TA) says: start your prompt according to the first precept, e.g. “Can you please pretend to be my stats professor and explain to me what the code below is doing, what the expected output is, and what this code would be used for? Are there things you would change or improve?”

Keep asking questions if necessary!

Lauren Wilner (Epi 560 TA) says: If I inherit code, I generally ask ChatGPT to comment each line or function or code chunk with what it is doing and have found that very helpful.

You can also ask it to document or improve documentation in code you have written (or received).

For example, “Please improve the documentation in this code:

# Melt dataset for easy plotting
output_melted <- melt(data = output, id.vars = "time",
                      measure.vars = c("S", "I", "R"))
 
# Plot output - plot (1)
ggplot(data = output_melted) +
  geom_line(aes(x = time, y = value, col = variable)) +
  scale_color_manual(breaks = c("S", "I", "R"), labels = c("Susceptible", "Infected", "Recovered"), values = c("green", "red", "purple")) +
  labs(x = "Time", y = "Number of individuals", col = "State") +
  theme(panel.background = element_blank(),
        panel.grid = element_blank(),
        panel.border = element_rect(color = "black", fill = NA),
        axis.title = element_text(family = "Times"),
        axis.text = element_text(family = "Times"),
        legend.key = element_blank(),
        legend.text = element_text(family = "Times"),
        legend.title = element_text(family = "Times"))
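
For comparison, here is one direction such a request might take for the melt step (illustrative comments, not the single right answer):

  # Reshape model output for plotting.
  # `output` has one row per time step, with columns time, S, I, and R
  # (counts in each compartment). melt() from the reshape2 package stacks
  # S, I, and R into long format (one row per time-compartment pair, with
  # columns time, variable, and value), which is the shape ggplot2 wants.
  output_melted <- melt(data = output, id.vars = "time",
                        measure.vars = c("S", "I", "R"))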

Lauren Wilner (Epi 560 TA) says: an additional prompt for this might be: “Can you please pretend to be my mentor and explain to me what the code below is doing and what the expected output is and what this code would be used for? Are there things you would change or improve?”

Lauren Wilner (Epi 560 TA) says: I love giving ChatGPT my code that I have finished and asking it to tell me what my code is doing. This both helps me with documentation as well as ensures that the code I wrote is doing what I think it is doing. If ChatGPT tells me something surprising about what my code is doing, I generally start a conversation saying “I thought I was doing X, can you explain to me why you think I am doing Y? Am I also doing X?” or something like that.

Final thought: remember that you are responsible for what comes out. So don’t stop with what you get… read it and edit it to make it better. What might you improve in this result? https://chat.openai.com/share/5f118cc6-71f6-4dcb-b906-0a940f1eca16


Filed under education

AI in Epi 554 (part 3)

Debugging 2: When the code does not fail, but still is wrong

Reminder of Intro and Debugging 1: AI is BS, but not useless; prompt with the error, get suggestions, and keep asking if necessary.

This time, what if there is no error, but something is wrong?

  1. Create an MCVE (minimal, complete, verifiable example), possibly including a screenshot
  2. Describe the issue to a ChatBot, and keep iterating if necessary. (You will need to iterate more often than when debugging code that fails outright, because there you at least have an error message to work with!)

The anatomy of an MCVE (follow link for more detail):

  1. Minimal: Use as little code as possible that still produces the same problem
  2. Complete: Provide all parts someone else needs to reproduce your problem in the question itself
  3. Reproducible: Test the code you’re about to provide to make sure it reproduces the problem
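
To make this concrete, a “runs fine but gives the wrong answer” MCVE can be as small as three lines (a made-up illustration, not from the labs):

  x <- seq(0, 1, by = 0.1)
  x[x == 0.3]   # expected 0.3; actually returns numeric(0), with no error or warning
  # The culprit is floating point arithmetic: x contains a value very close to,
  # but not exactly, 0.3. This tiny example plus one sentence of description is
  # exactly what to paste into a ChatBot.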

Remember the first precept of prompt engineering: tell AI who you want it to imitate, e.g. “You are a friendly and expert teaching assistant.”

Lauren Wilner (Epi 560 TA) says: I advise the students to tell ChatGPT something like: “Please pretend to be my TA who I go to when I have trouble with my coding in my Data Management class. I am coding in R, and she helps me by giving me tips on rewriting my code and gives me sample lines of code I can use and ensure they do what I want. She also tells me how to know whether the code that she gave me is doing what I expected. The code I am working on does X. Currently, I have the following lines of code: Y. Can you please help me rewrite this into a loop?”

Watch out for a common pitfall with BS Bots! Lauren Wilner says, “I find that if I ask ChatGPT what is wrong with something or how something can be improved or whether something is X, it will never say “looks good” or “no problems” or “no improvements needed” – it will always make some change or update. Sometimes that update makes things worse! So, overuse can be an issue.

“Again, it comes back to knowing what you expect or want to see and using ChatGPT to assist you and not do things for you.”

In summary: AI is BS, not useless. It is useful for debugging, so don’t stay stuck for long: prompt with an MCVE and a polite request for help, and keep the conversation going if necessary.


Filed under education, Uncategorized

AI in Epi 554 (part 2)

Following up on the general guidance I offered Epi 554 last week, this week I tried to get specific about how to use AI assistance in debugging. I think there is room for improvement, but I’m going to get it out to you, and maybe you’ll tell me how to improve.

Debugging 1: When the code fails

Maybe I’ve told you that AI is BS.  But that doesn’t make it useless.

Useful for debugging: use it so that you don’t stay stuck for long.
Example: an error in code from Lab 2, shown below:

(if you know how to fix this… don’t worry, you’ll have an error message that is less obvious soon; and if you are above-average at debugging… this approach might make you worse!)

Next we teach an advanced AI technique called “Prompt Engineering”. Example: paste the error, type “why?” — aside: be polite in your prompts, for a better world and for better answers. Let’s not go through the answer in detail — I want to focus on how and when to ask. (A small made-up illustration follows the list below.)

  • You can be more verbose, e.g. explain what you were trying to do, paste your error, and ask why you got it and whether there are ideas for fixing it.
  • You can also use the first precept of prompt engineering: tell AI who you want it to be.  e.g. “You are a friendly and expert teaching assistant.” or “You are a busy and distracted professor.” (?) 
  • Customize as preferred, e.g. if appropriate, you can start with “you answer in English, but you know that I speak Spanish as a native and English is not my first language.”
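
Here is a tiny made-up illustration of the pattern (the real Lab 2 error is not reproduced here):

  mean(casses)   # typo: the vector is actually called `cases`
  # Error in mean(casses) : object 'casses' not found
  #
  # Prompt: "You are a friendly and expert teaching assistant. I created a
  # vector of case counts and then ran the line above. Why do I get this
  # error, and how can I fix it?"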

You try: here is an error to work with [[I didn’t actually come up with this]], and an answer that still doesn’t fix it.  What might you ask next?

Lauren Wilner (Epi 560 TA) says: For debugging, I have found that ChatGPT is mediocre. I give it the code I ran and the error I got, generally, but I find that often it gives me either (1) code that has the same error again or (2) new code that has a different error.

Summary: AI is BS, not useless. It is useful for debugging, so don’t stay stuck for long: prompt with a code example and a polite request for help, and keep the conversation going if necessary. It is just imitating the way words often hang together in online text from places like Stack Overflow and Cross Validated, but if it gets your code to run… then you have running code!


Filed under education

AI in Epi 554 (part 1)

I offered students in Introduction To Epidemic Modeling For Infectious Diseases some general guidance on using AI this week, and I thought I’d share it more broadly. We are going to get into specifics for AI-assisted coding over the next few weeks, because that is one area where I think this stuff might really help them in this course.

Introduction to Generative AI in Epi 554

  • Not magic — “just” next word prediction
  • But it is next word prediction so good that it keeps me up at night, like in existential crisis
  • How does it do it?  Statistical language model — a conditional probability distribution over the next word (see the toy sketch after this list)
  • This is the Platonic ideal of the philosophical notion of “bullshit”
  • BS does not mean useless; it can be useful!  For some things… but…
  • Studies have found that AI is good for helping people with average-to-poor skills in an area attain slightly-above-average performance — so what are you average-to-poor at???
  • Prof Steve Mooney says: for 554 (and all classes), the only point of doing coursework is to learn what you’re doing.  If you learn to use AI to help you code in general, great!  That’s a useful skill.  BUT if you only learn to use AI to help you complete this coursework, it’s like you just overfit your model – you can’t project forward usefully with it.  So, your proper focus ought to be on how AI helps you avoid busywork/debug faster/more deeply understand what you’re doing, not just how to get done sooner.
  • ChatBots can help you build on your existing skills. Lauren Wilner (Epi 560 TA) says: I find ChatBots useful for things that are at the edge of what I know. That is slightly different, I think, than areas where I have average-to-poor skills.
  • I will be demonstrating its use through ChatBots and not through coding assistants (like GitHub co-pilot); I think this is best matched to this class, but I’m still learning!
  • Take responsibility. Lauren Wilner (Epi 560 TA) says: I know it sounds obvious, but I would emphasize that they need to read and edit what ChatGPT or other AI tools give them. A lot of them seem to skip that step and trust it blindly, and I would strongly remind them that they need to use it for advice and feedback, not for answers.
  • Don’t hide it. Lauren Wilner (Epi 560 TA) says: The students who are the most transparent with me in office hours or elsewhere in terms of where they are stuck, what they asked ChatGPT, and why they are still stuck, are the students that I find to be the most successful. The students who hide their use of AI struggle more because they often don’t really understand what they are doing. Whatever you can do to create a welcoming environment in terms of AI tools but also a cautionary tale that they can’t just give you straight ChatGPT output, the better results I think you will have!
  • Build on what you know. Lauren Wilner (Epi 560 TA) says: Try to use AI to help you build on things you already know. For example, if you are told to write a loop but already know how to do what you want to do by copying and pasting 10 times, write out what you would do and then ask ChatGPT to give you advice on putting it into a loop. That way, you can run your code both using what you wrote and what ChatGPT gave you.
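
Since “a conditional probability distribution over the next word” can sound abstract, here is a toy sketch in R: a bigram model fit to a ten-word corpus (nothing like a real LLM, but the same basic idea of predicting the next word from context):

  corpus <- c("the", "model", "predicts", "the", "next", "word",
              "and", "the", "model", "improves")
  pairs  <- data.frame(w1 = head(corpus, -1), w2 = tail(corpus, -1))
  # Estimated P(next word | current word = "the"):
  prop.table(table(pairs$w2[pairs$w1 == "the"]))
  #  model   next
  #  0.667  0.333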


Filed under education

Some thoughts on how epidemiology students might use AI

For the last year, I have been a faculty advisor for the START Center at UW, which gives students an opportunity to do consulting projects, mostly in global health, mostly for the Gates Foundation.

Since I have also been obsessed with the opportunities and threats of generative AI for the last year, it was only a matter of time before I developed some opinions about how these students might use chatbots in their work.

I thought I’d share them here as well, lightly edited:

Thoughts on how START should be using AI

Abraham D. Flaxman, 2024-07-12

(15 minutes for synopsis and discussion)

Should START be using AI?

  • Gates Foundation (and Bill Gates) very optimistic about value of recent AI breakthroughs
  • “Generative AI” – new term for the things people are excited about, e.g. ChatGPT
  • GenAI changing fast – if you checked in when you first heard about it, it is time to check again
  • Generative AI is bullshit*
    *in the technical, philosophical sense
  • Not magic — “just” next word prediction
    • But it is next word prediction so good that it keeps me up at night, like in existential crisis

Uses relevant to START projects:

  • Use anywhere you would use BS*
    *in the technical, philosophical sense
    • AI-assisted coding, e.g. creating visual representations of quantitative information
    • Ideation (brainstorming) – “Come up with 20 ideas for X”
    • Editing – “Can you provide a suggested edit that follows the style guidelines of Strunk & White?”
    • Explanation – “Can you rephrase that so a fifth grader would understand it?”
  • Studies have found that AI is good for helping people with average-to-poor skills in an area attain slightly-above-average performance — so what are we average-to-poor at?
    • Summarization – “What were the findings?”
    • Specific translation tasks, such as
      • For non-native English speakers – “Write a professional email expressing these points in English”
      • For specific terms – “What does ‘sueños blancos’ mean in English?”
  • ChatBots can help you build on your existing skills. Lauren Wilner (Epi 560 TA) says: I find ChatBots useful for things that are at the edge of what I know. That is slightly different, I think, than areas where I have average-to-poor skills. 
  • Other ideas?

Ethical principles:

  • Take responsibility. Lauren Wilner (Epi 560 TA) says: I know it sounds obvious, but I would emphasize that they need to read and edit what ChatGPT or other AI tools give them. A lot of them seem to skip that step and trust it blindly, and I would strongly remind them that they need to use it for advice and feedback, not for answers.
  • Don’t hide it. Lauren Wilner (Epi 560 TA) says: The students who are the most transparent with me in office hours or elsewhere in terms of where they are stuck, what they asked ChatGPT, and why they are still stuck, are the students that I find to be the most successful. The students who hide their use of AI struggle more because they often don’t really understand what they are doing. Whatever you can do to create a welcoming environment in terms of AI tools but also a cautionary tale that they can’t just give you straight ChatGPT output, the better results I think you will have!

Resources:

Some of the example prompts above are from Jeremy N. Smith, You, Me, and AI presentation, http://jeremynsmith.com/  

Notes on what I should add to this, based on discussion with other START faculty last week:

  • A section on educational resources and observations, including the pernicious effect of chatbots on student question-asking; the cool exercises that a history prof came up with, where the educational task was identifying what was truth and what was fiction in chatbot-generated text; and invite-AI-to-the-table ideology — have a chatbot ask questions during class??
  • START students often conduct interviews with Key Informants; they could practice this ahead of time with a chatbot
  • With more time, I should include some real examples of where AI has been useful and not useful, such as:
    • Useful: writing code for new Vivarium components with a claude.ai project that knows all of the vivarium public health codebase
    • Not so good: generating a summer reading list about feminist approaches to cost-effectiveness analysis (most of the suggested papers do not yet exist!)


Filed under education, global health

Five areas of concern regarding AI in classrooms

When I was preparing to teach my Fall course, I was concerned about AI cheaters, and whether my lazy approach to getting students to do the reading would be totally outdated. I came up with an “AI statement” for my syllabus that said students can use AI, but they have to tell me how they used it, and they have to take responsibility for the text they turn in, even if they used an AI in the process of generating it.

Now that the fall quarter has come and gone, it seems like a good time to reflect. One third of the UW School of Public Health courses last fall had AI statements, with 15 saying “do not use” and 30 allowing use in some way (such as “use with permission” or “use with disclosure”).

In hindsight, AI cheating was not the thing I should have been worrying about.  Here are five areas of concern that I learned about from my students and colleagues that I will be paying more attention to next time around:

1. Access and equity – there is a risk in the “pay to play” state of the technology right now.  How shall we guard against a new digital divide between those who have access to state-of-the-art AI and those who do not?  IHME has ChatGPT-4 for all staff, but only the Health Metrics Sciences students who have an IHME Research Assistantship can use it; as far as I can tell, the Epi Department students all have to buy access.  The University of Michigan appears to be solving this; are other schools?


“When I speak in front of groups and ask them to raise their hands if they used the free version of ChatGPT, almost every hand goes up. When I ask the same group how many use GPT-4, almost no one raises their hand. I increasingly think the decision of OpenAI to make the “bad” AI free is causing people to miss why AI seems like such a huge deal to a minority of people that use advanced systems and elicits a shrug from everyone else.” —Ethan Mollick

2. Interfering with the “novice-to-expert” progression – will we no longer have expert disease modelers, because novice disease modelers who rely on AI do not progress beyond novice level modeling?

3. Environmental impact – what does running a language model cost in terms of energy consumption? Is it worth the impact?

4. Implicit bias – language models repeat and reinforce systems of oppression present in training data.  How can we guard against this harming society?

5. Privacy and confidentiality – everything we type into an online system might be used as “training data” for future systems.  What are the risks of this practice, and how can we act responsibly?


Filed under education

AI and Intro Epidemic Models: Navigating the New Frontier of Education

Last June, I happened to attend an ACM Tech Talk about LLMs in intro programming, which left me very optimistic about the prospects of AI-assisted programming for my Introduction to Epidemic Modeling course.

I read the book that the tech talk speakers were writing and decided that it was not really what my epi students needed. But it left me hopeful that someone is working on that book, too.

In case no one writes it soon, I’ve also been trying to teach myself how to use AI to do disease modeling and data science tasks. I just wrapped up my disease modeling course for the quarter, though, and I did not figure it out in time to teach my students anything useful.

In my copious spare time since I finished lecturing, I’ve been using ChatGPT to solve Advent of Code challenges, and it has been a good education. I have a mental model of the output of a language model as the Platonic ideal of Bullshit (in the philosophical sense), and using it to solve carefully crafted coding challenges is a bit like trying to get an eager high school intern to help with my research.

Here is an example chat from my effort to solve the challenge from Day 2, which is pretty typical of how things have gone for me: the text it generates is easy to read and well formatted, but unfortunately it usually includes code that doesn’t work.

It might not work, it might be BS (in the philosophical sense), but it might still be useful! I left Zingaro and Porter’s talk convinced that AI-assisted programmers are going to need to build super skills in testing and debugging, and this last week of self-study has reinforced my belief.

As luck would have it, I was able to attend another (somewhat) relevant ACM Talk this week, titled “Unpredictable Black Boxes are Terrible Interfaces”. It was not as optimistic as the intro programming one, but it did get me thinking about how useful dialog is when working with eager interns. It is very important that humans feel comfortable saying they don’t understand and asking clarifying questions; I have trouble getting interns to contribute to my research when they are afraid to ask questions. If I understood the speaker, Maneesh Agrawala, correctly, his preferred interface for generative AI would be a system that asked clarifying questions before generating an image from his prompt. It turns out that I have seen a recipe for that: the “Flipped Interaction” prompt pattern.

I am going to try the next week of AoC with this Flipped Interaction Pattern. Here is my prompt, which is a work in progress, and here is my GPT, if you want to give it a try, too.
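
If you want the flavor without clicking through, a flipped-interaction prompt might look something like this (my paraphrase of the pattern, not the exact text of my GPT):

“When I give you a coding problem, do not write code right away. Instead, ask me one clarifying question at a time about the input, the expected output, and the constraints, and only propose code once I confirm you have enough information.”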


Filed under disease modeling, education