For the last year, I have been a faculty advisor for the START Center at UW, which gives students an opportunity to do consulting projects, mostly in global health, mostly for the Gates Foundation.
Since I have also been obsessed with the opportunities and threats of generative AI for the last year, it was only a matter of time before I developed some opinions about how these students might use chatbots in their work.
I thought I’d share them here as well, lightly edited:
Thoughts on how START should be using AI
Abraham D. Flaxman, 2024-07-12
(15 minutes for synopsis and discussion)
Should START be using AI?
- Gates Foundation (and Bill Gates) very optimistic about value of recent AI breakthroughs
- “Generative AI” – new term for the things people are excited about, e.g. ChatGPT
- GenAI changing fast – if you checked in when you first heard about it, it is time to check again
- Generative AI is bullshit*
*in the technical, philosophical sense
- Not magic — “just” next word prediction
- But it is next word prediction so good that it keeps me up at night, to the point of existential crisis
Uses relevant to START projects:
- Use anywhere you would use BS*
*in the technical, philosophical sense
- AI-assisted coding, e.g. creating visual representations of quantitative information (see the code sketch after this list)
- Ideation (brainstorming) – “Come up with 20 ideas for X”
- Editing – “Can you provide a suggested edit that follows the style guidelines of Strunk & White?”
- Explanation – “Can you rephrase that so a fifth grader would understand it?”
- Studies have found that AI is good for helping people with average-to-poor skills in an area attain slightly-above-average performance — so what are we average-to-poor at?
- Summarization – “What were the findings?”
- Specific translation tasks, such as
- For non-native English speakers – “Write a professional email expressing these points in English”
- For specific terms – “What does “sueños blancos” mean in English?”
- ChatBots can help you build on your existing skills. Lauren Wilner (Epi 560 TA) says: I find ChatBots useful for things that are at the edge of what I know. That is slightly different, I think, than areas where I have average-to-poor skills.
- Other ideas?
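To make the AI-assisted coding bullet concrete, here is the kind of request and response I have in mind. This is a minimal sketch, assuming you have a pandas DataFrame of DALYs by cause; the column names and numbers are made up for illustration, and as with anything a chatbot writes, you should read it, run it, and check the figure before putting it in a deliverable.

```python
# Example prompt: "Write Python code to make a horizontal bar chart of
# DALYs by cause from a pandas DataFrame, sorted largest to smallest."
# A chatbot's answer would look something like this; the data below is
# hypothetical, and it is your job to confirm the code runs and that
# the figure says what you think it says.

import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "cause": ["Malaria", "Diarrheal diseases",
              "Lower respiratory infections", "Measles"],
    "dalys_per_100k": [3200, 2100, 1800, 400],  # made-up values
})

df = df.sort_values("dalys_per_100k")  # smallest bar at bottom, largest on top
plt.barh(df["cause"], df["dalys_per_100k"])
plt.xlabel("DALYs per 100,000 (hypothetical)")
plt.title("Example: visualizing quantitative information")
plt.tight_layout()
plt.savefig("dalys_by_cause.png")
```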
Ethical principles:
- Take responsibility. Lauren Wilner (Epi 560 TA) says: I know it sounds obvious, but I would emphasize that they need to read and edit what ChatGPT or other AI tools give them. A lot of them seem to skip that step and trust it blindly, and I would strongly remind them that they need to use it for advice and feedback, not for answers.
- Don’t hide it. Lauren Wilner (Epi 560 TA) says: The students who are the most transparent with me in office hours or elsewhere in terms of where they are stuck, what they asked ChatGPT, and why they are still stuck, are the students that I find to be the most successful. The students who hide their use of AI struggle more because they often don’t really understand what they are doing. Whatever you can do to create a welcoming environment in terms of AI tools but also a cautionary tale that they can’t just give you straight ChatGPT output, the better results I think you will have!
Resources:
- Claude — https://claude.ai
- ChatGPT — https://chat.openai.com
- Gemini — https://gemini.google.com/app
- ChatGPT is bullshit
Hicks, M.T., Humphries, J. & Slater, J. Ethics Inf Technol 26, 38 (2024).
https://doi.org/10.1007/s10676-024-09775-5
- ChatGPT Maker’s Prompt Writing Guide
OpenAI
https://platform.openai.com/docs/guides/prompt-engineering/
- Andrew Ng’s Free Online Courses for Using AI
DeepLearning.AI
https://learn.deeplearning.ai/
- Ethan and Lilach Mollick’s AI Resources
More Useful Things: AI Resources
https://www.moreusefulthings.com/
- Emily Bender and Alex Hanna’s AI Hype Podcast
Mystery AI Hype Theater 3000
https://www.buzzsprout.com/2126417
Some of the example prompts above are from Jeremy N. Smith, You, Me, and AI presentation, http://jeremynsmith.com/
Notes on what I should add to this, based on discussion with other START faculty last week:
- A section on educational resources and observations, including:
- The pernicious effect of chatbots on student question-asking
- The cool exercises that a history prof came up with, where the educational task was identifying what was truth and what was fiction in chatbot-generated text
- Invite-AI-to-the-table ideology — have a chatbot ask questions during class?
- START students often conduct interviews with Key Informants; they could practice this ahead of time with a chatbot
- With more time, I should include some real examples of where AI has been useful and not useful, such as:
- Useful: writing code for new Vivarium components with a claude.ai project that knows all of the vivarium public health codebase
- Not so good: generating a summer reading list about feminist approaches to cost-effectiveness analysis (most of the suggested papers do not yet exist!)