Opening a conversation on responsible environmental data science in the age of large language models
- Authors
Ruth Y. Oliver, Melissa Chapman, Nathan Emery, Lauren Gillespie, Natasha Gownaris, Sophia Leiker, Anna C. Nisi, David Ayers, Ian Breckheimer, Hannah Blondin, Ava Hoffman, Camille M.L.S. Pagniello, Megan Raisle, and Naupaka Zimmerman
- Subjects
bias, ChatGPT, data ethics, generative AI, pedagogy, Environmental sciences (GE1-350), Electronic computers. Computer science (QA75.5-76.95)
- Abstract
The general public and scientific community alike are abuzz over the release of ChatGPT and GPT-4. Among the many concerns raised about the emergence and widespread use of tools based on large language models (LLMs) is their potential to propagate biases and inequities. We hope to open a conversation within the environmental data science community to encourage the circumspect and responsible use of LLMs. Here, we pose a series of questions aimed at fostering discussion and initiating a larger dialogue. To improve literacy on these tools, we provide background information on the LLMs that underpin tools like ChatGPT. We identify key areas of research and teaching in environmental data science where these tools may be applied, and discuss limitations to their use and points of concern. We also discuss ethical considerations surrounding the use of LLMs to ensure that, as environmental data scientists, researchers, and instructors, we can make well-considered and informed choices about engagement with these tools. Our goal is to spark forward-looking discussion and research on how, as a community, we can responsibly integrate generative AI technologies into our work.
- Published
- 2024