Generative AI at UVA

This guide features links and information about generative AI, including ethical use, citations, considerations for use, and more.

Evaluating AI Tools and Content

The myriad uses of generative AI can often seem to outweigh the potential pitfalls. However, AI-generated content cannot be used uncritically; a thoughtful interrogation of the source material is essential. There are a number of variables to evaluate, including knowledge gaps, currency, and the specific prompt used to generate the content. In addition to the risks of plagiarism and perpetuating misinformation, complex questions of bias, privacy, and equity should be considered.

Given the abundance of generative AI tools available, it can be challenging to determine when their use is appropriate, how to elicit a useful response, and whether that response is accurate and suited to your needs. While various strategies can help you evaluate a tool's output, it is essential to understand where the information comes from and to have sufficient proficiency in the subject matter to assess its accuracy. Among other things, consider whether the information the AI produces is accurate, whether the tool draws on a diverse range of data, and whether the responses it returns show signs of bias. Sarah Lebovitz, Hila Lifshitz-Assaf, and Natalia Levina, writing in the MIT Sloan Management Review, argue that it is critical to find "the ground truth on which the AI has been trained and validated" (Lebovitz et al., 2023). Digging in further, you can consider who owns the AI tool and whether that ownership introduces bias into the results. Consider reviewing the resources maintained by the DAIR (Distributed AI Research) Institute, which examines AI tools and issues through a community-rooted lens; maintains a list of publications related to social justice, privacy, and bias; and conducts research projects free from the influence of Big Tech.

Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2023). The No. 1 Question to Ask When Evaluating AI Tools. MIT Sloan Management Review, 64(3). https://sloanreview.mit.edu/article/the-no-1-question-to-ask-when-evaluating-ai-tools/

Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test

Navigating the Landscape of Generative AI

The AI Risk Repository - MIT

  • An accessible overview of the AI risk landscape.
  • A regularly updated source of information about new risks and research.
  • A common frame of reference for researchers, developers, businesses, evaluators, auditors, policymakers, and regulators.
  • A resource to help develop research, curricula, audits, and policy.
  • An easy way to find relevant risks and research.

More Reading

Social Justice

Brayne, S. (2020, December 20). Enter the Dragnet. Logic. https://logicmag.io/commons/enter-the-dragnet/

Dzieza, J. (2023, June 20). Inside the AI Factory: The Humans That Make Tech Seem Human. Intelligencer. https://nymag.com/intelligencer/article/ai-artificial-intelligence-humans-technology-business-factory.html

Dixit, P. (n.d.). Meet the Trio of Artists Suing AI Image Generators. BuzzFeed News. Retrieved August 18, 2023, from https://www.buzzfeednews.com/article/pranavdixit/ai-art-generators-lawsuit-stable-diffusion-midjourney

OpenAI Used Kenyan Workers on Less Than $2 Per Hour. (n.d.). Time. Retrieved August 18, 2023, from https://time.com/6247678/openai-chatgpt-kenya-workers/

Equity

Ciurria, M. (2023, March 30). Ableism and ChatGPT: Why People Fear It Versus Why They Should Fear It. Blog of the APA. https://blog.apaonline.org/2023/03/30/ableism-and-chatgpt-why-people-fear-it-versus-why-they-should-fear-it/

D’Agostino, S. (n.d.). How AI Tools Both Help and Hinder Equity. Inside Higher Ed. Retrieved August 18, 2023, from https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/06/05/how-ai-tools-both-help-and-hinder-equity

D’Agostino, S. (n.d.). AI Has a Language Diversity Problem. Humans Do, Too. Inside Higher Ed. Retrieved March 11, 2024, from https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/07/10/ai-has-language-diversity-problem

How AI reduces the world to stereotypes. (2023, October 10). Rest of World. https://restofworld.org/2023/ai-image-stereotypes/

Howard, A., & Borenstein, J. (2018). The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity. Science and Engineering Ethics, 24(5), 1521–1536. https://doi.org/10.1007/s11948-017-9975-2

Environmental Costs

AI already had a terrible carbon footprint. Now it’s way worse. (n.d.). San Francisco Chronicle. Retrieved August 18, 2023, from https://www.sfchronicle.com/opinion/openforum/article/ai-chatgpt-climate-environment-18282910.php

An, J., Ding, W., & Lin, C. (2023). ChatGPT: Tackle the growing carbon footprint of generative AI. Nature, 615(7953), 586. https://doi.org/10.1038/d41586-023-00843-2

Coleman, J. (n.d.). AI’s Climate Impact Goes beyond Its Emissions. Scientific American. Retrieved March 7, 2024, from https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/

Falk, S., & van Wynsberghe, A. (2023). Challenging AI for Sustainability: What ought it mean? AI and Ethics. https://doi.org/10.1007/s43681-023-00323-3

George, A. S., George, A. S. H., & Martin, A. S. G. (2023). The Environmental Impact of AI: A Case Study of Water Consumption by Chat GPT. Partners Universal International Innovation Journal, 1(2), Article 2. https://doi.org/10.5281/zenodo.7855594

Jafari, A., Gordon, A., & Higgs, C. (2023, July 19). The hidden cost of the AI boom: Social and environmental exploitation. The Conversation. http://theconversation.com/the-hidden-cost-of-the-ai-boom-social-and-environmental-exploitation-208669

The Environmental Impact of AI. (2023, May 8). Global Research and Consulting Group Insights. https://insights.grcglobalgroup.com/the-environmental-impact-of-ai/

The mounting human and environmental costs of generative AI. (n.d.). Ars Technica. Retrieved March 11, 2024, from https://arstechnica.com/gadgets/2023/04/generative-ai-is-cool-but-lets-not-forget-its-human-and-environmental-costs/

Wong, M. (2023, August 23). The Internet’s Next Great Power Suck. The Atlantic. https://www.theatlantic.com/technology/archive/2023/08/ai-carbon-emissions-data-centers/675094/

Bias

Benson, T. (n.d.). This Disinformation Is Just for You. Wired. Retrieved March 11, 2024, from https://www.wired.com/story/generative-ai-custom-disinformation/

Bias in AI: What it is, Types, Examples & 6 Ways to Fix it in 2023. (n.d.). AIMultiple. Retrieved March 11, 2024, from https://research.aimultiple.com/ai-bias/

Nicoletti, L., & Bass, D. (2023, August 1). Humans Are Biased. Generative AI Is Even Worse. Bloomberg. https://www.bloomberg.com/graphics/2023-generative-ai-bias/

Small, Z. (2023, July 4). Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History. The New York Times. https://www.nytimes.com/2023/07/04/arts/design/black-artists-bias-ai.html

Simonite, T. (2020, October 26). How an Algorithm Blocked Kidney Transplants to Black Patients. Wired. https://www.wired.com/story/how-algorithm-blocked-kidney-transplants-black-patients/

Accuracy

Bhattacharyya, M., Miller, V. M., Bhattacharyya, D., & Miller, L. E. (n.d.). High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content. Cureus, 15(5), e39238. https://doi.org/10.7759/cureus.39238

Don’t be surprised by AI chatbots creating fake citations. (n.d.). Marketplace. Retrieved August 18, 2023, from https://www.marketplace.org/shows/marketplace-tech/dont-be-surprised-by-ai-chatbots-creating-fake-citations/

Hallucinations Could Blunt ChatGPT’s Success. (n.d.). IEEE Spectrum. Retrieved March 11, 2024, from https://spectrum.ieee.org/ai-hallucination