Generative AI at UVA

This guide features links and information about generative AI, including ethical use, citations, considerations for use, and more.

Evaluating AI Tools and Content

The many uses of generative AI can often seem to outweigh its potential pitfalls. However, AI-generated content cannot be used uncritically; a thoughtful interrogation of the source material is essential. There are a number of variables to evaluate, including knowledge gaps, currency, and the specific prompt used to generate the content. In addition to the risks of plagiarism and perpetuating misinformation, more complex questions of bias, privacy, and equity should also be considered.

Given the abundance of generative AI tools available to explore and use, it can be challenging to determine when their use is appropriate, how to elicit a useful response, and whether that response is accurate and suited to your needs. While various strategies can help you evaluate the output a given tool provides, it is essential to understand where the information comes from and to have enough proficiency in the subject matter to assess its accuracy. Among other things, you should consider whether the information the AI produces is accurate, whether the tool draws on a diverse range of data, and whether the information it returns shows signs of bias. Sarah Lebovitz, Hila Lifshitz-Assaf, and Natalia Levina, writing in the MIT Sloan Management Review, argue that it is critical to find the ground truth on which the AI has been trained and validated (Lebovitz et al., 2023). Digging in further, you can consider who owns the AI tool and whether that ownership might introduce bias into the results. Consider reviewing the resources maintained by the DAIR (Distributed AI Research) Institute. DAIR examines AI tools and issues through a community-rooted lens; maintains a list of publications related to social justice, privacy, and bias; and conducts research projects free from the influence of Big Tech.

Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2023). The No. 1 Question to Ask When Evaluating AI Tools. MIT Sloan Management Review, 64(3). https://sloanreview.mit.edu/article/the-no-1-question-to-ask-when-evaluating-ai-tools/

Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test

More Reading

Social Justice

Enter the Dragnet. (n.d.). Retrieved August 18, 2023, from https://logicmag.io/commons/enter-the-dragnet/

Inside the AI Factory: The Humans That Make Tech Seem Human. (n.d.). Retrieved August 18, 2023, from https://nymag.com/intelligencer/article/ai-artificial-intelligence-humans-technology-business-factory.html

Meet The Trio Of Artists Suing AI Image Generators. (n.d.). Retrieved August 18, 2023, from https://www.buzzfeednews.com/article/pranavdixit/ai-art-generators-lawsuit-stable-diffusion-midjourney

OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time. (n.d.). Retrieved August 18, 2023, from https://time.com/6247678/openai-chatgpt-kenya-workers/

Equity

D’Agostino, S. (n.d.). How AI Tools Both Help and Hinder Equity. Inside Higher Ed. Retrieved August 18, 2023, from https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/06/05/how-ai-tools-both-help-and-hinder-equity

Environmental Costs

AI already had a terrible carbon footprint. Now it’s way worse. (n.d.). Retrieved August 18, 2023, from https://www.sfchronicle.com/opinion/openforum/article/ai-chatgpt-climate-environment-18282910.php

Jafari, A., Gordon, A., & Higgs, C. (2023, July 19). The hidden cost of the AI boom: Social and environmental exploitation. The Conversation. http://theconversation.com/the-hidden-cost-of-the-ai-boom-social-and-environmental-exploitation-208669

The Environmental Impact of AI. (2023, May 8). Global Research and Consulting Group Insights. https://insights.grcglobalgroup.com/the-environmental-impact-of-ai/

Bias

Nicoletti, L., & Bass, D. (2023, August 1). Humans Are Biased. Generative AI Is Even Worse. Bloomberg.com. https://www.bloomberg.com/graphics/2023-generative-ai-bias/

Small, Z. (2023, July 4). Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History. The New York Times. https://www.nytimes.com/2023/07/04/arts/design/black-artists-bias-ai.html

Simonite, T. (2020, October 26). How an Algorithm Blocked Kidney Transplants to Black Patients. Wired. https://www.wired.com/story/how-algorithm-blocked-kidney-transplants-black-patients/

Accuracy

Bhattacharyya, M., Miller, V. M., Bhattacharyya, D., & Miller, L. E. (n.d.). High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content. Cureus, 15(5), e39238. https://doi.org/10.7759/cureus.39238

Don’t be surprised by AI chatbots creating fake citations. (n.d.). Marketplace. Retrieved August 18, 2023, from https://www.marketplace.org/shows/marketplace-tech/dont-be-surprised-by-ai-chatbots-creating-fake-citations/