Generative AI at UVA

This guide features links and information about generative AI, including ethical use, citations, considerations for use, and more.

Evaluating AI Tools and Content

The myriad uses of generative AI can make its potential pitfalls easy to overlook. However, AI-generated content cannot be used uncritically; thoughtful interrogation of the source material is essential. Variables to evaluate include knowledge gaps, currency, and the specific prompt used to generate the content. Beyond the risks of plagiarism and perpetuating misinformation, the more complex issues of bias, privacy, and equity should also be considered.


Given the abundance of generative AI tools available to explore and use, it can be challenging to determine whether a tool is appropriate to use, how to elicit a useful response, and whether that response is accurate and fits your needs. While various strategies can be employed to evaluate the output a given tool provides, it is essential to understand where the information comes from and to have sufficient proficiency in the subject matter to assess its accuracy. Among other things, you should consider whether the information the AI produces is accurate, whether the tool draws from a diverse range of data, and whether the information it returns shows bias. Sarah Lebovitz, Hila Lifshitz-Assaf, and Natalia Levina write in the MIT Sloan Management Review that it is critical to find "the ground truth on which the AI has been trained and validated" (Lebovitz et al., 2023).

Digging in further, you can consider who owns the AI tool and whether that ownership is reflected as bias in the results. Consider reviewing the resources maintained by the DAIR (Distributed AI Research) Institute. DAIR examines AI tools and issues through a community-rooted lens; maintains a list of publications related to social justice, privacy, and bias; and conducts research projects free from the influence of Big Tech.


Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2023). The No. 1 Question to Ask When Evaluating AI Tools. MIT Sloan Management Review, 64(3).

Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry.

More Reading

Social Justice

Enter the Dragnet. (n.d.). Retrieved August 18, 2023, from

Inside the AI Factory: The Humans That Make Tech Seem Human. (n.d.). Retrieved August 18, 2023, from

Meet The Trio Of Artists Suing AI Image Generators. (n.d.). Retrieved August 18, 2023, from

OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time. (n.d.). Retrieved August 18, 2023, from


D’Agostino, S. (n.d.). How AI Tools Both Help and Hinder Equity. Inside Higher Ed. Retrieved August 18, 2023, from

Environmental Costs

AI already had a terrible carbon footprint. Now it’s way worse. (n.d.). Retrieved August 18, 2023, from

Jafari, A., Gordon, A., & Higgs, C. (2023, July 19). The hidden cost of the AI boom: Social and environmental exploitation. The Conversation.

The Environmental Impact of AI. (2023, May 8). Global Research and Consulting Group Insights.


Nicoletti, L., & Bass, D. (2023, August 1). Humans Are Biased. Generative AI Is Even Worse. Bloomberg.com.

Small, Z. (2023, July 4). Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History. The New York Times.

Simonite, T. (2020, October 26). How an Algorithm Blocked Kidney Transplants to Black Patients. Wired.


Bhattacharyya, M., Miller, V. M., Bhattacharyya, D., & Miller, L. E. (n.d.). High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content. Cureus, 15(5), e39238.

Don’t be surprised by AI chatbots creating fake citations. (n.d.). Marketplace. Retrieved August 18, 2023, from