Connect with our knowledgeable library staff at libraryai@virginia.edu for guidance, resources, and assistance as you explore Generative AI.
The myriad uses of generative AI can often seem to outweigh its potential pitfalls. However, AI-generated content cannot be used uncritically; thoughtful interrogation of the source material is essential. Variables to evaluate include knowledge gaps, currency, and the specific prompt used to generate the content. Beyond the risks of plagiarism and perpetuating misinformation, complex questions of bias, privacy, and equity should also be considered.
Given the abundance of generative AI tools available to explore and use, it can be challenging to determine when their use is appropriate, how to elicit a useful response, and whether that response is accurate and suitable for your needs. While we can employ various strategies to evaluate the output of a given tool, it's essential to understand where the information is coming from and to have sufficient proficiency in the subject matter to assess its accuracy. Among other things, you should consider whether the information the AI produces is accurate, whether the tool draws from a diverse range of data, and whether the responses it returns show bias. Writing in the MIT Sloan Management Review, Sarah Lebovitz, Hila Lifshitz-Assaf, and Natalia Levina argue that it is critical to find "the ground truth on which the AI has been trained and validated" (Lebovitz et al., 2023). Digging in further, you can consider who owns the AI tool and whether that ownership introduces bias into the results. Consider reviewing the resources maintained by the DAIR (Distributed AI Research) Institute. DAIR examines AI tools and issues through a community-rooted lens; maintains a list of publications related to social justice, privacy, and bias; and conducts research projects free from the influence of Big Tech.
"NIST GenAI is a new evaluation program administered by the NIST Information Technology Laboratory to assess generative AI technologies developed by the research community from around the world. NIST GenAI is an umbrella program that supports various evaluations for research and measurement science in Generative AI by providing a platform for Test and Evaluation."
Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2023). The No. 1 Question to Ask When Evaluating AI Tools. MIT Sloan Management Review, 64(3). https://sloanreview.mit.edu/article/the-no-1-question-to-ask-when-evaluating-ai-tools/
Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test
This collection of resources delves into the challenges of maintaining data integrity, identifying and addressing biases in AI models, and promoting algorithmic transparency.
Ferrara, E. (2023). Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models. First Monday, 28(11). https://doi.org/10.5210/fm.v28i11.13346
Mitigating Bias in Artificial Intelligence. (n.d.). Berkeley Haas. Retrieved March 14, 2025, from https://haas.berkeley.edu/equity/resources/playbooks/mitigating-bias-in-ai/
NIST Trustworthy & Responsible Artificial Intelligence Resource Center (AIRC)
"The NIST Trustworthy and Responsible Artificial Intelligence Resource Center (AIRC) is a platform to support people and organizations in government, industry, and academia—both in the U.S. and internationally—driving technical and scientific innovation in AI."
The resources provided in this list offer insights into the potential applications of AI in monitoring and mitigating environmental challenges, as well as the importance of considering the environmental footprint of AI systems themselves.
Bashir, N., Donti, P., Cuff, J., Sroka, S., Ilic, M., Sze, V., Delimitrou, C., & Olivetti, E. (2024). The Climate and Sustainability Implications of Generative AI. An MIT Exploration of Generative AI. https://mit-genai.pubpub.org/pub/8ulgrckc/release/2
Nishant, R., Kennedy, M., & Corbett, J. (2020). Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management, 53, 102104. https://doi.org/10.1016/j.ijinfomgt.2020.102104
"The arrival of AI disrupted the status quo in pretty much every industry. Since the boom ignited by the release of ChatGPT in late 2022, many have hailed the potential of the technology in aiding in all categories of global challenges, none the least climate change. However, AI comes with its own set of environmental concerns. With the AI market size forecast to grow six-fold by 2030, can its benefits counteract its environmental impacts?"
*Part of the UVA Library collection*
This curated list of resources explores the importance of fairness, transparency, and inclusivity in AI systems, as well as strategies for mitigating bias.
Equity
Abrams, Z. (2024, January 8). Addressing equity and ethics in artificial intelligence. Monitor on Psychology, 55(3). https://www.apa.org/monitor/2024/04/addressing-equity-ethics-artificial-intelligence
Social Justice
Al-Kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., & Alfandi, O. (2024). Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective. Informatics, 11(3), 58. https://doi.org/10.3390/informatics11030058
Brayne, S. (2020, December 20). Enter the Dragnet. Logic Magazine. https://logicmag.io/commons/enter-the-dragnet/
Ethical considerations have become increasingly important for ensuring fairness and transparency in AI decision-making processes, addressing issues of bias and discrimination, and weighing the potential impacts of AI on employment, privacy, and human autonomy.
"The Institute for Ethics in AI brings together world-leading philosophers and other experts in the humanities with the technical developers and users of AI in academia, business and government."
"Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these."
"UNESCO has led the international effort to ensure that science and technology develop with strong ethical guardrails for decades. Ethical concerns arise from the potential AI systems have to embed biases, contribute to climate degradation, threaten human rights and more. Such risks associated with AI have already begun to compound on top of existing inequalities, resulting in further harm to already marginalized groups."
"The World Health Organization (WHO)'s guidance on the ethics and governance of large multi-modal models (LMMs) – a type of fast growing generative artificial intelligence (AI) technology with applications across health care."
This list of resources offers valuable insights into various risks, such as the generation of misinformation and deepfakes, privacy concerns, and the potential negative impacts on society and the economy.
A comprehensive living database of over 1000 AI risks categorized by their cause and risk domain that includes:
"The initial evaluation (ARIA) will be conducted as a pilot effort to fully exercise the NIST ARIA test environment. ARIA will focus on risks and impacts associated with large language models (LLMs). Future iterations of ARIA may consider other types of generative AI technologies such as text-to-image models, or other forms of AI such as recommender systems or decision support tools. A compelling and exploratory set of tasks will aim to elicit pre-specified (and non-specified) risks and impacts across three levels of testing: model testing, red-teaming, and field testing."