Generative AI at UVA

This guide features links and information about generative AI, including ethical use, citations, considerations for use, and more.

Ethical Considerations in Artificial Intelligence

As artificial intelligence becomes increasingly integrated into research, teaching, scholarship, and institutional decision-making, understanding its ethical implications is critical. From algorithmic bias and surveillance to questions of accountability and global impact, the ethical landscape of AI is complex and evolving.

This guide provides a curated selection of resources to support those who are:

  • Exploring the ethical dimensions of AI in their teaching or research
  • Integrating AI tools into coursework or advising students
  • Engaging in institutional conversations about data, automation, and policy

Resources are organized around four major areas of concern:

  • Fairness and Bias
  • Privacy and Security
  • Accountability, Transparency, and Oversight
  • Societal Impact

Each section is structured into three tiers:

  • Overview – High-level overviews and foundational materials.
  • Learn More – Additional context and explanation for expanded understanding.
  • Dig Deeper – In-depth, detailed insights for readers looking for comprehensive information.

 

AI Ethics Fundamentals

Ethical considerations ensure fairness and transparency in AI decision-making processes, address issues of bias and discrimination, and consider the potential impacts of AI on employment, privacy, and human autonomy. These resources offer a balanced introduction to the ethical considerations surrounding AI, covering various aspects from fundamental concepts to societal impacts and practical considerations. 

Here are three key resources to build your knowledge of AI and ethical considerations: 

The Oxford Institute for Ethics in AI brings together leading philosophers and other experts in the humanities with technical developers and users of AI in academia, business and government.

The DAIR (Distributed AI Research) Institute examines AI tools and issues through a community-rooted lens, maintains a list of publications related to social justice, privacy, and bias, and conducts research projects free from the influence of Big Tech.

An Overview of Artificial Intelligence Ethics published in IEEE Transactions on Artificial Intelligence provides a comprehensive overview of AI ethics, summarizing and analyzing ethical risks and issues, ethical guidelines and principles, approaches for addressing ethical issues, and methods for evaluating the ethics of AI.

UNESCO published its Recommendation on the Ethics of Artificial Intelligence in 2022. The report focuses on the preservation of human rights and dignity and includes several Policy Action Areas, including Action Area 8: Education and Research (page 33).

The 2023 article Survey on AI Ethics: A Socio-technical Perspective unifies current and future ethical concerns of deploying AI into society, addressing each principle from both technical and social perspectives.

Exploring The Ethical Landscape Of AI: Ethical And Moral Considerations covers fundamental ethical concepts, specific AI ethics considerations, and the challenges and societal impacts of AI development. It emphasizes the importance of building ethical AI systems to ensure accountability, mitigate biases, and uphold human rights, while also addressing concerns like transparency and data privacy.

Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective, published in the August 2024 issue of Informatics, offers a high-level perspective on the challenges of generative AI. From the abstract, "This paper conducts a systematic review and interdisciplinary analysis of the ethical challenges of generative AI technologies (N = 37), highlighting significant concerns such as privacy, data protection, copyright infringement, misinformation, biases, and societal inequalities."

 

World Health Organization AI Ethics Governance Guidance: The World Health Organization (WHO)'s guidance on the ethics and governance of large multi-modal models looks specifically at the use of generative AI in health care.

The Ethical Framework for AI in Education, published by The Institute for Ethical AI in Education, aims to protect learners engaging with AI while incorporating the perspectives of designers and developers. The Framework offers nine Objectives for those selecting and implementing AI tools.

The 2023 AI Ethics textbook (available as an ebook through UVA Libraries) by Paula Boddington is available for students and faculty looking for in-depth considerations across many areas of ethical impact. From the About section, "This book introduces readers to critical ethical concerns in the development and use of artificial intelligence. Offering clear and accessible information on central concepts and debates in AI ethics, it explores how related problems are now forcing us to address fundamental, age-old questions about human life, value, and meaning. In addition, the book shows how foundational and theoretical issues relate to concrete controversies, with an emphasis on understanding how ethical questions play out in practice."

Fairness and Bias

This collection of resources delves into the importance of fairness, transparency, challenges of maintaining data integrity, identifying and addressing biases in AI models, as well as strategies for mitigating bias.

Here are three key resources to build your knowledge: 

Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models published in 2023 offers methods for identifying and mitigating bias and stresses the importance of ethical principles and human oversight in AI development to ensure fairness and minimize unintended consequences.

The Butterfly Effect in Artificial Intelligence Systems: Implications for AI bias and fairness uses the Butterfly Effect from chaos theory to examine potential pitfalls in AI research and the need for rigorous testing and validation to prevent unintended biases. It offers insights into the complexities of AI bias and fairness and emphasizes the importance of an interdisciplinary approach to resolving ethical challenges.

In their article Navigating the Ethical Challenges of Artificial Intelligence in Higher Education: An Analysis of Seven Global AI Ethics Policies, Slimi & Carballido explore the ethical challenges of AI in higher education, focusing on biased algorithms, AI in decision-making, and human displacement, and emphasizing fairness, transparency, and accountability in AI systems.

 

Mitigating Bias in Artificial Intelligence, published by the Berkeley Haas Center for Equity, Gender & Leadership, is a playbook with strategies for combating bias in AI. While intended for those in business, the seven areas of interest and the mini-guides cover everything from responsible dataset development to internal policies to mitigate bias. Email sign-up required.

Navigating the Moral Maze: Ethical Challenges and Opportunities of Generative Chatbots in Global Higher Education (Applied Computational Intelligence & Soft Computing 2025) examines the ethical challenges and opportunities of using generative AI chatbots in higher education, using a Hybrid Thematic SWOT analysis to highlight benefits like personalized learning and risks like algorithmic bias and data security concerns. It emphasizes the need for responsible AI policies, faculty training, and equitable implementation strategies to ensure AI enhances education worldwide while safeguarding academic integrity.

Policy advice and best practices on bias and fairness in AI, published in Ethics and Information Technology in 2024, surveys fair-AI methods, discusses policy suggestions from the NoBIAS project, and highlights legal challenges as well as approaches for understanding, mitigating, and accounting for bias, offering guidance for researchers.

 

Machine Bias: A Survey of Issues, published in 2024 by Farič & Bratko in Informatica, covers many of the key issues of bias in AI tools, using the COMPAS legal decision-making system as a jumping-off point.

The 2024 Inside Higher Ed article AI Has a Language Diversity Problem. Humans Do, Too. discusses the language diversity problem in AI writing tools, which often "correct" students' dialects, potentially undermining their sense of self and devaluing their home languages. It also highlights the importance of understanding and promoting linguistic diversity in the classroom, guiding students in using AI tools respectfully and inclusively, and recognizing the value of students' unique language varieties.

How AI reduces the world to stereotypes discusses how generative AI image generators, like Midjourney, produce stereotypical and biased images of countries and cultures due to the data they are trained on, leading to misrepresentation and the reinforcement of bias.

Privacy and Security

AI systems often collect, analyze, and act on vast amounts of personal and sensitive data—making robust safeguards essential to protect individual rights and maintain trust. Without strong privacy controls and security measures, AI deployments can enable surveillance, data breaches, and misuse of information with far‑reaching consequences.

Here are three key resources to build your knowledge:

Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World from Stanford HAI (Human-Centered Artificial Intelligence) explores the impact of current and future privacy and data protection legislation on AI development and suggests methods to mitigate privacy harms. It focuses on shifting to opt-in data collection, improving AI data supply chain transparency, and creating new data governance mechanisms.

The Q&A Protecting Data Privacy as a Baseline for Responsible AI from the Center for Strategic and International Studies provides an overview of AI data privacy and potential regulatory alignment between the U.S. and the EU. It examines policy actions taken by the United States and the European Union to address privacy risks associated with AI development and deployment.

AI Human Rights Literacy, published in Current Issues in Comparative Education in 2024, frames addressing challenges related to AI and human rights as a moral imperative for educators, so that AI benefits all learners equally. It emphasizes the need for rigorous oversight, robust regulatory frameworks, and an ethical commitment from all stakeholders to uphold the dignity and autonomy of students.

Privacy and Security Concerns in Generative AI: A Comprehensive Survey examines the privacy and security challenges inherent in Generative AI, offering five pivotal perspectives and emphasizing the involvement of users, developers, institutions, and policymakers in developing sustainable solutions. It also addresses potential negative implications, such as deepfake incidents, privacy breaches in synthetic data, and adversarial attacks, and proposes mitigation strategies.

Evaluating Privacy, Security, and Trust Perceptions in Conversational AI: A Systematic Review examines user perceptions of privacy, security, and trust specifically in conversational AI and offers insights and future research directions.

A survey on large language model (LLM) security and privacy: The Good, The Bad, and The Ugly categorizes current research on LLMs and data privacy into beneficial applications, offensive uses, and inherent vulnerabilities.

Yu-An Tran's 2025 ebook, AI-Driven Network Security & Privacy, discusses new-generation network attacks and defense technologies, secure cryptographic algorithms, data security and privacy protection technologies, network and communication security protocols, security analysis, and the evaluation of new application scenarios.

Beyond the Algorithm: AI, Security, Privacy, and Ethics

Privacy-preserving Computing: For Big Data Analytics and AI. From the abstract, the ebook "shows how to use privacy-preserving computing in real-world problems in data analytics and AI, and includes applications in statistics, database queries, and machine learning. The book begins by introducing cryptographic techniques such as secret sharing, homomorphic encryption, and oblivious transfer, and then broadens its focus to more widely applicable techniques such as differential privacy, trusted execution environment, and federated learning. The book ends with privacy-preserving computing in practice in areas like finance, online advertising, and healthcare, and finally offers a vision for the future of the field."

 

Accountability, Transparency, and Oversight

This section addresses how AI systems make decisions, who is responsible for their actions, and how to maintain human control over AI systems and prevent unintended consequences.

Here are three key resources to build your knowledge:

AI's Trust Problem: Twelve persistent risks of AI that are driving skepticism (Harvard Business Review, 2024) discusses the challenges and concerns that accompany AI's growing capabilities, including disinformation, safety and security, ethical concerns, and bias.

On the Quest for Effectiveness in Human Oversight: Interdisciplinary Perspectives This 2024 paper provides a multidisciplinary perspective on effective human oversight, drawing on psychology, law, philosophy, and technology. It's valuable for understanding the multifaceted nature of the challenge. 

Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education: A systematic review examines FATE in AI within the higher education literature, highlighting the need for more comprehensive discussions and clearer explanations of FATE terms, particularly accountability and transparency, to bridge the gap between laypeople and experts.


Accountability in artificial intelligence: what it is and how it works (AI & Society, 2023) From the abstract: Accountability "is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, and implications). We analyze this architecture through four accountability goals (compliance, report, oversight, and enforcement). We argue that these goals are often complementary and that policy-makers emphasize or prioritize some over others depending on the proactive or reactive use of accountability and the missions of AI governance."

Why transparency is key to unlocking AI’s full potential (2025) argues that transparency and responsible frameworks are essential to building trust in AI, ensuring its fair, safe, and inclusive use. This article identifies six clear objectives for leaders implementing AI tools.

Designing with AI, Not Around It – Human-Centric Architecture in the Age of Intelligence (2025) focuses on using AI to enhance human capabilities instead of replacing them, through discussion of core principles, implementation challenges and solutions, and future directions. It also explores the principles, patterns, and challenges of creating AI systems where humans retain control and leverage machine intelligence effectively.

Ensuring human oversight in high-performance AI systems: A framework for control and accountability (Frenette, 2023): This paper directly addresses the challenge of maintaining human control over advanced AI systems while preserving efficiency. It's a strong starting point for understanding the complexities involved. 

Human-in-the-Loop LLMOps: Balancing automation and control (Madicharla, 2025): This paper explores the use of Human-in-the-Loop (HITL) strategies in Large Language Model operations (LLMOps) to balance automation with human judgment, which is important due to the increasing risks related to bias, ethics, and compliance.

 

Societal Impact

Here are three key resources to build your knowledge:

The blended future of automation and AI: Examining some long-term societal and ethical impact features (Khogali & Mekid, 2023): This study reviews how automation and AI affect businesses and jobs, investigating the long-term consequences of AI on human civilization, including job losses, employee well-being, and the dehumanization of jobs.

"Human-AI Interactions and Societal Pitfalls (Castro et al., 2023): This paper examines the societal pitfalls of human-AI interactions and highlights the importance of a human-centric approach to AI to avoid homogenization and bias, emphasizing that user interaction and diverse data training are crucial for balancing productivity and human diversity

AI Horizons: Shaping a Better Future Through Responsible Innovation and Human Collaboration: Featuring real-world case studies across disciplines, this 2024 book illustrates AI's transformative power, emphasizes the importance of human-AI collaboration, and examines the need for human-centric design while also exploring AI governance and the role of multi-stakeholder collaboration in shaping the future.