
Generative AI at UVA

This guide features links and information about generative AI, including ethical use, citations, considerations for use, and more.

Understanding Generative AI

Generative AI is a subset of artificial intelligence designed to generate novel content, such as text, imagery, or music, based on user prompts. Generative AI models like ChatGPT (for text and conversation) and DALL-E (for images) generate this content by relying on patterns found in the large datasets used as training input. These datasets in turn rely on labeled metadata, created and curated by human agents, to be processed by the algorithms that perform the pattern recognition and predictive completion at the core of these models.
 

John Warner, author of Why They Can't Write and The Biblioracle Recommends, summarizes ChatGPT in the following way: “It’s important to understand what ChatGPT is, as well as what it can do. ChatGPT is a Large Language Model (LLM) that is trained on a set of data to respond to questions in natural language. The algorithm does not “know” anything. All it can do is assemble patterns according to other patterns it has seen when prompted by a request. It is not programmed with the rules of grammar. It does not sort, or evaluate the content. It does not “read”; it does not write.” (Warner, 2022).

Additional Resources

Generative AI Courses available through LinkedIn Learning (free to UVA users)

This collection features roughly a dozen courses and standalone videos from the LinkedIn Learning library that offer an overview of generative AI, specific GenAI chatbots, query writing, the responsible use of AI, using AI in a research context, and more.

After watching this video, you will understand what generative AI is and what it is not.

Whether you work in film, marketing, healthcare, the automotive industry, or real estate, generative AI is changing the way your job is done, and those who adapt early will reap its benefits sooner. All professions will be affected by generative AI. Its invention can be compared to the invention of photography, a true creative revolution. If you want to be among the leaders advancing this revolution, this course can get you started on your learning journey.

Large Language Models

NYU Shanghai Library's Machines and Society guide has a fantastic overview of Large Language Models that we highly recommend reviewing in full.

This excerpt covers the basics:


What Large Language Models Are

Large Language Models (LLMs) refer to large general-purpose language models that can be pre-trained and then fine-tuned for specific purposes. They are trained to solve common language problems, such as text classification, question answering, document summarization, and text generation. The models can then be adapted to solve specific problems in different fields using a relatively small size of field datasets via fine-tuning.

The ability of LLMs to take the knowledge learnt from one task and apply it to another task is enabled by transfer learning. Pre-training is the dominant approach to transfer learning in deep learning.

LLMs predict the probabilities of the next word (token), given an input string of text, based on the language in the training data. Typical training corpora for LLMs include natural language (e.g. web data), but LLMs can also be trained on other types of languages (e.g. programming languages).

(Dai, 2023)
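The next-token prediction described above can be illustrated with a deliberately simplified sketch. The toy bigram model below is an assumption-laden stand-in for illustration only: real LLMs use neural networks trained over subword tokens on enormous corpora, not word-count tables over a one-line corpus.

```python
# Toy illustration of next-token prediction: count how often each word
# follows each preceding word, then turn counts into probabilities.
# This is NOT how a real LLM works internally; it only illustrates the
# training objective (predicting the next token given preceding text).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Tally each (previous word -> next word) pair in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_probs(prev):
    """Probability distribution over the next token, given the previous one."""
    counts = follows[prev]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

print(next_token_probs("the"))  # 'cat', 'mat', 'dog', 'rug' each at 0.25
print(next_token_probs("sat"))  # {'on': 1.0}
```

Given more text, the distributions sharpen; an LLM does something analogous at vastly larger scale, which is why its output reflects patterns in its training data rather than any understanding of the content.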


Introduction to Large Language Models

 

This 15-minute video from Google Cloud Tech provides a more thorough introduction to Large Language Models. For more on this topic, review the LinkedIn Learning Generative AI collection linked in the Additional Resources section above.