In 1955, McCarthy et al. coined the term “artificial intelligence” (AI) as part of their 1956 Dartmouth Summer Research Project on Artificial Intelligence proposal. With it, they sought to explore ways “to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves” (McCarthy et al., 2006).
Although AI has been part of our lives since then in ways we don't even realize (like spell check), the introduction of OpenAI's ChatGPT in 2022 brought AI, especially generative AI, to the forefront. As a result, AI has become the centre of growing controversy, concern, and debate.
As AI and generative AI become more commonplace, the need to engage with them critically increases! This guide will serve as an introduction to the responsible use of AI and generative AI at Prairie College.
Artificial Intelligence (AI) is a broad term covering a general branch of computing technology. While there are many definitions, it is generally understood as machines, programs, or software that can perform tasks that typically require human intelligence.
AGI (Artificial General Intelligence) is a type of AI that would be able to understand, learn, and apply knowledge across a wide range of tasks at a level that matches or exceeds human intelligence. AGI would be capable of general reasoning, problem-solving, and adapting to entirely new situations without additional training. Today's AI, called narrow or weak AI, handles only specific tasks and cannot take knowledge from one area and apply it to new, different areas without being retrained.
A corpus (plural: corpora) refers to a large collection of text, images, speech, or other data on which LLM training is built.
Fabrications, sometimes called hallucinations, occur when AI tools generate outputs that sound plausible but are not true, factual, or grounded in training data. For example, a tool might create citations that seem logical or plausible but do not exist, or attribute the wrong content to real articles.
Generative AI (GenAI) is a type of AI that uses large language models (LLMs) or machine learning models trained on large data sets to generate new content based on user prompts. Often described as the next generation of AI, GenAI produces new images, text, and other content it was not explicitly programmed to produce, rather than performing the same task repeatedly.
Generative Pre-Trained Transformers, or GPTs, are a type of large language model that uses deep learning techniques to perform natural language processing (NLP) tasks, such as translation, question-answering, and content creation. They are designed to generate human-like text by predicting the next word given the previous words in a text.
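As a rough illustration of next-word prediction in action, the short Python sketch below uses the open-source Hugging Face `transformers` library and the small, publicly available GPT-2 model. Both are assumptions for the example only, not tools required or endorsed by this guide.

```python
# A minimal sketch, assuming the `transformers` library and a backend
# such as PyTorch are installed. GPT-2 is a small, older open model
# used here purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the text by predicting one token at a time,
# appending it, and repeating until it reaches the requested length.
result = generator("Artificial intelligence is", max_new_tokens=15)
print(result[0]["generated_text"])
```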
Large Language Models (LLMs) are a type of AI that uses neural networks, NLP, and machine learning to replicate human language. They are trained on very large data sets (corpora) to develop their ability to translate languages, predict text, and generate content. During training, an LLM breaks text into tokens and learns the statistical relationships between those tokens from contextual examples; when generating output, it builds sentences by selecting the most likely next tokens based on the statistics learned during training.
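To make the idea of token statistics concrete, here is a toy, word-level sketch in Python. It is vastly simpler than a real LLM, which learns from billions of tokens with a neural network rather than simple counts, but it shows the same basic pattern: learn which tokens tend to follow which, then generate text by choosing likely next tokens.

```python
import random
from collections import Counter, defaultdict

# A tiny "corpus" of training text (real LLMs train on billions of tokens).
corpus = "the student reads the book and the student writes the essay".split()

# "Training": count how often each token follows each other token.
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def generate(start, length=5):
    """Generate text by repeatedly picking the next token in
    proportion to how often it followed the current one."""
    tokens = [start]
    for _ in range(length):
        options = next_counts[tokens[-1]]
        if not options:
            break
        words, counts = zip(*options.items())
        tokens.append(random.choices(words, weights=counts)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the student reads the book and"
```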
Natural Language Processing (NLP) is a field of AI that uses machine learning to process and interpret text and data, giving computers the ability to understand human language as it is spoken and written. LLMs are NLP tools.
Neural networks are computer systems that are inspired by the way human brains work. They allow computers to learn from data and recognize patterns.
Prompts or "inputs" are the questions, instructions, or text that users give to GenAI tools that guide the model in generating an output.
Tokens are small units of data that AI models use to make sense of language and prompts. Tokens can be words, parts of words, characters, or punctuation marks. They serve as the building blocks that allow AI to understand and generate content in a way that makes sense to us. Each LLM tool has its own parameters for tokens (how it reads them, how many it can process at once, maximum token limits, etc.). Knowing these limits can help you understand how much text (how many tokens) an LLM can use when generating outputs. These parameters can be found in the tool's documentation and can sometimes be adjusted in the settings.
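As a concrete example, the sketch below counts tokens with OpenAI's open-source `tiktoken` library. The library and the limit shown are assumptions for illustration; other tools use different tokenizers, and the real maximum depends on the model.

```python
# A minimal sketch of counting tokens before sending text to a model,
# assuming the open-source `tiktoken` library is installed.
import tiktoken

# cl100k_base is one common encoding; other models use different ones.
encoding = tiktoken.get_encoding("cl100k_base")

text = "Prairie College students should use AI responsibly."
tokens = encoding.encode(text)

print(len(tokens))              # number of tokens in the text
print(encoding.decode(tokens))  # tokens decode back to the original text

# A hypothetical limit for illustration; check the tool's documentation
# for the actual maximum for the model you are using.
MAX_TOKENS = 4096
if len(tokens) > MAX_TOKENS:
    print("Text is too long for this model's context window.")
```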
As approaches to AI vary widely, it is important to check the course syllabus or speak with your instructor to better understand the expectations for (in)appropriate AI use in each class. Additionally, AI is changing quickly, and this guide can’t always keep up with every new tool or update. Think of it as a starting point: use what’s here, but also stay curious and check out the latest information from other reliable sources.