Unless you are deeply embedded in the AI development space, some of these terms may be unfamiliar to you. Even if you’ve seen them, you may not know exactly what they mean. I figured it might be a good idea to publish a brief glossary as a reference guide. I will add to this as anything new comes up. Let me know in the comments or on Notes if there’s anything you think I should add.
Core Acronyms & Abbreviations
AI (Artificial Intelligence)
A broad field focused on building machines or software that can perform tasks requiring human-like intelligence, such as reasoning, perception, and learning.
ML (Machine Learning)
A subset of AI in which systems learn from data and improve their performance over time without being explicitly programmed for every task.
DL (Deep Learning)
A type of machine learning that uses deep neural networks to model complex patterns in data. It powers much of modern AI, including image recognition and natural language processing.
LLM (Large Language Model)
A large neural network trained on vast amounts of text data to understand and generate human language. Examples include GPT-4, Claude, and Gemini.
SLM (Small Language Model)
A smaller, more efficient version of an LLM. Designed to run on devices with limited computing power or to serve lightweight, privacy-focused use cases.
NLP (Natural Language Processing)
The field within AI focused on enabling machines to understand, interpret, and generate human language.
AGI (Artificial General Intelligence)
A theoretical AI system with general-purpose reasoning capabilities—able to learn or perform any intellectual task a human can.
RL (Reinforcement Learning)
A machine learning paradigm in which agents learn by interacting with an environment and receiving feedback in the form of rewards or penalties.
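The reward-feedback loop described above can be sketched with a toy "two-armed bandit," the simplest reinforcement learning setup. The arm probabilities and parameters here are made up for illustration; real RL systems are far more sophisticated.

```python
import random

# Toy reinforcement learning loop: an agent repeatedly picks between two
# "arms", receives a reward from the environment, and updates its value
# estimates. The hidden reward probabilities are invented for this example.
def run_bandit(steps=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    true_reward = {"A": 0.3, "B": 0.7}   # hidden environment dynamics
    estimates = {"A": 0.0, "B": 0.0}     # the agent's learned values
    counts = {"A": 0, "B": 0}
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known arm
        if rng.random() < epsilon:
            arm = rng.choice(["A", "B"])
        else:
            arm = max(estimates, key=estimates.get)
        reward = 1.0 if rng.random() < true_reward[arm] else 0.0
        counts[arm] += 1
        # Incremental average: the estimate drifts toward observed rewards
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = run_bandit()
```

After enough steps, the agent's estimate for arm B (the better arm) ends up higher than for arm A, so it learns to prefer B purely from reward feedback.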
RAG (Retrieval-Augmented Generation)
A technique where an AI model pulls in external information (like documents or databases) to produce more accurate and up-to-date answers.
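The retrieve-then-generate idea can be sketched in a few lines. This is a deliberately minimal illustration with made-up documents: real RAG systems retrieve with vector embeddings and pass the result to an actual language model, whereas here retrieval is plain word overlap and the "generation" step is just assembling the prompt.

```python
# Minimal RAG sketch with invented documents. Real systems use embedding
# search and a language model; this only shows the retrieve-then-prompt flow.
DOCS = [
    "The Eiffel Tower is located in Paris, France.",
    "Python is a popular programming language for AI.",
    "RAG pairs a retriever with a language model.",
]

def retrieve(query, docs, k=1):
    # Score each document by how many words it shares with the query
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    # Prepend the retrieved text so the model can ground its answer in it
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("Where is the Eiffel Tower?", DOCS)
```

The payoff is that the model answers from the retrieved context rather than from memory alone, which is what makes RAG answers more current and verifiable.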
Popular Concepts & Terms
AI Agent
A software system powered by AI that can act semi-autonomously, often across tools or environments, to complete tasks like research, scheduling, or automation.
Hallucination
When an AI model produces output that is fluent and confident but factually incorrect or made up. Common in language models, which can generate plausible-sounding text that isn't grounded in any real source.

Prompt
The input text or query given to a model. Prompt engineering is the practice of designing these inputs to achieve better or more precise outputs.
Prompt Engineer
A person who specializes in crafting, testing, and refining prompts to get high-quality results from AI models, especially large language models. This role blends creativity, problem-solving, and a deep understanding of how models interpret language.
Fine-tuning
Training a pre-existing model further on a narrower dataset to tailor it for a specific task or domain.
Zero-shot / Few-shot Learning
The ability of a model to perform a task without any specific examples (zero-shot) or with just a few examples (few-shot) provided in the prompt.
Token
A unit of text (often a word or part of a word) used by language models for processing. Models read and generate sequences of tokens rather than raw characters.
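A toy tokenizer makes the idea concrete. This is illustration only: real model tokenizers (byte-pair encoding and similar schemes) learn their subword pieces from data, so their token counts will differ from this simple word-and-punctuation split.

```python
import re

# Toy tokenizer for illustration only. Real tokenizers (BPE, etc.) learn
# subword units from data, so their splits and counts will differ.
def toy_tokenize(text):
    # Split into runs of word characters, plus individual punctuation marks
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("AI models don't read characters.")
```

Note how even "don't" becomes three tokens here; the general point is that what a model counts as a token rarely lines up one-to-one with what we'd call a word.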
Context Window
The maximum amount of text (in tokens) a model can consider at once. A larger context window means the model can "remember" more in a single interaction.
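The practical consequence of a fixed context window can be sketched as simple truncation. The eight-token window here is tiny on purpose; real windows span thousands to millions of tokens, and real systems may summarize or compress old turns rather than just dropping them.

```python
# Sketch of a context window: once the conversation exceeds the window,
# the oldest tokens fall out of view. Window size is tiny for illustration.
def fit_to_window(tokens, window=8):
    # Keep only the most recent tokens that fit
    return tokens[-window:]

history = ["turn1:", "hello", "turn2:", "hi", "turn3:",
           "what", "was", "my", "first", "word?"]
visible = fit_to_window(history)
```

In this sketch the earliest tokens (including "hello") are no longer visible, which is why a model can seem to "forget" the start of a long conversation.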
Inference
The process of using a trained AI model to generate outputs or make predictions based on new input data. Distinct from training, which is how the model learns.
Multimodal AI
AI systems that can understand and generate more than one type of input or output—for example, combining text, images, and audio. Models like GPT-4V and Gemini are multimodal.
Tools & Ecosystem Terms
API (Application Programming Interface)
A way for developers to access and integrate AI capabilities into apps or workflows programmatically.
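In practice this usually means sending a structured request over HTTP. The endpoint URL and field names below are hypothetical, for illustration only; every provider defines its own, so check their documentation for the real ones. The request body is built but never actually sent.

```python
import json

# Hedged sketch of what an AI API request looks like. The endpoint and
# field names are hypothetical; real providers define their own schemas.
ENDPOINT = "https://api.example.com/v1/generate"  # hypothetical URL

def build_request(prompt, model="example-model", max_tokens=100):
    # Typical requests bundle the model name, the prompt, and generation
    # settings into a JSON body sent to the provider's endpoint.
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_request("Summarize this article in one sentence.")
```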
Open-source
Describes models or tools whose code, weights, and/or training data are publicly available, allowing anyone to use, modify, or build on them. Hugging Face is a major platform for open-source AI.
Foundation Model
A large, general-purpose model trained on a broad dataset. It can be adapted (via fine-tuning or prompting) for a wide range of specific tasks.
Model Weights
The internal parameters learned during training that define how an AI model makes decisions. These are what get shared or fine-tuned when models are published.