Enterprise AI Glossary
The definitive dictionary of Artificial Intelligence concepts, technologies, and strategies, decoded for business leaders.
A
Agentic AI
Agentic AI: AI systems designed to act autonomously, making decisions and executing complex workflows without constant human input.
AGI (Artificial General Intelligence)
AGI (Artificial General Intelligence): A hypothetical AI system capable of understanding, learning, and applying intelligence across a wide range of tasks at a human or superhuman level.
Alignment
Alignment: The process of ensuring an AI system's goals and behaviors match human values and intentions.
API (Application Programming Interface)
API (Application Programming Interface): A set of protocols that allows different software systems, such as enterprise software and AI models, to communicate.
API Gateway
API Gateway: A management tool that sits between a client and a collection of backend AI services.
Attention Mechanism
Attention Mechanism: A technique in neural networks that allows models to dynamically focus on specific parts of the input sequence.
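As a rough illustration, the core of attention (scaled dot-product over a single query) can be sketched in a few lines of Python. The vectors below are made-up toy values, not a production implementation:

```python
import math

def softmax(xs):
    # Convert raw scores into weights that sum to 1
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention for one query vector
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)  # how strongly to "focus" on each input position
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# The query matches the first key most strongly, so the output
# leans toward the first value vector.
out = attention(query=[1.0, 0.0], keys=[[1.0, 0.0], [0.0, 1.0]], values=[[10.0], [0.0]])
```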
B
Backpropagation
Backpropagation: An algorithm used in neural networks to calculate gradients and update weights by working backward from errors.
Batch Processing
Batch Processing: Processing a large volume of data or AI inferences all at once rather than in real-time.
Bias in AI
Bias in AI: Systematic errors in AI outputs caused by prejudiced assumptions or imbalanced data in the training set.
C
Chatbot
Chatbot: A software application designed to simulate human conversation, often powered by an LLM.
Chinchilla Scaling Laws
Chinchilla Scaling Laws: Research findings that describe the optimal ratio of model parameters to training data size for compute-efficient AI training.
Cloud Computing
Cloud Computing: The delivery of computing services—including servers, storage, databases, and AI—over the Internet.
Computer Vision
Computer Vision: A field of AI that enables computers to interpret and make decisions based on visual data like images and videos.
D
Data Governance
Data Governance: The overall management of data availability, usability, integrity, and security in enterprise environments.
Data Lake
Data Lake: A centralized repository that allows you to store all your structured and unstructured data at any scale.
Data Pipeline
Data Pipeline: A set of automated processes that extract, transform, and load data from one system to another.
Deep Learning
Deep Learning: A subset of machine learning based on artificial neural networks with multiple layers.
Diffusion Models
Diffusion Models: Generative AI models that learn to create data (like images) by reversing a gradual noise-adding process.
E
Edge AI
Edge AI: Running AI algorithms locally on a hardware device rather than relying on a centralized cloud infrastructure.
Embeddings
Embeddings: Numerical representations of concepts or words that allow AI to understand semantic relationships.
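A common way to compare embeddings is cosine similarity: vectors pointing in similar directions represent semantically related concepts. The three-dimensional vectors below are invented for illustration; real models use hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    # 1.0 means the vectors point in exactly the same direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: "king" and "queen" sit close together, "apple" far away
king  = [0.9, 0.8, 0.1]
queen = [0.8, 0.9, 0.1]
apple = [0.1, 0.2, 0.9]

related = cosine_similarity(king, queen)
unrelated = cosine_similarity(king, apple)
```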
Epoch
Epoch: One complete pass of the training dataset through the machine learning algorithm.
Evaluating AI
Evaluating AI: The systematic assessment of an AI model's performance, safety, and ethical alignment before deployment.
F
Feature Engineering
Feature Engineering: The process of using domain knowledge to extract features from raw data for use in machine learning.
Feature Store
Feature Store: A centralized repository for storing, managing, and serving features used in machine learning models.
Federated Learning
Federated Learning: A decentralized approach to training AI where the model is trained across multiple edge devices holding local data.
Few-Shot Learning
Few-Shot Learning: Training an AI model to perform a task by providing only a small number of examples in the prompt.
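In practice this is just prompt construction: the examples sit inside the prompt itself, and no retraining takes place. A minimal sketch with a hypothetical sentiment task:

```python
# Hypothetical few-shot prompt: two labeled examples teach the model
# the task and output format before the real input arrives.
examples = [
    ("The delivery was two weeks late.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

prompt = "Classify the sentiment of each review.\n\n"
for review, label in examples:
    prompt += f"Review: {review}\nSentiment: {label}\n\n"

# The final, unlabeled input the model is asked to complete
prompt += "Review: The product exceeded expectations.\nSentiment:"
```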
Fine-Tuning
Fine-Tuning: The process of adjusting a pre-trained AI model on a specific dataset to improve its performance on targeted tasks.
Foundation Model
Foundation Model: A large-scale AI model trained on a vast quantity of unlabelled data that can be adapted to many downstream tasks.
G
Generative Adversarial Network (GAN)
Generative Adversarial Network (GAN): A class of ML frameworks consisting of two neural networks that compete against each other to generate realistic data.
Generative AI
Generative AI: AI systems capable of generating novel text, images, code, or other media based on learned patterns.
GPU (Graphics Processing Unit)
GPU (Graphics Processing Unit): Specialized hardware essential for parallel processing tasks, such as training and running AI models.
Gradient Descent
Gradient Descent: An optimization algorithm used to minimize the error in a neural network by adjusting weights iteratively.
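The idea can be shown on a single variable. Minimizing f(x) = (x - 3)², whose lowest point is at x = 3, each step moves against the gradient (a toy sketch, not a neural network):

```python
def gradient(x):
    # Derivative of f(x) = (x - 3)^2
    return 2 * (x - 3)

x = 0.0              # starting guess
learning_rate = 0.1  # step size (a hyperparameter)

for _ in range(100):
    x -= learning_rate * gradient(x)  # step downhill, against the gradient

# After enough iterations, x converges toward the minimum at 3.0
```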
Grounding
Grounding: Connecting an AI model's outputs to verifiable facts or external databases to prevent hallucination.
H
Hallucination
Hallucination: When an AI model generates false, nonsensical, or unverified information while presenting it as fact.
Heuristic
Heuristic: A practical, rule-of-thumb approach to problem-solving within algorithms that may not be optimal but is sufficient.
Human-in-the-Loop (HITL)
Human-in-the-Loop (HITL): A workflow where human oversight is integrated into an AI process to ensure accuracy and ethical alignment.
Hyperautomation
Hyperautomation: The combination of AI, RPA, and machine learning to rapidly identify and automate complex business processes.
Hyperparameter
Hyperparameter: A parameter whose value is used to control the learning process of an AI model.
I
Inference
Inference: The phase where a trained AI model processes new data to make predictions or generate outputs.
Instruction Tuning
Instruction Tuning: Fine-tuning an AI model specifically to follow user instructions and commands.
Intent Recognition
Intent Recognition: The ability of an AI system to deduce the intention of a user's input, often used in conversational AI.
Interoperability
Interoperability: The ability of different AI systems, software, and enterprise tools to communicate and work together seamlessly.
L
LangChain
LangChain: A popular open-source framework designed to simplify the creation of applications using large language models.
Latency
Latency: The time delay between a user submitting a prompt and the AI model returning a response.
Llama
Llama: A family of open-weight large language models developed and released by Meta.
LLM (Large Language Model)
LLM (Large Language Model): A foundational AI model trained on vast amounts of text data to understand and generate human-like language.
Loss Function
Loss Function: A mathematical function that measures how far an AI model's predictions deviate from the actual true values.
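One of the simplest examples is mean squared error, which averages the squared gap between predictions and true values (toy numbers for illustration):

```python
def mean_squared_error(predictions, targets):
    # Average squared deviation between predicted and true values
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

perfect = mean_squared_error([2.0, 4.0], [2.0, 4.0])     # no error at all
off_by_one = mean_squared_error([3.0, 5.0], [2.0, 4.0])  # each prediction off by 1
```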
M
Machine Learning (ML)
Machine Learning (ML): A subset of AI where systems learn to improve performance on a task through experience and data.
MCP (Model Context Protocol)
MCP (Model Context Protocol): An open standard connecting AI models directly to external data sources and tools to eliminate hardcoded integrations.
Model Decay
Model Decay: The degradation of an AI model's predictive performance over time due to changes in real-world data.
Model Ops (MLOps)
Model Ops (MLOps): A set of practices that aims to deploy and maintain machine learning models in production reliably and efficiently.
Model Registration
Model Registration: The process of cataloging and versioning AI models within an MLOps pipeline for tracking and deployment.
Multimodal AI
Multimodal AI: AI systems capable of processing and generating multiple types of data simultaneously, such as text, images, and audio.
N
Named Entity Recognition (NER)
Named Entity Recognition (NER): An NLP technique that identifies and classifies key entities in text into predefined categories.
Neural Network
Neural Network: A computing system inspired by the biological neural networks that constitute animal brains.
NLP (Natural Language Processing)
NLP (Natural Language Processing): The branch of AI concerned with giving computers the ability to understand and interpret human language.
O
Ontology
Ontology: A formal representation of knowledge as a set of concepts and the relationships between them.
Open Source AI
Open Source AI: AI models and tools whose source code and weights are made publicly available for use and modification.
OpenAI
OpenAI: An AI research and deployment company known for creating the GPT series of large language models.
Orchestration
Orchestration: The automated configuration, management, and coordination of complex computer systems and AI services.
Overfitting
Overfitting: When an AI model learns its training data too well, failing to generalize to new, unseen data.
P
Parameter-Efficient Fine-Tuning (PEFT)
Parameter-Efficient Fine-Tuning (PEFT): Techniques that adapt large models by only training a small number of extra parameters, saving compute costs.
Parameters
Parameters: The internal variables and weights that an AI model learns during training to make predictions and generate text.
Pattern Recognition
Pattern Recognition: The automated recognition of patterns and regularities in data using machine learning algorithms.
Predictive Analytics
Predictive Analytics: The use of historical data, statistical algorithms, and ML to identify the likelihood of future outcomes.
Prompt Engineering
Prompt Engineering: The practice of designing and refining input queries to elicit optimal and accurate responses from AI models.
Proof of Concept (PoC)
Proof of Concept (PoC): A small-scale project used to verify that an AI concept has practical potential before full enterprise deployment.
Q
Q-Learning
Q-Learning: A model-free reinforcement learning algorithm used to find the best action to take given a current state.
QLoRA
QLoRA: An efficient fine-tuning approach that reduces memory usage enough to fine-tune a large model on a single GPU.
Quantization
Quantization: Compressing an AI model by reducing the precision of its weights, allowing it to run faster on weaker hardware.
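A minimal sketch of the idea, assuming symmetric 8-bit quantization: each weight is mapped into the integer range [-127, 127] and scaled back on use, trading a little precision for roughly 4x less storage than 32-bit floats:

```python
def quantize(weights):
    # Map floats into int8 range [-127, 127] using one shared scale factor
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    # Recover approximate float values from the compressed integers
    return [q * scale for q in q_weights]

weights = [0.42, -1.27, 0.08]
q, scale = quantize(weights)
restored = dequantize(q, scale)  # close to the originals, small rounding error
```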
R
RAG (Retrieval-Augmented Generation)
RAG (Retrieval-Augmented Generation): A technique that improves AI accuracy by fetching relevant data from a vector database before generating an answer.
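The loop has two steps: retrieve the most relevant snippet, then ground the prompt with it. The sketch below uses naive word overlap as a stand-in for a vector-database similarity search, with made-up company documents:

```python
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The cafeteria opens at 8am on weekdays.",
]

def words(text):
    # Normalize punctuation and case for a crude overlap comparison
    return set(text.lower().replace("?", "").replace(".", "").split())

def retrieve(question, docs):
    # Stand-in for a vector similarity search: pick the doc sharing the most words
    q = words(question)
    return max(docs, key=lambda d: len(q & words(d)))

question = "What is the refund policy?"
context = retrieve(question, documents)

# Ground the model's answer in the retrieved snippet
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```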
Recurrent Neural Network (RNN)
Recurrent Neural Network (RNN): A type of neural network specialized for processing sequential data like text or time series.
Red Teaming
Red Teaming: Actively testing an AI system by simulating adversarial attacks to discover vulnerabilities.
Reinforcement Learning (RL)
Reinforcement Learning (RL): Training AI by rewarding desired behaviors and punishing undesired ones.
Return on AI (ROAI)
Return on AI (ROAI): The financial and operational value generated by an AI initiative compared to its implementation costs.
S
Semantic Search
Semantic Search: A search technique that aims to understand the searcher's intent and contextual meaning rather than just matching keywords.
Synthetic Data
Synthetic Data: Data generated artificially by algorithms rather than by real-world events, used to train AI safely.
System Prompt
System Prompt: The foundational instructions given to an AI model that define its persona, constraints, and overarching goals.
T
Temperature
Temperature: A parameter that controls the randomness or creativity of an AI model's output.
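Mechanically, temperature divides the model's raw scores before they become probabilities: low values sharpen the distribution toward the top choice, high values flatten it. A sketch with invented scores:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution; higher flattens it
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                              # toy raw model scores
cold = softmax_with_temperature(logits, 0.2)          # near-deterministic
hot = softmax_with_temperature(logits, 2.0)           # more varied sampling
```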
Tokenization
Tokenization: The process of breaking down text into smaller units (tokens) that AI models can process and understand.
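A deliberately naive sketch with a made-up vocabulary shows the basic mapping from text to token IDs; production tokenizers (e.g. byte-pair encoding) split into subword pieces rather than whole words:

```python
# Toy vocabulary: every word maps to an ID, unknowns fall back to <unk>
vocab = {"enterprise": 0, "ai": 1, "adoption": 2, "<unk>": 3}

def tokenize(text):
    # Whitespace splitting only -- real tokenizers use subword algorithms
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

ids = tokenize("Enterprise AI adoption")
```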
Training Data
Training Data: The initial dataset used to teach an AI model how to make predictions or generate text.
Transfer Learning
Transfer Learning: Taking an AI model trained on one task and repurposing it as the starting point for a different but related task.
Transformer Architecture
Transformer Architecture: The underlying neural network design used in modern LLMs that allows them to process entire sequences of data in parallel.
U
Unsupervised Learning
Unsupervised Learning: Training an AI on data that has no labels, allowing it to find hidden patterns and structures on its own.
Use Case
Use Case: A specific scenario or business problem where AI can be applied to deliver measurable value.
Use Case Discovery
Use Case Discovery: The strategic process of identifying and prioritizing where AI can deliver the highest ROI in an enterprise.
V
Validation Set
Validation Set: A separate dataset used during training to tune parameters and evaluate a model's performance without overfitting.
Vector Database
Vector Database: A specialized database that stores data as high-dimensional vectors, enabling fast similarity search for AI applications.
Vision-Language Model (VLM)
Vision-Language Model (VLM): An AI model capable of understanding both images and text simultaneously.
W
Weight Decay
Weight Decay: A regularization technique used during training to prevent overfitting by penalizing large weights.
Weights
Weights: The numeric values inside a neural network that are adjusted during training to minimize errors.
Word2Vec
Word2Vec: A technique used to produce word embeddings, representing words as vectors in a continuous mathematical space.
Z
Zero-Shot Learning
Zero-Shot Learning: When an AI model successfully completes a task without any specific prior examples or training data for that exact task.
Zero-Trust Architecture
Zero-Trust Architecture: A security model requiring strict identity verification for every person and device trying to access enterprise AI systems.