What is a Foundation Model?

Foundation Model — A large-scale AI model trained on vast quantities of unlabelled data that can be adapted to many downstream tasks.

Foundation models like GPT-4, Claude, and Llama serve as base platforms that can be adapted for specific tasks through fine-tuning or prompt engineering. Training one from scratch costs millions of dollars, so most organizations use existing foundation models as starting points.
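Of the two adaptation paths, prompt engineering is the lighter-weight one: instead of changing the model's weights, you wrap each input in task-specific instructions. A minimal sketch (the task, labels, and template here are hypothetical, not any vendor's API):

```python
# Prompt engineering sketch: steer a general-purpose foundation model toward
# one narrow task (support-ticket triage) purely by shaping its input.
# The labels and wording below are illustrative assumptions.

def build_classification_prompt(ticket_text: str) -> str:
    """Wrap a raw support ticket in task-specific instructions."""
    return (
        "You are a support-ticket triage assistant.\n"
        "Classify the ticket below as one of: billing, bug, feature_request.\n"
        "Answer with the label only.\n\n"
        f"Ticket: {ticket_text}"
    )

prompt = build_classification_prompt("I was charged twice this month.")
```

In practice the resulting string would be sent to the model's chat or completions endpoint; the point is that the same base model serves many tasks simply by varying this wrapper.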

Frequently Asked Questions

Should my company build its own foundation model?

Almost certainly no. Training a foundation model requires millions in compute. Instead, use existing models and customize them through fine-tuning or RAG for your specific needs.
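To make the RAG option concrete, here is a deliberately minimal sketch of the pattern: retrieve the most relevant snippet from your own documents, then inject it into the prompt so an off-the-shelf foundation model can answer with company-specific context. The document store and word-overlap scoring are illustrative assumptions; production systems use embedding-based vector search.

```python
# Minimal RAG sketch (hypothetical document store): retrieval + prompt assembly.

DOCS = [
    "Refunds are processed within 5 business days.",
    "Premium plans include priority support.",
    "Passwords must be reset every 90 days.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the doc with the most words in common with the query.
    Real systems score with embeddings, not raw word overlap."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str) -> str:
    """Assemble the retrieved context and the question into one prompt."""
    context = retrieve(query, DOCS)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using the context."

prompt = build_prompt("How long do refunds take?")
```

The foundation model never sees your data at training time; the relevant facts ride along in the prompt at query time, which is why RAG is usually far cheaper than fine-tuning, let alone training from scratch.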

What is the difference between a foundation model and a fine-tuned model?

A foundation model is the general-purpose base. A fine-tuned model is a foundation model that has been further trained on your specific data to improve performance on targeted tasks.

Are open-source foundation models as good as proprietary ones?

The gap is narrowing rapidly. Models like Llama 3 and Mistral compete with proprietary models on many benchmarks, especially after fine-tuning for specific use cases.
