Platform Guide

Palantir AIP / Foundry

The enterprise data integration and operational AI platform — Ontology-grounded LLMs via AIP Logic, model-agnostic k-LLM architecture, FedRAMP High through IL6, the $10B Army Enterprise Agreement, and Maven Smart System as a Pentagon Program of Record.

  • Authorization: FedRAMP High (Dec 2024)
  • DoD IL: IL4, IL5, IL6
  • Army Contract: $10B / 10-year (Jul 2025)
  • FedStart Includes: Claude (Apr 2025)
  • Maven Status: Pentagon POR (Mar 2026)
  • Library: palantir_models (not foundry_ml)

Platform Overview

The Product Family

Palantir ships four distinct platforms that federal data scientists will encounter:

Foundry is the enterprise data integration, analytics, and AI deployment platform. Civil government agencies — DHS, HHS, NIH, NASA, the Department of Justice — run on Foundry. So does a large slice of commercial enterprise. If someone at your program office says "we're on Palantir," they almost certainly mean Foundry.

Gotham is the original Palantir product, built for defense and intelligence. Where Foundry thinks in datasets and pipelines, Gotham thinks in intelligence profiles and graph networks — linked entities, pattern-of-life analysis, counter-terrorism workflows. The Army's $10 billion Enterprise Agreement signed in July 2025 covers both Foundry and Gotham.

AIP — the Artificial Intelligence Platform — is not a separate product you install. It is a layer on top of Foundry (and Gotham) that connects large language models to your Ontology. AIP Logic, Agent Studio, AIP Machinery: all of these are the mechanisms by which LLMs read from and write to your real organizational data rather than generating plausible-sounding nonsense into a chat window.

Apollo is the continuous delivery system that handles deployment, configuration management, and software updates across all Palantir environments — including classified networks with no internet connectivity. From your perspective as a data scientist, Apollo is largely invisible: it is the reason Foundry updates appear in your environment without requiring a ticket to your system administrator.

The Ontology: What Actually Differentiates This Platform

The differentiator is the Ontology. In a conventional data environment, you have a personnel table and a mission table connected by foreign keys that only make sense if you read the schema documentation — which is three versions out of date. The Ontology replaces that with a semantic layer that lives above the data.

  • Object Types — schema definitions of real-world entities: Aircraft, Supplier, Patient, Contract
  • Objects — instances: the actual F-35 with tail number 104, the actual contract N00024-25-C-4477
  • Properties — characteristics attached to those objects
  • Link Types — defined relationships: "Pilot flies Aircraft," "Contract awards to Supplier"
  • Actions — defined sets of changes that users can trigger: "Approve Purchase Order," "Update Mission Status"
  • Functions — TypeScript-authored business logic powering calculated properties and complex decision rules

When an LLM in AIP Logic queries your data, it queries through the Ontology. It gets objects with defined semantics, not raw column values from tables it cannot interpret. This is Palantir's answer to LLM hallucination on business-critical data: ground the model in a semantic layer that your organization controls and defines.
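The Ontology primitives above can be caricatured in plain Python to make the concepts concrete. This is an illustrative sketch only: the names (Aircraft, Pilot, update_mission_status) are hypothetical, and in Foundry these are platform-managed definitions, not user-authored classes.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified analogue of Ontology concepts -- NOT Foundry APIs.

@dataclass
class Aircraft:                       # Object Type: schema for a real-world entity
    tail_number: str                  # Property
    status: str = "serviceable"       # Property

@dataclass
class Pilot:                          # Object Type
    name: str
    flies: list = field(default_factory=list)   # Link Type: "Pilot flies Aircraft"

def update_mission_status(aircraft: Aircraft, new_status: str) -> Aircraft:
    """Action: a named, pre-approved change -- not arbitrary mutation."""
    allowed = {"serviceable", "non_serviceable", "non_mission_capable"}
    if new_status not in allowed:
        raise ValueError(f"status {new_status!r} not permitted by this Action")
    aircraft.status = new_status
    return aircraft

f35 = Aircraft(tail_number="104")                 # Object: an actual instance
pilot = Pilot(name="Reyes", flies=[f35])          # Link: Pilot -> Aircraft
update_mission_status(f35, "non_mission_capable")
```

The point of the caricature: reads and writes go through named, typed definitions with semantics attached, never through raw tables.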

graph TD
    A[Raw Source Data<br/>ERP, Sensors, APIs] --> B[Datasets / Transforms<br/>Pipeline Builder and Code Repos]
    B --> C[Ontology<br/>Objects, Links, Actions, Functions]
    C --> D[AIP Layer<br/>Logic, Agent Studio, Machinery]
    C --> E[Applications Layer<br/>Workshop, Quiver, Object Explorer]
    D --> E
    E --> F[End Users<br/>Analysts, Operators, Commanders]

Foundry architecture from raw data to operational user. The Ontology sits in the middle of everything — it is not an optional abstraction layer.

Getting Access

FedRAMP, IL4/IL5/IL6

In December 2024, Palantir received FedRAMP High Baseline Authorization for its full product suite under the Palantir Federal Cloud Service (PFCS). This single authorization covers AIP, Apollo, Foundry, Gotham, FedStart, and Mission Manager.

  • IL4 covers Controlled Unclassified Information — most acquisition data, contractor-sensitive data, some personnel records. IL4 runs on Azure Government.
  • IL5 covers National Security Systems data that is unclassified but sensitive. Also runs on Azure Government.
  • IL6 covers Secret-level classified data. The August 2024 Palantir-Microsoft partnership made Palantir the first industry partner to deploy Azure OpenAI Service in Microsoft's classified Azure Government environments.

The FedStart Program

FedStart is a Palantir offering that lets third-party software vendors deploy their products inside Palantir's existing security accreditation envelope. For a smaller ISV that would otherwise spend 18–24 months pursuing a separate FedRAMP authorization, FedStart compresses that timeline to weeks or months. Notable FedStart additions as of 2025:

  • Anthropic's Claude (April 2025) — available as an AIP-connected LLM in government environments
  • Google Cloud (April 2025) — streamlined FedRAMP High/IL5 accreditation for ISVs on Google Cloud
  • Unstructured.io (August 2025) — AI-ready document parsing at FedRAMP High and IL5

Data Science Tools

Know Which Environment to Use

Foundry has three code-based development environments that are not interchangeable:

Code Repositories is where production work lives. Full Git version control — branches, commits, pull requests, code review. Python, Java, and SQL are all supported. Anything that needs to be maintained, versioned, and scheduled belongs in a Code Repository.

Code Workspaces is the modern IDE environment for exploratory data science and model development. JupyterLab or RStudio in the browser, with direct access to Foundry datasets as training data. When you are ready to publish a trained model, use the Models sidebar to register it.

Code Workbook is the legacy environment, marked [Legacy] in Palantir documentation. New work should go in Code Workspaces.

Python in Foundry

python
# Foundry transform — production pipeline in a Code Repository
from transforms.api import transform, Input, Output
import pandas as pd


@transform(
    output=Output("/defense-programs/logistics/vehicle_status_clean"),
    raw=Input("/defense-programs/logistics/vehicle_status_raw"),
    reference=Input("/defense-programs/reference/vehicle_master"),
)
def compute(output, raw, reference):
    df = raw.pandas()
    ref = reference.pandas()

    # Drop records where vehicle_id cannot be resolved against master list
    valid_ids = set(ref["vehicle_id"])
    df_clean = df[df["vehicle_id"].isin(valid_ids)].copy()

    # Normalize status codes — the source system uses three different encodings
    status_map = {"SVCBL": "serviceable", "NS": "non_serviceable", "NMCS": "non_mission_capable"}
    df_clean["status_normalized"] = df_clean["status_code"].map(status_map).fillna("unknown")

    output.write_pandas(df_clean)

The @transform decorator is the core Foundry pattern. Your function declares its inputs and outputs; the platform handles scheduling, lineage tracking, and output versioning automatically. When the vehicle_status_raw dataset updates, Foundry knows this transform needs to re-run.
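Because the transform body is ordinary pandas logic, it can be exercised outside Foundry with simple test doubles before it ever runs on the platform. FakeInput and FakeOutput below are hypothetical stand-ins, not part of transforms.api; the body re-implements compute() inline for illustration (in a Code Repository you would import the transform module and call it directly).

```python
import pandas as pd

class FakeInput:
    """Test double mimicking the .pandas() accessor used in the transform."""
    def __init__(self, df):
        self._df = df
    def pandas(self):
        return self._df

class FakeOutput:
    """Test double capturing whatever the transform writes."""
    def __init__(self):
        self.result = None
    def write_pandas(self, df):
        self.result = df

raw = FakeInput(pd.DataFrame({
    "vehicle_id": ["V1", "V2", "V9"],        # V9 is absent from the master list
    "status_code": ["SVCBL", "XX", "NMCS"],  # XX is an unknown encoding
}))
reference = FakeInput(pd.DataFrame({"vehicle_id": ["V1", "V2"]}))
out = FakeOutput()

# Same logic as compute() above:
df = raw.pandas()
valid_ids = set(reference.pandas()["vehicle_id"])
df_clean = df[df["vehicle_id"].isin(valid_ids)].copy()
status_map = {"SVCBL": "serviceable", "NS": "non_serviceable", "NMCS": "non_mission_capable"}
df_clean["status_normalized"] = df_clean["status_code"].map(status_map).fillna("unknown")
out.write_pandas(df_clean)
# out.result now holds V1 (serviceable) and V2 (unknown); V9 was dropped.
```

This kind of harness catches mapping and join bugs locally, without waiting on a Foundry build.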

ML on Foundry

Use palantir_models, not foundry_ml. The foundry_ml library was deprecated on October 31, 2025 and is no longer available. All ML work in Foundry uses palantir_models. If you inherit a program with foundry_ml code, migrating to palantir_models is the first priority before any new model development.
python
# Code Workspaces — training and publishing a model with palantir_models
import palantir_models as pm
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Load training data directly from a Foundry dataset
training_data = pm.datasets.load("/defense-programs/supply-chain/maintenance_training_set")
df = training_data.pandas()

features = ["days_since_last_service", "total_flight_hours", "component_age_days",
            "operating_temp_avg", "vibration_anomaly_score"]
target = "failure_within_30_days"

X = df[features]
y = df[target]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))

# Publish to Foundry model registry
# The model is now available to AIP Logic and downstream transforms
published = pm.models.publish(
    model=model,
    name="component_failure_predictor",
    description="GBM model predicting component failure within 30 days based on maintenance history",
    features=features,
    target=target,
)

print(f"Published model version: {published.version}")
print(f"Model RID: {published.rid}")

Model-Backed Ontology Objects

The most operationally powerful pattern in Foundry is the model-backed Ontology object. Rather than having a model output that lives in a dataset nobody checks, you surface model predictions as properties on Ontology objects that end users interact with daily. The Aircraft object type has a predicted_failure_probability property. Behind that property, a deployed palantir_models model runs on each object's maintenance history. Operators in Workshop see the prediction alongside the aircraft status. An AIP Logic block can flag high-probability failures in natural language. Actions allow a maintenance coordinator to trigger a work order directly from the same interface.
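In dataset terms, the pattern reduces to joining model scores onto the dataset that backs the object type, so the Ontology can map the score column to a property. A pandas sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical backing dataset for the Aircraft object type.
aircraft = pd.DataFrame({
    "tail_number": ["104", "211"],
    "status": ["FMC", "NMC"],
})

# Hypothetical model output, e.g. from a deployed palantir_models model.
scores = pd.DataFrame({
    "tail_number": ["104", "211"],
    "predicted_failure_probability": [0.07, 0.81],
})

# The backing dataset gains the model output as a column; the Ontology then
# maps that column to the predicted_failure_probability property on Aircraft.
backed = aircraft.merge(scores, on="tail_number", how="left")

# Downstream, an AIP Logic block or Workshop filter can flag the risky tails.
flagged = backed[backed["predicted_failure_probability"] > 0.5]
```

The left join matters: an aircraft with no score yet still appears as an object, just with an empty property.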

AIP Capabilities

AIP Logic

AIP Logic is where you build LLM-powered functions without writing LLM API calls or managing model infrastructure. The core building block is the Use LLM Block. AIP Logic functions can be equipped with:

  • Data tools — read from the Ontology, query objects and properties and links
  • Logic tools — execute Foundry Functions (TypeScript business logic)
  • Action tools — write back to the Ontology, trigger Actions safely

An AIP Logic function cannot take arbitrary write actions. It can only invoke Actions that have been explicitly defined and approved in the Ontology. This is the difference between an LLM that can do anything and an LLM that can do specifically what your organization has approved.
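The allowlist constraint can be sketched in plain Python. This is an illustration of the pattern, not the AIP API; the action names and registry are hypothetical.

```python
# The only Actions the model-facing tool layer may invoke are the ones
# registered and approved ahead of time.
APPROVED_ACTIONS = {
    "approve_purchase_order": lambda order_id: f"PO {order_id} approved",
    "update_mission_status": lambda mission_id, status: f"{mission_id} -> {status}",
}

def invoke_action(name, **kwargs):
    """The single write path exposed to the model: fail closed on anything else."""
    if name not in APPROVED_ACTIONS:
        raise PermissionError(f"Action {name!r} is not approved in the Ontology")
    return APPROVED_ACTIONS[name](**kwargs)

print(invoke_action("approve_purchase_order", order_id="PO-1138"))

# An unapproved call is rejected rather than improvised:
try:
    invoke_action("delete_all_records")
except PermissionError as e:
    print(e)
```

The design choice is fail-closed: anything not on the list raises, so an LLM cannot talk its way into a write that was never approved.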

Agent Studio

Agent Studio extends AIP Logic to full conversational agents with memory, context, and multi-turn interaction. Agents built in Agent Studio can be deployed inside Foundry Workshop applications or externally via the OSDK. A Logic function executes once with a prompt and returns a result. An Agent maintains conversation state, can plan multi-step actions, remembers earlier turns, and can ask clarifying questions before taking action.

AIP Machinery (February 2025)

AIP Machinery is Palantir's product for human-in-the-loop workflows. You define which steps in a process require human approval, and the Workshop application surfaces those decisions to the appropriate reviewer. The automation handles the routine cases; the humans handle the exceptions. Over time, the confidence threshold adjusts as the model proves itself on a given decision type.
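A minimal sketch of the threshold-routing idea, in plain Python with hypothetical names rather than Machinery's actual configuration:

```python
def route_decision(case_id, model_confidence, threshold=0.90):
    """Automate above the threshold; queue everything else for a human."""
    if model_confidence >= threshold:
        return ("auto_approved", case_id)
    return ("human_review", case_id)

def adjust_threshold(threshold, agreement_rate, step=0.02, floor=0.5):
    """Loosen the gate gradually as reviewers keep agreeing with the model."""
    if agreement_rate > 0.95:
        return max(floor, threshold - step)
    return threshold

# Routine cases clear automatically; the borderline one goes to a reviewer.
decisions = [route_decision(c, p) for c, p in
             [("C-1", 0.97), ("C-2", 0.62), ("C-3", 0.91)]]
```

The second function is the "model proves itself" loop described above: the threshold only moves when human reviewers keep confirming the automated outcomes.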

The k-LLM Philosophy

Palantir deliberately built AIP to be model-agnostic. You can configure multiple LLMs simultaneously in your enrollment — GPT-4, Claude, other models — and select different models for different Logic blocks or agents based on capability, cost, or policy requirements. Swapping models does not require rewriting your AIP Logic or agents. The Ontology grounding and the tools layer remain constant across model changes.
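The swap-without-rewrite property can be illustrated with a toy registry. Plain Python; the provider and model strings are stubbed placeholders, not AIP configuration:

```python
# Logic blocks reference a model *role*; a config maps roles to providers.
MODEL_REGISTRY = {
    "drafting": {"provider": "openai", "model": "gpt-4"},
    "review":   {"provider": "anthropic", "model": "claude"},
}

def run_logic_block(role, prompt):
    """The block's logic never names a provider -- it asks for a role."""
    cfg = MODEL_REGISTRY[role]
    # In AIP the platform handles dispatch; here it is stubbed as a string.
    return f"[{cfg['provider']}:{cfg['model']}] {prompt}"

# Swapping models is a config change, not a rewrite of the Logic block:
MODEL_REGISTRY["drafting"] = {"provider": "anthropic", "model": "claude"}
```

Because the Ontology grounding and tool definitions sit outside the registry, nothing else in the workflow changes when a role is remapped.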

Data Integration

Pipeline Builder vs. Code Repositories

flowchart TD
    A{Is the transform logic<br/>primarily joins/filters/<br/>aggregations?} -->|Yes| B{Does a non-engineer<br/>need to maintain it?}
    A -->|No| C[Code Repository<br/>Python / Java / SQL]
    B -->|Yes| D[Pipeline Builder<br/>No-code visual interface]
    B -->|No| E{Is it exploratory<br/>or production?}
    E -->|Exploratory| F[Code Workspaces<br/>JupyterLab]
    E -->|Production| C

Decision tree for choosing the right development environment in Foundry.

Databricks Partnership (March 2025)

The March 2025 Palantir-Databricks partnership introduced zero-copy Unity Catalog integration. Data governed in a Databricks Lakehouse can register directly in Foundry as Virtual Tables without ETL or duplication. From a Foundry workflow, a Virtual Table looks like any other dataset — it has lineage, it participates in transforms, it can be mapped to Ontology objects. The underlying data stays in Databricks Delta Lake format, governed by Unity Catalog, and Foundry reads it in place.

This is the pattern for DoD programs where some data lives on Advana (which runs Databricks) and other workflows run on Foundry. Databricks is where you build AI; Palantir is where you deploy AI into operations.

Government Adoption

The Army Enterprise Agreement

The U.S. Army signed a $10 billion, 10-year Enterprise Service Agreement with Palantir in July 2025. The deal consolidated 75 separate contracts — 15 prime contracts and 60 related contracts — into one framework with volume-based pricing, available to other DoD components beyond the Army itself.

Maven Smart System

Maven Smart System became a Pentagon Program of Record in March 2026, per a memo from Deputy Defense Secretary Stephen Feinberg. That designation means stable, long-term funding. Oversight transferred from the National Geospatial-Intelligence Agency to the CDAO.

The publicly documented capabilities: Maven processes large volumes of battlefield data from satellites, radars, drones, sensors, and intelligence reports. It identifies potential targets and threats. It supports natural language queries through AIP — users ask questions in plain English and receive grounded answers without needing to know which dataset to query or how to write the filter. NATO acquired Maven Smart System NATO (MSS NATO) in April 2025; a $240 million DoD contract for battlefield decision support followed in January 2026.

Platform Comparison

| Dimension | Palantir Foundry/AIP | Databricks | Qlik | Navy Jupiter |
| --- | --- | --- | --- | --- |
| Core metaphor | Ontology (semantic objects, actions) | Lakehouse (Delta tables, open format) | BI and ETL | DON enterprise data environment |
| Primary strength | Operational AI deployment | Model building and training at scale | Analytics and data integration | DON-specific data governance |
| Authorization | FedRAMP High, IL4/IL5/IL6 | FedRAMP High, DoD IL5 | FedRAMP Moderate, IL2/IL4 | DON-specific (NIPRNET/SIPRNET/JWICS) |
| LLM integration | AIP (native, Ontology-grounded) | Mosaic AI | Limited (Qlik Answers) | Limited |
| No-code tools | Strong (Pipeline Builder, AIP Logic) | Primarily notebook-based | Strong (Qlik Sense) | Limited |
| Scale for model training | Moderate | Excellent (Spark at scale) | Not applicable | Limited |
| Writeback/actions | Native (Actions, Workshop) | Separate tooling required | Limited | Limited |

Where This Goes Wrong

Failure Mode 1: Skipping the Ontology

Teams build Foundry transforms exactly like any other data pipeline, producing datasets that feed dashboards — and never model the Ontology layer. Workshop applications read directly from datasets. Multiple datasets represent the same real-world entity with slightly different schemas. LLM-generated outputs have hallucinated field interpretations.

What to do instead: Define the Ontology before you build the application. It takes longer upfront. Programs that skip it spend months retrofitting, and retrofitting the Ontology after downstream applications are built is significantly harder than modeling it correctly the first time.

Failure Mode 2: Treating Code Workbook as a Production Environment

Exploratory analysis done in Code Workbook gets promoted to production by simply running it in Code Workbook on a schedule. Production datasets end up produced by notebooks with no version history, and nobody can answer "what changed between last week's run and this week's run?"

What to do instead: Use Code Workbook for exploration. When something in Code Workbook is worth running more than once, migrate it to a Code Repository. Foundry provides an export function for exactly this purpose.

Failure Mode 3: Using foundry_ml

Following older documentation, blog posts, or institutional knowledge that references the foundry_ml library. Your import statement says import foundry_ml. The library throws errors or is simply unavailable — it was deprecated October 31, 2025.

What to do instead: Use palantir_models. Full stop.

Platform Decision Checklist

| Question | If Yes | If No |
| --- | --- | --- |
| Do you need to deploy AI into operational workflows (not just analysis)? | Foundry/AIP is well-suited | Consider whether a simpler BI platform covers the need |
| Does your data have complex entity relationships that benefit from semantic modeling? | The Ontology is worth the investment | You may be over-engineering with Foundry |
| Do you need FedRAMP High or IL5/IL6 authorization? | Foundry covers this; check your specific ATO | Simpler platforms may work |
| Do you have Databricks data assets you need to surface without ETL? | Use Unity Catalog Virtual Tables integration | Standard Pipeline Builder connectors |
| Do you need LLMs grounded in your specific operational data? | AIP Logic + Ontology is the right architecture | Generic LLM APIs without grounding will hallucinate |

Platform Close

The one thing to remember: The Ontology is not optional boilerplate you add after the real work is done — it is the architecture that makes AI grounding, operational writeback, and fine-grained access control possible in Foundry, and skipping it turns a sophisticated platform into an expensive pipeline tool.

What to do Monday morning: If you have access to a Foundry enrollment, go to learn.palantir.com and run the "Speedrun: Your First Agentic AIP Workflow" module — it takes under an hour and covers the complete path from dataset to Ontology object to AIP Logic function to Workshop application. If you are evaluating Foundry for a federal program, ask for an AIP Bootcamp slot with your own data rather than a vendor demo with canned data.