AI Spend Index
Stanford Computer Science

About the Index

An independent benchmark for AI tooling spend in software engineering.

The problem

Nobody knows how their AI spend compares

Spend is rising fast

AI tooling costs grow every quarter with no sign of slowing down.

No way to compare

Leaders set budgets without knowing what similar companies actually spend.

Huge range across companies

Spend varies by orders of magnitude. Without context, your number means nothing.

Scope

What the index tracks

Software engineering AI spend only.

Included

  • Code assistant licenses (Copilot, Cursor, Claude Code, Codex, etc.)
  • AI code review and testing tools
  • AI documentation generation tools
  • Inference costs tied to development work
  • All figures normalized to spend per developer per month
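The normalization above is simple arithmetic: divide total annual spend by engineering headcount, then by twelve. A minimal sketch (the function name and field names are illustrative, not the actual submission schema):

```python
def spend_per_dev_per_month(total_annual_ai_spend: float,
                            engineering_headcount: int) -> float:
    """Normalize total annual AI tooling spend to spend per developer per month.

    Illustrative only; mirrors the index's stated normalization, not its code.
    """
    return total_annual_ai_spend / engineering_headcount / 12


# Example: $540,000/year across 150 engineers
print(spend_per_dev_per_month(540_000, 150))  # → 300.0
```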

Not included

  • AI used outside engineering (marketing, legal, HR, support, etc.)
  • GPU and ML infrastructure costs
  • AI consulting and one-time transformation projects

Why trust this data

Independent, private, and continuously updated

No commercial incentive

No vendor ties, no consulting revenue, no reason to skew results.

Built-in privacy

Company names are encrypted and not shown publicly or to other contributors.

Continuously updated

A living benchmark that keeps pace with how fast AI tooling evolves.

The AI Spend Index is an independent research project. It is not an official university product or endorsement.

Data protection

Privacy and data protection

Encrypted identities

Company names are encrypted and not shown publicly or to other contributors.

Aggregate public data

The public site shows only a single overall market distribution across the full dataset.

Pseudonymous contributors

Contributor views use pseudonymous company aliases with bucketed industry, region, and engineering headcount.

No sale or AI training

Identifiable data is not sold or used for AI model training. Hosting, auth, and email providers process it only to operate the service.

If you need internal approval from legal, compliance, or security, review the public Data Protection Commitment before signing in.

Current versions: Data Protection 2026-03-05, Terms 2026-03-05, Privacy 2026-03-05.

Common questions

Contributing, privacy, and access

What data do I need to submit?

Four required fields: company name, industry, engineering headcount, and total annual AI spend. Optional fields improve the benchmark but aren't required. The form usually takes a few minutes.

How is my company's identity protected?

Company names are encrypted at rest and are not shown publicly or to other contributors. Project operators can access identifiable submissions for review and operations. Contributors see pseudonymous aliases with bucketed dimensions. The public site shows only the overall distribution.

How long until I get access to the full benchmarks?

Submissions are reviewed by the team. Once your submission is approved and released, you receive an approval email granting contributor access.

What is the difference between public and contributor views?

The public view shows one overall distribution histogram. Contributors get breakdowns by industry, size, and region, plus a personalized peer comparison on the My Company page.

View all questions

Ready to benchmark your spend?

Takes a few minutes. Questions? Get in touch.