Product
Five dimensions of AI economic visibility, on one unified model.
Metron ingests cost, product, customer, and pricing data and joins them into a single economic view of your AI business. Every dimension below answers a question your finance, product, or engineering lead is being asked right now.
Feature Economics
Which features are making money. Which are quietly destroying margin.
Every AI feature gets a full economic profile — total cost, cost per invocation, cost per active user, trend, and a margin health classification computed against the plan revenue it enables.
- Margin health: healthy, watch, at-risk, margin-negative
- Cost trend over time per feature
- Cost per active user by plan tier
Feature margin · per active user
March · Pro plan
- Chat assistant · $2.10
- AI onboarding · $3.40
- Semantic search · $7.80
- Doc summarization · $16.00
Cost / active user · health computed against plan revenue
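As a sketch of how a classification like this might work: compare each feature's cost per active user to the revenue of the plan it runs on. The thresholds and the $49/mo Pro price below are illustrative assumptions, not Metron's actual cutoffs.

```python
def margin_health(cost_per_user: float, plan_revenue_per_user: float) -> str:
    """Classify a feature's cost per active user against plan revenue.

    Thresholds are illustrative; in Metron they would be configurable.
    """
    ratio = cost_per_user / plan_revenue_per_user
    if ratio >= 1.0:
        return "margin-negative"   # the feature costs more than the plan earns
    if ratio >= 0.5:
        return "at-risk"
    if ratio >= 0.25:
        return "watch"
    return "healthy"

# March, Pro plan (assumed $49/mo for illustration):
for feature, cost in [("Chat assistant", 2.10), ("AI onboarding", 3.40),
                      ("Semantic search", 7.80), ("Doc summarization", 16.00)]:
    print(f"{feature}: {margin_health(cost, 49.00)}")
```

At an assumed $49/mo, only doc summarization crosses into "watch" territory; the classification shifts with the plan price, which is why health is computed against plan revenue rather than raw cost.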
Customer Profitability
Which accounts are underwater — before the renewal conversation, not after.
Every account gets a profitability profile: monthly AI cost, monthly revenue, AI cost as a percentage of revenue, and a health status. Catch the underwater enterprise contract four months before it renews.
- Per-account AI cost and revenue
- Underwater detection at configurable thresholds
- Top cost-driving features per customer
Top accounts by AI cost ratio
3 underwater · 2 at risk
- Helix Labs · Enterprise · $12,500 revenue · $22,140 AI cost · 177% · Margin-negative
- Northwind AI · Enterprise · $10,000 revenue · $8,310 AI cost · 83% · At risk
- Nimbus Health · Enterprise · $15,000 revenue · $11,800 AI cost · 79% · At risk
- Atlas Robotics · Pro · $2,400 revenue · $640 AI cost · 27% · Healthy
- Beacon · Pro · $2,400 revenue · $310 AI cost · 13% · Healthy
Threshold default 40% of recognized revenue
Plan Economics
Whether your pricing actually matches how AI gets used.
Plan-level analysis: average AI cost per account, average revenue, contribution margin, standard deviation of cost across accounts, and overconsumption patterns. Surfaces both underpriced plans and broken tails.
- Contribution margin by plan
- Std deviation reveals dangerous tails
- Overconsumption patterns by tier
Pro plan · AI cost distribution
Per-account, last 30 days
Median cost
$11.40
P95
$48.20
Std deviation
$14.10
Avg margin healthy at $8.25 · tail (≥ $40) is 4% of accounts and 31% of cost
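Statistics like these fall out of the per-account cost series directly. A sketch with the standard library, using synthetic per-account costs (not Metron data) and a crude index-based P95:

```python
import statistics

# Synthetic per-account monthly AI cost for one plan, sorted ascending.
costs = sorted([3.2, 5.1, 8.0, 9.4, 11.4, 12.0, 13.5, 18.2, 22.0, 47.5])

median = statistics.median(costs)
stdev = statistics.pstdev(costs)                      # population std dev
p95 = costs[min(len(costs) - 1, int(0.95 * len(costs)))]  # crude P95

# The "broken tail": accounts at or above a cost cutoff, and their
# share of accounts vs. their share of total cost.
TAIL_CUTOFF = 40.0
tail = [c for c in costs if c >= TAIL_CUTOFF]
tail_share_accounts = len(tail) / len(costs)
tail_share_cost = sum(tail) / sum(costs)
```

In this synthetic sample one account in ten sits in the tail but carries roughly a third of the cost, which is the shape the std deviation is there to flag: a healthy average can coexist with a tail that breaks the plan's economics.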
Waste & Savings
Specific, quantified, actionable. Not generic recommendations.
Each finding ties to a dollar estimate and a remediation path. Detected against your actual usage — model overkill, batch eligibility, prompt caching gaps, orphaned spend, staging on production models, free-tier subsidy.
- Six finding types, each with a dollar estimate
- Remediation path included
- Backed by your historical output quality
Waste & savings · prioritized
5 findings · $10,110 / mo identified
- Model overkill — semantic search · −$4,200 / mo · High
- Batch eligibility — overnight summaries · −$1,860 / mo · High
- Prompt caching gap — onboarding agent · −$980 / mo · Medium
- Orphaned key — legacy classify-v1 · −$430 / mo · High
- Free-tier subsidy — summarization · −$2,640 / mo · High
Each finding includes a remediation path and a confidence level
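One plausible way to prioritize findings like these is to weight each dollar estimate by its confidence. A minimal sketch; the figures mirror the panel above, but the confidence weights are assumptions:

```python
# (name, monthly savings in dollars, confidence) — values from the panel above.
findings = [
    ("Model overkill — semantic search",        4200, "high"),
    ("Batch eligibility — overnight summaries", 1860, "high"),
    ("Prompt caching gap — onboarding agent",    980, "medium"),
    ("Orphaned key — legacy classify-v1",        430, "high"),
    ("Free-tier subsidy — summarization",       2640, "high"),
]

# Illustrative confidence weights, not Metron's actual model.
weight = {"high": 1.0, "medium": 0.6, "low": 0.3}

# Rank by expected monthly savings: dollar estimate times confidence weight.
ranked = sorted(findings, key=lambda f: f[1] * weight[f[2]], reverse=True)
total_identified = sum(f[1] for f in findings)
print(total_identified)  # 10110 — matches the $10,110/mo headline above
```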
Forecasting & Scenarios
Model the unit economics before you ship the feature, not after.
Scenarios for feature rollouts, plan changes, model swaps, caching, repricing, and growth projections. Each output is a shareable economic projection — finance reviews margin impact before the launch, not at the end of the quarter.
- Feature rollout scenarios
- Pricing and plan-restructure modeling
- Per-plan unit economics under growth
Scenario · roll out summarization to all Pro accounts
Projection · 90 days
Monthly AI cost
+$32,388
Gross margin
−6.5 pts
Margin-positive features
No change
Underwater accounts
+8
Net new MRR
+$48,200
Contribution margin / account
−$43
Shareable economic projection · finance reviews before launch, not after
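The headline numbers in a projection like this reduce to simple arithmetic over the rollout population. In the sketch below, the Pro account count (750) is an assumption chosen so the per-account figure lands near the −$43 shown; it is not a Metron figure:

```python
# Scenario inputs from the projection card above.
added_ai_cost = 32_388   # projected monthly AI cost increase
net_new_mrr = 48_200     # expansion revenue the rollout is projected to unlock
pro_accounts = 750       # ASSUMED rollout population, for illustration only

# Per-account hit to contribution margin from the new AI cost.
per_account_cost = added_ai_cost / pro_accounts

# Net monthly cash impact: new revenue minus new cost.
net_monthly_impact = net_new_mrr - added_ai_cost
print(round(per_account_cost), net_monthly_impact)
```

The point of running this before launch is that both lines are visible at once: the rollout can be cash-positive overall while still dragging contribution margin per account, and finance gets to weigh that trade before the feature ships.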
Attribution layer
Connect every token to a feature, a customer, and a plan.
The hard part of margin intelligence isn't reading provider bills — it's tying spend to the business objects that give it meaning. Metron offers three integration paths, ordered by effort.
API key mapping
Zero code
Label each API key with the feature and team it belongs to. Works immediately for any company already separating keys by service.
SDK tagging
Two lines
Drop-in wrapper around your provider client that attaches feature, customer, and workflow metadata to every call. Captured server-side, joined to spend.
Proxy mode
Routing change
All traffic routes through a Metron endpoint that intercepts calls, attaches attribution, enforces budget policies, and logs decisions.
SDK example · Python
Two lines of code. Every call attributed.
Wrap your existing provider client. Pass feature, customer, and plan metadata at construction. Every downstream call is captured, tagged, and joined to the cost record. TypeScript, Node, and Go SDKs ship the same shape.
- No changes to call sites
- Works alongside existing observability stacks
- Confidence levels surface partial attribution honestly
from metron import track
import openai

client = track(
    openai.OpenAI(),
    feature="document-summarization",
    customer_id=account.id,
    plan=account.plan,
)

# Every call through `client` is automatically attributed to a
# feature, a customer, and a plan. No further changes required.
response = client.chat.completions.create(...)

Get started
Bring your AI economics into focus — in days, not quarters.
Most teams reach a usable economic view in 1–2 days and a complete attribution model within two weeks.