# Dashboard & Models

Last updated on 2026-04-05
The dashboard provides a real-time overview of your AI operations -- model performance, token consumption, costs, and recent activity. The models page is the registry for managing all LLM models across your organization.
## Dashboard

Route: `/dashboard`
The dashboard combines stat cards, three chart panels, and an activity feed to give AI ops teams a single-pane view of system health.
### Stat Cards

Six `StatCard` components across the top row, each showing a metric with a trend indicator:

| Stat | Example Value | Trend |
|---|---|---|
| Active Models | 8 | +2 this month |
| Total Tokens (30d) | 12.4M | +18.3% |
| Monthly Cost | $2,847 | -5.2% |
| Avg Latency | 245ms | -12% |
| Error Rate | 0.8% | -0.3% |
| Active Users | 24 | +4 this week |
```tsx
import { Zap } from "lucide-react" // icon source assumed (lucide-react)
import { StatCard } from "@/components/dashboard/stat-card"

<StatCard
  title="Total Tokens (30d)"
  value="12.4M"
  change="+18.3%"
  icon={Zap}
/>
```
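The `change` strings on these cards are period-over-period percentages. A minimal sketch of how they could be derived (the `formatChange` helper is illustrative, not part of the component API):

```typescript
// Hypothetical helper: format a period-over-period change as the
// signed percentage string a StatCard expects in its `change` prop.
function formatChange(current: number, previous: number): string {
  if (previous === 0) return "n/a" // avoid dividing by zero
  const pct = ((current - previous) / previous) * 100
  const sign = pct >= 0 ? "+" : ""
  return `${sign}${pct.toFixed(1)}%`
}
```

For example, `formatChange(12.4, 10.48)` returns `"+18.3%"`, matching the Total Tokens card above.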
### Token Usage Chart

A Recharts `AreaChart` showing daily token consumption over the past 30 days, split by input and output tokens. Uses `--chart-1` (purple) for input tokens and `--chart-2` (blue) for output tokens.
```tsx
import { TokenUsageChart } from "@/components/dashboard/token-usage-chart"

<TokenUsageChart data={tokenUsageData} />
```
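The exact shape of `tokenUsageData` is defined in the seed data; a plausible per-day shape (an assumption for illustration) and the aggregation behind a headline figure like 12.4M:

```typescript
// Assumed shape of one point in tokenUsageData -- one entry per day.
interface TokenUsagePoint {
  date: string         // ISO day, e.g. "2026-04-01"
  inputTokens: number
  outputTokens: number
}

// Sum both series across the window to get a total-token stat.
function totalTokens(data: TokenUsagePoint[]): number {
  return data.reduce((sum, d) => sum + d.inputTokens + d.outputTokens, 0)
}
```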
### Cost Tracking Chart

A Recharts `BarChart` showing cost breakdown by model. Each bar represents a model's monthly spend, color-coded by provider (OpenAI, Anthropic, Google, etc.).
```tsx
import { CostTrackingChart } from "@/components/dashboard/cost-tracking-chart"

<CostTrackingChart data={costData} />
```
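One bar per model means aggregating raw cost entries by model name before charting. A sketch (the `CostEntry` shape is an assumption, not the actual seed format):

```typescript
// Assumed raw entry: one cost record per model, tagged with its provider.
interface CostEntry {
  model: string
  provider: string
  cost: number // USD for the month
}

// Collapse entries into one total per model, ready for a BarChart.
function costByModel(entries: CostEntry[]): { model: string; total: number }[] {
  const totals = new Map<string, number>()
  for (const e of entries) {
    totals.set(e.model, (totals.get(e.model) ?? 0) + e.cost)
  }
  return Array.from(totals, ([model, total]) => ({ model, total }))
}
```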
### Latency Trends Chart

A Recharts `LineChart` showing average response latency across models over time. It plots P50, P95, and P99 latency lines with hover tooltips.
```tsx
import { LatencyChart } from "@/components/dashboard/latency-chart"

<LatencyChart data={latencyData} />
```
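P50, P95, and P99 are percentiles over per-request latency samples. A minimal nearest-rank sketch of the underlying computation (not the chart component itself):

```typescript
// Nearest-rank percentile over latency samples in milliseconds.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b)
  const rank = Math.ceil((p / 100) * sorted.length)
  return sorted[Math.max(0, rank - 1)]
}
```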
### Recent Activity Feed
A chronological list of recent AI ops events -- model deployments, prompt updates, error spikes, cost threshold alerts, and team changes.
```tsx
import { ActivityFeed } from "@/components/dashboard/activity-feed"

<ActivityFeed items={recentActivity} />
```
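Feed items carry an event kind and a timestamp; the visible label is typically a relative time. A sketch under assumed names (neither `ActivityItem` nor `relativeTime` is confirmed by the component API):

```typescript
// Assumed event kinds, mirroring the list above.
type ActivityKind =
  | "model_deployed"
  | "prompt_updated"
  | "error_spike"
  | "cost_alert"
  | "team_change"

interface ActivityItem {
  kind: ActivityKind
  message: string
  timestamp: string // ISO datetime
}

// "25m ago" / "2h ago" / "3d ago" style label for the feed.
function relativeTime(iso: string, now: Date = new Date()): string {
  const mins = Math.floor((now.getTime() - new Date(iso).getTime()) / 60_000)
  if (mins < 60) return `${mins}m ago`
  const hours = Math.floor(mins / 60)
  if (hours < 24) return `${hours}h ago`
  return `${Math.floor(hours / 24)}d ago`
}
```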
### Dashboard Layout

```text
+-----------------------------------------------------------+
| Stat Card x 6 (Active Models, Tokens, Cost, Latency, ...) |
+-----------------------------+-----------------------------+
| Token Usage (AreaChart)     | Cost by Model (BarChart)    |
|                             |                             |
+-----------------------------+-----------------------------+
| Latency Trends (LineChart)  | Recent Activity             |
|                             | (activity feed list)        |
+-----------------------------+-----------------------------+
```
## Models

Route: `/models`
The models page is the central registry for all LLM models used across your organization. It provides deployment tracking, version management, and performance comparison.
### Model Registry Table
A sortable, filterable table listing all registered models:
- Search -- filter models by name, provider, or ID
- Filters -- provider (OpenAI, Anthropic, Google, Meta), status (deployed, staging, deprecated), type (chat, completion, embedding)
- Columns -- model name, provider, version, status, avg latency, token cost, last deployed
- Row actions -- view details, compare, deploy, deprecate
```tsx
import { ModelRegistryTable } from "@/components/models/model-registry-table"

<ModelRegistryTable models={models} />
```
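Search and filtering reduce to a predicate over the model fields. A sketch of the search-plus-status logic (the `Model` shape here is a simplified assumption):

```typescript
interface Model {
  id: string
  name: string
  provider: string
  status: "deployed" | "staging" | "deprecated"
}

// Case-insensitive search across name, provider, and ID,
// with an optional status filter -- mirroring the table controls.
function filterModels(
  models: Model[],
  query: string,
  status?: Model["status"],
): Model[] {
  const q = query.toLowerCase()
  return models.filter(
    (m) =>
      (status === undefined || m.status === status) &&
      [m.name, m.provider, m.id].some((f) => f.toLowerCase().includes(q)),
  )
}
```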
### Deployment Status

Each model displays a status badge indicating its current state:

| Status | Badge Color | Meaning |
|---|---|---|
| Deployed | Green | Active in production |
| Staging | Yellow | Testing before deployment |
| Deprecated | Red | Phased out, not recommended |
```tsx
import { ModelStatusBadge } from "@/components/models/model-status-badge"

<ModelStatusBadge status="deployed" />
```
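The badge color can be driven by a small lookup keyed on the status strings above; a sketch (the `statusColor` name is illustrative):

```typescript
type ModelStatus = "deployed" | "staging" | "deprecated"

// Status-to-color mapping, per the table above.
const statusColor: Record<ModelStatus, "green" | "yellow" | "red"> = {
  deployed: "green",
  staging: "yellow",
  deprecated: "red",
}
```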
### Version History
A timeline view showing the deployment history for a specific model. Each entry includes the version number, deployment date, deployer, and change notes.
```tsx
import { VersionTimeline } from "@/components/models/version-timeline"

<VersionTimeline versions={model.versions} />
```
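The timeline reads naturally newest-first; a sketch of the entry shape and ordering (the `ModelVersion` fields are assumptions based on the description above):

```typescript
interface ModelVersion {
  version: string
  deployedAt: string // ISO date
  deployedBy: string
  notes: string
}

// Newest-first ordering for the timeline view.
function sortVersions(versions: ModelVersion[]): ModelVersion[] {
  return [...versions].sort(
    (a, b) => Date.parse(b.deployedAt) - Date.parse(a.deployedAt),
  )
}
```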
### Performance Comparison
Side-by-side comparison of two or more models across key metrics:
- Latency -- P50, P95, P99 response times
- Cost -- cost per 1K input/output tokens
- Quality -- average response rating, accuracy score
- Throughput -- requests per minute capacity
- Error rate -- percentage of failed requests
```tsx
import { PerformanceComparison } from "@/components/models/performance-comparison"

<PerformanceComparison models={selectedModels} />
```
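Comparing cost fairly usually means pricing a common workload against each model's per-1K token rates. A sketch (the `ModelMetrics` shape is an assumption):

```typescript
interface ModelMetrics {
  name: string
  costPer1kInput: number  // USD per 1K input tokens
  costPer1kOutput: number // USD per 1K output tokens
}

// Blended cost of a fixed workload, useful for side-by-side comparison.
function workloadCost(
  m: ModelMetrics,
  inputTokens: number,
  outputTokens: number,
): number {
  return (
    (inputTokens / 1000) * m.costPer1kInput +
    (outputTokens / 1000) * m.costPer1kOutput
  )
}
```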
## Data Sources

| Data | Source | Location |
|---|---|---|
| Dashboard stats | `dashboardStats` | `data/seed.ts` |
| Token usage trend | `tokenUsageData` | `data/seed.ts` |
| Cost breakdown | `costData` | `data/seed.ts` |
| Latency trends | `latencyData` | `data/seed.ts` |
| Recent activity | `activityItems` | `data/seed.ts` |
| Models | `models` | `data/seed.ts` |
| Model versions | `modelVersions` | `data/seed.ts` |
## Next Steps
- Prompts & Playground -- prompt library and interactive testing
- Usage & Logs -- token analytics and request logs