FAQ
Last updated on 2026-04-05
What screens are included?
13 screens total:
- AI operations dashboard with stat cards and charts for token usage, cost, and latency
- Model registry with deployment status, version history, and performance comparison
- Prompt library with templates, version diffs, and A/B test results
- Usage analytics with token consumption, cost breakdown by model, and budget alerts
- Request/response logs with filtering, search, and expandable detail views
- Error tracking with frequency charts, error type breakdown, and resolution workflows
- Prompt playground for testing prompts against multiple models with parameter tuning
- Team management with roles, permissions, and API key administration
- Settings page
- 3 auth pages (login, signup, forgot password)
- Root page
Does this include backend or API logic?
No. This is a frontend UI kit. All screens use mock seed data with realistic AI operations metrics -- model configurations, token usage records, API logs, error events, team members, and more. You connect your own backend, LLM provider SDKs, or observability platform. The UI layer handles display, interaction, and form structure.
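To illustrate the seed-data approach, a usage record and the typed interface a screen consumes might look like the sketch below. The names (`UsageRecord`, `seedUsage`, `totalCost`) are hypothetical, not the kit's actual identifiers:

```typescript
// Hypothetical shape of a seed usage record; the kit's real
// interface names and fields may differ.
interface UsageRecord {
  model: string;        // e.g. "gpt-4o"
  promptTokens: number;
  completionTokens: number;
  costUsd: number;
  timestamp: string;    // ISO 8601
}

// Mock seed data stands in for a real backend.
const seedUsage: UsageRecord[] = [
  { model: "gpt-4o", promptTokens: 1200, completionTokens: 340, costUsd: 0.021, timestamp: "2026-04-01T09:15:00Z" },
  { model: "claude-sonnet", promptTokens: 800, completionTokens: 510, costUsd: 0.017, timestamp: "2026-04-01T09:16:00Z" },
];

// A dashboard stat card would aggregate records like this.
function totalCost(records: UsageRecord[]): number {
  return records.reduce((sum, r) => sum + r.costUsd, 0);
}
```

Because screens depend only on the interface, swapping the seed array for live data leaves the UI untouched.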
Can I connect this to OpenAI or Anthropic?
Yes. All screens consume data through typed TypeScript interfaces. Replace the seed data imports with API calls to OpenAI, Anthropic, Google AI, or any REST/GraphQL endpoint. The playground is designed to be wired to real model endpoints -- add your API route handlers and the UI works immediately. See the Customization guide for integration examples with OpenAI and Anthropic SDKs.
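One way to do the swap is a thin adapter that maps a provider response onto the shape the UI already consumes. The sketch below assumes the relevant subset of OpenAI's Chat Completions response; `UsageRecord` and `toUsageRecord` are illustrative names, not the kit's:

```typescript
// The shape the UI consumes (illustrative; mirror the kit's actual interface).
interface UsageRecord {
  model: string;
  promptTokens: number;
  completionTokens: number;
}

// Subset of OpenAI's Chat Completions response we rely on.
interface OpenAIChatResponse {
  model: string;
  usage: { prompt_tokens: number; completion_tokens: number };
}

// Pure adapter: the UI sees the same shape whether the data comes
// from seed files or a live provider response.
function toUsageRecord(res: OpenAIChatResponse): UsageRecord {
  return {
    model: res.model,
    promptTokens: res.usage.prompt_tokens,
    completionTokens: res.usage.completion_tokens,
  };
}

// In a real integration you would fetch the response first, e.g.
// const res = await fetch("https://api.openai.com/v1/chat/completions", { /* ... */ });
```

Keeping the adapter pure makes it trivial to unit-test and to add further providers later.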
Can I use this for real-time log monitoring?
The logs page is built with a filterable, sortable table that supports expandable rows for full request/response payloads. To enable real-time streaming, connect it to a WebSocket or Server-Sent Events endpoint. The seed data simulates realistic log entries -- swap in your actual log source and the table, filters, and detail panels all work as-is.
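A minimal sketch of the SSE wiring, assuming a `LogEntry` shape and a `/api/logs/stream` endpoint that you would define yourself (neither is part of the kit):

```typescript
// Assumed log entry shape; match it to your actual log source.
interface LogEntry {
  id: string;
  model: string;
  status: "ok" | "error";
  latencyMs: number;
}

// Parse one SSE `data:` payload into a typed entry. Returning null on
// malformed input keeps a bad line from breaking the stream.
function parseLogEvent(data: string): LogEntry | null {
  try {
    const obj = JSON.parse(data);
    if (typeof obj.id !== "string" || typeof obj.model !== "string") return null;
    return obj as LogEntry;
  } catch {
    return null;
  }
}

// Browser wiring (illustrative): prepend each parsed entry to state.
// const source = new EventSource("/api/logs/stream");
// source.onmessage = (e) => {
//   const entry = parseLogEvent(e.data);
//   if (entry) setLogs((prev) => [entry, ...prev]);
// };
```

The same parse-then-prepend pattern works over a WebSocket; only the transport setup changes.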
Does the playground work with real models?
The playground UI is fully functional -- prompt editor, model selector, parameter sliders, and output panel are all wired up. By default it uses seed data to simulate responses. To connect it to real LLM APIs, add a Next.js API route that forwards requests to your provider of choice. The comparison view supports testing the same prompt against multiple models simultaneously.
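As a sketch, such a route handler (e.g. `app/api/playground/route.ts`) forwarding to Anthropic's Messages API could look like this. The `PlaygroundParams` field names are assumptions about the playground's UI state, not the kit's actual types:

```typescript
// Assumed parameter shape coming from the playground UI.
interface PlaygroundParams {
  model: string;
  prompt: string;
  temperature: number;
  maxTokens: number;
}

// Pure helper: build the Anthropic Messages request body from UI parameters.
function buildAnthropicBody(p: PlaygroundParams) {
  return {
    model: p.model,
    max_tokens: p.maxTokens,
    temperature: p.temperature,
    messages: [{ role: "user", content: p.prompt }],
  };
}

// App Router handler. Requires ANTHROPIC_API_KEY in the environment.
export async function POST(req: Request): Promise<Response> {
  const params: PlaygroundParams = await req.json();
  const upstream = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify(buildAnthropicBody(params)),
  });
  // Pass the provider response (and status) straight back to the UI.
  return new Response(upstream.body, { status: upstream.status });
}
```

Keeping the key server-side in the route handler means it never reaches the browser.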
Is the UI accessible?
Yes. All screens use semantic HTML, keyboard navigation, proper ARIA labels, focus management, and WCAG AA contrast ratios in both light and dark modes. Five custom accessibility hooks are included for screen reader announcements, focus trapping, keyboard navigation, reduced motion detection, and mobile viewport handling. The log tables, playground editor, and parameter sliders are all fully keyboard navigable. See the Accessibility guide.
Can I customize the design and colors?
Absolutely. The entire color system uses oklch tokens in globals.css. Change the hue value and all 13 screens update instantly. Typography, spacing, and component styles are all token-driven through Tailwind CSS. See the Design Tokens guide.
What tech stack does this use?
Next.js 16 (App Router), React 19, Tailwind CSS v4, shadcn/ui v4 (with @base-ui/react), Recharts 3, date-fns, Lucide React, and next-themes. See Getting Started for the full stack table.
What license covers client work?
Solo and Team licenses are for internal projects. The Agency license allows unlimited developers and client delivery.
Can I add more model providers?
Yes. The model registry and playground support any number of providers. Add new providers by extending the TypeScript types and seed data. Each provider can have its own icon, color, and pricing configuration. See the Customization guide for details on adding custom providers.
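A sketch of what extending the types and seed data might look like. All field names, colors, and pricing values below are illustrative placeholders; match them to the kit's actual interfaces and your provider's real rates:

```typescript
// Extend the provider union with the new entry ("mistral" here is the example).
type ProviderId = "openai" | "anthropic" | "mistral";

// Illustrative config shape; the kit's actual fields may differ.
interface ProviderConfig {
  id: ProviderId;
  name: string;
  color: string; // accent color used in charts and badges
  pricing: { inputPer1kUsd: number; outputPer1kUsd: number };
}

// Seed data with placeholder pricing values (not real rates).
const providers: ProviderConfig[] = [
  { id: "openai", name: "OpenAI", color: "#10a37f", pricing: { inputPer1kUsd: 0.005, outputPer1kUsd: 0.015 } },
  { id: "anthropic", name: "Anthropic", color: "#d97757", pricing: { inputPer1kUsd: 0.003, outputPer1kUsd: 0.015 } },
  // New provider: extend the union above, then add its entry here.
  { id: "mistral", name: "Mistral", color: "#f54e42", pricing: { inputPer1kUsd: 0.002, outputPer1kUsd: 0.006 } },
];
```

Because the union type drives the registry and playground, the compiler flags any screen that doesn't yet handle the new provider.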