
Overview | SDK & API | Fiddler | Documentation
Complete SDK documentation and REST API reference for Fiddler AI Observability Platform.
Overview | Fiddler | Documentation
Hands-on quick start guides for evaluating LLM applications, testing with custom LLM-as-a-Judge metrics, and comparing model outputs using Fiddler Evals.
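As a rough illustration of the LLM-as-a-Judge pattern those quick starts cover, here is a minimal, generic sketch, not the Fiddler Evals API: the rubric text, `score_faithfulness`, `judge_fn`, and `fake_judge` are illustrative assumptions. The idea is to ask a judge model to grade one answer against its context and parse a machine-readable verdict.

```python
import json
from typing import Callable

# Illustrative rubric; the judge is asked to return a machine-parsable JSON verdict.
JUDGE_RUBRIC = """You are grading an LLM answer for faithfulness to the provided context.
Context: {context}
Answer: {answer}
Respond with JSON only: {{"score": 0 or 1, "reason": "one sentence"}}"""

def score_faithfulness(context: str, answer: str, judge_fn: Callable[[str], str]) -> dict:
    """Ask a judge model to grade one answer and parse its JSON verdict."""
    raw = judge_fn(JUDGE_RUBRIC.format(context=context, answer=answer))
    verdict = json.loads(raw)
    return {"score": int(verdict["score"]), "reason": verdict["reason"]}

# Stand-in judge so the sketch runs without an API key; swap in a real LLM call.
def fake_judge(prompt: str) -> str:
    return '{"score": 1, "reason": "The answer restates only facts from the context."}'

print(score_faithfulness("Paris is the capital of France.",
                         "France's capital is Paris.", fake_judge))
```

Keeping the verdict in strict JSON is what lets a judge metric be aggregated and compared across runs rather than read one response at a time.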
Fiddler Query Language | Fiddler | Documentation
Overview: [Custom Metrics](https://docs.fiddler.ai/observability/platform/custom-metrics) and …
Experiments | Developers | Fiddler | Documentation
Master LLM and AI application experiments with comprehensive tutorials covering the Fiddler Evals SDK, custom evaluators, model comparison, and custom experiment creation.
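To make that experiment workflow concrete, here is a small, framework-free sketch of the comparison loop such tutorials build up: run the same dataset through two model variants, score each output with an evaluator, and aggregate per variant. The `variants` dict, the `exact_match` evaluator, and the toy dataset are illustrative stand-ins, not Fiddler Evals SDK calls.

```python
from statistics import mean

# Tiny illustrative dataset and two "model variants" (here just canned functions).
dataset = [
    {"question": "What is 2 + 2?", "expected": "4"},
    {"question": "Capital of Japan?", "expected": "Tokyo"},
]
variants = {
    "baseline": lambda q: "4" if "2 + 2" in q else "Kyoto",
    "candidate": lambda q: "4" if "2 + 2" in q else "Tokyo",
}

def exact_match(output: str, expected: str) -> float:
    """A trivial evaluator: 1.0 if the output matches the reference exactly."""
    return 1.0 if output.strip() == expected.strip() else 0.0

# Run every variant over the dataset and aggregate per-variant scores.
results = {}
for name, model_fn in variants.items():
    scores = [exact_match(model_fn(row["question"]), row["expected"]) for row in dataset]
    results[name] = mean(scores)

print(results)  # e.g. {'baseline': 0.5, 'candidate': 1.0}
```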
LLM Evaluation Prompt Specs | Fiddler | Documentation
The Challenge With LLM Evaluation: When building production LLM applications, evaluation is critical, but it's often the most time-consuming bottleneck in your …
Advanced Prompt Specs | Developers | Fiddler | Documentation
Advanced guide to Fiddler's LLM-as-a-Judge capabilities, including custom prompting, model selection, performance optimization, and enterprise deployment patterns.
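A prompt spec in this sense bundles the judge's rubric, scoring scale, and model choice into one reusable object. The sketch below is an assumption-laden illustration of that idea only; the `PromptSpec` dataclass, its fields, and the default judge model are not Fiddler's schema.

```python
from dataclasses import dataclass, field

@dataclass
class PromptSpec:
    """Illustrative bundle of everything a custom LLM-as-a-Judge metric needs."""
    name: str
    instructions: str                 # the grading rubric shown to the judge
    scale: tuple = (1, 5)             # min/max score the judge may return
    judge_model: str = "gpt-4o-mini"  # assumed default; pick any judge model
    examples: list = field(default_factory=list)  # optional few-shot gradings

    def render(self, **inputs) -> str:
        """Fill the rubric with the inputs for a single row under evaluation."""
        return self.instructions.format(**inputs)

conciseness = PromptSpec(
    name="conciseness",
    instructions="Rate the answer's conciseness from {lo} to {hi}.\nAnswer: {answer}",
)
print(conciseness.render(lo=conciseness.scale[0], hi=conciseness.scale[1], answer="Tokyo."))
```

Keeping rubric, scale, and judge model together is what makes a judge metric portable: the same spec can be rendered for many rows and rerun against a different judge model when comparing graders.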
Overview | Fiddler | Documentation
The future of AI is agentic—autonomous systems that reason, plan, and coordinate across multiple agents to solve complex problems. Fiddler Observability is built for this future, providing …
Evaluations | Fiddler | Documentation
Building reliable AI applications requires systematic evaluation to ensure quality, safety, and consistent performance. This section provides ...
Evals SDK Advanced Guide | Developers | Fiddler | Documentation
Advanced experiment patterns for production LLM applications including multi-score evaluators, complex parameter mapping, and comprehensive experiment analysis.
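Two of the patterns named above can be sketched in plain Python: an evaluator that returns several named scores from one pass, and a parameter mapping that bridges dataset column names to evaluator argument names. The function, score, and column names here are illustrative assumptions, not the SDK's.

```python
def qa_quality(answer: str, reference: str, context: str) -> dict:
    """One evaluator pass that emits several named scores for the same row."""
    return {
        "exact_match": float(answer.strip().lower() == reference.strip().lower()),
        "answer_in_context": float(answer.lower() in context.lower()),
        "length_ratio": min(len(answer) / max(len(reference), 1), 1.0),
    }

# Dataset columns rarely match evaluator parameter names; a mapping bridges them.
param_mapping = {"answer": "model_output", "reference": "ground_truth", "context": "retrieved_docs"}

row = {
    "model_output": "Tokyo",
    "ground_truth": "Tokyo",
    "retrieved_docs": "Tokyo is the capital of Japan.",
}
kwargs = {param: row[column] for param, column in param_mapping.items()}
print(qa_quality(**kwargs))  # {'exact_match': 1.0, 'answer_in_context': 1.0, 'length_ratio': 1.0}
```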
Simple LLM Monitoring | Fiddler | Documentation
This guide will walk you through the basic onboarding steps required to use Fiddler for monitoring LLM applications, using sample data ...
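As a hedged sketch of the kind of per-request record such onboarding has you assemble, the snippet below builds one LLM monitoring event (prompt, response, latency, rough token counts) into a pandas DataFrame. The column names and `make_llm_event` helper are assumptions for illustration; the actual publish step is done with the Fiddler client as described in the guide.

```python
import time
import pandas as pd

def make_llm_event(prompt: str, response: str, latency_ms: float) -> dict:
    """Assemble one monitoring record for a single LLM request/response pair."""
    return {
        "timestamp": pd.Timestamp.now(tz="UTC"),
        "prompt": prompt,
        "response": response,
        "latency_ms": latency_ms,
        "prompt_tokens": len(prompt.split()),      # placeholder for a real tokenizer count
        "response_tokens": len(response.split()),
    }

start = time.perf_counter()
answer = "Fiddler monitors LLM applications."      # stand-in for a real model call
event = make_llm_event("What does Fiddler do?", answer,
                       (time.perf_counter() - start) * 1000)

events_df = pd.DataFrame([event])
print(events_df)
# A frame like this is what the guide then publishes to Fiddler with the Python client.
```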