Companies in the category 'AI Observability'
These are companies that provide open-source tools for monitoring, understanding, and debugging the behavior and performance of artificial intelligence models in production.
Open-source LLM engineering platform
Latitude is an open-source platform for building and operating LLM features in production. It provides observability, evaluations, prompt management, and an eval-driven reliability loop to continuously improve AI products. Teams adopt Latitude to instrument existing LLM calls for observability and evaluation coverage, then use the reliability loop to turn production failures into repeatable fixes. Key capabilities include a prompt playground, datasets, LLM-as-judge evaluations, experiments, and a prompt optimizer (GEPA) that searches prompt variations against eval suites to reduce recurring failures.
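As a rough illustration of the eval-driven pattern described above (this is not Latitude's actual API; the names and the stubbed judge are hypothetical), an LLM-as-judge evaluation loop can be sketched as:

```python
# Hypothetical sketch of an LLM-as-judge evaluation loop. The judge is
# stubbed with a substring check so the example runs without API calls.
from dataclasses import dataclass

@dataclass
class EvalResult:
    prompt: str
    output: str
    passed: bool

def judge(output: str, criteria: str) -> bool:
    # In practice this would call an LLM with a grading rubric;
    # here we stub it with a simple keyword check.
    return criteria.lower() in output.lower()

def run_eval_suite(cases, generate, criteria):
    """Run every test case through the model and grade each output."""
    results = []
    for prompt in cases:
        output = generate(prompt)
        results.append(EvalResult(prompt, output, judge(output, criteria)))
    pass_rate = sum(r.passed for r in results) / len(results)
    return results, pass_rate

# Usage: a fake "model" that returns a canned answer.
cases = ["What is our refund policy?", "How do I reset my password?"]
fake_model = lambda prompt: f"Per our policy, see the help center. ({prompt})"
results, rate = run_eval_suite(cases, fake_model, "policy")
```

A real reliability loop would feed failing cases (low `pass_rate`) back into prompt revisions and rerun the suite until the failures stop recurring.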
AI evaluation & LLM observability platform
Evidently AI is an open-source Python library and platform for evaluating, testing, and monitoring data and AI systems, from predictive ML models to complex LLM-powered applications.
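To make the monitoring idea concrete (this is a generic sketch, not Evidently's API), a minimal data drift check can compare a current sample of a feature against a reference sample with a simple mean-shift heuristic:

```python
# Minimal, self-contained sketch of data drift detection: flag drift when
# the current mean moves too many reference standard deviations away from
# the reference mean. Thresholds and data here are illustrative.
import statistics

def mean_shift_drift(reference, current, threshold=2.0):
    """Return (drifted, shift) where shift is the distance between the
    current and reference means, measured in reference std deviations."""
    ref_mean = statistics.fmean(reference)
    ref_std = statistics.stdev(reference)
    shift = abs(statistics.fmean(current) - ref_mean) / ref_std
    return shift > threshold, shift

ref = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]       # training-time distribution
cur_ok = [10.1, 10.4, 9.9, 10.3]               # similar production data
cur_bad = [14.0, 15.2, 14.8, 15.5]             # shifted production data

drifted_ok, _ = mean_shift_drift(ref, cur_ok)
drifted_bad, _ = mean_shift_drift(ref, cur_bad)
```

Production tools use more robust statistical tests (e.g. distribution-distance metrics) per feature, but the monitoring loop has the same shape: compare current data against a reference and alert on the difference.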
AI development observability and tracing platform
OpenLIT is a platform that simplifies AI development workflows, especially for generative AI and LLMs. It streamlines tasks such as experimenting with LLMs, organizing and versioning prompts, and securely managing API keys.
