Langfuse is an open-source LLM engineering platform focused on LLM observability, metrics, evaluations, prompt management, playgrounds, and datasets. It integrates with technologies such as OpenTelemetry, LangChain, the OpenAI SDK, LiteLLM, and more. The repository is relevant to anyone exploring large language models and observability in AI systems.
by Langfuse
8/31/2025
Langfuse is an open source LLM engineering platform. It helps teams collaboratively
develop, monitor, evaluate, and debug AI applications. Langfuse can be self-hosted in minutes and is battle-tested.
LLM Application Observability: Instrument your app and start ingesting traces to Langfuse, thereby tracking LLM calls and other relevant logic in your app such as retrieval, embedding, or agent actions. Inspect and debug complex logs and user sessions. Try the interactive demo to see this in action.
Prompt Management helps you centrally manage, version control, and collaboratively iterate on your prompts. Thanks to strong caching on server and client side, you can iterate on prompts without adding latency to your application.
Evaluations are key to the LLM application development workflow, and Langfuse adapts to your needs. It supports LLM-as-a-judge, user feedback collection, manual labeling, and custom evaluation pipelines via APIs/SDKs.
Datasets enable test sets and benchmarks for evaluating your LLM application. They support continuous improvement, pre-deployment testing, structured experiments, flexible evaluation, and seamless integration with frameworks like LangChain and LlamaIndex.
LLM Playground is a tool for testing and iterating on your prompts and model configurations, shortening the feedback loop and accelerating development. When you see a bad result in tracing, you can directly jump to the playground to iterate on it.
Comprehensive API: most Langfuse features are exposed via the API, which is frequently used to power bespoke LLMOps workflows built from the blocks Langfuse provides. An OpenAPI spec, a Postman collection, and typed SDKs for Python and JS/TS are available.
📦 Deploy Langfuse
Langfuse Cloud: Managed deployment by the Langfuse team, generous free tier, no credit card required.
Local (docker compose): Run Langfuse on your own machine in 5 minutes using Docker Compose.
```bash
# Get a copy of the latest Langfuse repository
git clone https://github.com/langfuse/langfuse.git
cd langfuse

# Run the langfuse docker compose
docker compose up
```
VM: Run Langfuse on a single Virtual Machine using Docker Compose.
Kubernetes (Helm): Run Langfuse on a Kubernetes cluster using Helm. This is the preferred production deployment.
| Integration | Supports | Description |
| --- | --- | --- |
| LiteLLM | Python, JS/TS | Use any LLM as a drop-in replacement for GPT. Use Azure, OpenAI, Cohere, Anthropic, Ollama, VLLM, Sagemaker, HuggingFace, Replicate (100+ LLMs). |
| Vercel AI SDK | JS/TS | TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js. |
| API | | Directly call the public API. OpenAPI spec available. |
| | | Multi agent framework for agent collaboration and tool use. |
🚀 Quickstart
Instrument your app and start ingesting traces to Langfuse, thereby tracking LLM calls and other relevant logic in your app such as retrieval, embedding, or agent actions. Inspect and debug complex logs and user sessions.
1️⃣ Create a new project
Create a Langfuse account (or self-host), create a new project, and create new API credentials in the project settings.
2️⃣ Log your first LLM call
The @observe() decorator makes it easy to trace any Python LLM application. In this quickstart we also use the Langfuse OpenAI integration to automatically capture all model parameters.
[!TIP]
Not using OpenAI? Visit our documentation to learn how to log other models and frameworks.
```bash
pip install langfuse openai
```
```bash filename=".env"
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_HOST="https://us.cloud.langfuse.com" # 🇺🇸 US region
```
Our documentation is the best place to start looking for answers. It is comprehensive, and we invest significant time into maintaining it. You can also suggest edits to the docs via GitHub.
Check the Langfuse FAQs, where the most common questions are answered.
Use "Ask AI" to get instant answers to your questions.
Support Channels:
Ask any question in our public Q&A on GitHub Discussions. Please include as much detail as possible (e.g. code snippets, screenshots, background information) to help us understand your question.
We take data security and privacy seriously. Please refer to our Security and Privacy page for more information.
Telemetry
By default, Langfuse automatically reports basic usage statistics of self-hosted instances to a centralized server (PostHog).
This helps us to:
Understand how Langfuse is used and improve the most relevant features.
Track overall usage for internal and external (e.g. fundraising) reporting.
None of the data is shared with third parties, and it does not include any sensitive information. We want to be fully transparent about this, and you can find the exact data we collect here.
You can opt out by setting TELEMETRY_ENABLED=false.
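For a self-hosted Docker Compose deployment, for example, the flag can go in the environment file the containers read (a sketch; place it wherever your deployment sources environment variables):

```shell
# .env consumed by docker compose — disables anonymous usage reporting
TELEMETRY_ENABLED=false
```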