Tracing
Observability & Tracing with Langfuse.
This document is contributed by our community contributor jannikmaierhoefer. 👏
Swipies AI ships with a built-in Langfuse integration so that you can inspect and debug every retrieval and generation step of your RAG pipelines in near real-time.
Langfuse stores traces, spans and prompt payloads in a purpose-built observability backend and offers filtering and visualisations on top.
Prerequisites
• Swipies AI ≥ 0.18.0 (contains the Langfuse connector)
• A Langfuse workspace (cloud or self-hosted) with a Project Public Key and Secret Key
1. Collect your Langfuse credentials
- Sign in to your Langfuse dashboard.
- Open Settings ▸ Projects and either create a new project or select an existing one.
- Copy the Public Key and Secret Key.
- Note the Langfuse host (e.g. https://cloud.langfuse.com). If you self-host, use the base URL of your own installation.
The keys are project-scoped: one pair of keys is enough for all environments that should write into the same project.
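Before saving the keys in Swipies AI, you can sanity-check the pair with the Langfuse Python SDK. A minimal sketch, assuming the langfuse package is installed (pip install langfuse):

```python
# Verify that host, public key and secret key belong together before handing
# them to Swipies AI; auth_check() returns True for a valid combination.
from langfuse import Langfuse

langfuse = Langfuse(
    public_key="pk-lf-...",             # Project Public Key from the dashboard
    secret_key="sk-lf-...",             # Project Secret Key from the dashboard
    host="https://cloud.langfuse.com",  # or your self-hosted base URL
)

print(langfuse.auth_check())
```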
2. Add the keys to Swipies AI
Swipies AI stores the credentials per tenant. You can configure them either via the web UI (steps below) or the HTTP API (see the sketch after the list).
- Log in to Swipies AI and click your avatar in the top-right corner.
- Select API ▸ Langfuse Configuration (scroll to the bottom of the page).
- Fill in your Langfuse Host, Public Key and Secret Key.
- Click Save.
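If you prefer the HTTP API over the UI, the same three fields can be sent as JSON. The sketch below is illustrative only: the /api/v1/langfuse route, the field names and the auth scheme are assumptions, so consult your deployment's API reference for the exact endpoint.

```python
# Hypothetical sketch: saving the Langfuse credentials through the Swipies AI
# HTTP API instead of the web UI. The /api/v1/langfuse route, field names and
# auth scheme are assumptions; check your deployment's API reference.
import requests

SWIPIES_HOST = "http://localhost:9380"   # assumed Swipies AI base URL
API_KEY = "your-swipies-api-key"         # tenant API key from the API page

resp = requests.put(
    f"{SWIPIES_HOST}/api/v1/langfuse",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "host": "https://cloud.langfuse.com",
        "public_key": "pk-lf-...",
        "secret_key": "sk-lf-...",
    },
    timeout=10,
)
resp.raise_for_status()
print("Langfuse configuration saved")
```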

Once saved, Swipies AI starts emitting traces automatically; no code change is required.
3. Run a pipeline and watch the traces
- Execute any chat or retrieval pipeline in Swipies AI (e.g. the Quickstart demo).
- Open your Langfuse project ▸ Traces.
- Filter by name ~ ragflow-* (Swipies AI prefixes each trace with ragflow-).
For every user request you will see:
• a trace representing the overall request
• spans for retrieval, ranking and generation steps
• the complete prompts, retrieved documents and LLM responses as metadata

Use Langfuse's diff view to compare prompt versions or drill down into long-running retrievals to identify bottlenecks.
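You can also pull traces programmatically, for example to build a recurring report of slow retrievals. A minimal sketch, assuming the v2 Langfuse Python SDK (which exposes fetch_traces; the v3 SDK moves the same listing under langfuse.api):

```python
# A sketch that lists recent traces and keeps the ragflow-* ones, assuming the
# v2 Langfuse Python SDK. Credentials are the same pair collected in step 1.
from langfuse import Langfuse

langfuse = Langfuse(
    public_key="pk-lf-...",
    secret_key="sk-lf-...",
    host="https://cloud.langfuse.com",
)

# fetch_traces pages through the project's traces, newest first.
for trace in langfuse.fetch_traces(limit=50).data:
    if trace.name and trace.name.startswith("ragflow-"):
        print(trace.id, trace.name, trace.timestamp)
```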