
Tracing

Observability & Tracing with Langfuse.


KUDOS

This document is contributed by our community contributor jannikmaierhoefer. 👏

Swipies AI ships with a built-in Langfuse integration so that you can inspect and debug every retrieval and generation step of your RAG pipelines in near real-time.

Langfuse stores traces, spans and prompt payloads in a purpose-built observability backend and offers filtering and visualisations on top.

NOTE

• Swipies AI ≥ 0.18.0 (contains the Langfuse connector)
• A Langfuse workspace (cloud or self-hosted) with a Project Public Key and Secret Key


1. Collect your Langfuse credentials

  1. Sign in to your Langfuse dashboard.
  2. Open Settings ▸ Projects and either create a new project or select an existing one.
  3. Copy the Public Key and Secret Key.
  4. Note the Langfuse host (e.g. https://cloud.langfuse.com). Use the base URL of your own installation if you self-host.

The keys are project-scoped: one pair of keys is enough for all environments that should write into the same project.
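The same key pair also authenticates direct calls to the Langfuse public API, which uses HTTP Basic auth with the public key as the username and the secret key as the password. A minimal sketch with placeholder keys:

```python
import base64

# Placeholder credentials -- substitute the values copied from your dashboard.
LANGFUSE_HOST = "https://cloud.langfuse.com"
LANGFUSE_PUBLIC_KEY = "pk-lf-..."
LANGFUSE_SECRET_KEY = "sk-lf-..."

def basic_auth_header(public_key: str, secret_key: str) -> dict:
    """Build the Basic auth header Langfuse's public API expects:
    public key as username, secret key as password."""
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

headers = basic_auth_header(LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY)
print(headers["Authorization"][:6])  # prints "Basic "
```

You normally never need this yourself -- Swipies AI talks to Langfuse on your behalf once the keys are saved -- but it is a quick way to sanity-check a key pair.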


2. Add the keys to Swipies AI

Swipies AI stores the credentials per tenant. You can configure them either via the web UI or the HTTP API.

  1. Log in to Swipies AI and click your avatar in the top-right corner.
  2. Select API and scroll to the bottom to find Langfuse Configuration.
  3. Fill in your Langfuse Host, Public Key and Secret Key.
  4. Click Save.
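If you prefer to script the setup, the same three fields can be submitted through the HTTP API. The endpoint path and auth header below are hypothetical placeholders -- check your instance's API reference for the actual route; only the payload shape mirrors the form fields above:

```python
import json

# Hypothetical endpoint -- consult your Swipies AI API reference for the
# real route and authentication scheme.
SWIPIES_BASE_URL = "http://localhost:9380"
CONFIG_ENDPOINT = f"{SWIPIES_BASE_URL}/api/v1/langfuse"  # placeholder path

# The three fields from the "Langfuse Configuration" form.
payload = {
    "host": "https://cloud.langfuse.com",
    "public_key": "pk-lf-...",
    "secret_key": "sk-lf-...",
}

body = json.dumps(payload)
print(body)

# With the `requests` library installed, the call itself would look like:
# requests.put(CONFIG_ENDPOINT, json=payload,
#              headers={"Authorization": "Bearer <api-key>"})
```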


Once saved, Swipies AI starts emitting traces automatically – no code change required.


3. Run a pipeline and watch the traces

  1. Execute any chat or retrieval pipeline in Swipies AI (e.g. the Quickstart demo).
  2. Open your Langfuse project ▸ Traces.
  3. Filter by name ~ ragflow-* (Swipies AI prefixes each trace with ragflow-).

For every user request you will see:

• a trace representing the overall request
• spans for retrieval, ranking and generation steps
• the complete prompts, retrieved documents and LLM responses as metadata
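The same filter can be applied programmatically via Langfuse's public API (`GET /api/public/traces`). The endpoint is real, but its `name` parameter matches exact names rather than prefixes in the versions I am aware of, so this sketch filters on the `ragflow-` prefix client-side:

```python
from urllib.parse import urlencode

LANGFUSE_HOST = "https://cloud.langfuse.com"

# List traces page by page; authentication uses the project key pair
# (HTTP Basic auth, public key as username, secret key as password).
params = urlencode({"page": 1, "limit": 50})
url = f"{LANGFUSE_HOST}/api/public/traces?{params}"
print(url)

# Given a decoded response body, keep only Swipies AI traces by their prefix.
def swipies_traces(traces: list[dict]) -> list[dict]:
    return [t for t in traces if t.get("name", "").startswith("ragflow-")]

sample = [{"name": "ragflow-chat-123"}, {"name": "other-app"}]
print([t["name"] for t in swipies_traces(sample)])  # prints ['ragflow-chat-123']
```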

(Example Swipies AI trace in Langfuse)

NOTE

Use Langfuse's diff view to compare prompt versions or drill down into long-running retrievals to identify bottlenecks.