Migrating Observability from Datadog to Kloudfuse
Published on Aug 26, 2025
When organizations outgrow their current observability platform, whether due to cost constraints, feature limitations, or strategic shifts, the prospect of migration can seem daunting. Moving from Datadog to Kloudfuse raises one key concern:
How do you transition an entire observability stack without losing visibility into your systems or disrupting your team's workflows?
The good news is that Kloudfuse is designed to be Datadog-compatible. This guide walks you through migrating from Datadog to Kloudfuse across the three pillars of observability: metrics, logs, and traces.
Critically, Kloudfuse makes it easy to migrate your existing organizational assets: dashboards, alerts, and other observability configurations that represent valuable tribal knowledge built up over time. Unlike migrations that require complete re-instrumentation, most of the work involves simply reconfiguring endpoints and adjusting a few settings. We will see how to repoint your Datadog agent's pipelines at the Kloudfuse ingester, which moves your observability stack over to Kloudfuse.
Why Migrate to Kloudfuse?
Datadog’s architecture and pricing often struggle to keep up with modern, high-scale engineering needs. Here are some reasons to migrate:
Holistic Telemetry for Faster Troubleshooting
With Datadog, engineers often struggle to correlate telemetry because it is stored in separate silos. Kloudfuse eliminates these issues by integrating all telemetry data in a single backend: one query can surface related traces, logs, and metrics, reducing context-switching and manual data stitching.
Predictable, Flat-Rate Pricing
Usage-based pricing in services like Datadog leads to unanticipated cost spikes, difficult decisions about retention, and a greater reliance on telemetry sampling. Kloudfuse offers flat-rate, deployment-based billing that includes automatic deduplication and inline compression, keeping full observability budget-friendly as your telemetry footprint expands.
VPC Deployment with Open Standards
Kloudfuse runs directly inside your cloud environment. This eliminates egress costs and gives you complete control over your data. It is built on OpenTelemetry and supports open instrumentation, including data from already-deployed Datadog agents, so you won't need to rip out perfectly functional infrastructure during migration.
Now, let's look at how to migrate our Datadog observability stack to Kloudfuse.
Migration Strategy and Planning
Before discussing the technical steps, decide on a migration approach. Many teams use the following:
Dual-shipping: Let the Datadog agent send data to both Datadog and Kloudfuse in parallel. This lets you verify data in Kloudfuse while still operating in Datadog. If you have an existing Datadog agent, you can use it to send your data streams to the Kloudfuse platform.
Phased-cutover: In this approach, data streams are cut over one environment at a time to minimize risk. Agent configurations are updated in stages, metrics are verified in Kloudfuse, and Datadog is switched off for each stream. This ensures dashboards stay active and prevents gaps.
Whether you choose dual-shipping or cutover, remember that the Datadog agent exposes a separate endpoint setting for each stream (metrics, logs, APM), so you can direct them into Kloudfuse all at once or one component at a time (e.g., metrics now, logs later).
Plan carefully to avoid missing data. Once endpoints are updated, Kloudfuse will receive exactly the same data streams the agent was sending. This strategy ensures the continuity of monitoring data.
Migrating the Datadog Agent
A key advantage of Kloudfuse is that you can keep using the Datadog agent on your hosts or Kubernetes nodes. Kloudfuse supports dual shipping, allowing you to send data to Kloudfuse instead of, or in addition to, Datadog. In practice, you only need to adjust the agent’s configuration, redirecting the agent’s endpoints to Kloudfuse’s ingester URLs.
Kubernetes
If you deploy the Datadog Agent via Helm on Kubernetes, Kloudfuse provides a sample values file (dd-values-kfuse.yaml). The key changes in this file involve fields ending in _url: namely dd_url, logs_dd_url, apm_dd_url, and orchestrator_dd_url. The default values file shows exactly these fields being replaced; in a same-VPC scenario you point them at the Kloudfuse ingress service, and the only difference between using an internal IP and a custom DNS name is the host portion of each URL.
The same pattern applies to the APM (traces) and orchestrator telemetry URLs. Essentially, search the Helm values for _url keys, update them to your Kloudfuse host/IP, and then install (or upgrade) the agent using those values.
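A sample command, assuming the standard Datadog chart from helm.datadoghq.com (release name and namespace are illustrative):

```bash
# Add the Datadog chart repo (if needed) and deploy with the Kloudfuse-adjusted values.
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install datadog-agent datadog/datadog \
  --namespace monitoring --create-namespace \
  -f dd-values-kfuse.yaml
```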
If you're using Datadog's APM SDKs (for example, ddtrace in Python, Java, or Node), the good news is you don't need to replace them to start sending traces to Kloudfuse. Kloudfuse can ingest traces emitted by Datadog APM SDKs with minimal configuration changes, mostly just updating the endpoint URLs that the agent forwards traces to. No code-level re-instrumentation is required.
Under the hood, Datadog APM SDKs send trace data to the local Datadog agent; Kloudfuse simply receives that data because the agent forwards it to a different backend endpoint (the apm_dd_url setting shown earlier).
Unless you're planning to switch SDKs or modify trace semantics, you can keep your existing instrumentation as-is and just reroute the agent.
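To illustrate, a ddtrace-instrumented application pod stays exactly as it was; the tracer still targets the node-local Datadog agent, and only the agent's outbound endpoints change. The pod name, container, and image below are hypothetical.

```yaml
# Unchanged application-side tracer config: the SDK still talks to the
# node-local Datadog agent; only the agent now forwards to Kloudfuse.
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
    - name: my-service
      image: my-service:latest
      env:
        - name: DD_AGENT_HOST          # standard ddtrace setting
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP # resolves to the node running the agent
```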
After this, the Kubernetes nodes will continue running the Datadog agent, but all metrics, logs, and traces will stream into Kloudfuse instead of Datadog (or both, if dual-shipping).
Kloudfuse also provides guidance on handling more complex Kubernetes network layouts (same/different VPC or cluster) by using an internal Kubernetes service for Kloudfuse and the appropriate IPs. However, the core idea remains the same. The only major change is the endpoint URLs, while everything else, such as checks, tags, and custom metrics, stays configured as before.
Host & Container Agents
Similarly, for non-Kubernetes hosts (VMs, Docker, etc.), you update the agent's datadog.yaml (or values file) with the Kloudfuse endpoints. The pattern is the same: set dd_url, logs_dd_url, apm_dd_url, and orchestrator_dd_url to point at your Kloudfuse ingester.
For example, in a multi-AWS-VPC scenario, replace these fields with the VPC endpoint DNS name. If your agents are already deployed, you can also retroactively update their config or use the “existing agent” instructions Kloudfuse provides.
Validating Agent Data
After reconfiguring, verify that data flows to Kloudfuse. Check the Kloudfuse UI or use API calls to see if host metrics (CPU, memory), Kubernetes metrics (pod/container stats), and logs start appearing. Because you haven’t changed which metrics are collected, the numeric values and tags should match what you saw in Datadog. Any missing metric likely means the agent endpoint wasn’t updated correctly.
Use Kloudfuse’s query explorer or dashboards (see below) to confirm each stream.
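As a quick sanity check (pod name and namespace are illustrative), the Datadog agent's status output lists the endpoints it forwards to, which should now be your Kloudfuse address:

```bash
# On Kubernetes: confirm the running agent is forwarding to the Kloudfuse endpoint.
kubectl -n monitoring exec -it <datadog-agent-pod> -- agent status | grep -i -A3 endpoint

# On a VM or bare-metal host:
sudo datadog-agent status | grep -i -A3 endpoint
```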
Cloud Service Integrations
Datadog’s cloud integrations (CloudWatch, Azure Monitor, GCP, etc.) must be manually set up in Kloudfuse too. In Datadog, many cloud metrics/logs are enabled from the web console. In Kloudfuse, you configure them via the cloud providers (AWS, Azure, GCP). Here’s how to migrate each:
AWS
AWS services can be migrated as follows:
CloudWatch Logs: Set up a Kinesis Firehose delivery stream in AWS for your logs, with destination “HTTP Endpoint”.

Fig 1: Creating Firehose Stream in Amazon
Enter your Kloudfuse API endpoint for logs as the HTTP endpoint URL.

Fig 2: Enter the HTTP Endpoint URL
Now, in the AWS Console, click on CloudWatch Logs and pick your Log Groups:

Fig 3: Picking Log Groups in Cloudwatch Logs
Next, create Subscription Filters that use the Firehose you just created. Grant an IAM role so CloudWatch Logs can publish to Firehose.

Fig 4: Creating Amazon Data Firehose Subscription Filter
Once linked, every new log entry (from Lambda, VMs, RDS, etc.) flows through Firehose into Kloudfuse. Kloudfuse handles them via its logs ingester and parsing pipeline.
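If you prefer to script this step, the equivalent AWS CLI call looks roughly like the following (log group, filter name, and ARNs are placeholders):

```bash
# Subscribe a log group to the Firehose delivery stream created above.
aws logs put-subscription-filter \
  --log-group-name "/aws/lambda/my-function" \
  --filter-name "kloudfuse-logs" \
  --filter-pattern "" \
  --destination-arn "arn:aws:firehose:us-east-1:123456789012:deliverystream/kloudfuse-logs" \
  --role-arn "arn:aws:iam::123456789012:role/CWLtoFirehoseRole"
```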
CloudWatch Metrics: In AWS, use CloudWatch Metric Streams or Firehose to send metrics to Kloudfuse.

Fig 5: Creating Metric Streams in CloudWatch
We need to configure a CloudWatch Metric Stream to deliver metrics data to your Kloudfuse endpoint. This involves creating a Metric Stream destination (a Kinesis Firehose delivery stream) that points at your Kloudfuse metrics ingestion endpoint.

Fig 6: Creating Custom Setup with Firehose
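For scripted setups, the equivalent AWS CLI call is roughly the following (ARNs are placeholders; choose the output format your Kloudfuse version expects):

```bash
# Stream CloudWatch metrics to the Firehose that delivers to Kloudfuse.
aws cloudwatch put-metric-stream \
  --name "kloudfuse-metrics" \
  --firehose-arn "arn:aws:firehose:us-east-1:123456789012:deliverystream/kloudfuse-metrics" \
  --role-arn "arn:aws:iam::123456789012:role/MetricStreamToFirehoseRole" \
  --output-format opentelemetry1.0
```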
Once set up, AWS will continuously stream EC2/ALB/RDS metrics into Kloudfuse. Optionally, enable Kloudfuse's AWS enrichment to pull extra metadata; this requires adding the corresponding line to your Kloudfuse values (custom-values.yaml) and setting up a scraping IAM role. This gives your logs richer AWS context (tags, resource names).
To capture AWS events (like CloudTrail logs) and send them to Kloudfuse, set up an EventBridge API Destination. This lets AWS stream events directly into Kloudfuse's /ingester/eventbridge endpoint. Ensure that your Kloudfuse cluster has an external HTTPS endpoint, and retrieve its external IP.
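A typical way to look this up, assuming the Kloudfuse ingress service lives in the kfuse namespace (the service and namespace names are assumptions):

```bash
# List services in the Kloudfuse namespace and note the LoadBalancer EXTERNAL-IP.
kubectl get svc -n kfuse

# Or pull just the external address of the ingress service
# (on AWS the value may appear under .hostname instead of .ip).
kubectl get svc -n kfuse kfuse-ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```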
Next, we will create a connection. In AWS EventBridge > Connections, click Create connection.

Fig 7: Selecting Connection in AWS EventBridge
Set the following values for the options:
Authorization type: API key
Key name: Kf-Api-Key
Value: your Kloudfuse API key

Fig 8: Creating Connection in AWS EventBridge
Now, in API Destinations, click Create API destination. Set the endpoint to your Kloudfuse cluster's external HTTPS address with the /ingester/eventbridge path (for example, https://<external-ip>/ingester/eventbridge), set the HTTP method to POST, and select the connection you previously created.

Fig 9: Creating API Destination in AWS Event Bridge
Now we will create a Rule: in EventBridge > Rules, click Create rule. In the event pattern form, add the JSON pattern describing which events to forward.
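As an illustration, a pattern that forwards all API calls recorded by CloudTrail looks like this (adjust the pattern to the events you care about):

```json
{
  "detail-type": ["AWS API Call via CloudTrail"]
}
```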
Select your new API Destination as the target and create a new Execution role when prompted. Once set up, AWS will send matching events directly to Kloudfuse Events, enabling real-time observability.
After AWS integration, Kloudfuse’s metrics dashboards for EC2, Lambda, etc., will populate automatically. In summary, the migration involves recreating in AWS the same pipelines you had for Datadog (streams, subscriptions, event rules), but pointing them at Kloudfuse endpoints. Then Kloudfuse will start consuming those metrics/logs.
Azure
In this section, we will see how to migrate Azure metrics and logs to Kloudfuse:
Azure Metrics: Kloudfuse pulls metrics from Azure using a built-in cloud-exporter. For this, enable the cloud-exporter in the global section of custom-values.yaml.
You must also create an Azure service principal (Client ID/Secret, Tenant ID) and list your subscriptions in the same configuration; Kloudfuse will then use those credentials to fetch metrics (VMs, Azure SQL, etc.) via the Datadog agent's Azure integration under the hood.
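One way to create that service principal is with the Azure CLI (the name and subscription ID are placeholders; the role to grant depends on your setup, with Monitoring Reader being a common choice):

```bash
# Create a service principal; note the appId (client ID), password (client secret), and tenant.
az ad sp create-for-rbac \
  --name kloudfuse-metrics \
  --role "Monitoring Reader" \
  --scopes /subscriptions/<SUBSCRIPTION_ID>
```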
Once applied, Kloudfuse will import Azure Monitor metrics automatically.
Azure Logs: For Azure logs (Activity Logs and Diagnostics), Kloudfuse uses Azure Event Hubs and Functions. First, create an Azure Event Hubs namespace and an Event Hub. Then, create an Azure Function app that triggers on the Event Hub, and configure the function to forward messages to Kloudfuse.
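For the Event Hubs piece, an Azure CLI sketch (names, resource group, and location are placeholders):

```bash
# Create an Event Hubs namespace and an event hub to receive Azure logs.
az eventhubs namespace create \
  --name kloudfuse-logs-ns \
  --resource-group my-rg \
  --location eastus

az eventhubs eventhub create \
  --name kloudfuse-logs \
  --namespace-name kloudfuse-logs-ns \
  --resource-group my-rg
```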
Essentially, logs from Azure are sent to Event Hubs, where the Function picks them up and POSTs them to Kloudfuse's API. Kloudfuse provides sample code (an index.js) in which you set KF_API_KEY and KF_URL to your values. Finally, you create Diagnostic Settings in Azure for the resources you want and point them to the Event Hub. After all this, Azure logs stream into Kloudfuse as they did in Datadog.
In short, Azure’s metric migration uses Kloudfuse’s agent-based exporter, while Azure’s log migration uses an Event Hub → Function pipeline. These replace Datadog’s built-in Azure connectors.
GCP
In this section, we will see how to migrate GCP metrics and logs to Kloudfuse:
GCP Metrics
Kloudfuse integrates with Google Cloud to collect Stackdriver metrics from your GCP projects. The process involves creating credentials, adding them to your cluster as a secret, and configuring the Helm values. Follow these key steps:
In the Google Cloud Console, go to IAM & Admin > Service Accounts.
Create or select a service account

Fig 10: Selecting Service Accounts in IAM & Admin
Grant it the Monitoring Viewer role

Fig 11: Selecting Monitoring Viewer in Permission Section
Go to Keys > Add Key > Create new key

Fig 12: Creating a New Key in the Project
Select JSON, then download the key file (credentials.json)
Upload the credentials to your Kubernetes cluster:
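A sketch; the secret name and namespace are assumptions and should match whatever your Kloudfuse chart values reference:

```bash
# Create a Kubernetes secret from the downloaded service-account key.
kubectl create secret generic gcp-credentials \
  --from-file=credentials.json \
  -n kfuse
```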
Update your Helm chart values with the GCP metrics config snippet.
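The exact keys come from the Kloudfuse chart documentation; purely as an illustration of the shape (only YOUR_PROJECT_ID and typePrefixes below are taken from the notes that follow, everything else is an assumption):

```yaml
# Illustrative only -- consult the Kloudfuse chart for the real schema.
stackdriver:
  projectId: YOUR_PROJECT_ID
  typePrefixes:
    - compute.googleapis.com/instance    # example GCP metric prefixes to collect
    - cloudsql.googleapis.com/database
```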
Replace YOUR_PROJECT_ID with your actual GCP project ID
Adjust typePrefixes based on the GCP services you want to monitor
Once configured, Kloudfuse will begin collecting metrics from your GCP environment via the Stackdriver exporter.
GCP Logs
Kloudfuse collects logs from Google Cloud using Cloud Pub/Sub. This lets you stream logs (e.g., from Cloud Logging) directly into Kloudfuse for analysis. To set it up, follow these steps:
Create a Pub/Sub Topic and Subscription
In the GCP Console, go to Pub/Sub
Create a new topic (e.g., MyLogsCollector)

Fig 13: Creating a New Topic
Create a subscription (e.g., kloudfuse-gcp-subscription) and link it to the topic

Fig 14: Creating a New Subscription
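The same two objects can be created with the gcloud CLI:

```bash
# Create the Pub/Sub topic and the subscription Kloudfuse will read from.
gcloud pubsub topics create MyLogsCollector
gcloud pubsub subscriptions create kloudfuse-gcp-subscription --topic=MyLogsCollector
```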
Create a Log Sink
Go to Logs Explorer > More actions > Create sink

Fig 15: Creating a Sink
Set the sink destination to Cloud Pub/Sub, and choose the topic you created
Optionally apply filters to include/exclude specific logs
Click Create sink
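Equivalently, with the gcloud CLI (project ID and filter are placeholders; remember to grant the sink's writer identity the Pub/Sub Publisher role on the topic):

```bash
# Route matching logs to the Pub/Sub topic created earlier.
gcloud logging sinks create kloudfuse-sink \
  pubsub.googleapis.com/projects/MY_PROJECT_ID/topics/MyLogsCollector \
  --log-filter='resource.type="gce_instance"'
```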
In your Helm custom-values.yaml, enable GCP log ingestion.
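The exact key names and nesting come from the Kloudfuse chart documentation; as an illustrative sketch using the two fields described below (everything else is an assumption):

```yaml
# Illustrative only -- consult the Kloudfuse chart for the real schema.
logs:
  gcp:
    enabled: true
    subscriptionId: kloudfuse-gcp-subscription   # the Pub/Sub subscription created above
    pubsubKey: <service-account-json-key>        # key with Pub/Sub Subscriber permissions
```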
subscriptionId: your Pub/Sub subscription ID
pubsubKey: JSON key for a service account with Pub/Sub Subscriber permissions
Once applied, Kloudfuse connects to the subscription and begins ingesting logs in real time, similar to how Datadog ingests logs via Stackdriver.
Dashboards and Alerts
After your data streams into Kloudfuse, you’ll want dashboards and alerts. Kloudfuse comes with pre-built dashboards and alerts for common systems, and also provides tools to migrate your custom ones.
Built-in Dashboards/Alerts: Kloudfuse has ready-made Grafana dashboards for Linux hosts, Kubernetes clusters, AWS services, etc. For example, their Kubernetes cluster CPU/Memory dashboard or an EC2 instance health dashboard will automatically populate with your data streams. You can preview these in the Kloudfuse playground by going to Dashboard → Dashboard List:

Fig 16: Pre-built Dashboard List in Kloudfuse
Similarly, Kloudfuse includes alert templates (e.g., high CPU on node, pod restarts, EC2 failures) out of the box. These are equivalent to Datadog’s standard monitors. Even without any migration, as soon as your metrics appear in Kloudfuse, you can use or customize these built-in Grafana dashboards and alert rules.
Migrating Custom Dashboards/Alerts: If you want to preserve Datadog dashboards or monitors, Kloudfuse offers migration tools. You can use its proprietary converter and Kloudfuse-provided Python scripts (dashboard.py, alert.py) to translate Datadog JSON and upload it to Grafana. Here's the implementation plan for migration:
Download the Datadog asset (dashboard or alert) in JSON format (for example via the Datadog API; see the sketch after this list).
Upload the converted JSON into Kloudfuse:
For automated importing, use the publicly available Python scripts (e.g., dashboard.py, alert.py).
For now, you can also upload single dashboards via the Kloudfuse UI. (Support for alert JSON uploads via the UI will be added in a future release.)
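For the download step, one way to export a dashboard's JSON is via the Datadog API (the dashboard ID and key variables are placeholders; monitors can be exported the same way from /api/v1/monitor/<monitor_id>):

```bash
# Export a Datadog dashboard definition as JSON.
curl -s "https://api.datadoghq.com/api/v1/dashboard/${DASHBOARD_ID}" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -o my-dashboard.json
```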
Similarly, for alerts, Kloudfuse provides alert.py, which uploads alert definitions the same way.
You can also upload entire directories, manage multiple folders, or delete alerts in bulk. These scripts are especially useful when migrating alerts from Datadog monitors, letting you preserve folder structure and organize rules efficiently in Grafana.
Note: Kloudfuse uses the Grafana format for migrations to ensure dashboards and alerts remain compatible with PromQL, making onboarding easier by leveraging a widely adopted query language.
Do note that some manual editing of the JSON files might be required, since Grafana and Datadog use different query models and alerting logic. However, these tools automate the majority of the migration, especially for bulk tasks. After uploading to Grafana, the assets can be consumed into Kloudfuse following Kloudfuse's import guide. This approach works well when you have a large number of dashboards and importing them manually doesn't scale.
Note: Because Kloudfuse uses Grafana’s alerting (or its own built-in alert engine), some Datadog monitor logic may not map 1:1. Check each imported alert carefully.
In practice, many teams start by reusing Kloudfuse’s default dashboards to cover common needs, and selectively migrate only critical custom ones. But rest assured: Kloudfuse anticipates this need and equips you with conversion helpers.
Conclusion
Migrating observability to Kloudfuse involves minimal per-agent changes plus some cloud setup and dashboard conversion. In short:
Reconfigure the Datadog agent(s): Update dd_url, logs_dd_url, apm_dd_url, etc., to point to Kloudfuse's ingester, then install/upgrade the agent. This ensures all host/cluster metrics, logs, and traces begin flowing into Kloudfuse.
Set up cloud data streams: For AWS, Azure, and GCP, recreate your Datadog cloud integrations:
Use AWS Kinesis Firehose for CloudWatch Logs and EventBridge for events.
Use Kloudfuse’s Azure exporter for Monitor metrics and Event Hubs/Functions for Azure logs.
Use a GCP Logging Sink to Pub/Sub and Kloudfuse’s Pub/Sub log parser for Stackdriver logs.
Verify data arrival: After configuring, check Kloudfuse dashboards or query interfaces. Compare against Datadog to ensure no gaps. Use dual-shipping if needed during validation.
Migrate dashboards and alerts: Start with Kloudfuse’s built-in dashboards. For custom visuals or monitors, export them from Datadog and import them into Kloudfuse’s Grafana using the provided scripts. Adjust any mismatches.
Cut over and decommission: Once all vital data and alerts are validated in Kloudfuse, you can retire Datadog by revoking the agent's Datadog API key or uninstalling the agent. Monitor the transition closely for a short period to ensure no telemetry goes missing.
Throughout this process, the only required agent changes are the endpoint URLs, which makes for an easy, low-friction cutover. Cloud integrations are "set-and-forget" once configured. The result is a smooth migration in which all your observability data (metrics, logs, and traces) ends up in Kloudfuse with your dashboards and alerts intact.