OpenTelemetry in Azure SDK for Rust crates

When you work with Azure SDK for Rust crates, you need visibility into SDK operations to debug issues, monitor performance, and understand how your application interacts with Azure services. This article shows you how to implement effective OpenTelemetry-based logging and telemetry strategies that provide insights into the inner workings of Rust applications on Azure.

Telemetry for Azure developers

The Azure SDK for Rust crates provide comprehensive observability through OpenTelemetry integration, which we recommend for monitoring and distributed tracing scenarios. Whether you're troubleshooting authentication flows, monitoring API request cycles, or analyzing performance bottlenecks, this guide covers the OpenTelemetry tools and techniques you need to gain visibility into your Azure SDK operations.

Azure SDK for Rust crates use OpenTelemetry as the standard approach to observability, providing:

  • Industry-standard telemetry: Use OpenTelemetry formats compatible with monitoring platforms
  • Distributed tracing: Track requests across multiple services and Azure resources
  • Advanced exporters: Send data to Jaeger, Prometheus, Grafana, and other observability platforms
  • Correlation across services: Automatically propagate trace context between microservices
  • Production monitoring: Built for high-scale production environments with sampling and performance optimizations

Important

Currently, Microsoft does not provide a direct Azure Monitor OpenTelemetry exporter for Rust applications. The Azure Monitor OpenTelemetry Distro only supports .NET, Java, Node.js, and Python. For Rust applications, you need to export OpenTelemetry data to an intermediate system (such as Azure Storage, Event Hubs, or the OpenTelemetry Collector) and then import that data into Azure Monitor using supported ingestion methods.

Set up OpenTelemetry logging

To use OpenTelemetry, you need the azure_core_opentelemetry crate. The azure_core crate alone doesn't include OpenTelemetry support.

  1. Sign in to Azure by using the Azure CLI:

    az login
    
  2. Create Azure Monitor resources by using Azure CLI:

    # Set variables
    RESOURCE_GROUP="rust-telemetry-rg"
    LOCATION="eastus"
    APP_INSIGHTS_NAME="rust-app-insights"
    LOG_ANALYTICS_WORKSPACE="rust-logs-workspace"
    
    # Create resource group
    az group create --name $RESOURCE_GROUP --location $LOCATION
    
    # Create Log Analytics workspace
    WORKSPACE_ID=$(az monitor log-analytics workspace create \
      --resource-group $RESOURCE_GROUP \
      --workspace-name $LOG_ANALYTICS_WORKSPACE \
      --location $LOCATION \
      --query id -o tsv)
    
    # Create Application Insights instance
    az extension add --name application-insights
    INSTRUMENTATION_KEY=$(az monitor app-insights component create \
      --app $APP_INSIGHTS_NAME \
      --location $LOCATION \
      --resource-group $RESOURCE_GROUP \
      --workspace $WORKSPACE_ID \
      --query instrumentationKey -o tsv)
    
    # Get connection string
    CONNECTION_STRING=$(az monitor app-insights component show \
      --app $APP_INSIGHTS_NAME \
      --resource-group $RESOURCE_GROUP \
      --query connectionString -o tsv)
    
    echo "Application Insights Connection String: $CONNECTION_STRING"
    
  3. Configure your Rust project. Add the required dependencies to your Cargo.toml:

    [dependencies]
    azure_core_opentelemetry = "*"
    azure_security_keyvault_secrets = "*"
    azure_identity = "*"
    opentelemetry = "0.31"
    opentelemetry_sdk = "0.31"
    opentelemetry-otlp = "0.31"  # For exporting to OpenTelemetry Collector
    tokio = { version = "1.47.1", features = ["full"] }
    

    Note

    The opentelemetry-otlp crate is included for exporting telemetry data to an OpenTelemetry Collector, which can then forward the data to Azure Monitor. Direct Azure Monitor export from Rust applications is not supported.

  4. Create your main application with OpenTelemetry configuration, as sketched below. For more details, see the azure_core_opentelemetry documentation.
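
    A minimal sketch of this wiring follows. It assumes the grpc-tonic feature of opentelemetry-otlp (a default feature) and a Collector listening on the default http://localhost:4317 gRPC endpoint; the Azure client setup is left as a placeholder.

    use opentelemetry::global;
    use opentelemetry_otlp::SpanExporter;
    use opentelemetry_sdk::trace::SdkTracerProvider;

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Export spans over OTLP/gRPC to an OpenTelemetry Collector.
        let exporter = SpanExporter::builder().with_tonic().build()?;

        // Batch spans in the background and register the provider globally.
        let provider = SdkTracerProvider::builder()
            .with_batch_exporter(exporter)
            .build();
        global::set_tracer_provider(provider.clone());

        // ... create your Azure clients here, passing the tracer provider
        // through azure_core_opentelemetry so SDK operations are traced ...

        // Flush any buffered spans before the process exits.
        provider.shutdown()?;
        Ok(())
    }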

  5. Set the required environment variables and run your application:

    # Set Key Vault URL (replace with your actual Key Vault URL)
    export AZURE_KEYVAULT_URL="https://mykeyvault.vault.azure.net/"
    
    # Run the application
    cargo run
    

After you configure OpenTelemetry in your application and run it, you can add custom instrumentation and monitor the telemetry data.

Export telemetry to Azure Monitor

Since Rust doesn't have a direct Azure Monitor OpenTelemetry exporter, you need to implement an indirect approach to get your telemetry data into Azure Monitor. Here are the recommended methods:

Option 1: OpenTelemetry Collector

The OpenTelemetry Collector acts as a middle layer that can receive telemetry from your Rust application and forward it to Azure Monitor:

  1. Deploy the OpenTelemetry Collector in your environment (as a sidecar, agent, or gateway)
  2. Configure your Rust application to export to the Collector using OTLP (OpenTelemetry Protocol)
  3. Configure the Collector with the Azure Monitor exporter to forward data to Application Insights
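
On the Rust side, the only Collector-specific setting is usually the exporter endpoint. A minimal sketch, assuming the Collector exposes an OTLP/gRPC receiver (the endpoint and function name are illustrative):

use opentelemetry_otlp::{SpanExporter, WithExportConfig};

// Build an OTLP/gRPC span exporter aimed at the Collector's receiver;
// 4317 is the conventional OTLP gRPC port.
fn collector_exporter() -> Result<SpanExporter, Box<dyn std::error::Error>> {
    let exporter = SpanExporter::builder()
        .with_tonic()
        .with_endpoint("http://otel-collector:4317")
        .build()?;
    Ok(exporter)
}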

Option 2: Azure Storage + Data Ingestion API

For scenarios where you need more control over data processing:

  1. Export telemetry to Azure Storage (Blob Storage or Data Lake)
  2. Process the data using Azure Functions, Logic Apps, or custom applications
  3. Ingest processed data into Azure Monitor using the Logs Ingestion API

Option 3: Event Hubs Streaming

For real-time telemetry processing:

  1. Stream telemetry to Azure Event Hubs from your Rust application
  2. Process events using Azure Stream Analytics, Azure Functions, or custom consumers
  3. Forward processed telemetry to Azure Monitor or Application Insights

Customize telemetry data

OpenTelemetry provides a flexible framework for customizing telemetry data to suit your application's needs. Use these strategies to enhance your telemetry:

Instrumenting your application code

Adding custom instrumentation to your application code helps you correlate your business logic with Azure SDK operations. This correlation makes it easier to understand the complete flow of operations.

| Technique | Purpose | Implementation |
| --- | --- | --- |
| Custom spans for Azure operations | Create a clear hierarchy that shows how application logic relates to Azure operations | Wrap Azure SDK calls by using OpenTelemetry span creation methods |
| Correlate application logic with SDK calls | Connect business operations with underlying Azure SDK calls | Use span context to link business operations with the Azure service calls they trigger |
| Create diagnostic breadcrumbs | Capture important context for telemetry across workflows | Add structured fields (user IDs, request IDs, business object identifiers) to spans |
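
For example, a sketch of a custom span carrying a diagnostic breadcrumb (the span and attribute names are illustrative):

use opentelemetry::global;
use opentelemetry::trace::{TraceContextExt, Tracer};
use opentelemetry::KeyValue;

fn load_config_with_span() {
    // The tracer name appears as the instrumentation scope in your backend.
    let tracer = global::tracer("rust-azure-app");
    tracer.in_span("load_app_config", |cx| {
        // Diagnostic breadcrumb: structured context attached to the span.
        cx.span().set_attribute(KeyValue::new("app.request_id", "req-123"));
        // ... call the Azure SDK here; with the tracer provider wired into
        // the client, the SDK's spans become children of this span ...
    });
}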

Performance analysis

OpenTelemetry provides detailed insights into Azure SDK performance patterns. These insights help you identify and resolve performance bottlenecks.

| Analysis type | What it reveals | How to use it |
| --- | --- | --- |
| SDK operation duration | How long different Azure operations take | Use the span timing that OpenTelemetry captures automatically to identify slow operations |
| Service call bottlenecks | Where your application spends time waiting for Azure responses | Compare timing across Azure services and operations to find performance issues |
| Concurrent operation patterns | Overlap and dependencies between operations | Analyze telemetry data to understand parallelization opportunities when making multiple Azure calls |
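
For the concurrent-operations row, one way to make overlap measurable is to fan out under a single parent span. A sketch follows; the async functions are placeholders for real Azure SDK calls:

use opentelemetry::trace::{FutureExt, TraceContextExt, Tracer};
use opentelemetry::{global, Context};

async fn load_secret() { /* placeholder for an Azure SDK call */ }
async fn load_blob() { /* placeholder for an Azure SDK call */ }

// Run both operations concurrently under one parent span; comparing the
// child spans' start times and durations shows whether they overlapped.
async fn fan_out() {
    let tracer = global::tracer("rust-azure-app");
    let parent = Context::current_with_span(tracer.start("fan_out"));
    tokio::join!(
        load_secret().with_context(parent.clone()),
        load_blob().with_context(parent),
    );
}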

Error diagnosis

OpenTelemetry captures rich error context that goes beyond simple error messages. This context helps you understand not just what failed, but why and under what circumstances.

Understand SDK error propagation: Trace how errors bubble up through your application code and the Azure SDK layers. This trace helps you understand the complete error path and identify the root cause.

Log transient vs. permanent failures: Distinguish between temporary failures (like network timeouts that might succeed on retry) and permanent failures (like authentication errors that need configuration changes). This distinction helps you build resilient applications.
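
A sketch of capturing that distinction on the active span (the error.transient attribute name is illustrative):

use opentelemetry::trace::{Status, TraceContextExt};
use opentelemetry::{Context, KeyValue};

// Record an error on the current span and tag whether it looked transient
// (retryable) or permanent (needs a configuration change).
fn record_failure(err: &dyn std::error::Error, transient: bool) {
    let cx = Context::current();
    let span = cx.span();
    span.record_error(err);
    span.set_attribute(KeyValue::new("error.transient", transient));
    span.set_status(Status::error(err.to_string()));
}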

Understand logs, metrics, and alerts

Your applications and services generate telemetry data to help you monitor their health, performance, and usage. Azure offers four kinds of telemetry:

| Telemetry type | What it gives you | Where to find it for each service |
| --- | --- | --- |
| Metrics | Numeric, time-series data (CPU, memory, and so on) | Metrics in the portal or the az monitor metrics CLI |
| Alerts | Proactive notifications when thresholds are hit | Alerts in the portal or the az monitor metrics alert CLI |
| Logs | Text-based events and diagnostics (web, app) | App Service Logs, Functions Monitor, Container Apps Diagnostics |
| Custom logs | Your own application telemetry via Application Insights | Your Application Insights resource's Logs (traces table) |

Pick the right telemetry for your question:

| Scenario | Use logs… | Use metrics… | Use alerts… |
| --- | --- | --- | --- |
| "Did my web app start and respond?" | App Service web-server logs (Logs) | N/A | N/A |
| "Is my function timing out or failing?" | Function invocation logs (Monitor) | Function execution duration metric | Alert on "Function Errors > 0" |
| "How busy is my service and can it scale?" | N/A | Service throughput/CPU in Metrics | Autoscale alert on CPU% > 70% |
| "What exceptions is my code throwing?" | Custom Trace logs in Application Insights | N/A | Alert on "ServerExceptions > 0" |
| "Have I exceeded my transaction or quota limits?" | N/A | Quota-related metrics (Transactions, Throttling) | Alert on "ThrottlingCount > 0" |

View the telemetry data in Azure Monitor

After setting up OpenTelemetry in your Rust application and configuring an intermediate export mechanism, you can view the telemetry data in Azure Monitor through Application Insights. Since Rust doesn't have direct Azure Monitor export capabilities, you'll need to implement one of these approaches:

  • OpenTelemetry Collector: Configure the OpenTelemetry Collector to receive data from your Rust application and forward it to Azure Monitor
  • Azure Storage integration: Export telemetry to Azure Storage and use Azure Monitor data ingestion APIs to import the data
  • Event Hubs streaming: Stream telemetry through Azure Event Hubs and process it for Azure Monitor ingestion

Once your telemetry data reaches Azure Monitor through one of these methods, you can analyze it:

  1. Navigate to Application Insights in the Azure portal:

    az monitor app-insights component show \
      --app $APP_INSIGHTS_NAME \
      --resource-group $RESOURCE_GROUP \
      --query "{name:name,appId:appId,instrumentationKey:instrumentationKey}"
    
  2. View traces and logs:

    • Go to Application Insights > Transaction search
    • Look for traces with operation names like get_keyvault_secrets
    • Check the Logs section and run KQL queries:
    traces
    | where timestamp > ago(1h)
    | where message contains "Azure operations" or message contains "secrets"
    | order by timestamp desc
    
  3. View distributed traces:

    • Go to Application Map to see service dependencies
    • Select Performance to see operation timing
    • Use End-to-end transaction details to see complete request flows
  4. Custom KQL queries for your Rust application:

    // View all custom logs from your Rust app
    traces
    | where customDimensions["service.name"] == "rust-azure-app"
    | order by timestamp desc
    
    // View Azure SDK HTTP operations
    dependencies
    | where type == "HTTP"
    | where target contains "vault.azure.net"
    | order by timestamp desc
    
    // Monitor error rates
    traces
    | where severityLevel >= 3  // Error and above
    | summarize count() by bin(timestamp, 1m), severityLevel
    | render timechart
    

Monitor in real time

Set up live monitoring to see data as it arrives:

# Stream recent trace events (requires the application-insights CLI extension)
az monitor app-insights events show \
  --app $APP_INSIGHTS_NAME \
  --resource-group $RESOURCE_GROUP \
  --type traces \
  --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S)

Cost optimization

You can significantly reduce your Azure Monitor costs by following configuration best practices and by reducing the amount of data Azure Monitor collects.

Key strategies for Rust applications:

  • Use appropriate log levels: Configure OpenTelemetry log levels appropriately for production to reduce volume
  • Implement sampling: Configure OpenTelemetry sampling for high-volume applications (a sketch follows this list)
  • Filter sensitive data: Avoid logging secrets, tokens, or large payloads that increase costs
  • Monitor data ingestion: Regularly review your Application Insights data usage and costs
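
As one example of the sampling strategy above, a sketch of ratio-based sampling with the OpenTelemetry SDK:

use opentelemetry_sdk::trace::{Sampler, SdkTracerProvider};

// Keep roughly 10% of new traces; ParentBased honors upstream sampling
// decisions so distributed traces stay complete.
fn sampled_provider() -> SdkTracerProvider {
    SdkTracerProvider::builder()
        .with_sampler(Sampler::ParentBased(Box::new(
            Sampler::TraceIdRatioBased(0.1),
        )))
        .build()
}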

Resources and next steps