Unable to start a compute cluster with settings 'enable_elastic_disk' set to false

Michael Guirao 0 Reputation points
2025-10-21T08:25:00.1933333+00:00

I want to reduce the disk consumption of the VMs in my compute cluster because the metrics show that almost none of the disk is used.
The Databricks documentation says I can use the enable_elastic_disk option for this.

So I created two pools with the following configuration (one using spot instances for the workers and one using standard on-demand instances for the driver):

{
  "azure_attributes": {
    "availability":"SPOT_AZURE",
    "spot_bid_max_price":-1
  },
  "default_tags": {
    "DatabricksInstanceGroupId":"-6460463107945088700",
    "DatabricksInstancePoolCreatorId":"7867248731067327",
    "DatabricksInstancePoolId":"1020-203139-fix1-pool-k5c20trm",
    "Vendor":"Databricks"
  },
  "enable_elastic_disk":false,
  "idle_instance_autotermination_minutes":15,
  "instance_pool_id":"1020-203139-fix1-pool-k5c20trm",
  "instance_pool_name":"4c-8m-pool-spot",
  "max_capacity":75,
  "min_idle_instances":1
}
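For reference, a pool like this can also be created programmatically via the Instance Pools API (POST /api/2.0/instance-pools/create). A minimal sketch in Python that builds the request body; the workspace URL and node type are placeholders, and the HTTP call itself is left commented out:

```python
import json

# Hypothetical workspace URL -- replace with your own.
DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"

def build_pool_payload(name, node_type_id, spot=True, elastic_disk=False):
    """Build a request body for POST /api/2.0/instance-pools/create."""
    return {
        "instance_pool_name": name,
        "node_type_id": node_type_id,
        "min_idle_instances": 1,
        "max_capacity": 75,
        "idle_instance_autotermination_minutes": 15,
        # Disabling elastic disk makes nodes rely solely on the
        # VM SKU's local (ephemeral) disk.
        "enable_elastic_disk": elastic_disk,
        "azure_attributes": {
            "availability": "SPOT_AZURE" if spot else "ON_DEMAND_AZURE",
            "spot_bid_max_price": -1,
        },
    }

payload = build_pool_payload("4c-8m-pool-spot", "Standard_D4s_v3")
print(json.dumps(payload, indent=2))
# To create the pool, POST this payload to
# f"{DATABRICKS_HOST}/api/2.0/instance-pools/create" with a bearer token.
```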

Then I created a cluster with nodes assigned to the pools. When I try to start it, it fails with the following error:


Here are the details:


{
  "reason": {
    "code": "CONTAINER_LAUNCH_FAILURE",
    "type": "SERVICE_FAULT",
    "parameters": {
      "databricks_error_message": "The VM launch failed due to container setup failure, please try again later. [details] CONTAINER_LAUNCH_FAILURE: Container failed to be launched on the instance",
      "azure_error_code": "CONTAINER_LAUNCH_FAILURE",
      "azure_error_message": "Container failed to be launched on the instance"
    }
  },
  "add_node_failure_details": {
    "failure_count": 2,
    "resource_type": "container",
    "will_retry": true
  }
}

Please advise,

Thank you in advance,

Michael

Azure Databricks

1 answer

  1. VRISHABHANATH PATIL 1,295 Reputation points Microsoft External Staff Moderator
    2025-10-21T08:36:52.4966667+00:00

    Hi @Michael Guirao

    Thank you for posting your question on Microsoft Q&A. Here are some troubleshooting steps that may help address your issue.

    You are encountering a CONTAINER_LAUNCH_FAILURE error when starting a Databricks cluster with enable_elastic_disk set to false. This typically happens because the cluster cannot provision the required disk configuration when elastic disk is disabled.

    Why This Happens

    • Elastic Disk Behavior: By default, Databricks uses elastic disk to dynamically attach additional storage when the local disk runs out of space. Disabling it (enable_elastic_disk=false) forces the cluster to rely solely on the ephemeral local disk provided by the VM SKU.
    • Impact of Disabling Elastic Disk: If the selected VM SKU does not have sufficient local storage for Databricks system containers and Spark workloads, the container launch will fail. This is especially common with spot instances or smaller VM sizes.
    • Azure Container Setup Failure: The error message indicates that the container could not be launched because the node did not meet the storage requirements for Databricks runtime when elastic disk was disabled.

    Suggested Solutions

    • Re-enable elastic disk (set enable_elastic_disk to true) on the pool. This is the default behavior and lets Databricks attach additional managed disks when local storage runs low.
    • If you need to keep elastic disk disabled, choose a VM SKU with enough local (ephemeral) storage for the Databricks system containers and your Spark workloads.
    • If the failure is transient, retry cluster startup; spot instances can also contribute to container launch failures, so test with on-demand instances to rule that out.

    Takeaways

    • Disabling enable_elastic_disk is only safe when you are certain the VM SKU provides enough local storage for the Databricks system containers and Spark workloads.
    • For most production and development scenarios, keep elastic disk enabled to avoid container launch failures.
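If you decide to turn elastic disk back on, the pool configuration can be updated via the Instance Pools API (POST /api/2.0/instance-pools/edit), which requires the pool id, name, and node type alongside the changed fields. A sketch assuming the pool id from your configuration and a hypothetical node type; note that it is worth verifying whether enable_elastic_disk is editable after creation in your workspace, as recreating the pool may be required:

```python
import json
import urllib.request

# Placeholders -- replace with your workspace URL and a valid PAT.
DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapi-your-token"

def build_edit_payload(pool_id, pool_name, node_type_id):
    """Request body for POST /api/2.0/instance-pools/edit."""
    return {
        "instance_pool_id": pool_id,
        "instance_pool_name": pool_name,
        "node_type_id": node_type_id,
        "enable_elastic_disk": True,  # turn elastic disk back on
    }

def edit_pool(payload):
    """Send the edit request to the workspace (needs real credentials)."""
    req = urllib.request.Request(
        f"{DATABRICKS_HOST}/api/2.0/instance-pools/edit",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_edit_payload(
    "1020-203139-fix1-pool-k5c20trm",  # pool id from the question
    "4c-8m-pool-spot",
    "Standard_D4s_v3",                 # hypothetical node type
)
print(json.dumps(payload, indent=2))
# edit_pool(payload)  # uncomment once credentials are filled in
```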

    Hope the above steps were helpful. If you have any other questions, please feel free to contact us.

    Thanks,
    Vrishabh

