One-time data migration from Cosmos DB to Storage Account (data lake) using Azure Synapse: Error converting value "MultiHash" to type 'Microsoft.Azure.Documents.PartitionKind'

Galin Jelev 0 Reputation points
2025-10-13T21:32:23.0566667+00:00

Hello,

I'm trying to perform a one-time data migration from Cosmos DB to a Storage Account, but at the mapping step I'm facing the following issue:

Error converting value "MultiHash" to type 'Microsoft.Azure.Documents.PartitionKind'. Path 'partitionKey.kind', line 1, position 245. Requested value 'MultiHash' was not found. Activity ID: 8a9ad9d6-be32-46c1-9132-9826819384fb

Any ideas or workarounds are welcome.
Thanks :)


Azure Synapse Analytics
An Azure analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Previously known as Azure SQL Data Warehouse.

1 answer

  1. Swapnesh Panchal 740 Reputation points Microsoft External Staff Moderator
    2025-10-13T23:33:37.3833333+00:00

    Hi Galin Jelev,
    Welcome to Microsoft Q&A, and thank you for posting your question here.
    This is not a mapping problem you can fix in the wizard; it comes from the Cosmos DB connector. Your container uses hierarchical partition keys, so its metadata reports partitionKey.kind = "MultiHash".
    The Synapse Copy Data path still validates that value against the older Microsoft.Azure.Documents.PartitionKind enum, which expects Hash and does not know MultiHash, so it throws "Requested value 'MultiHash' was not found." That is why the wizard fails during schema mapping.

    For a one-time export to Data Lake, switch to a path that understands hierarchical partition keys:

    • Synapse Spark or Databricks with the azure-cosmos-spark v4 connector (OLTP). Read the container into a DataFrame and write Parquet/CSV to ADLS (see the PySpark sketch after this list).
    • If analytical store is enabled, use Synapse Link (serverless SQL) and run a CETAS to Parquet in your lake. This avoids the OLTP connector entirely and works fine with hierarchical partition keys.
    • If neither is available, do a quick SDK export (Cosmos .NET/Java/Node v3) and write to ADLS; the SDKs fully support MultiHash (a Python sketch of the same idea follows below).
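
    A minimal PySpark sketch of the Spark route, assuming the azure-cosmos-spark v4 connector (format "cosmos.oltp") is installed on your pool or cluster; every endpoint, key, and path value below is a placeholder:

        # Placeholder connection settings; substitute your own values.
        cfg = {
            "spark.cosmos.accountEndpoint": "https://<account>.documents.azure.com:443/",
            "spark.cosmos.accountKey": "<account-key>",
            "spark.cosmos.database": "<database>",
            "spark.cosmos.container": "<container>",
            "spark.cosmos.read.inferSchema.enabled": "true",
        }

        # OLTP read path; this connector understands hierarchical partition keys.
        df = spark.read.format("cosmos.oltp").options(**cfg).load()

        # One-time export: write Parquet to the lake.
        (df.write
           .mode("overwrite")
           .parquet("abfss://<filesystem>@<storage-account>.dfs.core.windows.net/cosmos-export/"))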
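
    And a sketch of the SDK fallback, here with the Python SDK (azure-cosmos) rather than .NET/Java/Node; same idea, placeholder values throughout. It dumps the container to newline-delimited JSON that you can then upload to ADLS:

        import json
        from azure.cosmos import CosmosClient

        # Placeholder endpoint, key, and names; substitute your own.
        client = CosmosClient("https://<account>.documents.azure.com:443/",
                              credential="<account-key>")
        container = (client.get_database_client("<database>")
                           .get_container_client("<container>"))

        # read_all_items streams every document via a cross-partition query,
        # so the container's PartitionKind (MultiHash included) never matters here.
        with open("export.jsonl", "w", encoding="utf-8") as out:
            for item in container.read_all_items():
                out.write(json.dumps(item) + "\n")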

    Tweaks like removing the partition key from the mapping or upgrading the Copy wizard typically won’t help here, because the failure happens when the connector parses container metadata. If you ever need to keep using the wizard, the only durable fix is exporting from a container that uses a supported PartitionKind (e.g., Hash), which usually means staging data elsewhere first.

    If you want, I can also share a sample CETAS statement for the Synapse Link path.

