Hi Galin Jelev,
Welcome to Microsoft Q&A, and thank you for posting your question here.
The mapping itself is not the real problem; the issue is the Cosmos DB connector. Your container uses hierarchical partition keys, so its metadata reports partitionKey.kind = “MultiHash”.
The Synapse Copy Data path still validates against the older Microsoft.Azure.Documents.PartitionKind enum (expects Hash, not MultiHash) and throws “Requested value ‘MultiHash’ was not found.” That’s why the wizard fails during schema mapping.
For a one-time export to Data Lake, switch to a path that understands hierarchical partition keys:
- Synapse Spark or Databricks with the azure-cosmos-spark v4 connector (OLTP). Read the container into a DataFrame and write Parquet/CSV to ADLS (see the sketch after this list).
- If analytical store is enabled, use Synapse Link (serverless SQL) and run a CETAS to Parquet in your lake. This avoids the OLTP connector entirely and works fine with hierarchical partition keys.
- If neither is available, do a quick SDK export (Cosmos .NET/Java/Node v3) and write to ADLS; the SDKs fully support MultiHash (a Python equivalent is sketched below as well).
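
Here is a minimal PySpark sketch of the Spark route. It assumes a Synapse Spark pool or a Databricks cluster with the azure-cosmos-spark connector attached; the account endpoint, key, database, container, and ADLS path shown are placeholders to replace with your own values:

```python
# Minimal sketch: read the Cosmos DB container through the OLTP Spark connector
# and export it to ADLS as Parquet. `spark` is the notebook's existing SparkSession.
cosmos_config = {
    "spark.cosmos.accountEndpoint": "https://<your-account>.documents.azure.com:443/",
    "spark.cosmos.accountKey": "<your-account-key>",
    "spark.cosmos.database": "<your-database>",
    "spark.cosmos.container": "<your-container>",
    # Let the connector sample documents and infer a schema for the DataFrame.
    "spark.cosmos.read.inferSchema.enabled": "true",
}

# The azure-cosmos-spark v4 connector handles hierarchical (MultiHash) partition keys,
# so this read does not hit the PartitionKind validation that breaks the Copy wizard.
df = (
    spark.read
         .format("cosmos.oltp")
         .options(**cosmos_config)
         .load()
)

# One-time export to the lake as Parquet (use .csv(...) instead if you prefer CSV).
(
    df.write
      .mode("overwrite")
      .parquet("abfss://<filesystem>@<your-storage>.dfs.core.windows.net/cosmos-export/")
)
```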
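
If you take the SDK route instead, here is a rough equivalent using the Python SDK (azure-cosmos plus azure-storage-file-datalake); the same pattern applies in .NET/Java/Node. All account names, keys, and paths below are placeholders:

```python
# Rough sketch: export every document with the Python SDK and upload
# newline-delimited JSON to ADLS Gen2. Replace the placeholder names/keys.
import json

from azure.cosmos import CosmosClient
from azure.storage.filedatalake import DataLakeServiceClient

cosmos = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    credential="<your-account-key>",
)
container = cosmos.get_database_client("<your-database>").get_container_client("<your-container>")

# A cross-partition query streams all documents; the SDK handles the hierarchical
# partition key layout for you, so there is no MultiHash validation to trip over.
payload = "\n".join(
    json.dumps(item)
    for item in container.query_items(
        query="SELECT * FROM c",
        enable_cross_partition_query=True,
    )
)

# Upload the export to your Data Lake (ADLS Gen2) file system.
lake = DataLakeServiceClient(
    account_url="https://<your-storage>.dfs.core.windows.net",
    credential="<your-storage-key>",
)
file_client = lake.get_file_system_client("<your-filesystem>").get_file_client("cosmos-export/items.jsonl")
file_client.upload_data(payload, overwrite=True)
```

For a large container you would page the query results and upload in chunks rather than building one string in memory, but the overall flow is the same.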
Tweaks like removing the partition key from the mapping or upgrading the Copy wizard typically won’t help here, because the failure happens when the connector parses container metadata. If you ever need to keep using the wizard, the only durable fix is exporting from a container that uses a supported PartitionKind (e.g., Hash), which usually means staging data elsewhere first.
If you want, I can also share the full Spark pool setup for Synapse or a sample CETAS statement for the Synapse Link path.