Thanks for sharing those details. You are absolutely right to notice that difference in the documentation. I checked this closely, and here is what's really going on.
The PostgreSQL V2 connector currently supports reading data (source) only. It doesn't support writing data to PostgreSQL (sink) in the Copy activity, which is why your PostgreSqlV2Table dataset doesn't appear in the sink dropdown, and why you get the "name not allowed" error when you try to add it manually. This isn't a configuration issue on your side; it's a limitation of the current V2 connector.
You have two possible ways to load data into PostgreSQL:
If your PostgreSQL database is hosted on Azure: use the Azure Database for PostgreSQL connector instead of the V2 one. It fully supports copy operations into PostgreSQL as a sink. Just create an AzurePostgreSql linked service and dataset, and select that dataset in your Copy activity sink.
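As a rough sketch, the linked service JSON would look something like the following (server, database, and credential values are placeholders you'd replace with your own; in practice you'd reference the password from Azure Key Vault rather than inline):

```json
{
  "name": "AzurePostgreSqlLinkedService",
  "properties": {
    "type": "AzurePostgreSql",
    "typeProperties": {
      "connectionString": "host=<server>.postgres.database.azure.com;port=5432;database=<database>;uid=<user>;pwd=<password>"
    }
  }
}
```

Your sink dataset would then use the AzurePostgreSqlTable type, and the Copy activity sink would use `"type": "AzurePostgreSqlSink"`. Once the dataset is created against this linked service, it should appear in the sink dropdown as expected.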
If your PostgreSQL database is on-premises: you can connect through ODBC using your Self-Hosted Integration Runtime (SHIR).
Install the PostgreSQL ODBC driver on your SHIR machine.
Create an ODBC linked service and an OdbcTable dataset for your target table.
Use OdbcSink in your Copy activity. This is the supported way to write data to on-premises PostgreSQL from ADF.
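To illustrate the steps above, here is a rough sketch of what the ODBC linked service could look like (the linked service name, SHIR name, driver name, and credential values are placeholders; the exact driver name depends on what you installed on the SHIR machine):

```json
{
  "name": "OdbcPostgresLinkedService",
  "properties": {
    "type": "Odbc",
    "typeProperties": {
      "connectionString": "Driver={PostgreSQL Unicode};Server=<host>;Port=5432;Database=<database>;",
      "authenticationType": "Basic",
      "userName": "<user>",
      "password": { "type": "SecureString", "value": "<password>" }
    },
    "connectVia": {
      "referenceName": "<YourSelfHostedIR>",
      "type": "IntegrationRuntimeReference"
    }
  }
}
```

The Copy activity sink would then be `"type": "OdbcSink"` against an OdbcTable dataset pointing at your target table. Because the write goes through the SHIR, the ODBC driver must be installed on every node of that integration runtime.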
The docs seem confusing because one page lists PostgreSQL in the general "supported data stores" table, while the detailed PostgreSQL V2 connector page notes that it works as a source only, not as a sink.
There’s nothing wrong with your setup. The connector itself just doesn’t support writes yet. Switching to the AzurePostgreSQL connector (for Azure) or ODBC (for on-prem) will solve this.
Do let us know if you have any further questions.
Thanks,
Manoj