Run Apache Sqoop jobs in HDInsight with Curl

Learn how to use Curl to run Apache Sqoop jobs on an Apache Hadoop cluster in HDInsight. This article demonstrates how to export data from Azure Storage and import it into a SQL Server database by using Curl. It is a continuation of Use Apache Sqoop with Hadoop in HDInsight.

Curl is used to demonstrate how you can interact with HDInsight by using raw HTTP requests to run, monitor, and retrieve the results of Sqoop jobs. This works by using the WebHCat REST API (formerly known as Templeton) provided by your HDInsight cluster.
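For illustration, every request in this article follows the same basic shape: an HTTPS call that authenticates with the cluster admin credentials and targets a resource under the /templeton/v1 path of your cluster. The following template is a sketch only; CLUSTERNAME, USERNAME, and PASSWORD are placeholders, and the status resource shown here is the same endpoint used in step 2 below.

    curl -u USERNAME:PASSWORD -G https://CLUSTERNAME.azurehdinsight.net/templeton/v1/status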

Prerequisites

Submit Apache Sqoop jobs by using Curl

Use Curl to export data using Apache Sqoop jobs from Azure Storage to SQL Server.

Note

When using Curl or any other REST communication with WebHCat, you must authenticate the requests by providing the user name and password for the HDInsight cluster administrator. You must also use the cluster name as part of the Uniform Resource Identifier (URI) used to send the requests to the server.

For the commands in this section, replace USERNAME with the user to authenticate to the cluster, and replace PASSWORD with the password for the user account. Replace CLUSTERNAME with the name of your cluster.

The REST API is secured via basic authentication. You should always make requests by using Secure HTTP (HTTPS) to help ensure that your credentials are securely sent to the server.

  1. For ease of use, set the variables below. This example is based on a Windows environment; revise it as needed for your environment (a Bash sketch follows this block).

    set CLUSTERNAME=
    set USERNAME=admin
    set PASSWORD=
    set SQLDATABASESERVERNAME=
    set SQLDATABASENAME=
    set SQLPASSWORD=
    set SQLUSER=sqluser
    
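    If you work from Bash instead of the Windows command prompt, a rough equivalent is shown below. This is a sketch only: the variable names are assumed to stay the same, and the later commands would then reference them as $VARIABLE instead of %VARIABLE%.

    # Bash sketch of the same variables (values are placeholders)
    export CLUSTERNAME=''
    export USERNAME='admin'
    export PASSWORD=''
    export SQLDATABASESERVERNAME=''
    export SQLDATABASENAME=''
    export SQLPASSWORD=''
    export SQLUSER='sqluser'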
  2. From a command line, use the following command to verify that you can connect to your HDInsight cluster:

    curl -u %USERNAME%:%PASSWORD% -G https://%CLUSTERNAME%.azurehdinsight.net/templeton/v1/status
    

    You should receive a response similar to the following:

    {"status":"ok","version":"v1"}
    
  3. Use the following command to submit a Sqoop job:

    curl -u %USERNAME%:%PASSWORD% -d user.name=%USERNAME% -d command="export --connect jdbc:sqlserver://%SQLDATABASESERVERNAME%.database.windows.net;user=%SQLUSER%@%SQLDATABASESERVERNAME%;password=%SQLPASSWORD%;database=%SQLDATABASENAME% --table log4jlogs --export-dir /example/data/sample.log --input-fields-terminated-by \0x20 -m 1" -d statusdir="wasb:///example/data/sqoop/curl" https://%CLUSTERNAME%.azurehdinsight.net/templeton/v1/sqoop
    

    This command uses the following parameters:

    • -d - Since -G isn't used, the request defaults to the POST method. -d specifies the data values that are sent with the request.

      • user.name - The user that is running the command.

      • command - The Sqoop command to execute.

      • statusdir - The directory that the status for this job will be written to.

      This command will return a job ID that can be used to check the status of the job.

      {"id":"job_1415651640909_0026"}
      
  4. To check the status of the job, use the following command. Replace JOBID with the value returned in the previous step. For example, if the return value was {"id":"job_1415651640909_0026"}, then JOBID would be job_1415651640909_0026. Revise the location of jq as needed.

    set JOBID=job_1415651640909_0026
    
    curl -G -u %USERNAME%:%PASSWORD% -d user.name=%USERNAME% https://%CLUSTERNAME%.azurehdinsight.net/templeton/v1/jobs/%JOBID% | C:\HDI\jq-win64.exe .status.state
    

    If the job has finished, the state will be SUCCEEDED.

    Note

    This Curl request returns a JavaScript Object Notation (JSON) document with information about the job; jq is used to retrieve only the state value.
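    If jq isn't available, a rough alternative is to filter the raw JSON response with findstr. This is a sketch only and is less precise than jq, because it matches the text SUCCEEDED anywhere in the response rather than reading the .status.state field.

    curl -G -u %USERNAME%:%PASSWORD% -d user.name=%USERNAME% https://%CLUSTERNAME%.azurehdinsight.net/templeton/v1/jobs/%JOBID% | findstr "SUCCEEDED"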

  5. Once the status of the job changes to SUCCEEDED, you can retrieve the results of the job from Azure Blob Storage. The statusdir parameter passed with the query contains the location of the output file; in this case, wasb:///example/data/sqoop/curl. This address stores the output of the job in the example/data/sqoop/curl directory on the default storage container used by your HDInsight cluster.

    You can use the Azure portal to access stderr and stdout blobs.
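    If you prefer the command line over the portal, the following Azure CLI sketch downloads the stderr and stdout blobs written under the statusdir path. The storage account and container names (MYSTORAGEACCOUNT, MYCONTAINER) are placeholders for the default storage of your cluster, and you may need to supply credentials, for example with --account-key or --auth-mode login.

    az storage blob download --account-name MYSTORAGEACCOUNT --container-name MYCONTAINER --name example/data/sqoop/curl/stderr --file stderr.txt
    az storage blob download --account-name MYSTORAGEACCOUNT --container-name MYCONTAINER --name example/data/sqoop/curl/stdout --file stdout.txt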

  6. To verify that data was exported, use the following queries from your SQL client to view the exported data:

    SELECT COUNT(*) FROM [dbo].[log4jlogs] WITH (NOLOCK);
    SELECT TOP(25) * FROM [dbo].[log4jlogs] WITH (NOLOCK);
    

Limitations

  • Bulk export - With Linux-based HDInsight, the Sqoop connector used to export data to Microsoft SQL Server or Azure SQL Database doesn't currently support bulk inserts.
  • Batching - With Linux-based HDInsight, when you use the -batch switch to perform inserts, Sqoop performs multiple inserts instead of batching the insert operations.

Summary

As demonstrated in this document, you can use a raw HTTP request to run, monitor, and view the results of Sqoop jobs on your HDInsight cluster.

For more information on the REST interface used in this article, see the Apache Sqoop REST API guide.

Next steps

Use Apache Sqoop with Apache Hadoop on HDInsight

For other HDInsight articles involving curl: