Salesforce Connector for Azure Data Factory (Pipeline) How to Make Generic API Request (Bulk Write)

Introduction

In this article we will delve deeper into Salesforce and Azure Data Factory (Pipeline) integration and learn how to make a generic API request (bulk write). We are continuing from where we left off: by this point you should have installed ODBC PowerPack, created an ODBC data source, and configured authentication settings in your Salesforce account.

So, let's not waste time and begin.

Use Query Builder to generate SQL query

  1. The first thing you have to do is open Query Builder:

    ZappySys API Driver - Salesforce
    SalesforceDSN
    Open Query Builder in API ODBC Driver to read and write data to REST API
  2. Then simply select the Make Generic API Request (Bulk Write) endpoint (action).

  3. Continue by configuring the Required parameters. You can also set optional parameters too.

  4. Move on by hitting the Preview Data button to preview the results.

  5. If you see the results you need, simply copy the generated query:

    Make Generic API Request (Bulk Write)

    Optional Parameters
      Url: /something/123
      IsMultiPart
      Filter
      Headers: Accept: */* || Cache-Control: no-cache

    Advanced Properties
      Request Method: POST
      Request Format (Content-Type): Default
      Body: {$rows$}
      JsonOutputFormat: Multicontent
      DoNotOutputNullProperty
      Batch Size (Default=1): 1
      Meta Detection Order: Static, Dynamic, Virtual
      Input Columns - For Mapping (e.g. MyCol1:string(10); MyCol2:int32 ...) - Use bool, int32, int64, datetime, decimal, double
      Output Columns (e.g. MyCol1:string(10); MyCol2:int32 ...) - Use bool, int32, int64, datetime, decimal, double
      Request Format
      Response Format: Default
      Csv - Column Delimiter: ,
      Csv - Row Delimiter: {NEWLINE}
      Csv - Quote Around Value: True
      Csv - Always Quote regardless type
      Encoding
      CharacterSet
      Writer DateTime Format
      Csv - Has Header Row: True
      Xml - ElementsToTreatAsArray
      Layout Map:
        <?xml version="1.0" encoding="utf-8"?>
        <!-- Example#1: Output all columns -->
        <settings>
          <dataset id="root" main="True" readfrominput="True" />
          <map src="*" />
        </settings>
        <!-- Example#2: Records under array
        <?xml version="1.0" encoding="utf-8"?>
        <settings singledataset="True">
          <dataset id="root" main="True" readfrominput="True" />
          <map name="MyArray" dataset="root" maptype="DocArray">
            <map src="OrderID" name="OrderID" />
            <map src="OrderDate" name="OrderDate" />
          </map>
        </settings>
        -->
        <!-- Example#3: Records under nested section
        <?xml version="1.0" encoding="utf-8"?>
        <settings>
          <dataset id="dsRoot" main="True" readfrominput="True" />
          <map name="NestedSection">
            <map src="OrderID" name="OrderID_MyLabel" />
            <map src="OrderDate" name="OrderDate_MyLabel" />
          </map>
        </settings>
        -->
    SELECT * FROM __DynamicRequest__
    WITH
    (
        "Url" = '/something/123',
        "RequestMethod" = 'POST'
    )
    Query Builder
  6. That's it! You can use this query in Azure Data Factory (Pipeline).

Let's not stop here; in the next steps we will explore SQL query examples, including how to use them in Stored Procedures and Views (virtual tables).

SQL query examples

Use these SQL queries in your Azure Data Factory (Pipeline) data source:

How to make a generic API request (bulk write)

SELECT * FROM __DynamicRequest__
WITH
(
    "Url" = '/something/123',
    "RequestMethod" = 'POST'
)
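
The WITH clause can also carry the other fields you filled in Query Builder, such as the request headers and the {$rows$} body placeholder that injects the mapped input rows at runtime. The sketch below assumes the option names match the Query Builder field labels shown above ("Headers", "Body"); the query that Query Builder itself generates is the authoritative form, so copy that if in doubt:

SELECT * FROM __DynamicRequest__
WITH
(
    "Url" = '/something/123',
    "RequestMethod" = 'POST',
    "Headers" = 'Accept: */* || Cache-Control: no-cache',
    "Body" = '{$rows$}'
)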

The generic_request_bulk_write endpoint belongs to the __DynamicRequest__ table, so it can be queried through that table.
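
To reuse a request like this, you can wrap it in a view (virtual table) so downstream tools only see a simple table name. The statement below is only a sketch: the view name vwBulkWriteSomething is made up, and the exact DDL the driver accepts may differ (views can also be created from the driver's Custom Objects UI), so confirm the syntax against the ODBC PowerPack documentation:

CREATE VIEW vwBulkWriteSomething AS
SELECT * FROM __DynamicRequest__
WITH
(
    "Url" = '/something/123',
    "RequestMethod" = 'POST'
)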

Make Generic API Request (Bulk Write) in Azure Data Factory (Pipeline)

  1. Sign in to Azure Portal

    • Open your browser and go to: https://portal.azure.com

    • Enter your Azure credentials and complete MFA if required.

    • After login, go to Data factories.

    Azure Portal
  2. Under the Data factories resource, create or select the Data Factory you want to work with.

    Select the Data Factory
  3. Inside the Data Factory resource page, click Launch studio.

    Launch Azure Data Factory Studio
  4. Create a New Linked service:

    • In the Manage section (left menu).

    • Under Connections, select Linked services.

    • Click + New to create a new Linked service based on ODBC.

    Add new Linked service
  5. Select the ODBC linked service:

    Add new ODBC service
  6. Configure the new ODBC linked service. Use the same DSN name we used in the previous step and copy it into the Connection string box:

    SalesforceDSN
    DSN=SalesforceDSN
    Configure new ODBC service
  7. For the created ODBC linked service, create an ODBC-based dataset:

    Add new ODBC dataset
  8. Go to your pipeline and add a Copy data activity to the flow. In the Source section, use the OdbcDataset we created as the source dataset (see the query note after these steps):

    Set source in Copy data
  9. Then go to the Sink section and select a destination/sink dataset. In this example we use a pre-created AzureBlobStorageDataset, which saves data into Azure Blob Storage:

    Set sink in Copy data
  10. Finally, run the pipeline and see data being transferred from OdbcDataset to your destination dataset:

    Run the flow
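
A note on step 8: instead of pointing the source at a whole table, the Copy data activity's ODBC source also lets you switch the Use query option from Table to Query, where you can paste the statement we generated earlier, for example:

SELECT * FROM __DynamicRequest__
WITH
(
    "Url" = '/something/123',
    "RequestMethod" = 'POST'
)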

More actions supported by Salesforce Connector

Learn how to perform other actions directly in Azure Data Factory (Pipeline) with these how-to guides:
