Apache Spark Connector for Informatica

Apache Spark Connector lets you connect to Apache Spark, a unified engine for large-scale data analytics.

In this article you will learn how to quickly and efficiently integrate Apache Spark data in Informatica without coding. We will use the high-performance Apache Spark Connector to easily connect to Apache Spark and then access the data inside Informatica.

Let's follow the steps below to see how we can accomplish that!


Prerequisites

Before we begin, make sure you meet the following prerequisite:

  • A Java Runtime Environment (JRE) installed. The JDBC Bridge uses Java to load the JDBC driver.

If you already have a JRE installed, you can try using it too. However, if you experience any issues, we recommend installing a fresh, well-known JRE distribution (you can install an additional JRE next to the existing one; just don't forget to configure the default Java in the Windows Environment Variables).
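To check which Java your system resolves by default, you can run this from a command prompt (assuming java is on your PATH):

    java -version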

Download Apache Spark JDBC driver

To connect to Apache Spark in Informatica, you will have to download a JDBC driver for it, which we will use in later steps. It is recommended to use a JDBC driver compiled for Java 8, if possible. Let's perform these little steps right away:

  1. Visit MVN Repository.
  2. Download the standalone version of the Apache Hive JDBC driver (a commonly used driver to connect to Spark), e.g. hive-jdbc-4.0.1-standalone.jar. The standalone build avoids Java library dependency errors.
  3. Save it locally, e.g. to D:\Drivers\JDBC\hive-jdbc-standalone.jar.
  4. Done! That was easy, wasn't it? If you like, you can smoke-test the driver outside Informatica first (see the sketch below); then let's proceed to the next step.
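Here is a minimal connectivity check in Java, in case you want to verify the driver before involving Informatica. It assumes a Spark Thrift Server is reachable at the placeholder host spark-thrift-server-host on port 10000 with no authentication; adjust the URL and credentials to your environment:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal smoke test for the Hive JDBC driver against a Spark Thrift Server.
    // Compile and run with the standalone jar on the classpath, e.g.:
    //   javac SparkJdbcSmokeTest.java
    //   java -cp .;D:\Drivers\JDBC\hive-jdbc-standalone.jar SparkJdbcSmokeTest
    public class SparkJdbcSmokeTest {
        public static void main(String[] args) throws Exception {
            // Explicitly register the driver (harmless if it self-registers)
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // Placeholder endpoint -- point this at your Spark Thrift Server
            String url = "jdbc:hive2://spark-thrift-server-host:10000";
            try (Connection conn = DriverManager.getConnection(url, "", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // list reachable databases
                }
            }
        }
    }

If the query prints your databases, the jar and the endpoint are good to use in the DSN we create below.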

Introduction

JSON / REST API is becoming more and more popular each day as everyone embraces cloud-centric services. This article is primarily focused on Informatica users who want to access Apache Spark data, or perhaps other API integrations in Informatica. However, many tips and techniques described in this article will help you understand how to integrate Apache Spark / XML SOAP / JSON / REST API in other ETL / reporting apps such as Tableau, Power BI, SSRS, Talend, Excel, and many more.

After going through this article you will learn how to read Apache Spark / JSON / REST API data in Informatica and understand the concepts of JSON / REST API. We will go through many screenshots and step-by-step examples to demonstrate Apache Spark or REST API integration in Informatica PowerCenter.

XML / JSON can come from a local file or a REST API service (internal or public), so we will include both examples in this article (i.e. read JSON files in Informatica, import REST API in Informatica). So let's get started. A follow-up article will focus on how to write data to an API in Informatica (POST / PUT data).

If you need to consume an API which is not listed on the connector library page, refer to our other articles. They cover how to read / write pretty much any API, not just the Apache Spark API, and explain various API tips / tricks using our other universal drivers not mentioned in this article (i.e. ZappySys JSON / XML and CSV Drivers).

Requirements

This article assumes that you have fulfilled the following basic requirements.

  1. Download and install ZappySys ODBC PowerPack (API Driver for Apache Spark included)
  2. Install Informatica PowerCenter client tools (e.g. Workflow and Mapping Designers)
  3. Access to a relational database such as SQL Server (or any of your choice, e.g. Oracle, MySQL, DB2). If nothing is available, you can use a flat file target.

High-Level Steps to Import Apache Spark Data Using Informatica (Read Apache Spark API Data)

Before we dive deep into how to load Apache Spark data in Informatica (i.e. Apache Spark to SQL table), here is a summary of the high-level steps you need to perform to import Apache Spark data in Informatica (the same steps apply to importing JSON / XML / REST API):

  1. Download and install ZappySys API Driver (for connecting to Apache Spark)
  2. Create an ODBC DSN using the ZappySys API driver and choose the Apache Spark Connector during the wizard
  3. Create a Relational > ODBC connection in Informatica Workflow Designer (point to the DSN we created in the previous step)
  4. Import the Apache Spark source definition in the Informatica Mapping Designer > Sources tab
  5. Import the target table definition in the Informatica Mapping Designer > Targets tab
  6. Create the source-to-target mapping in the Mappings tab
  7. Save the mapping (e.g. name it m_API_to_SQL_Load)
  8. Create a session using the mapping we created in the previous step
  9. Save the workflow and execute it to load Apache Spark data into the SQL table. Verify your data and the session log.

    Loading Apache Spark data to a SQL table in Informatica (import REST API or JSON files)

Video Tutorial – Read any API / JSON data in Informatica (Load Apache Spark to SQL Table)

The video below is not specifically about the Apache Spark API, but it shows API access in general (for any API). By watching the following ~5-minute video, you can learn the steps listed in this article to load JSON API data into a SQL Server table using the ZappySys JSON Driver. Go through the full article to learn many useful details not covered in the video.

Getting Started – Import Apache Spark to SQL Server in Informatica

Now let's get started. For example purposes, we will read data from Apache Spark and load it into a SQL Server table using an Informatica workflow.

Create ODBC Data Source (DSN) based on ZappySys JDBC Bridge Driver

Step-by-step instructions

To get data from Apache Spark using Informatica, we first need to create a DSN (Data Source Name) which will access data from Apache Spark. We will later read data through it in Informatica. Perform these steps:

  1. Download and install ODBC PowerPack.

  2. Open ODBC Data Sources (x64):

    Open ODBC Data Source
  3. Create a User data source (User DSN) based on ZappySys JDBC Bridge Driver

    Create new User DSN for ZappySys JDBC Bridge Driver
    • Create and use a User DSN if the client application runs under a user account. This is an ideal option at design time, when developing a solution, e.g. in Visual Studio 2019. Use it for both types of applications: 64-bit and 32-bit.
    • Create and use a System DSN if the client application is launched under a system account, e.g. as a Windows Service. Usually, this is the ideal option in a production environment. Use the ODBC Data Source Administrator (32-bit) instead of the 64-bit version if the Windows Service is a 32-bit application.
  4. Now, we need to configure the JDBC connection in the new ODBC data source. Simply enter the connection string, credentials, and other settings, and then click the Test Connection button to test the connection (common connection string variations are shown right after these steps):

    JDBC-ODBC Bridge driver data source settings

    Use these values when setting parameters:

    • Connection string: jdbc:hive2://spark-thrift-server-host:10000
    • JDBC driver file(s): D:\Drivers\JDBC\hive-jdbc-standalone.jar
    • Connection parameters: []

  5. You should see a message saying that the connection test is successful:

    ODBC connection test is successful

    Otherwise, if you get an error, check out our Community for troubleshooting tips.

  6. We are at the point where we can preview a SQL query. For more SQL query examples, visit the JDBC Bridge documentation:

    -- Basic SELECT with a WHERE clause
    SELECT
        id,
        name,
        salary
    FROM employees
    WHERE department = 'Sales';

    JDBC ODBC Bridge data source preview

    You can also click on the <Select Table> dropdown and select a table from the list.

    The ZappySys JDBC Bridge Driver acts as a transparent intermediary, passing SQL queries directly to the underlying JDBC driver (here, the Apache Hive JDBC driver), which then handles the query execution. This means the Bridge Driver simply relays the SQL query without altering it.

    Some JDBC drivers don't support INSERT/UPDATE/DELETE statements, so you may get an error saying "action is not supported" or similar. Please be aware that this is not a limitation of the ZappySys JDBC Bridge Driver, but of the specific JDBC driver you are using.

  7. Click OK to finish creating the data source.
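As a side note, the Hive JDBC connection string accepts extra options after the host and port. The lines below are illustrative sketches using standard Hive JDBC URL parameters (transportMode, httpPath, ssl); whether they apply depends on how your Spark Thrift Server is configured, and the hosts, ports, and paths are placeholders:

    Default binary transport (as used above):
    jdbc:hive2://spark-thrift-server-host:10000

    HTTP transport mode (if the Thrift Server runs in HTTP mode):
    jdbc:hive2://spark-thrift-server-host:10001;transportMode=http;httpPath=cliservice

    SSL-enabled endpoint:
    jdbc:hive2://spark-thrift-server-host:10000;ssl=true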

Video Tutorial

Create Connection in Informatica Workflow Designer

Once you have created the DSN using the driver, the next step is to define a connection for the Apache Spark source in the Informatica PowerCenter Workflow Designer.

  1. Open the Workflow Designer (the [W] icon)
  2. Go to Connections > Relational
    Create a new connection for Apache Spark in Informatica

  3. Click New and select ODBC
    Select the ODBC connection type in Informatica (using a ZappySys API ODBC DSN)

  4. Now, in the ODBC connection setup, enter a connection name and some fake user ID / password (these are required fields, but they are ignored by the driver)
  5. In the Connection String field, enter the exact name of the DSN (open the ODBC Data Sources UI to confirm it)
    Configure the Apache Spark connection in Informatica – using the ZappySys API ODBC Driver

  6. Click OK to close the connection properties.

That's it. Now we are ready to move to the next step (define the source and target in the Mapping Designer).

Import Apache Spark Source Definition in Informatica Mapping Designer

Now let's look at the steps to import the Apache Spark table definition.

  1. Open the Informatica Mapping Designer (click the [D] icon)
  2. Click on the Source icon to switch to the Sources designer
  3. From the top menu, click Sources > Import from Database
    Import Apache Spark source definition in Informatica Mapping Designer (JSON file or REST API)

  4. Select the ODBC data source from the dropdown (pick the DSN we created earlier to use as the source)
  5. Click the Connect button to get a list of tables. Any array node is listed as a table. Also, you will see array nodes with parent columns (e.g. value_with_parent). You may get a warning like the one below, but it is harmless, so just ignore it by clicking OK.
    DLL name entry missing from C:\Informatica\PowerCenter8.6.1\client\bin\powrmart.ini Section = ODBCDLL Entry = ZappySys JSON Driver
    ----------------------------------
    Using EXTODBC.DLL to support ZappySys JSON Driver. For native support of ZappySys JSON Driver make an entry in the .ini file.
    Select the Apache Spark source table in Informatica Mapping Designer (JSON file or REST API)

  6. Select the table you wish to get (you can filter rows with a custom SQL query; we will see how later in this article)
  7. Optionally, once the table structure is imported, you can rename it
    Rename the imported table definition in the Informatica Source Designer

  8. That's it. We are now ready to perform similar steps to import the target table structure in the next section.

Import SQL Server Target Definition in Informatica Mapping Designer

Now let's look at the steps to import the target table definition (very similar to the previous section; the only difference is that this time we will select a DSN which points to SQL Server or any other target server). If you don't have a target table yet, see the sample DDL sketch at the end of this section.

  1. In the Mapping Designer, click on the Target icon to switch to the Target designer
  2. From the top menu, click Targets > Import from Database
  3. Select the DSN for your target server (if the DSN doesn't exist, create one by opening ODBC Data Sources, just like we created one for the source; see the previous section about creating a DSN)
    Import the target table definition in Informatica

  4. Enter your user ID, password, and schema name, and click Connect to see the tables
  5. Select the table name and click OK to import the definition.
    Import the target SQL table definition in Informatica – select a table from the list
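If you need a target table to follow along, here is a minimal DDL sketch for SQL Server matching the sample columns from the preview query earlier; the table and column names are illustrative, so adjust the types to your data:

    -- Sample target table for the employees preview query
    CREATE TABLE dbo.employees (
        id     INT            NOT NULL,  -- employee identifier
        name   NVARCHAR(100)  NULL,      -- employee name
        salary DECIMAL(18, 2) NULL       -- salary amount
    );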

Create Source to Target Mapping in Informatica (Import JSON to SQL Server)

Once you have imported the source and target table definitions, we can create the mapping and transformation to load data from JSON to the SQL table.

  1. First, open the Mapping Designer (click the [D] icon)
  2. Drag the JSON source from the Sources folder
  3. Drag the SQL table from the Targets folder
  4. Map the desired columns from source to target
    Define the source to target mapping for the Apache Spark to SQL table load in Informatica

  5. For certain columns you may have to do a datatype conversion. For example, to convert OrderDate from nstring to DateTime, you have to use an Expression transformation like the one below and map it to the target. In the example below, our JSON has a date format like 2018-01-31 12:00:00 AM. To import this into a DateTime field in SQL Server, we need to convert it using the TO_DATE function. Use double quotes around T to make the ISO format work. (A sketch for guarding against bad dates appears right after this list.)
    TO_DATE(OrderDate, 'YYYY-MM-DD HH12:MI:SS AM')

    -- For ISO format use the below way
    TO_DATE(OrderDate, 'YYYY-MM-DD"T"HH24:MI:SS')
    Informatica Apache Spark to SQL table mapping – datatype conversion (nstring to datetime)

  6. Once you are done with the mapping, save it and name it (e.g. m_API_to_SQL_Load)
  7. Now let's move to the next section to create the workflow.
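As a side note on the conversion above: if some source rows may contain empty or malformed dates, a common pattern is to validate before converting, so a bad value becomes NULL instead of failing the row. This is a sketch using standard Informatica expression functions (IIF, IS_DATE) with the OrderDate port from above:

    -- Return NULL instead of failing the row when OrderDate is not a valid ISO date
    IIF( IS_DATE(OrderDate, 'YYYY-MM-DD"T"HH24:MI:SS'),
         TO_DATE(OrderDate, 'YYYY-MM-DD"T"HH24:MI:SS'),
         NULL )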

Create Workflow and Session in Informatica

Now the final step is to create a new workflow. Perform the following steps to create a workflow with a session task that imports JSON data into the SQL table.

  1. Open the Workflow Designer by clicking the [W] icon.
  2. Launch the new workflow creation wizard by clicking Workflows > Wizard in the top menu, and name your workflow (e.g. wf_Api_To_Sql_Table_Import)

    Creating an Informatica workflow – wizard UI (import Apache Spark data to SQL table)

  3. Finish the wizard and double-click the session to edit some default properties.
  4. First, change the error settings so the session fails on error (by default it is always green)
    Fail the Informatica session on error (Apache Spark data to SQL load)

  5. Select the Apache Spark connection for the source
    Select the Apache Spark source connection in Informatica – load Apache Spark data to a SQL table

  6. Change the default source query if needed. You can pass parameters to this query to make it dynamic (see the parameterized query sketch after this list).
    Modify the Apache Spark source SQL query – pass parameters, change the URL, set a filter, etc.

  7. Select the target connection for the SQL target table
    Select the SQL target connection in Informatica – load Apache Spark data to a SQL table

  8. Save the workflow
  9. That's it. We are ready to run our first workflow to load JSON data to SQL.
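Regarding step 6: one way to make the source query dynamic is an Informatica mapping parameter (e.g. a hypothetical $$DEPT defined under Mappings > Parameters and Variables and supplied through the session's parameter file). A sketch of such an override, reusing the sample employees query:

    -- Source SQL override with a mapping parameter placeholder
    SELECT
        id,
        name,
        salary
    FROM employees
    WHERE department = '$$DEPT'

At run time, Informatica substitutes $$DEPT with the value from the parameter file before the query is handed to the driver.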

Execute Workflow and Validate Log in Informatica

Now, once you are done with your workflow, execute it and review the session log.

Loading Apache Spark data to a SQL table in Informatica (import REST API or JSON files)

POST data to Apache Spark in Informatica

There will be a time when you would like to send source data to a REST API or SOAP web service. For a detailed explanation of how to POST data in Informatica, check this article.

Video Tutorial – How to POST data to REST API in Informatica

Here is a detailed step-by-step video on REST API POST in Informatica PowerCenter.

Conclusion

In this article we showed you how to connect to Apache Spark in Informatica and integrate data without any coding, saving you time and effort. It's worth noting that the ZappySys JDBC Bridge Driver allows you to connect not only to Apache Spark, but to any data source that provides a JDBC driver (just use a different JDBC driver and configure it appropriately).

We encourage you to download Apache Spark Connector for Informatica and see how easy it is to use it for yourself or your team.

If you have any questions, feel free to contact ZappySys support team. You can also open a live chat immediately by clicking on the chat icon below.

