How to integrate Apache Spark with Qlik Cloud


Learn how to quickly and efficiently connect Apache Spark with Qlik Cloud for smooth data access.

Read and write Apache Spark data effortlessly. Integrate, manage, and automate jobs and data processing — almost no coding required. You can do it all using the high-performance Apache Spark ODBC Driver (powered by ZappySys JDBC-ODBC Bridge Driver). We'll walk you through the entire setup.

Ready to dive in? Download the product to jump right in, or follow the step-by-step guide below to see how it works.

Prerequisites

Before we begin, make sure you meet the following prerequisite: Java Runtime Environment (JRE) or Java Development Kit (JDK) must be installed on your system.

If your JDBC Driver targets a different Java version (e.g., 11 / 17 / 21), install the corresponding or newer Java version.

If you already have a JRE installed, you can try using it first. However, if you experience any issues, we recommend installing a recent JDK distribution instead (you can install an additional JRE next to the existing one; just don't forget to configure the default Java in the Windows Environment Variables).
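To double-check which Java version the system will use, run `java -version` in a terminal. As an illustrative sketch (the `java_major_version` helper and sample strings below are ours, not part of the product), here is one way to parse that output and confirm the major version:

```python
import re

def java_major_version(version_output: str) -> int:
    """Extract the major Java version from `java -version` output.

    Handles both the legacy scheme ("1.8.0_392" means Java 8)
    and the modern scheme ("17.0.9" means Java 17).
    """
    match = re.search(r'version "(\d+)(?:\.(\d+))?', version_output)
    if not match:
        raise ValueError("Could not find a Java version string")
    major = int(match.group(1))
    if major == 1 and match.group(2):  # legacy "1.x" scheme
        major = int(match.group(2))
    return major

# Sample outputs (the JVM prints these to stderr)
print(java_major_version('java version "1.8.0_392"'))             # 8
print(java_major_version('openjdk version "17.0.9" 2023-10-17'))  # 17
```

If the reported major version is lower than the one your JDBC driver targets, install a newer JDK/JRE before continuing.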

Download Apache Spark JDBC driver

To connect to Apache Spark, you first need to download its JDBC driver, which we will use in later steps. Let's get that done right away:

  1. Visit MVN Repository.

  2. Select the appropriate JDBC driver version in Latest Versions section in MVN Repository:

    Select JDBC driver version in MVN Repository
  3. Download the JDBC driver, and save it locally, e.g. to D:\Drivers\JDBC\hive-jdbc-standalone.jar.

    Download JDBC driver JAR file in MVN Repository
  4. Make sure to download the standalone version of the Apache Hive JDBC driver to avoid Java library dependency errors, e.g., hive-jdbc-4.0.1-standalone.jar (commonly used driver to connect to Spark).
  5. Done! That was easy, wasn't it? Let's proceed to the next step.

Create data source using Apache Spark ODBC Driver

Video instructions

Watch this quick walkthrough to see how to configure your Apache Spark ODBC data source, or scroll down for the step-by-step written guide.

While this video demonstrates how to connect to Postgres, the core concepts and setup process are exactly the same for Apache Spark.

Step-by-step instructions

To get data from Apache Spark using Qlik Cloud, we first need to create an ODBC data source. We will later read this data in Qlik Cloud. Perform these steps:

  1. Download and install ODBC PowerPack (if you haven't already).

  2. Search for odbc and open the ODBC Data Sources (64-bit):

    Open ODBC Data Source
  3. Create a User data source (User DSN) based on the ZappySys JDBC Bridge Driver:

    Create new User DSN for ZappySys JDBC Bridge Driver
    • Create and use a User DSN if the client application runs under a User Account. This is the ideal option at design time (e.g., when developing in Visual Studio). Use it for both 64-bit and 32-bit applications.
    • Create and use a System DSN if the client application runs under a System Account (e.g., as a Windows Service). This is usually the required option in a production environment. If your Windows Service is a 32-bit application, you must use the 32-bit ODBC Data Source Administrator to configure this.
  4. Now, we need to configure the JDBC connection in the new ODBC data source. Name the data source (e.g., ApacheSparkDSN), enter the Connection string and credentials, configure any other settings, and then click the Test Connection button:

    JDBC-ODBC Bridge driver data source settings

    Use these values when setting parameters:

    • Connection string: jdbc:hive2://spark-thrift-server-host:10000
    • JDBC driver file(s): D:\Drivers\JDBC\hive-jdbc-standalone.jar
    • Connection parameters: []
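Before clicking Test Connection, it can help to sanity-check the three values above. The following is only an illustrative sketch (the `check_bridge_settings` helper is hypothetical, not part of the driver), assuming a `jdbc:hive2://host:port` connection string, a local `.jar` path, and a JSON array of parameters:

```python
import json
import re

def check_bridge_settings(conn_str: str, jar_path: str, params: str) -> list[str]:
    """Return a list of problems found in the JDBC Bridge DSN settings."""
    problems = []
    # Connection string: expect jdbc:hive2://host:port
    if not re.match(r"jdbc:hive2://([^:/]+):(\d+)", conn_str):
        problems.append("Connection string should look like jdbc:hive2://host:port")
    # Driver file: expect a .jar path
    if not jar_path.lower().endswith(".jar"):
        problems.append("JDBC driver file should be a .jar file")
    # Connection parameters: expect a JSON array (e.g., [])
    try:
        if not isinstance(json.loads(params or "[]"), list):
            problems.append("Connection parameters should be a JSON array")
    except json.JSONDecodeError:
        problems.append("Connection parameters are not valid JSON")
    return problems

print(check_bridge_settings(
    "jdbc:hive2://spark-thrift-server-host:10000",
    r"D:\Drivers\JDBC\hive-jdbc-standalone.jar",
    "[]",
))  # prints [] -> no problems found
```

An empty list means the values match the expected shapes; any strings returned point at the setting to fix before testing the connection.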

  5. You should see a message saying that the connection test was successful:

    ODBC connection test is successful

    Otherwise, if you are getting an error, check out our Community for troubleshooting tips.

  6. We are now at the point where we can preview a SQL query. For more SQL query examples, visit the JDBC Bridge documentation:

    -- Basic SELECT with a WHERE clause
    SELECT
        id,
        name,
        salary
    FROM employees
    WHERE department = 'Sales';
    JDBC ODBC Bridge data source preview
    You can also click on the <Select Table> dropdown and select a table from the list.

    The ZappySys JDBC Bridge Driver acts as a transparent intermediary, passing SQL queries directly to the JDBC driver, which then handles the query execution. This means the JDBC-ODBC Bridge Driver simply relays the SQL query without altering it.

    Some JDBC drivers don't support INSERT/UPDATE/DELETE statements, so you may get an error saying "action is not supported" or similar. Please be aware that this is not a limitation of the ZappySys JDBC Bridge Driver, but of the specific JDBC driver you are using.

  7. Click OK to finish creating the data source.

Set up Qlik Direct Access data gateway

To connect to the Apache Spark data, you need a secure pipeline between your Qlik Cloud tenant and the Apache Spark ODBC DSN. Let's download, deploy, and register the Qlik data gateway to make this happen.

  1. First, log into your My Qlik tenants portal and choose your tenant:

    • Navigate to Administration.

    • Select Spaces.

    Open Spaces in Qlik Cloud Administration
  2. Create a shared space if you do not have one yet:

    • Click Create new.

    • Name shared space and confirm.

    Create shared space in Qlik Cloud
  3. Next, let's grab the gateway installer:

    • Navigate to Administration.

    • Select Data gateways.

    Open Data gateways in Qlik Cloud Administration
  4. Download the gateway:

    • Click Deploy.

    • Select Data Gateway - Direct Access (Windows).

    • Check acknowledgement box.

    • Click Download.

    Click Deploy to download Qlik data gateway
  5. If you are not running this locally, copy the installer to your target machine.

    The ODBC PowerPack and the Qlik Direct Access gateway must be installed on the exact same machine.
  6. Install the Qlik Direct Access gateway.

  7. Once installed, open an elevated Command Prompt on the gateway machine:

    Open command prompt as administrator for gateway setup
  8. Link the gateway to your tenant:

    • Navigate to ConnectorAgent directory.

    • Run tenant URL setup command.

    Link Qlik data gateway with tenant URL
    cd \
    cd "Program Files\Qlik\ConnectorAgent\ConnectorAgent"
    ConnectorAgent.exe qcs set_config --tenant_url https://<tenant>.qlikcloud.com
  9. Generate your data gateway key:

    Generate gateway key in Qlik tenant
    ConnectorAgent.exe qcs generate_keys
  10. Start the gateway service:

    Start Qlik data gateway service
    ConnectorAgent.exe service start
  11. Generate and copy the registration payload:

    • Run registration command.

    • Copy output text.

    Open Qlik gateway configuration details
    ConnectorAgent.exe qcs get_registration
    
  12. Now let's head back to the Qlik Cloud Administration page to complete the data gateway registration.

  13. Create the gateway entry in your tenant and apply the key:

    • Name gateway.

    • Select Direct Access for Gateway type.

    • Select shared space.

    • Paste generated key.

    • Click Create.

    Create Qlik data gateway and set key in tenant portal
  14. Finally, verify the status shows Connected:

    Verify Qlik data gateway status as Connected

Create ODBC connection in Analytics

With the gateway actively running, let's build the actual ODBC connection in Analytics.

  1. Navigate to your My Qlik tenants page, choose your tenant, select the Analytics tile, and click the Create menu item:

    Open Analytics catalog and go to Create section in Qlik Cloud
  2. Click the Data connection tile:

    Click Create connection in Qlik Analytics
  3. Next, let's create the ODBC data connection by selecting your shared space and finding the right connector:

    • Pick your shared space.

    • Search for odbc.

    • Choose the ODBC (via Direct Access gateway) option.

    Select ODBC connector via Qlik data gateway
  4. Finally, configure the ODBC connection fields to wrap up the setup:

    • Select your Data gateway.

    • Select the System DSN option.

    • Choose the ODBC source (DSN), e.g., ApacheSparkDSN.

    • Select the DB2 SQL syntax.

    • Name the connection.

    • Click Test connection, then click Create.

    Configure and create ODBC connection in Qlik Cloud
  5. Connection is ready!

You are now successfully wired up to your Apache Spark data. Let's see how to actually load it into your dashboard.

Load Apache Spark data into Qlik Cloud

Depending on your specific use case, you can choose one of the two methods below to bring your data into the Qlik Cloud environment for analysis.

Use Analytics app with Data load editor

This is the best method if you need to run dynamic SQL. We will use your new gateway connection in the Data load editor to pull the data and verify it visually.

  1. Navigate to your My Qlik tenants page, choose your tenant, select the Analytics tile, and click the Create menu item:

    Open Analytics catalog and go to Create section in Qlik Cloud
  2. Click the Application tile to start creating a new application:

    Click Application to create a new app in Qlik Cloud
  3. Next, create the application within your shared space:

    • Name the application.

    • Select your shared space.

    • Click the Create button.

    Create application in shared space in Qlik Cloud
  4. Now, open the new application and click the Data load editor option:

    Open Data load editor in Qlik application
  5. With the editor open, you can configure your DSN and SQL query before running the data load:

    • Select your shared space connection.

    • Click the gateway connection button to use the ODBC (via Direct Access gateway) option.

    • Name the table MyData and prepare your SQL load script.

    • Click Load data.

    Configure SQL query in Qlik Data load editor using ODBC connection
    LIB CONNECT TO 'My Shared Space:Gateway Connection';

    MyData:
    LOAD *;
    SQL
    -- Basic SELECT with a WHERE clause
    SELECT
        id,
        name,
        salary
    FROM employees
    WHERE department = 'Sales';
  6. Once the data has successfully loaded, go to the sheet to build your dashboard:

    • Select the Sheet option.

    • Select the Fields section.

    • Drag and drop the fields onto the sheet to create your visuals.

    Add fields from loaded MyData table to Qlik sheet
  7. Your Apache Spark data is now ready for analysis!

Use Data flow for data integration

Use this method if you want to build a complete source-to-destination pipeline directly inside Qlik Cloud.

  1. Navigate to your My Qlik tenants page, choose your tenant, select the Analytics tile, and click the Create menu item:

    Open Analytics catalog and go to Create section in Qlik Cloud
  2. Click the Data flow tile to start building your integration:

    Click Create data flow in Qlik Analytics
  3. Next, create the data flow within your shared space:

    • Name your data flow.

    • Select your shared space.

    • Click the Create button.

    Create data flow in shared space in Qlik Cloud
  4. Now that the data flow is created, open the editor and browse your available connections:

    • Select the Editor tab.

    • Click the Browse connections button.

    Open connection browser in Qlik data flow editor
  5. Select the gateway-backed ODBC connection we set up earlier:

    • Choose your gateway connection.

    • Click Next.

    Select gateway connection in Qlik data flow
  6. Select your source tables and the specific fields you want to import:

    • Select one or multiple tables.

    • Uncheck any unwanted columns (optional).

    • Click Finish.

    Select source tables in Qlik data flow
  7. Map your source and destination nodes to define the flow:

    • Connect the source node to the destination node.

    • Click Run flow to start the integration process.

    Connect source to destination in Qlik data flow
  8. Wait for the successful completion message to appear:

    Qlik data flow completed successfully
  9. Your Apache Spark data integration is now complete!

Optional: Centralized data access via ZappySys Data Gateway

In some situations, you may need to provide Apache Spark data access to multiple users or services. Configuring the data source on a Data Gateway creates a single, centralized connection point for this purpose.

This configuration provides two primary advantages:

  • Centralized data access
    The data source is configured once on the gateway, eliminating the need to set it up individually on each user's machine or application. This significantly simplifies the management process.
  • Centralized access control
    Since all connections route through the gateway, access can be governed or revoked from a single location for all users.
Here is how the Data Gateway compares to a local ODBC data source:

                         Data Gateway         Local ODBC data source
  Simple configuration   —                    Yes
  Installation           Single machine       Per machine
  Connectivity           Local and remote     Local only
  Connections limit      Limited by license   Unlimited
  Central data access    Yes                  —
  Central access control Yes                  —
  More flexible cost     Yes                  —

To achieve this, you must first create a data source in the Data Gateway (server-side) and then create an ODBC data source in Qlik Cloud (client-side) to connect to it.

Let's not wait and get going!

Create Apache Spark data source in the gateway

In this section we will create a data source for Apache Spark in the Data Gateway. Let's follow these steps to accomplish that:

  1. Search for gateway in the Windows Start Menu and open ZappySys Data Gateway Configuration:

    Open ZappySys Data Gateway Service Manager
  2. Go to the Users tab and follow these steps to add a Data Gateway user:

    • Click the Add button
    • In the Login field enter a username, e.g., john
    • Then enter a Password
    • Check the Is Administrator checkbox
    • Click OK to save
    Data Gateway - Add User
  3. Now we are ready to add a data source:

    • Click the Add button
    • Give the Data source a name (have it handy for later)
    • Then select Native - ZappySys JDBC Bridge Driver
    • Finally, click OK
    Data Gateway - Add data source
  4. When the ZappySys JDBC Bridge Driver configuration window opens, go back to the ODBC Data Source Administrator, where you already have the Apache Spark ODBC data source created and configured, and follow these steps to import its configuration into the Gateway:

    • Open ODBC data source configuration and click Copy settings:
      Copy connection string for ODBC application
    • A window opens, confirming that the connection string was successfully copied to the clipboard:

      Successful connection string copying for ODBC application
    • Then go to Data Gateway configuration and in data source configuration window click Load settings:

      Load configuration in ZappySys Data Gateway data source
    • Once the window opens, paste the settings by pressing CTRL+V, or right-click and select the Paste option.
  5. Once done, go to the Network Settings tab and Add a firewall rule for inbound traffic:

    Data Gateway - Add firewall rule for inbound connections
    • This will initially allow all inbound traffic.
    • Click Edit IP filters to restrict access to specific IP addresses or ranges.
  6. Crucial Step: After creating or modifying the data source, you must:

    • Click the Save button to persist your changes.
    • Hit Yes when prompted to restart the Data Gateway service.

    This ensures all changes are properly applied:

    ZappySys Data Gateway - Save Changes
    Skipping this step may prevent the new settings from being applied, and you may not be able to connect to the data source.

Create ODBC data source to connect to the gateway

In this part we will create an ODBC data source to connect to the ZappySys Data Gateway from Qlik Cloud. To achieve that, let's perform these steps:

  1. Search for odbc and open the ODBC Data Sources (64-bit):

    Open ODBC Data Source
  2. Create a User data source (User DSN) based on the ODBC Driver 17 for SQL Server driver:

    Create new User DSN for ODBC Driver 17 for SQL Server
    If you don't see the ODBC Driver 17 for SQL Server driver in the list, choose a similar version.
  3. Then set a Name for the data source (e.g., ZappySysGatewayDSN) and enter the address of the Data Gateway (e.g., localhost,5000):

    ODBC driver for SQL Server - Setting hostname and port
    Make sure you separate the hostname and port with a comma, e.g. localhost,5000.
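Some client applications accept a DSN-less connection string instead of a DSN. Purely to illustrate the comma-separated host and port, here is a sketch (the `gateway_conn_str` helper is our own, assuming the gateway address and the user john from the earlier steps):

```python
def gateway_conn_str(host: str, port: int, user: str,
                     password: str, database: str) -> str:
    """Build a DSN-less ODBC connection string for the SQL Server driver.

    Note: the hostname and port are separated by a comma, not a colon.
    """
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={host},{port};"
        f"DATABASE={database};"
        f"UID={user};PWD={password};"
        "TrustServerCertificate=yes;"  # the gateway's certificate is self-signed
    )

print(gateway_conn_str("localhost", 5000, "john", "secret", "ApacheSparkDSN"))
```

The `Database` value is the data source name you created in the Data Gateway, and `TrustServerCertificate=yes` mirrors the Trust server certificate checkbox from the steps below.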
  4. Proceed with the authentication part:

    • Select SQL Server authentication
    • In the Login ID field enter the user name you created in the Data Gateway, e.g., john
    • Set Password to the one you configured in the Data Gateway
    ODBC driver for SQL Server - Selecting SQL Authentication
  5. Then set the default database property to ApacheSparkDSN (the one we used in the Data Gateway):

    ODBC driver for SQL Server - Selecting database
    Make sure to type the data source name manually or copy/paste it directly into the field. Using the dropdown might fail because the Trust server certificate option is not enabled yet (next step).
  6. Continue by checking the Trust server certificate option:

    ODBC driver for SQL Server - Trusting certificate
  7. Once you do that, test the connection:

    ODBC driver for SQL Server - Testing connection
  8. If the connection is successful, everything is good:

    ODBC driver for SQL Server - Testing connection succeeded
  9. Done!

We are ready to move to the final step. Let's do it!

Access data in Qlik Cloud via the gateway

Finally, we are ready to read data from Apache Spark in Qlik Cloud via the Data Gateway. Follow these final steps:

  1. Go back to Qlik Cloud.

  2. Navigate to your My Qlik tenants page, choose your tenant, select the Analytics tile, and click the Create menu item:

    Open Analytics catalog and go to Create section in Qlik Cloud
  3. Click the Data connection tile:

    Click Create connection in Qlik Analytics
  4. Next, let's create the ODBC data connection by selecting your shared space and finding the right connector:

    • Pick your shared space.

    • Search for odbc.

    • Choose the ODBC (via Direct Access gateway) option.

    Select ODBC connector via Qlik data gateway
  5. Finally, configure the ODBC connection fields to wrap up the setup:

    • Select your Data gateway.

    • Select the System DSN option.

    • Choose the ODBC source (DSN), e.g., ZappySysGatewayDSN.

    • Select the DB2 SQL syntax.

    • Name the connection.

    • Click Test connection, then click Create.

    Configure and create ODBC connection in Qlik Cloud
  6. Read the data the same way we discussed at the beginning of this article.

  7. That's it!

Now you can connect to Apache Spark data in Qlik Cloud via the ZappySys Data Gateway.

If you are asked for authentication details, use the Database authentication, SQL authentication, or Basic authentication option and enter the credentials you used when configuring the Data Gateway (e.g., john and your password).

Troubleshooters & resources (JDBC Bridge Driver)

Visit our Community for useful articles to help you troubleshoot and configure the ZappySys JDBC Bridge Driver.

Conclusion

In this article, we showed you how to connect to Apache Spark in Qlik Cloud and integrate data without writing complex code, all powered by the Apache Spark ODBC Driver. It's worth noting that the ZappySys JDBC Bridge Driver allows you to connect not only to Apache Spark, but to any Java application that supports JDBC (just use a different JDBC driver and configure it appropriately).

Download ODBC PowerPack now or ping us via chat if you have any questions or are looking for a specific feature (you can also reach out to us by submitting a ticket):

Explore Qlik Cloud connectors

All
Big Data & NoSQL
Database
CRM & ERP
Marketing
Collaboration
Cloud Storage
Reporting
Commerce
API & Files

More Apache Spark integrations

All
Data Integration
Database
BI & Reporting
Productivity
Programming Languages
Automation & Scripting
ODBC applications