Google BigQuery Connector for Informatica

In this article you will learn how to integrate Google BigQuery data in Informatica without coding, in just a few clicks (live / bi-directional connection to Google BigQuery). Read and write Google BigQuery data inside your app, without coding, using an easy-to-use, high-performance API connector.

Using Google BigQuery Connector you will be able to connect, read, and write data from within Informatica. Follow the steps below to see how we would accomplish that.

Download Documentation

NOTE: If you need to consume an API which is not listed on the connector library page, then please refer to the article links below. They explain how to read / write pretty much any API, not just the Google BigQuery API, and cover various API tips / tricks using our other Universal Drivers not mentioned in this article (i.e. ZappySys JSON / XML and CSV Drivers).
How to read API data in Informatica (Call JSON / XML SOAP Service)
How to write data to API (POST) in Informatica (Call JSON / XML SOAP Service)

Introduction

JSON / REST API is becoming more and more popular each day as everyone embraces cloud-centric services. This article is primarily focused on Informatica users who want to access Google BigQuery data, or perhaps other API integrations, in Informatica. However, many tips and techniques described in this article will help you understand how to integrate Google BigQuery / XML SOAP / JSON / REST API in other ETL / reporting apps such as Tableau, Power BI, SSRS, Talend, Excel and many more.

After going through this article you will learn how to read Google BigQuery / JSON / REST API data in Informatica and understand the concepts of JSON / REST API. We will go through many screenshots and step-by-step examples to demonstrate Google BigQuery and REST API integration in Informatica PowerCenter.

XML / JSON can come from a local file or a REST API service (internal or public), so we will include both examples in this article (i.e. read JSON files in Informatica, import a REST API in Informatica). So let's get started. The next article will focus on how to write data to an API in Informatica (POST / PUT data).

Requirements

This article assumes that you have fulfilled the following basic requirements.

  1. Download and install ZappySys ODBC PowerPack (the API Driver for Google BigQuery is included)
  2. Install Informatica PowerCenter Client Tools (e.g. Workflow and Mapping Designers)
  3. Access to a relational database such as SQL Server (or any other of your choice, e.g. Oracle, MySQL, DB2). If none is available, you can use a flat file target.

High-Level Steps to Import Google BigQuery data using Informatica (Read Google BigQuery API data)

Before we dive deep into how to load Google BigQuery data in Informatica (i.e. Google BigQuery to SQL table), here is a summary of the high-level steps you need to perform to import Google BigQuery data in Informatica (the same steps apply to importing JSON / XML / REST API data).

  1. Download and Install ZappySys API Driver (for connecting to Google BigQuery)
  2. Create ODBC DSN using ZappySys API driver and choose Google BigQuery Connector during Wizard
  3. Create Relational > ODBC Connection in Informatica Workflow designer (Point to DSN we created in the previous step)
  4. Import Google BigQuery Source Definition in the Informatica Mapping Designer > Sources Tab
  5. Import Target Table Definition in the Informatica Mapping Designer > Targets Tab
  6. Create source to target mapping in Mappings tab
  7. Save mapping (name m_API_to_SQL_Load )
  8. Create Session using the mapping we created in the previous step
  9. Save Workflow and execute to load Google BigQuery data into SQL Table. Verify your data and log.
    Loading Google BigQuery data to SQL Table in Informatica (Import REST API or JSON Files)

Video Tutorial – Read any API / JSON data in Informatica (Load Google BigQuery to SQL Table)

The video below is not about the Google BigQuery API specifically, but it shows API access in general (for any API). By watching the following ~5 min video you can learn the steps listed in this article to load JSON API data into a SQL Server table using the ZappySys JSON Driver. Go through the full article to learn many useful details not covered in this video.

Getting Started – Import Google BigQuery to SQL Server in Informatica

Now let's get started. For example purposes, we will read data from Google BigQuery and load it into a SQL Server table using an Informatica workflow.

Create ODBC Data Source (DSN) based on ZappySys API Driver

Step-by-step instructions

To get data from Google BigQuery using Informatica we first need to create a DSN (Data Source) which will access data from Google BigQuery. We will later be able to read data using Informatica. Perform these steps:

  1. Install ZappySys ODBC PowerPack.

  2. Open ODBC Data Sources (x64):
    Open ODBC Data Source

  3. Create a User Data Source (User DSN) based on ZappySys API Driver

    ZappySys API Driver
    Create new User DSN for ZappySys API Driver
    You should create a System DSN (instead of a User DSN) if the client application is launched under a Windows System Account, e.g. as a Windows Service. If the client application is 32-bit (x86) running with a System DSN, use ODBC Data Sources (32-bit) instead of the 64-bit version.
  4. When the Configuration window appears give your data source a name if you haven't done that already, then select "Google BigQuery" from the list of Popular Connectors. If "Google BigQuery" is not present in the list, then click "Search Online" and download it. Then set the path to the location where you downloaded it. Finally, click Continue >> to proceed with configuring the DSN:

    GoogleBigQueryDSN
    Google BigQuery
    ODBC DSN Template Selection

  5. Now it's time to configure the Connection Manager. Select Authentication Type, e.g. Token Authentication. Then select API Base URL (in most cases, the default one is the right one). More info is available in the Authentication section.

    Steps to get Google BigQuery Credentials
    This connection can be configured in two ways: use the default app (created by ZappySys) OR use a custom app created by you.
    To start with minimal settings, use the ZappySys-created app. Just set UseCustomApp=false on the properties grid so you don't need a ClientID / Secret. When you click Generate Token you might see a warning that the app is not trusted (simply click the Advanced link to expand the hidden section and then click the Go to App link to proceed).

    To register a custom app, perform the following steps (detailed steps can be found in the help link at the end)

    1. Go to Google API Console
    2. From the Project Dropdown (usually found at the top bar) click Select Project
    3. On the Project popup click CREATE PROJECT
    4. Once the project is created, click Select Project to switch the context (you can click on the Notification link or choose it from the top dropdown)
    5. Click ENABLE APIS AND SERVICES
    6. Now we need to Enable two APIs one by one (BigQuery API and Cloud Resource Manager API).
    7. Search BigQuery API. Select and click ENABLE
    8. Search Cloud Resource Manager API. Select and click ENABLE
    9. Go back to the main screen of the Google API Console
    10. Click OAuth consent screen Tab. Enter necessary details and Save.

      1. Choose Testing as Publishing status
      2. Set application User type to Internal, if possible
      3. If MAKE INTERNAL option is disabled, then add a user in Test users section, which you will use in authentication process when generating Access and Refresh tokens
    11. Click Credentials Tab
    12. Click CREATE CREDENTIALS (somewhere in the top bar) and select the OAuth Client ID option.
    13. When prompted, select Desktop App as the Application Type and click Create to receive your ClientID and Secret. You can use this information later to configure the connection with UseCustomApp=true.
    14. Go to the OAuth Consent Screen tab. Under Publishing Status click PUBLISH APP to ensure your refresh token doesn't expire often. If you are planning to use the app for private use then you do not have to worry about the Verification Status after publishing.

    Fill in all required parameters and set optional parameters if needed:

    GoogleBigQueryDSN
    Google BigQuery
    User Account [OAuth]
    https://www.googleapis.com/bigquery/v2
    Required Parameters
    UseCustomApp Fill in the parameter...
    ProjectId (Choose after [Generate Token] clicked) Fill in the parameter...
    DatasetId (Choose after [Generate Token] clicked and ProjectId selected) Fill in the parameter...
    Optional Parameters
    ClientId Fill in the parameter...
    ClientSecret Fill in the parameter...
    Scope Fill in the parameter...
    RetryMode Fill in the parameter...
    RetryStatusCodeList Fill in the parameter...
    RetryCountMax Fill in the parameter...
    RetryMultiplyWaitTime Fill in the parameter...
    Job Location Fill in the parameter...
    Redirect URL (Only for Web App) Fill in the parameter...
    ODBC DSN Oauth Connection Configuration
    Steps to get Google BigQuery Credentials
    Use these steps to authenticate as service account rather than Google / GSuite User. Learn more about service account here

    Basically, to call the Google API as a service account we need to perform the following steps, listed in 3 sections (detailed steps can be found in the help link at the end)

    Create Project

    The first thing to do is create a project so we can call the Google API. Skip this section if you already have a project (go to the next section).
    1. Go to Google API Console
    2. From the Project Dropdown (usually found at the top bar) click Select Project
    3. On the Project popup click CREATE PROJECT
    4. Once the project is created, click Select Project to switch the context (you can click on the Notification link or choose it from the top dropdown)
    5. Click ENABLE APIS AND SERVICES
    6. Now we need to Enable two APIs one by one (BigQuery API and Cloud Resource Manager API).
    7. Search BigQuery API. Select and click ENABLE
    8. Search Cloud Resource Manager API. Select and click ENABLE

    Create Service Account

    Once the project is created and the APIs are enabled, we can now create a service account under that project. A service account has an ID that looks like an email address (not to be confused with a Google / Gmail email ID).
    1. Go to Create Service Account
    2. From the Project Dropdown (usually found at the top bar) click Select Project
    3. Enter Service account name and Service account description
    4. Click on Create. Now you should see an option to assign Service Account permissions (See Next Section).

    Give Permission to Service Account

    By default a service account can't access BigQuery data or list BigQuery projects, so we need to grant those permissions using the steps below.
    1. After you Create Service Account look for Permission drop down in the Wizard.
    2. Choose BigQuery -> BigQuery Admin role so we can read/write data. (NOTE: If you just need read only access then you can choose BigQuery Data Viewer)
    3. Now choose one more Project -> Viewer and add that role so we can query Project Ids.
    4. Click on Continue. Now you should see an option to Create Key (See Next Section).

    Create Key (P12)

    Once the service account is created and permission is assigned, we need to create a key file.
    1. In the Cloud Console, click the email address for the service account that you created.
    2. Click Keys.
    3. Click Add key, then click Create new key.
    4. Click Create and select P12 format. A P12 key file is downloaded to your computer. We will use this file in our API connection.
    5. Click Close.
    6. Now you may use downloaded *.p12 key file as secret file and Service Account Email as Client ID (e.g. some_name@some_name.iam.gserviceaccount.com).

    Manage Permissions / Give Access to Other Projects

    We saw how to add permissions for the service account during the account creation wizard, but if you ever wish to edit them after the account is created, or you wish to give permissions for other projects, then perform the following steps.
    1. From the top Select Project for which you like to edit Permission.
    2. Go to IAM Menu option (here)
      Link to IAM: https://console.cloud.google.com/iam-admin/iam
    3. Go to the Permissions tab. There you will find the ADD button.
    4. Enter the service account email for which you would like to grant permission and select the role you wish to assign.

    Fill in all required parameters and set optional parameters if needed:

    GoogleBigQueryDSN
    Google BigQuery
    Service Account (Using Private Key File) [OAuth]
    https://www.googleapis.com/bigquery/v2
    Required Parameters
    Service Account Email Fill in the parameter...
    P12 Service Account Private Key Path (i.e. *.p12) Fill in the parameter...
    ProjectId Fill in the parameter...
    DatasetId (Choose after ProjectId) Fill in the parameter...
    Optional Parameters
    Scope Fill in the parameter...
    RetryMode Fill in the parameter...
    RetryStatusCodeList Fill in the parameter...
    RetryCountMax Fill in the parameter...
    RetryMultiplyWaitTime Fill in the parameter...
    Job Location Fill in the parameter...
    ODBC DSN Oauth Connection Configuration

  6. Once the data source has been configured, you can preview data. Select the Preview tab and use settings similar to the following to preview data (a sample query sketch follows this list):
    ODBC ZappySys Data Source Preview

  7. Click OK to finish creating the data source.
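
Quick sanity check (optional): in the Preview tab you can run a simple query against the new DSN to confirm everything works. Here is a minimal sketch based on the endpoints and examples documented later in this article (the wikipedia table is a free public sample; your own project / dataset names will differ):

-- List tables available in the configured project / dataset
SELECT * FROM list_tables

-- Or run a server-side (pass-through) query against a public sample table
#DirectSQL SELECT title, id FROM bigquery-public-data.samples.wikipedia LIMIT 10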

Video instructions

Create Connection in Informatica Workflow Designer

Once you create the DSN using the API Driver, our next step is to define a connection for the Google BigQuery source in the Informatica PowerCenter Workflow Designer.

  1. Open the Workflow Designer (click the [W] icon)
  2. Go to Connections > Relational
    Create a new connection for Google BigQuery in Informatica

  3. Click New and select ODBC
    Select ODBC connection type in Informatica (Using ZappySys API ODBC DSN)

  4. Now on the ODBC connection setup enter a connection name and some fake user ID / password (these are required fields but they are ignored by the ZappySys driver)
  5. In the Connection String field enter the exact same name as the DSN (open the ODBC Data Sources UI to confirm it)
    Configure Google BigQuery connection in Informatica for REST API – Using ZappySys API Driver

  6. Click OK to close the connection properties.

That's it. Now we are ready to move to the next step (define the source and target in the Mapping Designer).

Import Google BigQuery Source Definition in Informatica Mapping Designer

Now let’s look at steps to import Google BigQuery table definition.

  1. Open Informatica Mapping Designer (Click [D] icon)
  2. Click on Source Icon to switch to Sources designer
  3. From the top menu > Click on Sources > Import from Database
    Import Google BigQuery Source definition in Informatica Mapping Designer (JSON file or REST API)

  4. Select the ODBC data source from the dropdown (find the DSN we created earlier to use as the Google BigQuery source)
  5. Click the Connect button to get a list of tables. Any array node is listed as a table. You will also see array nodes with parent columns (e.g. value_with_parent). You may get some warnings like the ones below, but they are harmless, so just ignore them by clicking OK.
    DLL name entry missing from C:\Informatica\PowerCenter8.6.1\client\bin\powrmart.ini Section = ODBCDLL Entry = ZappySys JSON Driver
    —————————————————-
    Using EXTODBC.DLL to support ZappySys JSON Driver. For native support of ZappySys JSON Driver make an entry in the .ini file.
    Select Google BigQuery Source Table in Informatica Mapping Designer (JSON file or REST API)

  6. Select the table you wish to get (you can filter rows with a custom SQL query; a sketch follows this list, and we will cover this in more detail later in this article)
  7. Optionally once table structure is imported you can rename it
    Rename imported table definition in Informatica Source Designer

  8. That’s it, we are now ready to perform similar steps to import Target table structure in the next section.
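
As referenced in step 6 above, here is a minimal sketch of the kind of custom SQL you could use instead of importing a whole table. The table and column names (Orders, ShipCountry, OrderDate) are placeholders for illustration; the WHERE-clause form follows the virtual table example later in this article, and the #DirectSQL form pushes the filter down to BigQuery itself:

SELECT * FROM "Orders" WHERE "ShipCountry" = 'USA'

--Or push a filter down to BigQuery (pass-through SQL)
#DirectSQL SELECT * FROM MyProject.MyDataset.Orders WHERE OrderDate >= '2020-01-01'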

Import SQL Server Target Definition in Informatica Mapping Designer

Now let's look at the steps to import the target table definition (very similar to the previous section; the only difference is that this time we will select the DSN which points to SQL Server or any other target server).

  1. In the Mapping Designer, Click on Target Icon to switch to Target designer
  2. From the top menu > Click on Targets > Import from Database
  3. Select the DSN for your target server (if a DSN doesn't exist, create one by opening ODBC Data Sources, just like we created one for the Google BigQuery source; see the previous section about creating a DSN).
    Import target Table Definition in informatica

  4. Enter your user ID, password and schema name and click Connect to see the tables
  5. Select the table name and click OK to import the definition.
    Import Target SQL Table Definition in Informatica – Select table from the list

Create Source to Target Mapping in Informatica (Import Google BigQuery to SQL Server)

Once you have imported the source and target table definitions, we can create the mapping and transformation to load data from Google BigQuery to a SQL table.

  1. First open Mapping Designer (Click [D] icon)
  2. Drag the Google BigQuery source from the Sources folder
  3. Drag SQL Table from Targets folder
  4. Map desired columns from Source to target
    Define Source to Target mapping for Google BigQuery to SQL Table load in Informatica

  5. For certain columns you may have to do a datatype conversion. For example, to convert OrderDate from nstring to DateTime you have to use an Expression transformation like the one below and map it to the target. In the example below, our data has a date format such as 2018-01-31 12:00:00 AM. To import this into a DateTime field in SQL Server we need to convert it using the TO_DATE function. Use double quotes around the T to make the ISO format work.
    TO_DATE(OrderDate,'YYYY-MM-DD HH12:MI:SS AM')

    --For ISO formatted dates use the following instead
    TO_DATE(OrderDate,'YYYY-MM-DD"T"HH24:MI:SS')
    Informatica Google BigQuery to SQL Table Mapping – Datatype conversion (nstring to datetime)

  6. Once you are done with the mapping, save it and give it a name (e.g. m_Api_To_SQL)
  7. Now let's move to the next section to create the workflow.

Create Workflow and Session in Informatica

Now the final step is to create a new workflow. Perform the following steps to create a workflow with a session task to import Google BigQuery data into a SQL table.

  1. Open the Workflow Designer by clicking the [W] icon.
  2. Launch the new workflow creation wizard from the top menu (Workflows > Wizard) and name your workflow (e.g. wf_Api_To_Sql_Table_Import)

    Creating Informatica Workflow – Wizard UI (Import Google BigQuery data to SQL Table)

  3. Finish the wizard and double-click the Session to edit some default properties.
  4. First change the Error settings so the session fails on error (by default it always shows green)
    Fail Informatica Session on error (Google BigQuery data to SQL Load)

  5. Select the Google BigQuery connection for the Source
    Select Google BigQuery Source Connection in Informatica – Load Google BigQuery data to SQL Table

  6. Change the default Source query if needed. You can pass parameters to this query to make it dynamic (see the sketch after this list).
    Modify Google BigQuery Source SQL query – Pass parameters, change URL, set filter etc

  7. Select the Target connection for the SQL target table
    Select SQL Target Connection in Informatica – Load Google BigQuery data to SQL Table

  8. Save workflow
  9. That's it. We are ready to run our first workflow to load Google BigQuery data to SQL.
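
As mentioned in step 6 above, here is a minimal sketch of a parameterized source query. It assumes a mapping parameter named $$START_DATE declared in your mapping and supplied via the parameter file; the parameter, table and column names are hypothetical, so adjust them to your own mapping:

-- Hypothetical SQL override for the source; $$START_DATE is expanded by Informatica at run time
SELECT OrderID, CustomerID, OrderDate
FROM "Orders"
WHERE OrderDate >= '$$START_DATE'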

Execute Workflow and Validate Log in Informatica

Now once you are done with your workflow, execute it to see the log.

Loading Google BigQuery data to SQL Table in Informatica (Import REST API or JSON Files)

 

POST data to Google BigQuery in Informatica

There will be times when you would like to send source data to a REST API or SOAP web service. You can use a query like the example below. For a detailed explanation of how to POST data in Informatica, check this article.
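
As a minimal sketch (the dataset, table and column names below are placeholders taken from the query examples later in this article), an INSERT query like the following can be used with the same ZappySys driver to push data into Google BigQuery:

INSERT INTO MyBQTable1(SomeBQCol1, SomeBQCol2) VALUES(1,'AAA')
--Optionally specify the target dataset / project:
--WITH(DatasetId='TestDataset',ProjectId='MyProjectId',Output='*')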

Video Tutorial – How to POST data to REST API in Informatica

Here is a detailed step-by-step video on REST API POST in Informatica PowerCenter

 

Keywords

how to import Google BigQuery in informatica | how to read Google BigQuery data in informatica powercenter | how to test json from informatica | how to use Google BigQuery data as source in informatica power center | how to connect Google BigQuery in informatica 10 | informatica how to import data from Google BigQuery | informatica jtx to import Google BigQuery (use of java transformation) | informatica plugin for restful api using json | informatica power center and Google BigQuery support | informatica read Google BigQuery | informatica rest api | informatica Google BigQuery connector | json parser import informatica

Advanced topics

Create Custom Stored Procedure in ZappySys Driver

You can create procedures to encapsulate custom logic and then pass only a handful of parameters, rather than a long SQL statement, to execute your API call.

Here are the steps to create a Custom Stored Procedure in the ZappySys Driver. You can insert Placeholders anywhere inside the Procedure Body. Read more about placeholders here.

  1. Go to Custom Objects Tab and Click on Add button and Select Add Procedure:
    ZappySys Driver - Add Stored Procedure

  2. Enter the desired Procedure name and click on OK:
    ZappySys Driver - Add Stored Procedure Name

  3. Select the created Stored Procedure, write your desired stored procedure body and save it; this will create the custom stored procedure in the ZappySys Driver:
    Here is an example stored procedure for the ZappySys Driver. You can insert Placeholders anywhere inside the Procedure Body. Read more about placeholders here

    CREATE PROCEDURE [usp_get_orders]
        @fromdate = '<<yyyy-MM-dd,FUN_TODAY>>'
     AS
        SELECT * FROM Orders where OrderDate >= '<@fromdate>';
    

    ZappySys Driver - Create Custom Stored Procedure

  4. That's it, now go to the Preview Tab and execute your Stored Procedure using the EXEC command. In this example it will extract the orders from the date 1996-01-01:

    Exec usp_get_orders '1996-01-01';

    ZappySys Driver - Execute Custom Stored Procedure

  5. Let's generate the SQL Server query code to make the API call using the stored procedure. Go to the Code Generator Tab, select SQL Server as the language and click the Generate button to generate the code.
    Since we already created a linked server for this Data Source, you just need to copy the SELECT query and use your linked server name in place of the [MY_API_SERVICE] placeholder.

    SELECT * FROM OPENQUERY([MY_API_SERVICE], 'EXEC usp_get_orders @fromdate=''1996-07-30''')

    ZappySys Driver - Generate SQL Server Query

  6. Now go to SQL Server and execute that query; it will make the API call using the stored procedure and return the response.
    ZappySys Driver - Generate SQL Server Query

Create Custom Virtual Table in ZappySys Driver

ZappySys API Drivers support a flexible query language, so you can override default properties you configured on the Data Source, such as URL and Body. This way you don't have to create multiple Data Sources if you would like to read data from multiple endpoints. However, not every application supports supplying custom SQL to the driver; in those applications you can only select a table from the list returned by the driver.

If you're dealing with Microsoft Access and need to import data from an SQL query, it's important to note that Access doesn't allow direct import of SQL queries. Instead, you can create custom objects (Virtual Tables) to handle the import process.

Many applications like MS Access and Informatica Designer won't give you the option to specify custom SQL when you import objects. In such cases a Virtual Table is very useful. You can create many Virtual Tables on the same Data Source (e.g. if you have 50 URLs with slight variations, you can create virtual tables with just the URL as a parameter setting).

  1. Go to Custom Objects Tab and Click on Add button and Select Add Table:
    ZappySys Driver - Add Table

  2. Enter the desired Table name and click on OK:
    ZappySys Driver - Add Table Name

  3. It will open the New Query window; click Cancel to close that window and go to the Custom Objects Tab.

  4. Select the created table, select Text Type AS SQL, write your desired SQL query and save it; this will create the custom table in the ZappySys Driver:
    Here is an example SQL query for the ZappySys Driver. You can also insert Placeholders. Read more about placeholders here

    SELECT
      "ShipCountry",
      "OrderID",
      "CustomerID",
      "EmployeeID",
      "OrderDate",
      "RequiredDate",
      "ShippedDate",
      "ShipVia",
      "Freight",
      "ShipName",
      "ShipAddress",
      "ShipCity",
      "ShipRegion",
      "ShipPostalCode"
    FROM "Orders"
    Where "ShipCountry"='USA'

    ZappySys Driver - Create Custom Table

  5. That's it, now go to the Preview Tab and execute your custom virtual table query. In this example it will extract the orders for the USA shipping country only:

    SELECT * FROM "vt__usa_orders_only"

    ZappySys Driver - Execute Custom Virtual Table Query

  6. Let's generate the SQL Server query code to make the API call using the virtual table. Go to the Code Generator Tab, select SQL Server as the language and click the Generate button to generate the code.
    Since we already created a linked server for this Data Source, you just need to copy the SELECT query and use your linked server name in place of the [MY_API_SERVICE] placeholder.

    SELECT * FROM OPENQUERY([MY_API_SERVICE], 'SELECT * FROM "vt__usa_orders_only"')

    ZappySys Driver - Generate SQL Server Query

  7. Now go to SQL Server and execute that query; it will make the API call using the virtual table and return the response.
    ZappySys Driver - Generate SQL Server Query

Actions supported by Google BigQuery Connector

The Google BigQuery Connector supports the following actions for REST API integration. If an action you need is not listed below, you can easily edit the Connector file and enhance the out-of-the-box functionality.
 Read Data using SQL Query -OR- Execute Script (i.e. CREATE, SELECT, INSERT, UPDATE, DELETE)
Runs a BigQuery SQL query synchronously and returns query results if the query completes within a specified timeout    [Read more...]
Parameter Description
SQL Statement (i.e. SELECT / DROP / CREATE)
Option Value
Example1 SELECT title,id,language,wp_namespace,reversion_id ,comment,num_characters FROM bigquery-public-data.samples.wikipedia LIMIT 1000
Example2 CREATE TABLE TestDataset.Table1 (ID INT64,Name STRING,BirthDate DATETIME, Active BOOL)
Example3 INSERT TestDataset.Table1 (ID, Name,BirthDate,Active) VALUES(1,'AA','2020-01-01',true),(2,'BB','2020-01-02',true),(3,'CC','2020-01-03',false)
Use Legacy SQL Syntax?
Option Value
false false
true true
timeout (Milliseconds) Wait until timeout is reached.
Option Value
false false
true true
Job Location The geographic location where the job should run. For Non-EU and Non-US datacenters we suggest you to supply this parameter to avoid any error.
Option Value
System Default
Data centers in the United States US
Data centers in the European Union EU
Columbus, Ohio us-east5
Iowa us-central1
Las Vegas us-west4
Los Angeles us-west2
Montréal northamerica-northeast1
Northern Virginia us-east4
Oregon us-west1
Salt Lake City us-west3
São Paulo southamerica-east1
Santiago southamerica-west1
South Carolina us-east1
Toronto northamerica-northeast2
Delhi asia-south2
Hong Kong asia-east2
Jakarta asia-southeast2
Melbourne australia-southeast2
Mumbai asia-south1
Osaka asia-northeast2
Seoul asia-northeast3
Singapore asia-southeast1
Sydney australia-southeast1
Taiwan asia-east1
Tokyo asia-northeast1
Belgium europe-west1
Finland europe-north1
Frankfurt europe-west3
London europe-west2
Madrid europe-southwest1
Milan europe-west8
Netherlands europe-west4
Paris europe-west9
Warsaw europe-central2
Zürich europe-west6
AWS - US East (N. Virginia) aws-us-east-1
Azure - East US 2 azure-eastus2
Custom Name (Type your own) type-region-id-here
 Read Table Rows
Reads rows of data from the specified table (identified by project, dataset and table ID).    [Read more...]
Parameter Description
ProjectId Leave this value blank to use ProjectId from connection settings
DatasetId Leave this value blank to use DatasetId from connection settings
TableId
 [$parent.tableReference.datasetId$].[$parent.tableReference.tableId$]
Read data from [$parent.tableReference.datasetId$].[$parent.tableReference.tableId$] for project .    [Read more...]
Parameter Description
 List Projects
Lists Projects that the caller has permission on and satisfy the specified filter.    [Read more...]
Parameter Description
SearchFilter An expression for filtering the results of the request. Filter rules are case insensitive. If multiple fields are included in a filter query, the query will return results that match any of the fields. Some eligible fields for filtering are: name, id, labels.{key} (where key is the name of a label), parent.type, parent.id, lifecycleState. Example: name:how*
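
For example, here is a minimal sketch of calling this endpoint with a filter. The commented WITH attribute mirrors the SearchFilter parameter name shown above and is an assumption; the plain SELECT form appears in the query examples section later in this article:

SELECT * FROM list_projects
--WITH(SearchFilter='name:how*')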
 List Datasets
Lists all BigQuery datasets in the specified project to which the user has been granted the READER dataset role.    [Read more...]
Parameter Description
ProjectId
SearchFilter An expression for filtering the results of the request. Filter rules are case insensitive. If multiple fields are included in a filter query, the query will return results that match any of the fields. Some eligible fields for filtering are: name, id, labels.{key} (where key is the name of a label), parent.type, parent.id, lifecycleState. Example: name:how*
all Whether to list all datasets, including hidden ones
Option Value
True True
False False
 Create Dataset
Creates a new empty dataset.    [Read more...]
Parameter Description
ProjectId
Dataset Name Enter dataset name
Description
 Delete Dataset
Deletes the dataset specified by the datasetId value. Before you can delete a dataset, you must delete all its tables, either manually or by specifying deleteContents. Immediately after deletion, you can create another dataset with the same name.    [Read more...]
Parameter Description
ProjectId
DatasetId
Delete All Tables If True, delete all the tables in the dataset. If False and the dataset contains tables, the request will fail. Default is False
Option Value
True True
False False
 Delete Table
Deletes the table specified by the tableId value. If the table contains data, all the data will be deleted.    [Read more...]
Parameter Description
ProjectId
DatasetId
TableId
 List Tables
Lists BigQuery Tables for the specified project / dataset to which the user has been granted the READER dataset role.    [Read more...]
Parameter Description
ProjectId
DatasetId
 Get Query Schema (From SQL)
Runs a BigQuery SQL query synchronously and returns query schema    [Read more...]
Parameter Description
SQL Query
Filter
Use Legacy SQL Syntax?
Option Value
false false
true true
timeout (Milliseconds) Wait until timeout is reached.
Option Value
false false
true true
Job Location The geographic location where the job should run. For Non-EU and Non-US datacenters we suggest you to supply this parameter to avoid any error.
Option Value
System Default
Data centers in the United States US
Data centers in the European Union EU
Columbus, Ohio us-east5
Iowa us-central1
Las Vegas us-west4
Los Angeles us-west2
Montréal northamerica-northeast1
Northern Virginia us-east4
Oregon us-west1
Salt Lake City us-west3
São Paulo southamerica-east1
Santiago southamerica-west1
South Carolina us-east1
Toronto northamerica-northeast2
Delhi asia-south2
Hong Kong asia-east2
Jakarta asia-southeast2
Melbourne australia-southeast2
Mumbai asia-south1
Osaka asia-northeast2
Seoul asia-northeast3
Singapore asia-southeast1
Sydney australia-southeast1
Taiwan asia-east1
Tokyo asia-northeast1
Belgium europe-west1
Finland europe-north1
Frankfurt europe-west3
London europe-west2
Madrid europe-southwest1
Milan europe-west8
Netherlands europe-west4
Paris europe-west9
Warsaw europe-central2
Zürich europe-west6
AWS - US East (N. Virginia) aws-us-east-1
Azure - East US 2 azure-eastus2
Custom Name (Type your own) type-region-id-here
 Get Table Schema
Gets the specified table resource by table ID. This method does not return the data in the table, it only returns the table resource, which describes the structure of this table.    [Read more...]
Parameter Description
DatasetId
TableId
Filter
 insert_table_data
   [Read more...]
Parameter Description
ProjectId
DatasetId
TableId
 post_[$parent.tableReference.datasetId$]_[$parent.tableReference.tableId$]
   [Read more...]
 Generic Request
This is generic endpoint. Use this endpoint when some actions are not implemented by connector. Just enter partial URL (Required), Body, Method, Header etc. Most parameters are optional except URL.    [Read more...]
Parameter Description
Url API URL goes here. You can enter full URL or Partial URL relative to Base URL. If it is full URL then domain name must be part of ServiceURL or part of TrustedDomains
Body Request Body content goes here
IsMultiPart Set this option if you want to upload file(s) (i.e. POST RAW file data) or send data using the Multi-Part encoding method (i.e. Content-Type: multipart/form-data). A Multi-Part request allows you to mix key/value pairs and upload files in the same request; raw upload, on the other hand, allows only a single file upload (without any key/value pairs).
==== Raw Upload (Content-Type: application/octet-stream) ====
To upload a single file in raw mode, check this option and specify the full file path starting with an @ sign in the Body (e.g. @c:\data\myfile.zip ).
==== Form-Data / Multipart Upload (Content-Type: multipart/form-data) ====
To treat your request data as multi-part fields you must specify key/value pairs separated by new lines in the RequestData field (i.e. Body). Each key/value pair is entered on a new line, and key and value are separated by an equal sign (=). Preceding and trailing spaces are ignored, and blank lines are ignored. If a field value contains any special character(s), use an escape sequence (e.g. for NewLine: \r\n, for Tab: \t, for at (@): \@). When the value of a field starts with an at sign (@) it is automatically treated as a file you want to upload. By default the file content type is determined from the extension, however you can supply the content type manually for any field like this: [ YourFileFieldName.Content-Type=some-content-type ]. By default a file upload field always includes Content-Type in the request (non-file fields do not have a content-type by default unless you supply it manually). If for some reason you don't want to send a Content-Type header in your request, then supply a blank Content-Type to exclude the header altogether [e.g. SomeFieldName.Content-Type= ]. In the example below we have supplied Content-Type for file2 and SomeField1; all other fields use the default content type. If some API requires you to pass Content-Type: multipart/mixed rather than multipart/form-data, then manually set Request Header => Content-Type: multipart/mixed (it must start with multipart/ or it will be ignored). Example of uploading multiple files along with additional fields:
file1=@c:\data\Myfile1.txt
file2=@c:\data\Myfile2.json
file2.Content-Type=application/json
SomeField1=aaaaaaa
SomeField1.Content-Type=text/plain
SomeField2=12345
SomeFieldWithNewLineAndTab=This is line1\r\nThis is line2\r\nThis is \ttab \ttab \ttab
SomeFieldStartingWithAtSign=\@MyTwitterHandle
Filter Enter filter to extract array from response. Example: $.rows[*] --OR-- $.customers[*].orders[*]. Check your response document and find out hierarchy you like to extract
Headers Headers for Request. To enter multiple headers use double pipe or new line after each {header-name}:{value} pair

Google BigQuery Connector Examples for Informatica Connection

This page offers a collection of SQL examples designed for seamless integration with the ZappySys API ODBC Driver under ODBC Data Source (32/64 bit) or ZappySys Data Gateway, enhancing your ability to connect and interact with Prebuilt Connectors effectively.

Native Query (ServerSide): Query using Simple SQL    [Read more...]

Server side BigQuery SQL query example. Prefix SQL with word #DirectSQL to invoke server side engine (Pass-through SQL). Query free dataset table (bigquery-public-data.samples.wikipedia)

#DirectSQL SELECT * FROM bigquery-public-data.samples.wikipedia LIMIT 1000 /* try your own dataset or Some FREE dataset like nyc-tlc.yellow.trips -- 3 parts ([Project.]Dataset.Table) */

Native Query (ServerSide): Query using Complex SQL    [Read more...]

Server side SQL query example of BigQuery. Prefix SQL with word #DirectSQL to invoke server side engine (Pass-through SQL). Query free dataset table (bigquery-public-data.usa_names.usa_1910_2013)

#DirectSQL 
SELECT name, gender, SUM(number) AS total
FROM bigquery-public-data.usa_names.usa_1910_2013
GROUP BY name, gender
ORDER BY total DESC
LIMIT 10

Native Query (ServerSide): Delete Multiple Records (Call DML)    [Read more...]

This server-side SQL query example for BigQuery shows how to invoke a DELETE statement. To do that, prefix the SQL with the word #DirectSQL to invoke the server-side engine (pass-through SQL).

#DirectSQL DELETE FROM TestDataset.MyTable Where Id > 5

Native Query (ServerSide): Query with CAST unix TIMESTAMP datatype column as datetime    [Read more...]

This example shows how to query a Unix timestamp column as a DateTime. E.g. 73833719.524272 should be displayed as 1972-05-04, or with milliseconds as 1972-05-04 1:21:59.524 PM. To do this, use the CAST function (you must use the #DirectSQL prefix).

#DirectSQL 
SELECT id, col_timestamp, CAST(col_timestamp as DATE) AS timestamp_as_date, CAST(col_timestamp as DATETIME) AS timestamp_as_datetime
FROM MyProject.MyDataset.MyTable
LIMIT 10

Native Query (ServerSide): Create Table / Run Other DDL    [Read more...]

Example of how to run Valid BigQuery DDL statement. Prefix SQL with word #DirectSQL to invoke server side engine (Pass-through SQL)

#DirectSQL CREATE TABLE TestDataset.Table1 (ID INT64,Name STRING,BirthDate DATETIME, Active BOOL)

Native Query (ServerSide): UPDATE Table data for complex types (e.g. Nested RECORD, Geography, JSON)    [Read more...]

Example of how to run a valid BigQuery DML statement (e.g. UPDATE / INSERT / DELETE). This use case shows how to update a record with complex data types such as RECORD (i.e. Array), Geography, JSON and more. Prefix the SQL with the word #DirectSQL to invoke the server-side engine (pass-through SQL).

#DirectSQL 
Update TestDataset.DataTypeTest 
Set ColTime='23:59:59.123456',
 ColGeography=ST_GEOGPOINT(34.150480, -84.233870),
 ColRecord=(1,"AA","Column3 data"),
 ColBigNumeric=1222222222222222222.123456789123456789123456789123456789,
 ColJson= JSON_ARRAY('{"doc":1, "values":[{"id":1},{"id":2}]}') 
Where ColInteger=1

Native Query (ServerSide): DROP Table (if exists) / Other DDL    [Read more...]

Example of how to run Valid BigQuery DDL statement. Prefix SQL with word #DirectSQL to invoke server side engine (Pass-through SQL)

#DirectSQL DROP TABLE IF EXISTS Myproject.Mydataset.Mytable

Native Query (ServerSide): Call Stored Procedure    [Read more...]

Example of how to run BigQuery Stored Procedure and pass parameters. Assuming you created a valid stored proc called usp_GetData in TestDataset, call like below.

#DirectSQL CALL TestDataset.usp_GetData(1)

INSERT Single Row    [Read more...]

This is a sample of how you can insert into BigQuery using the ZappySys query language. You can also use ProjectId='myproject-id' in the WITH clause.

INSERT INTO MyBQTable1(SomeBQCol1, SomeBQCol2) Values(1,'AAA')
--WITH(DatasetId='TestDataset',Output='*')
--WITH(DatasetId='TestDataset',ProjectId='MyProjectId',Output='*')

INSERT Multiple Rows from SQL Server    [Read more...]

This example shows how to bulk insert into a Google BigQuery table from Microsoft SQL Server as an external source. Notice that the INSERT is missing the column list; it is provided by the source query, which must produce valid column names found in the target BQ table (you can use a SQL alias in the column name to produce matching names).

INSERT INTO MyBQTable1 
SOURCE(
    'MSSQL'
  , 'Data Source=localhost;Initial Catalog=tempdb;Integrated Security=true'
  , 'SELECT Col1 as SomeBQCol1,Col2 as SomeBQCol2 FROM SomeTable Where SomeCol=123'
)
--WITH(DatasetId='TestDataset',Output='*')
--WITH(DatasetId='TestDataset',ProjectId='MyProjectId',Output='*')

INSERT Multiple Rows from any ODBC Source (DSN)    [Read more...]

This example shows how to bulk insert into a Google BigQuery table from any external ODBC source (assuming you have installed the ODBC driver and configured the DSN). Notice that the INSERT is missing the column list; it is provided by the source query, which must produce valid column names found in the target BQ table (you can use a SQL alias in the column name to produce matching names).

INSERT INTO MyBQTable1 
SOURCE(
    'ODBC'
  , 'DSN=MyDsn'
  , 'SELECT Col1 as SomeBQCol1,Col2 as SomeBQCol2 FROM SomeTable Where SomeCol=123'
) 
WITH(DatasetId='TestDataset')

INSERT Multiple Rows from any JSON Files / API (Using ZappySys ODBC JSON Driver)    [Read more...]

This example shows how to bulk insert into a Google BigQuery table from any external JSON API / file source via ODBC (assuming you have installed the ZappySys ODBC Driver for JSON). Notice that the INSERT is missing the column list; it is provided by the source query, which must produce valid column names found in the target BQ table (you can use a SQL alias in the column name to produce matching names). You can use a similar approach to read from CSV or XML files; just use the CSV / XML driver rather than the JSON driver in the connection string. Refer to this page for more examples of JSON queries: https://zappysys.com/onlinehelp/odbc-powerpack/scr/json-odbc-driver-sql-query-examples.htm

INSERT INTO MyBQTable1 
SOURCE(
    'ODBC'
  , 'Driver={ZappySys JSON Driver};Src=''https://some-url/get-data'''
  , 'SELECT Col1 as SomeBQCol1,Col2 as SomeBQCol2 FROM _root_'
)
--WITH(DatasetId='TestDataset',Output='*')
--WITH(DatasetId='TestDataset',ProjectId='MyProjectId',Output='*')

List Projects    [Read more...]

Lists Projects for which user has access

SELECT * FROM list_projects

List Datasets    [Read more...]

Lists Datasets for specified project. If you do not specify ProjectId then it will use connection level details.

SELECT * FROM list_datasets
--WITH(ProjectId='MyProjectId')

List Tables    [Read more...]

Lists tables for specified project / dataset. If you do not specify ProjectId or datasetId then it will use connection level details.

SELECT * FROM list_tables
--WITH(ProjectId='MyProjectId')
--WITH(ProjectId='MyProjectId',DatasetId='MyDatasetId')

Delete dataset    [Read more...]

Deletes the dataset with the specified ID. If you would like to delete all tables under that dataset, set deleteContents='true'.

SELECT * FROM delete_dataset WITH(DatasetId='MyDatasetId', deleteContents='False')

Conclusion

In this article we discussed how to connect to Google BigQuery in Informatica and integrate data without any coding. Click here to download the Google BigQuery Connector for Informatica and try it yourself to see how easy it is. If you still have any questions, ask here or simply click on the live chat icon at the bottom-right corner of this page and ask our expert.

Download Google BigQuery Connector for Informatica Documentation 

More integrations

Other application integration scenarios for Google BigQuery

Other connectors for Informatica


Download Google BigQuery Connector for Informatica Documentation

  • How to connect Google BigQuery in Informatica?

  • How to get Google BigQuery data in Informatica?

  • How to read Google BigQuery data in Informatica?

  • How to load Google BigQuery data in Informatica?

  • How to import Google BigQuery data in Informatica?

  • How to pull Google BigQuery data in Informatica?

  • How to push data to Google BigQuery in Informatica?

  • How to write data to Google BigQuery in Informatica?

  • How to POST data to Google BigQuery in Informatica?

  • Call Google BigQuery API in Informatica

  • Consume Google BigQuery API in Informatica

  • Google BigQuery Informatica Automate

  • Google BigQuery Informatica Integration

  • Integration Google BigQuery in Informatica

  • Consume real-time Google BigQuery data in Informatica

  • Consume real-time Google BigQuery API data in Informatica

  • Google BigQuery ODBC Driver | ODBC Driver for Google BigQuery | ODBC Google BigQuery Driver | SSIS Google BigQuery Source | SSIS Google BigQuery Destination

  • Connect Google BigQuery in Informatica

  • Load Google BigQuery in Informatica

  • Load Google BigQuery data in Informatica

  • Read Google BigQuery data in Informatica

  • Google BigQuery API Call in Informatica