Metadata in ODBC Driver / Performance Options

Updated on April 1, 2025 by ZappySys

Introduction

In this post we will learn how to fix some metadata / performance related issues in ODBC PowerPack drivers using the Caching, Metadata, and Streaming Mode features. By default, ZappySys API drivers issue at least two API requests (the first to obtain metadata; the second, third, and so on to fetch data pages). Most of the time this is fine, and users won't even notice unless they use a tool like Fiddler to see how many requests the driver sends. Sometimes, however, it is necessary to avoid the extra metadata request (for example, when you POST to create a new record, or when the API enforces strict throttling). In this post we will learn various techniques to avoid extra requests and to speed up queries by reading from the cache when your data doesn't change often.

The Problem

Most ZappySys drivers have metadata detection features (column name, data type, length, etc.). Detection is done by reading a few sample rows (around 300 rows; see the last section). But this can cause problems in the cases below.

  • Slow reading speed due to the extra requests needed to scan metadata
  • Duplicate requests for POST API calls (first a metadata request, then a data request)

Let’s look at how to solve these issues in the following sections.

How to Speed Up ZappySys Driver Performance

ZappySys drivers may provide the following features (some options are available only for API drivers). These features can be used to speed up query performance and solve some metadata issues.

  1. Use Pre-generated Metadata (META option in WITH clause of SQL Query)
  2. Use Data Caching Option
  3. Use Streaming Mode for large XML / JSON files
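
As a quick preview, several of these options can be combined in a single WITH clause. Here is a sketch (the endpoint URL and the column list in META are placeholders; each option is explained in the sections below):

Transact-SQL
SELECT * FROM $
WITH(
  SRC='https://myhost.com/some-api'          --placeholder endpoint
 ,META='id: int64; name: string(100)'        --pre-generated metadata (see next section)
 ,CachingMode='All'                          --cache metadata and data rows both
 ,CacheStorage='File'                        --or Memory
 ,CacheFileLocation='c:\temp\myquery.cache'
 ,CacheEntryTtl=300                          --cache for 300 seconds
)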

Using Metadata Options in SQL Query

Now let's talk about metadata handling. Most ETL / reporting tools need to know column type, size, and precision before getting actual data from the driver. If you are dealing with the JSON, XML, or CSV format, you may realize that there is no metadata stored in the file itself to describe columns and data types.

However, metadata must be sent to most reporting / ETL tools when they use an ODBC driver. The ZappySys driver does an intelligent scan of your local file or API response to guess the data type of each column. In most cases the driver guesses accurately, but sometimes it is necessary to adjust the metadata (especially column length) to avoid truncation-related errors in your ETL / reporting tool.

The issue with this automatic metadata scan is that it can be expensive (slow performance) or inaccurate (e.g. an invalid data type for some columns).

Let's look at how to take complete control of your metadata so you can avoid metadata-related errors and speed up query performance.

To use metadata you need to perform two steps, shown in the sketch after this list:

  1. Execute a sample query and generate metadata
  2. Rewrite the SQL query with a WITH + META clause to use the metadata captured in the previous step
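
For example, using the Northwind Invoices URL from the walkthrough below, the two steps might look like this sketch (the column names and types come from the generated metadata shown later in this post; the string length of 40 is illustrative):

Transact-SQL
--Step 1: run a sample query once and save its metadata (Save Metadata button)
select * from value

--Step 2: rewrite the query with WITH + META so the driver skips the metadata scan
select * from value WITH( meta='ShipName: string(40); OrderID: int64; OrderDate: datetime' )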

Generate Metadata Manually

Let's look at how to generate SQL query metadata using the ODBC driver UI.

  1. We are assuming you have downloaded and installed ODBC PowerPack
  2. Open the ODBC DSN window by typing "ODBC" in your start menu and selecting ODBC Data Sources (64-bit)
  3. Click Add and select "ZappySys JSON Driver" for this test
  4. On the UI, enter a URL like below
    https://services.odata.org/V3/Northwind/Northwind.svc/Invoices?$format=json
  5. Now go to the Preview tab, enter a query like below, and click Preview Data
    select * from value
  6. Once the query has executed, you can click the Save Metadata button
    NOTE: To generate metadata you must run the query first, or it will throw an error. If that is not possible (e.g. a POST request that creates new records), you can craft the metadata by hand (matching the attribute names).
  7. You will be prompted with the following dialog box, where you can choose where to save the metadata and its format.
    Generate Metadata in ZappySys ODBC Drivers

The metadata file may look like below depending on the sample source URL you used. You can edit this metadata as per your need.

Compact Format (New)

Version 1.4 introduced a new compact format for metadata. Here is an example. Each pair can be on its own line, or you can put them all on one line. Whitespace around any name or value is ignored. A string type without a length is assumed to be 2000 characters long.

Syntax: col_name1 : type_name[(length)] [; col_name2 : type_name[(length)] ] … [; col_nameN : type_name[(length)] ]

col1: int32;
col2: string(10);
col3: boolean;
col4: datetime;
col5: int64;
col6: double;

JSON Format (Legacy)

JavaScript
/*
Available column types:
Default, String, Int64, Long, Int, Int32, Short, Byte,
Decimal, Double, Float, DateTime, Date, Boolean
*/
[
  {
    "Name": "p_odata_metadata",
    "Type": "String",
    "Length": 16777216
  },
  {
    "Name": "p_odata_nextLink",
    "Type": "String",
    "Length": 16777216
  },
  {
    "Name": "ShipName",
    "Type": "String",
    "Length": 16777216
  },
  {
    "Name": "ShipPostalCode",
    "Type": "String",
    "Length": 16777216
  },
  {
    "Name": "CustomerID",
    "Type": "String",
    "Length": 16777216
  },
  {
    "Name": "CustomerName",
    "Type": "String",
    "Length": 16777216
  },
  {
    "Name": "OrderID",
    "Type": "Int64",
    "Length": 16777216
  },
  {
    "Name": "OrderDate",
    "Type": "DateTime",
    "Length": 16777216
  }
]

Using Metadata in SQL Query

Now it's time to use the metadata we generated in the previous section to speed up our queries. There are three ways to use metadata in a SQL query.

  1. Metadata as Direct String
  2. Metadata from file
  3. Metadata from internal storage

Metadata from Direct String

To use metadata directly inside the SQL query as a string, use the META attribute inside the WITH clause as below.

Possible datatypes are: String, Int64, Long, Int, Int32, Short, Byte, Decimal, Double, Float, DateTime, Date, Boolean.

Compact format

Transact-SQL
select * from value WITH( meta='col1: int32; col2: string(10); col3: boolean; col4: datetime; col5: int64;col6: double' )

JSON format (Legacy)

If you are using the legacy metadata format (JSON), enter the metadata in the META attribute like below.

Transact-SQL
select * from value WITH( meta='[
  {
    "Name": "p_odata_metadata",
    "Type": "String",
    "Length": 16777216
  },
  {
    "Name": "p_odata_nextLink",
    "Type": "String",
    "Length": 16777216
  },
  {
    "Name": "ShipName",
    "Type": "String",
    "Length": 16777216
  },
...........
...........
...........
]' )

Metadata from File

To use metadata saved to a file (as in the earlier screenshot), use a SQL query like the one below. The table name may be different in your case if you didn't use the earlier example URL. You can edit metadata files as needed in any text editor.

Transact-SQL
select * from value WITH( meta='c:\temp\meta.txt' )

Metadata from internal DSN Storage

To use metadata saved to internal DSN storage, use a SQL query like the one below.

Transact-SQL
select * from value WITH( meta='My-Invoice-Meta' )

In the previous section, we mentioned how to save metadata. In that prompt, the second option is to save the metadata to internal DSN storage. If you would like to view or edit that metadata entry, you can do it as shown below.

Edit Metadata saved to internal DSN Storage

Once you save metadata to DSN storage, here is how you can view and edit it.

View / Edit Metadata stored in DSN Storage

Hybrid Metadata Mode (Auto Scan + Override)

If you have many columns but only a few of them need a manual override, you can try something like the query below. This performs an auto scan to detect all columns (say, 150 columns) and overrides the data types of only two of them (i.e. "id" and "checknumber").

Transact-SQL
select * from mytable WITH(META='@OverrideMode:1;id:int;checknumber:string(10)' )

For the legacy format, use the query below.

Transact-SQL
select * from mytable WITH(META='[{Name:"@OverrideMode"}, {Name:"id",Type:"Int32"},{Name:"checknumber",Type:"String",Length:10}]')

 

Using Data Caching Options in ODBC Drivers

ZappySys drivers come with a very useful data caching feature that can speed up performance in many cases.

If your data doesn't change often and you need to issue the same query multiple times, enabling data caching may speed up data retrieval significantly. By default the ZappySys driver enables caching for metadata only (60-second expiration), so the metadata for each query issued by the driver is cached for 60 seconds (see the screenshot below).

Here is how you can enable caching options.

Enable Caching Options for Metadata / Data for ZappySys ODBC Drivers

Newer versions of ODBC PowerPack also support caching options in the WITH clause (see below), giving you a per-query cache by supplying a cache file name.

Transact-SQL
SELECT * FROM $
WITH
(  SRC='https://myhost.com/some-api'
  ,CachingMode='All'  --cache metadata and data rows both
  ,CacheStorage='File' --or Memory
  ,CacheFileLocation='c:\temp\myquery.cache'
  ,CacheEntryTtl=300 --cache for 300 seconds
)

Using Stream Mode for Large Files

There will be a time when you need to read very large JSON / XML files from a local disk or a URL. By default the ZappySys engine processes everything in memory, which may work fine up to a certain size, but if your file is larger than the memory limit the OS allows, you have to tweak some settings.

First, let's understand the problem. Create a new blank DSN, run the query below, and watch the memory graph in Task Manager. You will see the RAM graph spike, and the query takes around 10-15 seconds to return just 10 rows.

Slow Version (fully load in memory, then parse)

Transact-SQL
SELECT * FROM $
LIMIT 10
WITH(
Filter='$.LargeArray[*]'
,SRC='https://zappysys.com/downloads/files/test/large_file_100k_largearray_prop.json.gz'
--,SRC='c:\data\large_file.json.gz'
,IncludeParentColumns='True'
,FileCompressionType='GZip' --Zip or None (Zip format only available for Local files)
)

Now let's modify the query a little. Add --FAST to the filter, turn off IncludeParentColumns, and run the modified query below. You will notice it takes less than a second to return the same result.

FAST Version (streaming mode, parse as you go)

Transact-SQL
SELECT * FROM $
LIMIT 10
WITH(
Filter='$.LargeArray[*]--FAST' --//Adding --FAST option turn on STREAM mode (large files)
,SRC='https://zappysys.com/downloads/files/test/large_file_100k_largearray_prop.json.gz'
--,SRC='c:\data\large_file.json.gz'
,IncludeParentColumns='False'  --//This Must be OFF for STREAM mode (read very large files)
,FileCompressionType='GZip' --Zip or None (Zip format only available for Local files)
)

Understanding Streaming Mode

Now let's understand step by step what we did and why. By default, when you read JSON / XML data, the entire document is loaded into memory for processing. This is fine in most cases, but some APIs return a very large document like the one below.

Sample JSON File

JavaScript
{
  rows:[
      {..},
      {..},
      ....
      .... 100000 more rows
      ....
      {..}
   ]
}

To read from a document like the one above without getting an OutOfMemory exception, change the following settings. For the same problem in SSIS, check this article.

  1. In the filter, append --FAST (two dashes followed by FAST)
  2. Uncheck the IncludeParentColumns option (required for stream mode)
  3. Enable Performance Mode (not applicable to the JSON driver)
  4. Write your query and execute it to see how long it takes (the table name must be $ in the FROM clause, the filter must end with --FAST, and parent columns must be excluded, as below)
Configure Settings to read very large XML / JSON Files

Reading Very Large JSON / XML Files – Streaming Option

SQL Query for reading Large JSON File (Streaming Mode)

Here is a sample query that reads a very large JSON file in stream mode using the ZappySys JSON Driver.

Notice three settings: the table name must be $ in the FROM clause, the filter must end with --FAST, and parent columns must be excluded (IncludeParentColumns=False), as below.

Transact-SQL
SELECT * FROM $
--LIMIT 10
WITH(
Filter='$.LargeArray[*]--FAST' --//Adding --FAST option turn on STREAM mode (large files)
,SRC='https://zappysys.com/downloads/files/test/large_file_100k_largearray_prop.json.gz'
--,SRC='c:\data\large_file.json.gz'
,IncludeParentColumns='False'  --//This Must be OFF for STREAM mode (read very large files)
,FileCompressionType='GZip' --Zip or None (Zip format only available for Local files)
)

SQL Query for reading Large XML File (Streaming Mode)

Here is a sample query that reads a very large XML file in stream mode using the ZappySys XML Driver.

Notice the extra option EnablePerformanceMode=True for large XML file processing, plus the same three changes as before: the table name must be $ in the FROM clause, the filter must end with --FAST, and parent columns must be excluded (IncludeParentColumns=False), as below.

Transact-SQL
SELECT * FROM $
--LIMIT 10
WITH(
Filter='$.doc.Customer[*]--FAST' --//Adding --FAST option turn on STREAM mode (large files)
,SRC='https://zappysys.com/downloads/files/customer_10k.xml'
--,SRC='c:\data\customer_10k.xml'
,IncludeParentColumns='False'  --//This Must be OFF for STREAM mode (read very large files)
,FileCompressionType='None' --GZip, Zip or None (Zip format only available for Local files)
,EnablePerformanceMode='True'  --try to disable this option for simple files
)

SQL Query for reading large files with parent columns or 2 levels deep

So far we have seen a one-level-deep array with streaming mode. Now assume a scenario where you have a very large XML or JSON file that requires a filter more than two levels deep (e.g. $.Customers[*].Orders[*] or $.Customers[*].Orders[*].Items[*]), and you also need parent columns (i.e. IncludeParentColumns=True).

In the previous section we mentioned that for streaming mode you must set IncludeParentColumns=False. So what do you do in that case?

Well, you can use a JOIN query as below to support that scenario. Notice how we extract Branches from each record and pass them to the child query, and that the child query uses DATA (JOIN1_Data) rather than SRC.

Transact-SQL
SELECT a.RecID,a.CustomerID, b.* FROM $
LIMIT 10
WITH(
Filter='$.LargeArray[*]--FAST' --//Adding --FAST option turn on STREAM mode (large files)
,SRC='https://zappysys.com/downloads/files/test/large_file_100k_largearray_prop.json.gz'
--,SRC='c:\data\large_file.json.gz'
,IncludeParentColumns='False'  --//This Must be OFF for STREAM mode (read very large files)
,FileCompressionType='GZip' --Zip or None (Zip format only available for Local files)
,Alias='a'
,JOIN1_Data='[$a.Branches$]'
,JOIN1_Alias='b'
,JOIN1_Filter=''
)

Enhancing Performance with URL JOIN Queries Using Child Meta Clause

When working with ZappySys ODBC drivers, joining data from multiple REST API endpoints can significantly impact performance if the metadata is not properly handled. Using the meta clause can optimize the query execution by defining the data types explicitly, preventing unnecessary metadata discovery requests.

Example Query with URL JOIN

The following query demonstrates how to join two JSON endpoints:

  • Customers ( customers.json)
  • Orders ( orders-p3.json)
Transact-SQL
SELECT
    c.custid,
    o.orderid,
    o.orderdate
FROM $
WITH(
    src = 'https://zappysys.com/downloads/files/test/join/customers.json',
    filter = '$.customers[*]',
    alias = 'c',
 
    -- Joining with Orders endpoint
    join1_src = 'https://zappysys.com/downloads/files/test/join/c2/orders-p3.json',
    join1_filter = '$.myArray[*]',
    join1_alias = 'o',
 
    -- Optimized metadata handling (works properly)
    -- Explicit metadata definition improves performance
    -- Avoids additional API calls for metadata discovery
    -- Faster query execution
    -- join1_meta = 'custid: int64; orderid: int64;orderdate: string(50);'
    join1_meta = 'C:\temp\meta.txt'
);

Why Use Explicit Metadata?

  1. Improved Performance: When the metadata is defined explicitly, the driver skips automatic schema detection, reducing additional API calls.
  2. Consistency: Prevents mismatched data types and improves query stability.
  3. Faster Execution: Predefined metadata allows the driver to parse and process data more efficiently.

Handling POST requests to create / update records

As mentioned earlier, in some cases you might be issuing POST requests to create new records. In such cases the API request must be sent exactly once. By default the driver sends a first request to get metadata and then a second request to get data, using the same parameters as the metadata request. This is usually fine when we are only reading data, but not when each request creates a new row on the server (e.g. a new customer row). If you have a case where the API must be called precisely once, use the META clause in the WITH query to avoid the metadata request by supplying static metadata from a string, file, or DSN storage, as discussed earlier in this post.
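
Here is a sketch of such a query. The endpoint URL and record body are placeholders, and the attribute names used for the HTTP method and request body (RequestMethod, Body) are assumptions, so check the WITH-clause options supported by your driver version; the key point is supplying static metadata via META so that no separate metadata request is sent:

Transact-SQL
SELECT * FROM $
WITH(
  SRC='https://myhost.com/api/customers'   --placeholder endpoint
 ,RequestMethod='POST'                     --assumed attribute name for the HTTP method
 ,Body='{"name":"New Customer"}'           --assumed attribute name for the request body
 ,META='id: int64; name: string(100)'      --static metadata: no metadata request is issued
 --or META='c:\temp\meta.txt' (file) / META='My-Invoice-Meta' (DSN storage)
)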

Advanced Options

Let’s look at some advanced options for metadata management.

Change scan row limit for metadata

By default, metadata is detected by scanning 300 rows. If that scan is inaccurate, you can change the limit by setting the following attribute (this may slow down query performance in some cases).

Metadata row scan limit option

Using Disk for Query Processing Engine

There will be a time when you get an OutOfMemory exception due to the large amount of data the driver processes in memory. In such cases you can instruct the driver to use a disk-based temporary engine rather than an in-memory one using the option below.

  1. Click on Advanced View
  2. Go to Query Engine Temp Storage
  3. Select Disk rather than Memory for "Intermediate results storage"

 
