Azure Blob JSON File Connector for SQL Server
In this article, you will learn how to integrate the Azure Blob JSON File Connector with SQL Server so you can connect, read, and write data from within SQL Server. Follow the steps below to see how to accomplish that. The driver mentioned above is part of ODBC PowerPack, a collection of high-performance drivers for various API data sources (i.e. REST API, JSON, XML, CSV, Amazon S3 and many more). Using the familiar SQL query language you can make live connections and read/write data from API sources or JSON / XML / CSV files inside SQL Server (T-SQL) or your favorite reporting tools (i.e. Power BI, Tableau, Qlik, SSRS, MicroStrategy, Excel, MS Access) and ETL tools (i.e. Informatica, Talend, Pentaho, SSIS). You can also call our drivers from programming languages such as JAVA, C#, Python, PowerShell etc. If you are new to ODBC and ZappySys ODBC PowerPack, check the following links to get started.
Connect to Azure Blob JSON File in other apps
Create Data Source in ZappySys Data Gateway based on ZappySys Azure Blob JSON Driver
-
Download and install ZappySys ODBC PowerPack.
-
Search for gateway in start menu and Open ZappySys Data Gateway:
-
Go to the Users Tab to add our first Gateway user. Click Add; we will give it the name tdsuser and enter a password of your choice. Check the Admin option and click OK to save. We will use these details later when we create the linked server:
-
Now we are ready to add a data source. Click Add, give the data source a name (copy this name somewhere, we will need it later) and then select Native - ZappySys Azure Blob JSON Driver. Finally, click OK. This creates the data source and opens the ZappySys driver UI.
Example data source name: AZUREBLOB-JSONFILEDSN (Native - ZappySys Azure Blob JSON Driver)
-
Create and configure a connection for the Azure Blob storage account.
-
You can select your desired single file by clicking the [...] path button.
mybucket/dbo.tblNames.json
----------OR----------You can also read multiple files stored in Azure Blob Storage using a supported wildcard pattern, e.g. dbo.tblNames*.json.
Note: If you want to operate on multiple files then use a wildcard pattern as below (when you use a wildcard pattern in the source path, the system treats the target path as a folder regardless of whether you end it with a slash):
mybucket/dbo.tblNames.json (reads only a single .json file)
mybucket/dbo.tbl*.json (all files whose name starts with dbo.tbl)
mybucket/*.json (all files with the .json extension located under the mybucket folder)
mybucket/dbo.tblNames*.json
----------OR----------You can also read zip and gzip compressed files without extracting them, using the Azure Blob JSON File connector.
mybucket/dbo.tblNames*.gz -
Now select or enter a Path expression in the Path textbox to extract only a specific part of the JSON string, as below ($.value[*] will get the content of the value attribute from the JSON document; the value attribute is an array of JSON documents, so we use [*] to indicate we want all records of that array).
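For example, with a hypothetical file shaped like the snippet below, the path expression $.value[*] returns the two records inside the value array as rows:
{ "value": [ { "CustomerID": 1, "Name": "Alpha" }, { "CustomerID": 2, "Name": "Beta" } ] }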
NOTE: Here we are using our desired filter, but you should select the filter that matches your requirement. Then go to the Preview Tab.
-
Navigate to the Preview Tab and let's explore the different modes available to access the data.
-
--- Using Direct Query ---
Click on the Preview Tab, select the [value] table from the Tables dropdown and click Preview.
-
--- Using Stored Procedure ---
Note: For this you have to save the ODBC Driver configuration and then reopen it to configure the same driver. For more information click here. Click on the Custom Objects Tab, click the Add button, select Add Procedure, enter an appropriate name and click OK to create it.
-
--- Without Parameters ---
A Stored Procedure can be created with or without parameters (see examples below). If you use parameters, set a default value, otherwise compilation may fail.
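A minimal sketch of a parameterless procedure (the name usp_get_all_rows is hypothetical; $ refers to the data source's default table):
CREATE PROCEDURE usp_get_all_rows AS SELECT * FROM $;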
-
--- With Parameters ---
Note: Here you can use Placeholders with Parameters in the Stored Procedure. Example: SELECT * FROM $ WHERE OrderID = '<@OrderID, FUN_TRIM>' or CustId = '<@CustId>' and Total >= '<@Total>'
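For example, a hedged sketch of a parameterized procedure using that placeholder syntax (the procedure name usp_get_order and the OrderID column are assumptions for illustration):
CREATE PROCEDURE usp_get_order @OrderID = '1' AS SELECT * FROM $ WHERE OrderID = '<@OrderID, FUN_TRIM>';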
-
-
--- Using Virtual Table ---
Note: For this you have to save the ODBC Driver configuration and then reopen it to configure the same driver. For more information click here. ZappySys API Drivers support a flexible query language, so you can override default properties you configured on the Data Source, such as URL or Body. This way you don't have to create multiple Data Sources if you want to read data from multiple endpoints. However, not every application supports supplying custom SQL to the driver; in those applications you can only select a Table from the list returned by the driver.
Many applications like MS Access or Informatica Designer won't give you the option to specify custom SQL when you import objects. In such cases a Virtual Table is very useful. You can create many Virtual Tables on the same Data Source (e.g. if you have 50 buckets with slight variations, you can create virtual tables with just the path as a parameter setting), for example:
vt__Customers (DataPath=mybucket_1/customers.json)
vt__Orders (DataPath=mybucket_2/orders.json)
vt__Products (DataPath=mybucket_3/products.json)
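Once created, a virtual table can be queried like a regular table from the Preview Tab (and later through the linked server); a minimal sketch using the vt__Customers table above:
SELECT * FROM vt__Customers;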
-
Click on the Custom Objects Tab, click the Add button, select Add Table, enter an appropriate name and click OK to create it.
-
Once the Query Builder window appears on screen, configure it.
-
Click on the Preview Tab, select the Virtual Table (prefixed with vt__) from the Tables dropdown or write a SQL query using the Virtual Table name, and click Preview.
-
-
-
Click OK to finish creating the data source
-
That's it; we are done. In a few clicks we configured the connection to read Azure Blob JSON File data using the ZappySys Azure Blob JSON File Connector.
Read data in SQL Server from the ZappySys Data Gateway
-
To read the data in SQL Server the first thing you have to do is create a Linked Server. Go to SQL Server Management Studio and configure it in a similar way:
-
Then click on the Security option and configure the username we created in ZappySys Data Gateway in one of the previous steps:
-
Optional: Under the Server Options, enable RPC and RPC Out and disable Promotion of Distributed Transactions (MSDTC).
You need to enable RPC Out if you plan to use
EXEC(...) AT [MY_LINKED_SERVER_NAME]
rather than OPENQUERY. If you don't enable it, you will encounter the "Server 'MY_LINKED_SERVER_NAME' is not configured for RPC" error. Query example:
EXEC('Select * from Products') AT [MY_LINKED_SERVER_NAME]
If you plan to use
INSERT INTO ... EXEC(...) AT [MY_LINKED_SERVER_NAME]
then you need to disable Promotion of Distributed Transactions (MSDTC). If you don't disable it, you will encounter the "The operation could not be performed because OLE DB provider "SQLNCLI11" for linked server "MY_LINKED_SERVER_NAME" was unable to begin a distributed transaction." error. Query example:
Insert Into dbo.Products EXEC('Select * from Products') AT [MY_LINKED_SERVER_NAME]
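Keep in mind that the local target table must already exist for INSERT INTO ... EXEC AT; a minimal sketch (the dbo.Products column list is hypothetical):
CREATE TABLE dbo.Products (ProductID INT, ProductName NVARCHAR(200));
INSERT INTO dbo.Products (ProductID, ProductName)
EXEC('SELECT ProductID, ProductName FROM Products') AT [MY_LINKED_SERVER_NAME];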
-
Finally, open a new query and execute a query we saved in one of the previous steps:
SELECT * FROM OPENQUERY([MY_LINKED_SERVER_NAME], 'SELECT * FROM Products');
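You can also load the result set into a local table in a single step, since OPENQUERY supports SELECT INTO (a minimal sketch; the #AzureBlobData temp table name is arbitrary):
SELECT * INTO #AzureBlobData
FROM OPENQUERY([MY_LINKED_SERVER_NAME], 'SELECT * FROM Products');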
Create Linked Server using Code
In the previous section you saw how to create a Linked Server from the UI. You can do the same by code (see below). Run the script below after changing the necessary parameters, assuming your Data Source name on the ZappySys Data Gateway UI is 'AzureBlobJsonFileDSN'.
USE [master]
GO
--///////////////////////////////////////////////////////////////////////////////////////
--Run below code in SSMS to create Linked Server and use ZappySys Drivers in SQL Server
--///////////////////////////////////////////////////////////////////////////////////////
--Replace YOUR_GATEWAY_USER, YOUR_GATEWAY_PASSWORD
--Replace localhost with IP/Machine name if ZappySys Gateway Running on different machine other than SQL Server
--Replace Port 5000 if you configured gateway on a different port
--1. Configure your gateway service as per this article https://zappysys.com/links?id=10036
--2. Make sure you have SQL Server Installed. You can download FREE SQL Server Express Edition from here if you dont want to buy Paid version https://www.microsoft.com/en-us/sql-server/sql-server-editions-express
--Uncomment below if you like to drop linked server if it already exists
--EXEC master.dbo.sp_dropserver @server=N'LS_AzureBlobJsonFileDSN', @droplogins='droplogins'
--3. Create new linked server
EXEC master.dbo.sp_addlinkedserver
  @server = N'LS_AzureBlobJsonFileDSN' --Linked server name (this will be used in OPENQUERY sql)
, @srvproduct=N''
---- For MSSQL 2012,2014,2016 and 2019 use below (SQL Server Native Client 11.0)---
, @provider=N'SQLNCLI11'
---- For MSSQL 2022 or higher use below (Microsoft OLE DB Driver for SQL Server)---
--, @provider=N'MSOLEDBSQL'
, @datasrc=N'localhost,5000' --//Machine / Port where Gateway service is running
, @provstr=N'Network Library=DBMSSOCN;'
, @catalog=N'AzureBlobJsonFileDSN' --Data source name you gave on Gateway service settings
--4. Attach gateway login with linked server
EXEC master.dbo.sp_addlinkedsrvlogin
@rmtsrvname=N'LS_AzureBlobJsonFileDSN' --linked server name
, @useself=N'False'
, @locallogin=NULL
, @rmtuser=N'YOUR_GATEWAY_USER' --enter your Gateway user name
, @rmtpassword='YOUR_GATEWAY_PASSWORD' --enter your Gateway user's password
GO
--5. Enable RPC OUT (This is Optional - Only needed if you plan to use EXEC(...) AT YourLinkedServerName rather than OPENQUERY)
EXEC sp_serveroption 'LS_AzureBlobJsonFileDSN', 'rpc', true;
EXEC sp_serveroption 'LS_AzureBlobJsonFileDSN', 'rpc out', true;
--Disable MSDTC - Below needed to support INSERT INTO from EXEC AT statement
EXEC sp_serveroption 'LS_AzureBlobJsonFileDSN', 'remote proc transaction promotion', false;
--Increase query timeout if query is going to take longer than 10 mins (Default timeout is 600 seconds)
--EXEC sp_serveroption 'LS_AzureBlobJsonFileDSN', 'query timeout', 1200;
GO
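Once the script completes, a quick smoke test could look like the sketch below (the [value] table name comes from the Preview Tab step earlier; adjust it to whatever table your data source exposes):
SELECT * FROM OPENQUERY([LS_AzureBlobJsonFileDSN], 'SELECT * FROM [value]');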
Firewall settings
So far we have assumed that the Gateway is running on the same machine as SQL Server. However, there will be cases where ZappySys ODBC PowerPack is installed on a different machine than SQL Server. In that case you may have to perform additional firewall configuration. On most computers the firewall settings won't allow outside traffic to reach the ZappySys Data Gateway. Perform the following steps to allow other machines to connect to the Gateway.
Method-1 (Preferred): If you are using a newer version of ZappySys Data Gateway then adding a firewall rule takes just a single click.
- Search for gateway in start menu and open ZappySys Data Gateway.
- Go to the Firewall Tab and click the Add Firewall Rule button like below. This will create a firewall rule allowing all inbound traffic on Port 5000 (unless you changed it).
Method-2 (Manual): Alternatively, create the inbound rule yourself using Windows Firewall with Advanced Security:
- Search for Windows Firewall Advanced Security in start menu.
- Under Inbound Rules > Right click and click [New Rule] >> Click Next
- Select Port on Rule Type >> Click Next
- Click on TCP and enter port number under specified local port as 5000 (use different one if you changed Default port) >> Click Next
- Select Profile (i.e. Private, Public) >> Click Next
- Enter Rule name [i.e. ZappySys Data Gateway – Allow Inbound ] >> Click Next
- Click OK to save the rule
OPENQUERY vs EXEC (handling larger SQL text)
So far we have seen examples of using OPENQUERY. It allows us to send a pass-through query to the remote server. The biggest limitation of OPENQUERY is that it doesn't allow you to use variables inside the SQL, so we often have to write unpleasant-looking dynamic SQL (lots of ticks and escape hell). The good news: with SQL 2005 and later you can use the EXEC(your_sql) AT your_linked_server syntax.
The disadvantage of EXEC AT is that you cannot do SELECT INTO like OPENQUERY. You also cannot perform a JOIN like the one below with EXEC AT:
SELECT a.* FROM OPENQUERY([ls_AzureBlobJsonFileDSN],'select * from Customers') a
JOIN OPENQUERY([ls_AzureBlobJsonFileDSN],'select * from Orders') b ON a.CustomerId=b.CustomerId;
However, you can always do INSERT INTO SomeTable EXEC(...) AT your_linked_server, as long as the target table already exists. Here is how to use it. To use EXEC(..) AT {linked-server} you must turn on the RPC OUT option. Notice how a variable can be used inside the SQL to make it dynamic (see the sketch after the script below); this is much cleaner than the previous approach.
USE [master]
GO
--Replace YOUR_GATEWAY_USER, YOUR_GATEWAY_PASSWORD
--Replace localhost with IP/Machine name if ZappySys Gateway Running on different machine other than SQL Server
--Create new linked server
EXEC master.dbo.sp_addlinkedserver
@server = N'LS_AzureBlobJsonFileDSN' --Linked server name (this will be used in OPENQUERY sql)
, @srvproduct=N''
---- For MSSQL 2012,2014,2016 and 2019 use below (SQL Server Native Client 11.0)---
, @provider=N'SQLNCLI11'
---- For MSSQL 2022 or higher use below (Microsoft OLE DB Driver for SQL Server)---
--, @provider=N'MSOLEDBSQL'
, @datasrc=N'localhost,5000' --//Machine / Port where Gateway service is running
, @provstr=N'Network Library=DBMSSOCN;'
, @catalog=N'AzureBlobJsonFileDSN' --Data source name you gave on Gateway service settings
--Attach gateway login with linked server
EXEC master.dbo.sp_addlinkedsrvlogin
@rmtsrvname=N'LS_AzureBlobJsonFileDSN' --linked server name
, @useself=N'False'
, @locallogin=NULL
, @rmtuser=N'YOUR_GATEWAY_USER' --enter your Gateway user name
, @rmtpassword='YOUR_GATEWAY_PASSWORD' --enter your Gateway user's password
GO
--Enable RPC OUT (This is Optional - Only needed if you plan to use EXEC(...) AT YourLinkedServerName rather than OPENQUERY)
EXEC sp_serveroption 'LS_AzureBlobJsonFileDSN', 'rpc', true;
EXEC sp_serveroption 'LS_AzureBlobJsonFileDSN', 'rpc out', true;
--Disable MSDTC - Below needed to support INSERT INTO from EXEC AT statement
EXEC sp_serveroption 'LS_AzureBlobJsonFileDSN', 'remote proc transaction promotion', false;
--Increase query timeout if query is going to take longer than 10 mins (Default timeout is 600 seconds)
--EXEC sp_serveroption 'LS_AzureBlobJsonFileDSN', 'query timeout', 1200;
GO
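For example, here is a hedged sketch of EXEC(...) AT driven by a T-SQL variable (the Orders table and OrderDate column are assumptions for illustration):
DECLARE @fromdate VARCHAR(10) = '2021-01-01';
EXEC('SELECT * FROM Orders WHERE OrderDate >= ''' + @fromdate + '''') AT [LS_AzureBlobJsonFileDSN];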
Here is the difference between OPENQUERY vs EXEC approaches:
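As a quick side-by-side sketch (reusing the Products example from earlier), the same read can be expressed either way:
--OPENQUERY: metadata call plus data call; supports SELECT INTO and JOINs
SELECT * FROM OPENQUERY([LS_AzureBlobJsonFileDSN], 'SELECT * FROM Products');
--EXEC ... AT: single pass-through call; requires the RPC Out option, and INSERT INTO requires an existing target table
EXEC('SELECT * FROM Products') AT [LS_AzureBlobJsonFileDSN];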
Fetching Tables / Columns using metadata stored procs
ZappySys Data Gateway emulates certain system procs you might find in a real SQL Server. You can call them using the 4-part syntax below:
exec [linked-server-name].[gateway-datasource-name].[DATA].sp_tables
exec [linked-server-name].[gateway-datasource-name].[DATA].sp_columns_90 N'your-table-name'
Example:
--List all tables
exec [ls_AzureBlobJsonFileDSN].[AzureBlobJsonFileDSN].[DATA].sp_tables
--List all columns and their types for the specified table
exec [ls_AzureBlobJsonFileDSN].[AzureBlobJsonFileDSN].[DATA].sp_columns_90 N'Account'
Known Issues
Let's explore some common problems that can occur when using OPENQUERY or Data Gateway connectivity.
SQL Native Client 11.0 not visible in the Providers dropdown (Linked Server Creation)
If you are following screenshots / steps from our article, it might say to use SQL Native Client to create the Linked Server to ZappySys Gateway, but some users don't see that driver entry in the dropdown. This is because Microsoft has deprecated the SQL Native Client OLE DB driver (SQLNCLI and SQLNCLI11) after SQL 2022, so you need to use the Microsoft OLE DB Driver for SQL Server (MSOLEDBSQL) instead. Follow all other instructions as written; only change the driver type selection and use the new suggested driver if you don't see SQL Native Client.
Error: The data is invalid
At some point you may encounter unexpected errors like the ones listed below. These can include:
OLE DB provider "SQLNCLI11" for linked server "Zs_Csv" returned message "Deferred prepare could not be completed.". OLE DB provider "SQLNCLI11" for linked server "Zs_Csv" returned message "Communication link failure". Msg 13, Level 16, State 1, Line 0 Session Provider: The data is invalid.
Possible Cause:
There are a few reasons for such errors, but below are the two main ones:
-
If the query length exceeds 2000 characters, as shown below, you might encounter this error.
SELECT * FROM OPENQUERY(LS, '--some really long text more than 2000 chars--')
-
If a query contains multiple OPENQUERY statements for JOINs or UNIONs, as shown below, it might fail due to a MARS compatibility issue where the gateway doesn't support parallel queries on a single connection.
SELECT a.id, b.name from OPENQUERY(LS, 'select * from tbl1') a join OPENQUERY(LS, 'select * from tbl2') b on a.id=b.id
There are a few ways to fix the above error, depending on why you are getting it (i.e. query length issue OR JOIN/UNION in the same statement):
-
If your query has a long SQL statement (more than 2000 chars) then reduce the SQL length using different techniques:
- e.g. use SELECT * FROM MyTable rather than SELECT col1,col2… FROM MyTable
- Use the META option in the WITH clause if you must use column names (e.g. SELECT * FROM MyTable WITH(META='c:\meta.txt')); this way you can define columns in a meta file rather than in the SELECT query (see the sketch after this list). Check this article.
- Consider using the EXEC (....) AT [Linked_Server_name] option rather than OPENQUERY so you can use very long SQL (see the next section on the EXEC..AT use case).
-
Consider using a Virtual Table / Stored Proc to wrap the long SQL so your call is very small (where usp_GetOrdersByYear is a custom proc created on the ZappySys Driver UI):
SELECT * FROM OPENQUERY(LS, 'EXEC usp_GetOrdersByYear 2021')
-
If your query uses JOIN / UNION with multiple OPENQUERY clauses in the same SQL, then use multiple linked servers (one for each OPENQUERY clause) as below.
select a.id, b.name from OPENQUERY(LS_1, 'select * from tbl1') a join OPENQUERY(LS_2, 'select * from tbl2') b on a.id=b.id
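Here is a minimal sketch of the META option mentioned in the list above (the c:\meta.txt path is a placeholder for your own metadata file):
SELECT * FROM OPENQUERY(LS, 'SELECT * FROM MyTable WITH (META=''c:\meta.txt'')')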
Error: Unable to begin a distributed transaction (When INSERT + EXEC used)
If you try to use the EXEC statement to insert data into a table, as shown below, you might encounter the following error unless the MSDTC option is turned off.
INSERT INTO MyTable EXEC('select * from tbl') AT MyLinkedServer
"Protocol error in TDS stream" The operation could not be performed because OLE DB provider "SQLNCLI11" for linked server "ls_Json2" was unable to begin a distributed transaction. --OR-- The operation could not be performed because OLE DB provider "MSOLEDBSQL" for linked server "ls_Json" was unable to begin a distributed transaction.
Solution:
Method-1: Go to linked server properties | Server Options | Enable Promotion of Distributed Transaction | change it to False (default is True).
Now try your INSERT with EXEC AT again and it should work.
Method-2: Run the command below if you don't want to use the UI:
EXEC master.dbo.sp_serveroption @server=N'My_Linked_Server', @optname=N'remote proc transaction promotion', @optvalue=N'false'
Error: Cannot use OPENQUERY with JOIN / UNION
When you perform a JOIN or UNION ALL on the same Linked Server, it may sometimes fail because the Data Gateway doesn't support parallel query requests on the same connection. A workaround is to create multiple linked servers for the same data source; refer to the section above for details.
Error: Truncation errors due to data length mismatch
Many times, you may encounter truncation errors if a table column's length is less than the actual column size returned by the query. To solve this issue, use the newer version of the Data Gateway and check the 'Use nvarchar(max) for string options' option found on the General Tab.
Performance Tips
Now, let's look at a few performance tips in this section.
Use INSERT INTO rather than SELECT INTO to avoid extra META request
We discussed some pros and cons of OPENQUERY vs EXEC (...) AT in the previous section. One obvious advantage of EXEC (...) AT is that it reduces the number of requests to the driver (it sends a pass-through query). With EXEC you cannot load data dynamically like SELECT INTO tmp FROM OPENQUERY; the table must exist beforehand if you use EXEC.
INSERT INTO tmp_API_Report_Load(col1,col2)
EXEC('select col1,col2 from some_api_table') AT [API-LINKED-SERVER]
--OR--
INSERT INTO tmp_API_Report_Load(col1,col2)
select col1,col2 from OPENQUERY([API-LINKED-SERVER], 'select col1,col2 from some_api_table')
The advantage of this method is that your query speed will increase because the system only calls the API once when you use EXEC AT. In contrast, with OPENQUERY, the query needs to be called twice: once to obtain metadata and once to retrieve the data.
Use Cached Metadata if possible
By default, most SQL queries sent to the Data Gateway involve two phases: first, to get metadata, and second, to fetch data. However, you can bypass the metadata call by supplying static metadata. Use the META property in the WITH clause, as explained in this article, to speed up your SQL queries.
Advanced topics
Create Custom Stored Procedure in ZappySys Driver
You can create procedures to encapsulate custom logic and then pass only a handful of parameters rather than a long SQL statement to execute your API call.
Steps to create Custom Stored Procedure in ZappySys Driver. You can insert Placeholders anywhere inside Procedure Body. Read more about placeholders here
-
Go to the Custom Objects Tab, click the Add button and select Add Procedure:
-
Enter the desired Procedure name and click OK:
-
Select the created Stored Procedure, write your desired stored procedure body and save it; this will create the custom stored procedure in the ZappySys Driver:
Here is an example stored procedure for ZappySys Driver. You can insert Placeholders anywhere inside Procedure Body. Read more about placeholders here
CREATE PROCEDURE [usp_get_orders] @fromdate = '<<yyyy-MM-dd,FUN_TODAY>>' AS SELECT * FROM Orders where OrderDate >= '<@fromdate>';
-
That's it; now go to the Preview Tab and execute your Stored Procedure using the Exec command. In this example it will extract orders from the date 1996-01-01 onward:
Exec usp_get_orders '1996-01-01';
-
Let's generate the SQL Server query code to make the API call using the stored procedure. Go to the Code Generator Tab, select SQL Server as the language and click the Generate button to generate the code.
As we already created the linked server for this Data Source, you just need to copy the Select query and use your linked server name in place of the [MY_API_SERVICE] placeholder.
SELECT * FROM OPENQUERY([MY_API_SERVICE], 'EXEC usp_get_orders @fromdate=''1996-07-30''')
-
Now go to SQL Server and execute that query; it will make the API call via the stored procedure and return the response.
Create Custom Virtual Table in ZappySys Driver
ZappySys API Drivers support a flexible query language, so you can override default properties you configured on the Data Source, such as URL or Body. This way you don't have to create multiple Data Sources if you want to read data from multiple endpoints. However, not every application supports supplying custom SQL to the driver; in those applications you can only select a Table from the list returned by the driver.
If you're dealing with Microsoft Access and need to import data from an SQL query, it's important to note that Access doesn't allow direct import of SQL queries. Instead, you can create custom objects (Virtual Tables) to handle the import process.
Many applications like MS Access or Informatica Designer won't give you the option to specify custom SQL when you import objects. In such cases a Virtual Table is very useful. You can create many Virtual Tables on the same Data Source (e.g. if you have 50 URLs with slight variations, you can create virtual tables with just the URL as a parameter setting).
-
Go to the Custom Objects Tab, click the Add button and select Add Table:
-
Enter the desired Table name and click OK:
-
This will open a New Query window; click Cancel to close it and go to the Custom Objects Tab.
-
Select the created table, select Text Type AS SQL, write your desired SQL query and save it; this will create the custom table in the ZappySys Driver:
Here is an example SQL query for ZappySys Driver. You can insert Placeholders also. Read more about placeholders here
SELECT "ShipCountry", "OrderID", "CustomerID", "EmployeeID", "OrderDate", "RequiredDate", "ShippedDate", "ShipVia", "Freight", "ShipName", "ShipAddress", "ShipCity", "ShipRegion", "ShipPostalCode" FROM "Orders" Where "ShipCountry"='USA'
-
That's it; now go to the Preview Tab and execute your custom virtual table query. In this example it will extract orders for the USA shipping country only:
SELECT * FROM "vt__usa_orders_only"
-
Let's generate the SQL Server query code to make the API call using the virtual table. Go to the Code Generator Tab, select SQL Server as the language and click the Generate button to generate the code.
As we already created the linked server for this Data Source, you just need to copy the Select query and use your linked server name in place of the [MY_API_SERVICE] placeholder.
SELECT * FROM OPENQUERY([MY_API_SERVICE], 'SELECT * FROM vt__usa_orders_only')
-
Now go to SQL Server and execute that query; it will make the API call via the virtual table and return the response.
Conclusion
In this article we discussed how to connect to Azure Blob JSON File in SQL Server and integrate data without any coding. Click here to download the Azure Blob JSON File Connector for SQL Server and try it yourself to see how easy it is. If you still have any questions then ask here or simply click on the live chat icon below and ask our expert (see the bottom-right corner of this page).
Download Azure Blob JSON File Connector for SQL Server
Documentation
More integrations
Other application integration scenarios for Azure Blob JSON File
Other connectors for SQL Server