How to integrate CSV using Microsoft Fabric

Integrate Microsoft Fabric and CSV

Learn how to quickly and efficiently connect CSV with Microsoft Fabric for smooth data access.

Read and write CSV data effortlessly. Extract, sync, and manage CSV from URLs, strings, and local files for analytics, reporting, and data pipelines — almost no coding required. You can do it all using the high-performance CSV ODBC Driver. We'll walk you through the entire setup.

Ready to dive in? Download the product to jump right in, or follow the step-by-step guide below to see how it works.

Create data source using CSV ODBC Driver

Step-by-step instructions

To get data from CSV using Microsoft Fabric, we first need to create an ODBC data source. We will later read this data in Microsoft Fabric. Perform these steps:

  1. Download and install ODBC PowerPack (if you haven't already).

  2. Search for odbc and open the ODBC Data Sources (64-bit):

    Open ODBC Data Source
  3. Create a User data source (User DSN) based on the ZappySys CSV Driver driver:

    ZappySys CSV Driver
    Create new User DSN for ZappySys CSV Driver
    • Create and use a User DSN if the client application runs under a User Account. This is the ideal option at design time (e.g., when developing in Visual Studio). Use it for both types of applications (64-bit and 32-bit).
    • Create and use a System DSN if the client application runs under a System Account (e.g., as a Windows Service). This is usually the required option in a production environment. If your Windows Service is a 32-bit application, you must use the 32-bit ODBC Data Source Administrator to configure it.
  4. Select Url or File.

    Read CSV API in Microsoft Fabric

    • Paste the following URL. This example uses a zipped CSV file URL; replace it with your own CSV file path or URL.

      https://zappysys.com/downloads/files/test/cust-1.csv.zip
      Click the Test Connection button to verify that the connection succeeds.
      ZappySys ODBC Driver - Configure CSV Driver

    Read CSV File in Microsoft Fabric

    • You can read a single file or multiple files. Select a single file by clicking the [...] button next to the path field, or read multiple files by using a wildcard pattern in the path.

      Note: To process multiple files, use a wildcard pattern as shown below.
      (When the source path contains a wildcard, the target path is treated as a folder, whether or not it ends with a slash.)
      
      C:\SSIS\Test\reponse.csv (reads only the single reponse.csv file)
      C:\SSIS\Test\j*.csv (reads all files whose names start with j)
      C:\SSIS\Test\*.csv (reads all files with the .csv extension in the folder)
      
      Click the Test Connection button to verify that the connection succeeds.
      ZappySys ODBC Driver - Configure CSV Driver

  5. Once you have configured the data source, you can preview data. Open the Preview tab and use similar settings to preview the data:
    ZappySys ODBC Driver - Preview CSV Driver

  6. Click OK to finish creating the data source

  7. That's it; we are done. In a few clicks we configured reading CSV data using the ZappySys CSV Connector.
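The wildcard behavior described in step 4 can be sketched with Python's standard fnmatch module. The file names below are purely illustrative, not part of the driver:

```python
from fnmatch import fnmatch

# Hypothetical folder contents (illustrative only)
files = ["reponse.csv", "jan.csv", "july.csv", "summary.txt"]

def match(pattern, names):
    """Return the file names a wildcard pattern would select."""
    return [n for n in names if fnmatch(n, pattern)]

print(match("reponse.csv", files))  # ['reponse.csv'] -- exact path, single file
print(match("j*.csv", files))       # ['jan.csv', 'july.csv'] -- names starting with j
print(match("*.csv", files))        # every .csv file in the folder
```

This mirrors the three example paths above: an exact file name matches one file, while `j*.csv` and `*.csv` expand to every matching file in the folder.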

Install Microsoft On-premises data gateway (Standard mode)

To access and read CSV data in Microsoft Fabric, you must download and install the Microsoft On-premises data gateway (Standard mode). It acts as a secure bridge between Microsoft Fabric cloud services and your local CSV ODBC data source:

On-premises data gateway securely bridging ODBC data source and Microsoft Fabric

There are two types of On-premises data gateways:

Standard mode
  • Supports Power BI and other Microsoft Cloud services
  • Installs as a Windows service
  • Starts automatically
  • Supports centralized user access control
  • Supports the Direct Query feature
  • Ideal for enterprise solutions
Personal mode
  • Supports Power BI services only
  • Cannot run as a Windows service
  • Stops when you sign out of Windows
  • Does not support access control
  • Does not support the Direct Query feature
  • Best for individual use and POC solutions

You can download the On-premises data gateway directly from the Microsoft Fabric or Power BI portals:

Download Power BI On-premises data gateway
You must use the Standard mode of the gateway. Personal mode is not supported by Microsoft Fabric.

Link ODBC data source via the gateway

Follow these steps to download, install, and configure the gateway in Standard mode:

  1. Download the On-premises data gateway (Standard mode) and run the installer.

  2. Once the configuration window opens, sign in:

    Signing in to on-premises data gateway standard
    Sign in with the same email address you use for Microsoft Fabric.
  3. Select Register a new gateway on this computer (or migrate an existing one):

    Registering or migrating on-premises data gateway standard
  4. Name your gateway, enter a Recovery key, and click the Configure button:

    Naming on-premises data gateway standard
    Save your Recovery Key in a safe place (like a password manager). If you lose it, you cannot restore or migrate this gateway later.
  5. Once the Microsoft gateway is installed, check that it registered correctly:

    • Go back to Microsoft Fabric portal

    • Click the Gear icon in the top-right corner

    • Then click the Manage connections and gateways menu item

    Manage On-premise data gateways in Microsoft Fabric or Power BI
  6. Continue by clicking the On-premises data gateway tab and selecting Standard mode gateways from the dropdown menu:

    Access On-premises data gateway list (Standard mode) in Microsoft Fabric

    If your gateway is not listed, the registration may have failed. To resolve this:

    • Wait a couple of minutes and refresh the Microsoft Fabric portal page
    • Restart the machine where the On-premises data gateway is installed
    • Check firewall settings
  7. Success! The gateway is now Online and ready to handle requests.

  8. Done!
Make sure to download and install Standard mode. The Personal mode gateway is not supported by Microsoft Fabric cloud services and will not work.

You are now ready to load data into Microsoft Fabric.

Load CSV data into Microsoft Fabric

Now that we have configured the ODBC data source and installed the On-premises data gateway, we can proceed with loading data. You can accomplish this in two ways:

  • Copy job
    Best for simple, high-speed data copying without modification.
  • Dataflow Gen2
    Best if you need to transform, clean, or reshape data before loading.

Let's dive into the steps for both methods.

Use Copy job for high-speed loading

  1. Go to the Microsoft Fabric Portal.

  2. Select an existing Workspace or create a new one by clicking New workspace (ensure you are in the Home section):

    Create a new workspace in Microsoft Fabric for a Copy job
  3. Inside your workspace, click the New item button in the toolbar to start creating your data pipeline:

    Create new item in Microsoft Fabric workspace
  4. In the item selection window, choose Copy job to open the data ingestion wizard:

    Add Copy job to Microsoft Fabric workspace
  5. In the Choose data source screen, search for odbc and select the Odbc source:

    Choose ODBC as the data source in Microsoft Fabric Copy job
  6. Then enter your ODBC connection string (DSN=CsvDSN) and, from the Data gateway dropdown, select MyGateway (the gateway we configured in the previous step):

    DSN=CsvDSN
    Configure ODBC connection string in Microsoft Fabric Copy job
  7. Select the table(s) and preview the data you wish to copy from CSV. Once done, click Next:

    Selecting tables to copy in Microsoft Fabric Copy Job
  8. Choose your Data Destination. You can create a New Fabric item (like a Lakehouse or Warehouse) or select an existing one:

    Choose data destination in Microsoft Fabric Copy job
    In this example, we will use a Lakehouse as the destination.
  9. Choose Full copy to load all data, or Incremental copy to load only changed data in subsequent runs:

    Select copy mode in Microsoft Fabric Copy job (Full vs Incremental)
  10. Review the Column and Table mappings section:

    Map source tables and columns to destination in Microsoft Fabric Copy job
  11. On the summary screen, review your settings. You can optionally enable Run on schedule. Click Save + Run to execute the job:

    Save and run the Copy job in Microsoft Fabric
  12. The job will enter the queue. Monitor the Status column to see the progress:

    Monitor the status of the Microsoft Fabric Copy job
  13. Wait for the status to change to Succeeded. Your CSV data is now successfully integrated into Microsoft Fabric!

    Verify Microsoft Fabric Copy job success status
  14. Let's go to our Lakehouse (MyLakehouse) and verify the data:

    View loaded data in Microsoft Fabric Lakehouse
  15. Success! The data has been loaded.
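The Full vs. Incremental choice in step 9 boils down to a watermark filter: a full copy reloads everything, while an incremental copy loads only rows changed since the last run. A minimal Python sketch under that assumption (the rows and the "modified" watermark column are made up for illustration):

```python
from datetime import datetime

# Illustrative source rows; "modified" acts as the watermark column
rows = [
    {"id": 1, "modified": datetime(2024, 1, 10)},
    {"id": 2, "modified": datetime(2024, 3, 5)},
    {"id": 3, "modified": datetime(2024, 6, 20)},
]

def full_copy(rows):
    """Full copy: load every row on every run."""
    return rows

def incremental_copy(rows, last_run):
    """Incremental copy: load only rows changed since the previous run."""
    return [r for r in rows if r["modified"] > last_run]

print(len(full_copy(rows)))                               # 3
print(len(incremental_copy(rows, datetime(2024, 2, 1))))  # 2
```

Incremental copy is cheaper on subsequent runs but requires a reliable change-tracking column in the source; when none exists, Full copy is the safe default.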

Use Dataflow for advanced transformation

Another way to load data is by creating a Dataflow Gen2. This approach allows you to perform complex data transformations (ETL) before loading the data into its destination.

Configure Dataflow activity

  1. Go to the Microsoft Fabric Portal.

  2. Inside your workspace, click New item and select Dataflow Gen2:

    Create Dataflow Gen2 in Microsoft Fabric
  3. In the Power Query editor, click Get data from another source:

    Get data from ODBC source in Dataflow
  4. Search for ODBC in the search bar and select the ODBC connector:

    Choose ODBC data source in Dataflow
  5. Then in the next step follow these instructions:

    • Enter your ODBC connection string (e.g., DSN=CsvDSN)
    • Expand Advanced options
    • Enter your SQL statement
    • Select your On-premises data gateway
    • Finally, click Next:
    DSN=CsvDSN
    SELECT * FROM Orders
    Configure ODBC data source in Dataflow
  6. You will see a preview of your CSV data. You can now transform the data if needed (filter rows, rename columns, change types, etc.):

    Odbc.Query("DSN=CsvDSN", "SELECT * FROM Orders")
    Source data preview in Dataflow
  7. Now, let's send this data to the Lakehouse. Click the + button (Add data destination) at the bottom right and select Lakehouse:

    Add destination in Dataflow
  8. Configure the destination connection settings and click Next:

    Configure destination in Dataflow
  9. Select your specific Lakehouse, enter the Table name you want to create, and click Next:

    Configure destination target in Dataflow
  10. Uncheck Use automatic settings to set data update or schema options manually. Map the columns with proper data types and click Save settings when done:

    Set column mappings in destination in Dataflow
  11. The destination is now set. Click the Publish button to save the Dataflow:

    Ready to publish Dataflow
  12. Done! You can now start building reports using your new semantic model.

Configure and run Pipeline

Once you have created and published your Dataflow, you can use a Pipeline to orchestrate and run it.

  1. Go to the Microsoft Fabric Portal.

  2. Inside your workspace, click New item and select Data Pipeline to create a new pipeline.

    Create Pipeline in Microsoft Fabric
  3. In the pipeline editor, select the Dataflow activity from the toolbar to add it to your canvas:

    Add Dataflow activity to pipeline
  4. Select the Dataflow activity on the canvas and click the Settings tab. Choose your Workspace and the Dataflow you created in the previous steps:

    Configure Dataflow activity settings in pipeline
  5. You are now ready to link the Dataflow with other Pipeline activities.

  6. Once the Pipeline flow is configured, click the Run button at the top, then click Save and run to execute the pipeline:

    Save and run pipeline
  7. Monitor the Output tab below. The Pipeline status will initially show as In progress:

    Pipeline run in progress status
  8. Wait for the process to complete. The status will update to Succeeded, indicating your data has been successfully loaded via the Dataflow:

    Pipeline run succeeded status
  9. Done! You can now start building reports on your new semantic model.

Optional: Centralized data access via ZappySys Data Gateway

In some situations, you may need to provide CSV data access to multiple users or services. Configuring the data source on a Data Gateway creates a single, centralized connection point for this purpose.

This configuration provides two primary advantages:

  • Centralized data access
    The data source is configured once on the gateway, eliminating the need to set it up individually on each user's machine or application. This significantly simplifies the management process.
  • Centralized access control
    Since all connections route through the gateway, access can be governed or revoked from a single location for all users.
                           Data Gateway         Local ODBC data source
  Simple configuration     No                   Yes
  Installation             Single machine       Per machine
  Connectivity             Local and remote     Local only
  Connections limit        Limited by License   Unlimited
  Central data access      Yes                  No
  Central access control   Yes                  No
  More flexible cost       No                   Yes

To achieve this, you must first create a data source in the Data Gateway (server-side) and then create an ODBC data source in Microsoft Fabric (client-side) to connect to it.

Let's not wait and get going!

Create CSV data source in the gateway

In this section we will create a data source for CSV in the Data Gateway. Let's follow these steps to accomplish that:

  1. Search for gateway in the Windows Start Menu and open ZappySys Data Gateway Configuration:

    Open ZappySys Data Gateway Service Manager
  2. Go to the Users tab and follow these steps to add a Data Gateway user:

    • Click the Add button
    • In the Login field enter a username, e.g., john
    • Then enter a Password
    • Check the Is Administrator checkbox
    • Click OK to save
    Data Gateway - Add User
  3. Now we are ready to add a data source:

    • Click the Add button
    • Give the Data source a name (have it handy for later)
    • Then select Native - ZappySys CSV Driver
    • Finally, click OK
    CsvDSN
    ZappySys CSV Driver
    Data Gateway - Add data source
  4. When the ZappySys CSV Driver configuration window opens, go back to the ODBC Data Source Administrator, where you already created and configured the CSV ODBC data source, and follow these steps to import the data source configuration into the Gateway:

    • Open ODBC data source configuration and click Copy settings:
      Copy connection string for ODBC application
    • A dialog confirms that the connection string was copied to the clipboard: Successful connection string copying for ODBC application
    • Then go to Data Gateway configuration and in data source configuration window click Load settings:

      Load configuration in ZappySys Data Gateway data source
    • When the window opens, paste the settings by pressing CTRL+V or by right-clicking and selecting Paste.
  5. Once done, go to the Network Settings tab and Add a firewall rule for inbound traffic:

    Data Gateway - Add firewall rule for inbound connections
    • This will initially allow all inbound traffic.
    • Click Edit IP filters to restrict access to specific IP addresses or ranges.
  6. Crucial Step: After creating or modifying the data source, you must:

    • Click the Save button to persist your changes.
    • Hit Yes when prompted to restart the Data Gateway service.

    This ensures all changes are properly applied:

    ZappySys Data Gateway - Save Changes
    Skipping this step may cause the new settings to fail, preventing you from connecting to the data source.

Create ODBC data source to connect to the gateway

In this part we will create an ODBC data source to connect to the ZappySys Data Gateway from Microsoft Fabric. To achieve that, let's perform these steps:

  1. Search for odbc and open the ODBC Data Sources (64-bit):

    Open ODBC Data Source
  2. Create a User data source (User DSN) based on the ODBC Driver 17 for SQL Server driver:

    ODBC Driver 17 for SQL Server
    Create new User DSN for ODBC Driver 17 for SQL Server
    If you don't see the ODBC Driver 17 for SQL Server driver in the list, choose a similar version.
  3. Then set a Name for the data source (e.g. Gateway) and the address of the Data Gateway:

    ZappySysGatewayDSN
    localhost,5000
    ODBC driver for SQL Server - Setting hostname and port
    Make sure you separate the hostname and port with a comma, e.g. localhost,5000.
  4. Proceed with the authentication part:

    • Select SQL Server authentication
    • In the Login ID field enter the user name you created in the Data Gateway, e.g., john
    • Set Password to the one you configured in the Data Gateway
    ODBC driver for SQL Server - Selecting SQL Authentication
  5. Then set the default database property to CsvDSN (the one we used in the Data Gateway):

    CsvDSN
    ODBC driver for SQL Server - Selecting database
    Make sure to type the data source name manually or copy/paste it directly into the field. Using the dropdown might fail because the Trust server certificate option is not enabled yet (next step).
  6. Continue by checking the Trust server certificate option:

    ODBC driver for SQL Server - Trusting certificate
  7. Once you do that, test the connection:

    ODBC driver for SQL Server - Testing connection
  8. If the connection is successful, everything is good:

    ODBC driver for SQL Server - Testing connection succeeded
  9. Done!
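The DSN settings above can also be expressed as a single SQL Server ODBC connection string, which some client tools accept directly. A Python sketch that assembles one; the values (localhost,5000, john, and the placeholder password) come from this walkthrough, so adjust them for your environment:

```python
def gateway_connection_string(host, port, database, user, password):
    """Build an ODBC connection string for the ZappySys Data Gateway,
    which speaks the SQL Server (TDS) protocol. Note the comma between
    hostname and port -- the SQL Server convention, not a colon."""
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={host},{port};"
        f"DATABASE={database};"
        f"UID={user};PWD={password};"
        "TrustServerCertificate=yes;"
    )

# "secret" is a placeholder password for illustration
conn_str = gateway_connection_string("localhost", 5000, "CsvDSN", "john", "secret")
print(conn_str)
```

TrustServerCertificate=yes corresponds to the Trust server certificate checkbox in step 6 and is required unless you install a trusted certificate on the gateway.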

We are ready to move to the final step. Let's do it!

Access data in Microsoft Fabric via the gateway

Finally, we are ready to read data from CSV in Microsoft Fabric via the Data Gateway. Follow these final steps:

  1. Go back to Microsoft Fabric.

  2. Then, go to your Copy job or Dataflow and start configuring your ODBC data source (like you did in the previous step).

  3. In the ODBC configuration window, configure these fields:

    • Enter your ODBC connection string (DSN format), for example: DSN=ZappySysGatewayDSN
    • Expand Advanced options and set the SQL statement
    • Select MyGateway from the Data gateway dropdown that you configured in the previous step
    • Select Basic from the Authentication kind dropdown
    • Enter the Username (e.g., john) and Password that you configured in ZappySys Data Gateway
    DSN=ZappySysGatewayDSN
    SELECT * FROM Orders
    Configure access to ZappySys Data Gateway data source in Microsoft Fabric
  4. Read the data the same way we discussed at the beginning of this article.

  5. That's it!

Now you can connect to CSV data in Microsoft Fabric via the Data Gateway.

Configuring pagination in the CSV Driver

ZappySys CSV Driver equips users with powerful tools for seamless data extraction and management from REST APIs, leveraging advanced pagination methods for enhanced efficiency. These options are designed to handle various types of pagination structures commonly used in APIs. Below are the detailed descriptions of these options:

  1. Page-based Pagination: This method works by retrieving data in fixed-size pages from the REST API. It allows you to specify the page size and navigate through the results by requesting different page numbers, ensuring that you can access all the data in a structured manner.

  2. Offset-based Pagination: With this approach, you can extract data by specifying the starting point or offset from which to begin retrieving data. It allows you to define the number of records to skip and fetch subsequent data accordingly, providing precise control over the data extraction process.

  3. Cursor-based Pagination: This technique involves using a cursor or a marker that points to a specific position in the dataset. It enables you to retrieve data starting from the position indicated by the cursor and proceed to subsequent segments, ensuring that you capture all the relevant information without missing any records.

  4. Token-based Pagination: In this method, a token serves as a unique identifier for a specific data segment. It allows you to access the next set of data by using the token provided in the response from the previous request. This ensures that you can systematically retrieve all the data segments without duplication or omission.

Utilizing these comprehensive pagination features in the ZappySys CSV Driver facilitates efficient data management and extraction from REST APIs, optimizing the integration and analysis of extensive datasets.
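The offset-based strategy, for example, can be sketched in a few lines of Python. The fetch_page function below is a stand-in for an API request and is purely illustrative, not part of the driver:

```python
def fetch_page(data, offset, limit):
    """Stand-in for an API request that returns `limit` records
    starting at `offset` (an empty list means no more data)."""
    return data[offset:offset + limit]

def read_all(data, page_size=2):
    """Drive offset-based pagination until the source is exhausted."""
    records, offset = [], 0
    while True:
        page = fetch_page(data, offset, page_size)
        if not page:
            break
        records.extend(page)
        offset += page_size
    return records

print(read_all([10, 20, 30, 40, 50]))  # [10, 20, 30, 40, 50]
```

Page-based pagination is the same loop with a page number instead of a record offset; cursor- and token-based variants replace the counter with an opaque marker returned by each response.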

For more detailed steps, please refer to this link: How to do REST API Pagination in SSIS / ODBC Drivers

Authentication

ZappySys offers various authentication methods to securely access data from various sources. These authentication methods include OAuth, Basic Authentication, Token-based Authentication, and more, allowing users to connect to a wide range of data sources securely.

ZappySys Authentication is a robust system that facilitates secure access to data from a diverse range of sources. It includes a variety of authentication methods tailored to meet the specific requirements of different data platforms and services. These authentication methods may involve:

  1. OAuth: ZappySys supports OAuth for authentication, which allows users to grant limited access to their data without revealing their credentials. It's commonly used for applications that require access to user account information.

  2. Basic Authentication: This method involves sending a username and password with every request. ZappySys allows users to securely access data using this traditional authentication approach.

  3. Token-based Authentication: ZappySys enables users to utilize tokens for authentication. This method involves exchanging a unique token with each request to authenticate the user's identity without revealing sensitive information.

By implementing these authentication methods, ZappySys ensures the secure and reliable retrieval of data from various sources, providing users with the necessary tools to access and integrate data securely and efficiently. For more comprehensive details on the authentication process, please refer to the official ZappySys documentation or reach out to their support team for further assistance.
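As an illustration of how Basic Authentication works on the wire, this is how the standard Authorization header is formed from a username and password (pure standard-library Python, not ZappySys-specific code; the credentials are examples):

```python
import base64

def basic_auth_header(username, password):
    """Encode credentials per RFC 7617: base64("user:pass"),
    sent as the Authorization header with every request."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("john", "secret"))
# {'Authorization': 'Basic am9objpzZWNyZXQ='}
```

Because base64 is an encoding, not encryption, Basic Authentication is only safe over HTTPS, which is why TLS settings matter in the gateway configuration above.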

For more details, please refer to this link: ZappySys Connections

Conclusion

In this article we showed you how to connect to CSV in Microsoft Fabric and integrate data without writing complex code — all of this was powered by CSV ODBC Driver.

Download ODBC PowerPack now or ping us via chat if you have any questions or are looking for a specific feature (you can also reach out to us by submitting a ticket):
