Example: You can download sample CSV files from the following URLs:

http://zappysys.com/downloads/files/test/cust-1.csv
http://zappysys.com/downloads/files/test/cust-2.csv
http://zappysys.com/downloads/files/test/cust-3.csv
https://zappysys.com/downloads/files/test/invoices.csv.zip
https://zappysys.com/downloads/files/test/invoices.csv.gz
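If you want a quick look at the sample data outside of the component, the minimal Python sketch below (standard library only) downloads the first file and prints its header and a few rows. It assumes the file is plain UTF-8 text with a header row; the script is only for inspecting the samples and is not part of the product.

```python
import csv
import io
import urllib.request

# Download one of the sample files listed above and decode it as text.
url = "http://zappysys.com/downloads/files/test/cust-1.csv"
with urllib.request.urlopen(url) as response:
    text = response.read().decode("utf-8", errors="replace")

# Parse with the standard csv module; we assume the first row is a header.
reader = csv.reader(io.StringIO(text))
header = next(reader)
print("Columns:", header)

# Show the first three data rows as a quick preview.
for i, row in enumerate(reader):
    if i >= 3:
        break
    print(row)
```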
Property Name | Description
---|---
LoggingMode | Determines how much information is logged during package execution. Set the logging mode to Debugging for the most detailed log.
PrefixTimestamp | When you enable this property, a timestamp is prefixed to each log message.
TreatBlankNumberAsNull | Treat an empty string as NULL for any numeric data type.
TreatBlankBoolAsNull | Treat an empty string as NULL for boolean data types.
TreatBlankDateAsNull | Treat an empty string as NULL for any date/time data type.
Encoding | Encoding of the source file.
CharacterSet | Character set for text (e.g. windows-1250).
Culture | Culture code (e.g. pt-BR). This helps to parse culture-specific number formats (e.g. some cultures use a comma rather than a decimal point, so 0.1 is written as 0,1).
MaxRows | Maximum number of records to fetch. Set this value to 0 for all records.
EnableCustomReplace | Enables custom search/replace in the document text after it is read from the file/URL or direct string. The replace operation happens before the text is parsed. This option can be useful when a custom escape sequence in the source document causes a parser issue; you can replace such unwanted characters before the parser starts parsing the text.
SearchFor | String you would like to search for (only valid when the EnableCustomReplace option is turned on). If you want to enable Regular Expression pattern search, add --regex or --regex-ic (for case-insensitive search) at the end of your search string (e.g. ORDER-\d+--regex, or ORDER-\d+--regex-ic for a case-insensitive search).
ReplaceWith | String you would like to replace with (only valid when the EnableCustomReplace option is turned on). If you added --regex or --regex-ic at the end of your SearchFor string, then ReplaceWith can use special placeholders (i.e. $1, $2, ...) based on regular expression groups. For example, if you set SearchFor=(\w+)(@\w+.com) to search for email addresses, then to mask them you can set ReplaceWith=****$2 (where $2 is the domain part and $1 is the part before @).
ColumnDelimiter | Column delimiter for the data you would like to parse. To use a custom delimiter, type it directly or enter a 4-digit hex string starting with \x (e.g. enter \x0009 for the Tab character). For multiple characters, repeat the group (e.g. \x00090009 if you need two tabs).
HasColumnHeaderRow | Indicates whether the first row of the file contains column headers rather than data.
ThrowErrorOnColumnCountMismatch | Throw an error if a record has a different number of columns than the column count detected from the first row.
ThrowErrorOnNoRecordFound | Throw an error if no record is found or the file is blank.
AllowComment | Allow comment lines, which are skipped by the parser. When a comment line is found, the row is skipped. See CommentCharacter to configure the first character of a commented line.
CommentCharacter | First character that marks a line as a comment (see AllowComment to enable comment lines). When a comment line is found, the row is skipped.
SkipRows | Total number of data rows you would like to skip (after the header row).
SkipHeaderCommentRows | Total number of rows you would like to skip before the header row. If the file has no header, the initial N rows are skipped (before any data row).
TreatBlankAsNull | When this option is enabled, a blank value for any column is treated as NULL, so for string types you will see NULL in the output rather than an empty value.
IgnoreBlankLines | When this option is enabled, blank lines are skipped.
SkipEmptyRecords | When this option is enabled, any row with empty values in all fields is skipped (e.g. ,,,,).
TrimHeaders | Trim column names if whitespace is found before or after the name.
TrimFields | Trim each field value if whitespace is found before or after it.
IgnoreQuotes | Ignore the quote character and consider it part of the actual value.
QuoteCharacter | Quote character for quoted values.
DirectPath | Source file path (e.g. c:\data\myfile.csv) or pattern to process multiple files (e.g. c:\data\*.csv).
Recursive | Include files from subfolders too.
EnableMultiPathMode | Enable this option to treat DirectPath as a list of paths/URLs (separated by a new line or a double colon :: ). This option is very useful if you have many URLs/paths with a similar data structure and you want to return the response from all URLs in one step (UNION all URLs into a single dataset). Examples: http://someurl1::http://someurl2 --OR-- c:\file1::c:\file2 --OR-- c:\file1::https://someurl
ContinueOnFileNotFoundError | By default the process stops with an error if the specified local file is not found. Set this property to true if you wish to continue rather than throw a file-not-found error.
FileCompressionType | Compression format of the source file (e.g. gzip, zip).
DateFormatString | Specifies how custom date formatted strings are parsed when reading the data.
DateParseHandling | Specifies how date formatted strings (e.g. Date(1198908717056) and 2012-03-21T05:40Z) are parsed when reading the data.
FloatParseHandling | Specifies how decimal values are parsed when reading the data. Change this setting to Decimal if you need higher precision/scale.
OnErrorOutputResponseBody | When you redirect errors to the error output, by default you get additional information in the ErrorMessage column. Check this option if you need the exact response body (useful if it is in JSON/XML format and needs to be parsed for additional information in a later step).
OutputFilePath | Set this option to true if you want to output the file path. This option is ignored when you consume DirectValue or data from a URL rather than local files. The output column name will be __FilePath.
OutputFileName | Set this option to true if you want to output the file name. This option is ignored when you consume DirectValue or data from a URL rather than local files. The output column name will be __FileName.
EnablePivot | When this property is true, columns are converted to rows. Pivoted names appear under the Pivot_Name column and values appear under the Pivot_Value column.
IncludePivotPath | When this property is true, one extra column, Pivot_Path, appears in the output along with Pivot_Name and Pivot_Value. This option is useful to see the parent hierarchy of a pivoted value.
EnablePivotPathSearchReplace | Enables a custom search/replace function on Pivot_Path before the final value appears in the output. This option is only valid when IncludePivotPath=true.
PivotPathSearchFor | Search string (static string or regex pattern) for the search/replace operation on Pivot_Path. You can use the --regex suffix to treat the search string as a regular expression (e.g. MyData-(\d+)--regex ). To invoke a case-insensitive regex search, use --regex-ic. This option is only valid when EnablePivotPathSearchReplace=true.
PivotPathReplaceWith | Replacement string for the search/replace operation on Pivot_Path. If you used the --regex suffix in PivotPathSearchFor, then you can use placeholders like $0, $1, $2, ... anywhere in this string (e.g. to remove the first part of an email address and keep just the domain, set PivotPathSearchFor=(\w+)@(\w+.com)--regex and set PivotPathReplaceWith=***@$2 ). This option is only valid when EnablePivotPathSearchReplace=true.
MetaDataScanMode | Metadata scan mode controls how data types and lengths are determined. By default only a few records are scanned to determine the data type/length. Changing the scan mode affects length/data type accuracy.
MetaDataCustomLength | Length for all string columns. This option is only valid for MetaDataScanMode=Custom.
MetaDataTreatStringAsAscii | When this option is true, all string values are detected as DT_STR (ASCII) rather than DT_WSTR (Unicode).
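To make the group-based replacement described for SearchFor / ReplaceWith (and PivotPathSearchFor / PivotPathReplaceWith) more concrete, here is a small Python sketch of the same idea. It is only an illustration, not the component's implementation: the table above uses $1/$2 placeholders, while Python's re module writes the equivalent backreferences as \g<1>/\g<2>, and the email pattern below escapes the dot for strictness.

```python
import re

# Mask the local part of an email address and keep the domain, mirroring the
# documented example SearchFor=(\w+)(@\w+.com)--regex, ReplaceWith=****$2.
# Group 1 is the part before @, group 2 is the domain part.
text = "Contact bob@example.com or alice@example.com for details."
masked = re.sub(r"(\w+)(@\w+\.com)", r"****\g<2>", text)
print(masked)  # Contact ****@example.com or ****@example.com for details.
```

Running the sketch prints the masked string shown in the comment, which matches the masking behavior described for ReplaceWith above.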