dbatools

Import-DbaCsv

Author: Chrissy LeMaire (@cl), netnerds.net
Availability: Windows, Linux, macOS

Synopsis

Imports CSV files into SQL Server tables using high-performance bulk copy operations.

Description

Import-DbaCsv uses .NET’s SqlBulkCopy class to efficiently load CSV data into SQL Server tables, handling files of any size from small datasets to multi-gigabyte imports. The function wraps the entire operation in a transaction, so any failure or interruption rolls back all changes automatically.

When the target table doesn’t exist, you can use -AutoCreateTable to create it on the fly with basic nvarchar(max) columns. For production use, create your table first with proper data types and constraints. The function intelligently maps CSV columns to table columns by name, with fallback to ordinal position when needed.
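
For example, because mapping is name-based, a pre-created table does not need to match the CSV column order. A minimal sketch (instance, database, and file names are hypothetical), where people.csv has headers FirstName,LastName but the table declares the columns in the opposite order:

PS C:\> Invoke-DbaQuery -SqlInstance sql001 -Database staging -Query "CREATE TABLE dbo.People (LastName NVARCHAR(100), FirstName NVARCHAR(100))"
PS C:\> Import-DbaCsv -Path C:\temp\people.csv -SqlInstance sql001 -Database staging -Table People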

Supports various CSV formats including custom delimiters, quoted fields, gzip compression (.csv.gz files), and multi-line values within quoted fields. Column mapping lets you import specific columns or rename them during import, while schema detection can automatically place data in the correct schema based on filename patterns.

Perfect for ETL processes, data migrations, or loading reference data where you need reliable, fast imports with proper error handling and transaction safety.

Syntax

Import-DbaCsv
    [[-Path] <Object[]>]
    [-SqlInstance] <DbaInstanceParameter[]>
    [[-SqlCredential] <PSCredential>]
    [-Database] <String>
    [[-Table] <String>]
    [[-Schema] <String>]
    [-Truncate]
    [[-Delimiter] <String>]
    [-SingleColumn]
    [[-BatchSize] <Int32>]
    [[-NotifyAfter] <Int32>]
    [-TableLock]
    [-CheckConstraints]
    [-FireTriggers]
    [-KeepIdentity]
    [-KeepNulls]
    [[-Column] <String[]>]
    [[-ColumnMap] <Hashtable>]
    [-KeepOrdinalOrder]
    [-AutoCreateTable]
    [-NoProgress]
    [-NoHeaderRow]
    [-UseFileNameForSchema]
    [[-Quote] <Char>]
    [[-Escape] <Char>]
    [[-Comment] <Char>]
    [[-TrimmingOption] <String>]
    [[-BufferSize] <Int32>]
    [[-ParseErrorAction] <String>]
    [[-Encoding] <String>]
    [[-NullValue] <String>]
    [[-MaxQuotedFieldLength] <Int32>]
    [-SkipEmptyLine]
    [-SupportsMultiline]
    [-UseColumnDefault]
    [-NoTransaction]
    [[-MaxDecompressedSize] <Int64>]
    [[-SkipRows] <Int32>]
    [[-QuoteMode] <String>]
    [[-DuplicateHeaderBehavior] <String>]
    [[-MismatchedFieldAction] <String>]
    [-DistinguishEmptyFromNull]
    [-NormalizeQuotes]
    [-CollectParseErrors]
    [[-MaxParseErrors] <Int32>]
    [[-StaticColumns] <Hashtable>]
    [[-DateTimeFormats] <String[]>]
    [[-Culture] <String>]
    [-Parallel]
    [[-ThrottleLimit] <Int32>]
    [[-ParallelBatchSize] <Int32>]
    [-EnableException]
    [-WhatIf]
    [-Confirm]
    [<CommonParameters>]


Examples


Example: 1
PS C:\> Import-DbaCsv -Path C:\temp\housing.csv -SqlInstance sql001 -Database markets

Imports the entire comma-delimited housing.csv to the “markets” database on a SQL Server named sql001, using the first row as column names.
Since a table name was not specified, the table name is automatically derived from the filename as “housing”.

Example: 2
PS C:\> Import-DbaCsv -Path .\housing.csv -SqlInstance sql001 -Database markets -Table housing -Delimiter "`t" -NoHeaderRow

Imports the entire tab-delimited housing.csv, including the first row (which is not used for column names), into the housing table in the markets database on a SQL Server named sql001.

Example: 3
PS C:\> Import-DbaCsv -Path C:\temp\huge.txt -SqlInstance sqlcluster -Database locations -Table latitudes -Delimiter "|"

Imports the entire pipe-delimited huge.txt to the locations database, into the latitudes table on a SQL Server named sqlcluster.

Example: 4
PS C:\> Import-DbaCsv -Path c:\temp\SingleColumn.csv -SqlInstance sql001 -Database markets -Table TempTable -SingleColumn

Imports the single-column CSV into TempTable.

Example: 5
PS C:\> Get-ChildItem -Path \\FileServer\csvs | Import-DbaCsv -SqlInstance sql001, sql002 -Database tempdb -AutoCreateTable

Imports every CSV in the \\FileServer\csvs path into the tempdb database on both sql001 and sql002. Each CSV will be imported into an automatically determined table name.

Example: 6
PS C:\> Get-ChildItem -Path \\FileServer\csvs | Import-DbaCsv -SqlInstance sql001, sql002 -Database tempdb -AutoCreateTable -WhatIf

Shows what would happen if the command were executed, without actually performing the imports.

Example: 7
PS C:\> Import-DbaCsv -Path c:\temp\dataset.csv -SqlInstance sql2016 -Database tempdb -Column Name, Address, Mobile

Imports only the Name, Address, and Mobile columns, even if other columns exist in the CSV. All other table columns are ignored and therefore left NULL or populated with their default values.

Example: 8
PS C:\> Import-DbaCsv -Path C:\temp\schema.data.csv -SqlInstance sql2016 -Database tempdb -UseFileNameForSchema

Will import the contents of C:\temp\schema.data.csv to table ‘data’ in schema ‘schema’.

Example: 9
PS C:\> Import-DbaCsv -Path C:\temp\schema.data.csv -SqlInstance sql2016 -Database tempdb -UseFileNameForSchema -Table testtable

Will import the contents of C:\temp\schema.data.csv to table ‘testtable’ in schema ‘schema’.

Example: 10
PS C:\> $columns = @{
>> Text = 'FirstName'
>> Number = 'PhoneNumber'
>> }
PS C:\> Import-DbaCsv -Path c:\temp\supersmall.csv -SqlInstance sql2016 -Database tempdb -ColumnMap $columns

The CSV field ‘Text’ is inserted into the SQL column ‘FirstName’ and the CSV field ‘Number’ is inserted into the SQL column ‘PhoneNumber’. All other columns are ignored and therefore left NULL or populated with their default values.

Example: 11
PS C:\> $columns = @{
>> 0 = 'FirstName'
>> 1 = 'PhoneNumber'
>> }
PS C:\> Import-DbaCsv -Path c:\temp\supersmall.csv -SqlInstance sql2016 -Database tempdb -NoHeaderRow -ColumnMap $columns

If the CSV has no header row, ColumnMap still works when the hashtable keys are the 0-based ordinal positions of the columns.
In this example the first CSV field is inserted into the SQL column ‘FirstName’ and the second CSV field is inserted into the SQL column ‘PhoneNumber’.

Example: 12
PS C:\> Import-DbaCsv -Path C:\temp\data.csv -SqlInstance sql001 -Database tempdb -Table MyTable -Delimiter "::" -AutoCreateTable

Imports a CSV file that uses a multi-character delimiter (::). The new CSV reader supports delimiters of any length,
not just single characters.

Example: 13
PS C:\> Import-DbaCsv -Path C:\temp\data.csv.gz -SqlInstance sql001 -Database tempdb -Table MyTable -AutoCreateTable

Imports a gzip-compressed CSV file. Compression is automatically detected from the .gz extension and the file
is decompressed on-the-fly during import without extracting to disk.

Example: 14
PS C:\> Import-DbaCsv -Path C:\temp\export.csv -SqlInstance sql001 -Database tempdb -Table MyTable -SkipRows 3 -AutoCreateTable

Skips the first 3 rows of the file before reading headers and data. Useful for files that have metadata rows,
comments, or report headers before the actual CSV content begins.

Example: 15
PS C:\> Import-DbaCsv -Path C:\temp\data.csv -SqlInstance sql001 -Database tempdb -Table MyTable -DuplicateHeaderBehavior Rename -AutoCreateTable

Handles CSV files with duplicate column headers by automatically renaming them (e.g., Name, Name_2, Name_3).
Without this, duplicate headers would cause an error.

Example: 16
PS C:\> Import-DbaCsv -Path C:\temp\messy.csv -SqlInstance sql001 -Database tempdb -Table MyTable -MismatchedFieldAction PadWithNulls -AutoCreateTable

Imports a CSV where some rows have fewer fields than the header row. Missing fields are padded with NULL values
instead of throwing an error. Useful for importing data from systems that omit trailing empty fields.

Example: 17
PS C:\> Import-DbaCsv -Path C:\temp\data.csv -SqlInstance sql001 -Database tempdb -Table MyTable -QuoteMode Lenient -AutoCreateTable

Uses lenient quote parsing for CSV files with improperly escaped quotes. This is useful when importing data
from systems that don’t follow RFC 4180 strictly, such as files with embedded quotes that aren’t doubled.

Example: 18
PS C:\> # Import CSV with proper type conversion to a pre-created table
PS C:\> $query = "CREATE TABLE TypedData (id INT, amount DECIMAL(10,2), active BIT, created DATETIME)"
PS C:\> Invoke-DbaQuery -SqlInstance sql001 -Database tempdb -Query $query
PS C:\> Import-DbaCsv -Path C:\temp\typed.csv -SqlInstance sql001 -Database tempdb -Table TypedData

Imports CSV data into a table with specific column types. The CSV reader automatically converts string values
to the appropriate SQL Server types (INT, DECIMAL, BIT, DATETIME, etc.) during import.

Example: 19
PS C:\> Import-DbaCsv -Path C:\temp\large.csv -SqlInstance sql001 -Database tempdb -Table BigData -AutoCreateTable -Parallel

Imports a large CSV file using parallel processing for improved performance. The -Parallel switch enables
concurrent line reading, parsing, and type conversion, which can provide 2-4x speedup on multi-core systems
for files with 100K+ rows.

Example: 20
PS C:\> Import-DbaCsv -Path C:\temp\huge.csv -SqlInstance sql001 -Database tempdb -Table HugeData -AutoCreateTable -Parallel -ThrottleLimit 4

Imports a large CSV with parallel processing limited to 4 worker threads. Use -ThrottleLimit to control
resource usage on shared systems or when you want to limit CPU consumption during the import.

Example: 21
PS C:\> Import-DbaCsv -Path C:\temp\massive.csv -SqlInstance sql001 -Database tempdb -Table MassiveData -AutoCreateTable -Parallel -ParallelBatchSize 500

Imports a very large CSV with parallel processing using larger batch sizes. Increase -ParallelBatchSize
for very large files (millions of rows) to reduce synchronization overhead. The default is 100.

Example: 22
PS C:\> Import-DbaCsv -Path C:\temp\refresh.csv -SqlInstance sql001 -Database tempdb -Table LookupData -Truncate

Performs a full data refresh by truncating the existing table before importing. The truncate and import
operations are wrapped in a transaction, so if the import fails, the original data is preserved.

Example: 23
PS C:\> $static = @{ SourceFile = "sales_2024.csv"; ImportDate = (Get-Date); Region = "EMEA" }
PS C:\> Import-DbaCsv -Path C:\temp\sales.csv -SqlInstance sql001 -Database sales -Table SalesData -StaticColumns $static -AutoCreateTable

Imports CSV data and adds three static columns (SourceFile, ImportDate, Region) to every row.
This is useful for tracking data lineage and tagging imported records with metadata.

Example: 24
PS C:\> Import-DbaCsv -Path C:\temp\oracle_export.csv -SqlInstance sql001 -Database tempdb -Table OracleData -DateTimeFormats @("dd-MMM-yyyy", "dd-MMM-yyyy HH:mm:ss") -AutoCreateTable

Imports a CSV with Oracle-style date formats (e.g., “15-Jan-2024”). The -DateTimeFormats parameter
specifies custom format strings to parse non-standard date columns correctly.

Example: 25
PS C:\> Import-DbaCsv -Path C:\temp\german_data.csv -SqlInstance sql001 -Database tempdb -Table GermanData -Culture "de-DE" -AutoCreateTable

Imports a CSV with German number formatting where comma is the decimal separator (e.g., “1.234,56”).
The -Culture parameter ensures numbers are parsed correctly according to the specified locale.

Required Parameters

-SqlInstance

The SQL Server instance to import data into.

Alias:
Required: True
Pipeline: false
Default Value:
-Database

Specifies the target database for the CSV import. The database must exist on the SQL Server instance.
Use this to direct your data load to the appropriate database, whether it’s a staging, ETL, or production database.

Alias:
Required: True
Pipeline: false
Default Value:

Optional Parameters

-Path

Specifies the file path to CSV files for import. Supports single files, multiple files, or pipeline input from Get-ChildItem.
Accepts .csv files and compressed .csv.gz files for large datasets with automatic decompression.

Alias: Csv, FullPath
Required: False
Pipeline: true (ByValue)
Default Value:
-SqlCredential

Login to the target instance using alternative credentials. Accepts PowerShell credentials (Get-Credential).
Windows Authentication, SQL Server Authentication, Active Directory - Password, and Active Directory - Integrated are all supported.
For MFA support, please use Connect-DbaInstance.
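
A minimal sketch (instance, database, and path are hypothetical) using SQL Server Authentication:

PS C:\> $cred = Get-Credential sqladmin
PS C:\> Import-DbaCsv -Path C:\temp\data.csv -SqlInstance sql001 -Database tempdb -Table MyTable -SqlCredential $cred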

Alias:
Required: False
Pipeline: false
Default Value:
-Table

Specifies the destination table name. If omitted, uses the CSV filename as the table name.
The table will be created automatically with -AutoCreateTable using nvarchar(max) columns, but for production use, create the table first with proper data types and constraints.

Alias:
Required: False
Pipeline: false
Default Value:
-Schema

Specifies the target schema for the table. Defaults to ‘dbo’ if not specified.
If the schema doesn’t exist, it will be created automatically when using -AutoCreateTable. This parameter takes precedence over -UseFileNameForSchema.

Alias:
Required: False
Pipeline: false
Default Value:
-Truncate

Removes all existing data from the destination table before importing. The truncate operation is part of the transaction.
Use this for full data refreshes where you want to replace all existing data with the CSV contents.

Alias:
Required: False
Pipeline: false
Default Value: False
-Delimiter

Sets the field separator used in the CSV file. Defaults to comma if not specified.
Common values include comma (,), tab (`t), pipe (|), semicolon (;), or space for different export formats from various systems.
Multi-character delimiters are fully supported (e.g., “::”, “||”, “\t\t”).

Alias: DelimiterChar
Required: False
Pipeline: false
Default Value: ,
-SingleColumn

Indicates the CSV contains only one column of data without delimiters. Use this for simple lists or single-value imports.
Prevents the function from failing when no delimiter is found in the file content.

Alias:
Required: False
Pipeline: false
Default Value: False
-BatchSize

Controls how many rows are sent to SQL Server in each batch during the bulk copy operation. Defaults to 50,000.
Larger batches are generally more efficient but use more memory, while smaller batches provide better granular control and error isolation.
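
For example (the value is illustrative, not a recommendation), a larger batch for a multi-gigabyte file:

PS C:\> Import-DbaCsv -Path C:\temp\big.csv -SqlInstance sql001 -Database tempdb -Table BigData -BatchSize 100000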

Alias:
Required: False
Pipeline: false
Default Value: 50000
-NotifyAfter

Sets how often progress notifications are displayed during the import, measured in rows. Defaults to 50,000.
Lower values provide more frequent updates but may slow the import slightly, while higher values reduce overhead for very large files.

Alias:
Required: False
Pipeline: false
Default Value: 50000
-TableLock

Acquires an exclusive table lock for the duration of the import instead of using row-level locks.
Improves performance for large imports by reducing lock overhead, but blocks other operations on the table during the import.
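
For instance, during a maintenance window where blocking other sessions is acceptable (names are hypothetical):

PS C:\> Import-DbaCsv -Path C:\temp\large.csv -SqlInstance sql001 -Database tempdb -Table StagingData -TableLock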

Alias:
Required: False
Pipeline: false
Default Value: False
-CheckConstraints

Enforces check constraints, foreign keys, and other table constraints during the import. By default, constraints are not checked for performance.
Enable this when data integrity validation is critical, but expect slower import performance.

Alias:
Required: False
Pipeline: false
Default Value: False
-FireTriggers

Executes INSERT triggers on the destination table during the bulk copy operation. By default, triggers are not fired for performance.
Use this when your triggers perform essential business logic like auditing, logging, or cascading updates that must run during import.

Alias:
Required: False
Pipeline: false
Default Value: False
-KeepIdentity

Preserves identity column values from the CSV instead of generating new ones. By default, the destination assigns new identity values.
Use this when migrating data and you need to maintain existing primary key values or referential integrity.
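
A migration-style sketch (assumes the destination table has an identity column and the CSV supplies its values; names are hypothetical):

PS C:\> Import-DbaCsv -Path C:\temp\customers.csv -SqlInstance sql001 -Database crm -Table Customers -KeepIdentity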

Alias:
Required: False
Pipeline: false
Default Value: False
-KeepNulls

Preserves NULL values from the CSV instead of replacing them with column default values.
Use this when your data intentionally contains NULLs that should be maintained, rather than having them replaced by table defaults.

Alias:
Required: False
Pipeline: false
Default Value: False
-Column

Imports only the specified columns from the CSV file, ignoring all others. Column names must match exactly.
Use this to selectively load data when you only need certain fields, reducing import time and storage requirements.

Alias:
Required: False
Pipeline: false
Default Value:
-ColumnMap

Maps CSV columns to different table column names using a hashtable. Keys are CSV column names, values are table column names.
Use this when your CSV headers don’t match your table structure or when importing from systems with different naming conventions.

Alias:
Required: False
Pipeline: false
Default Value:
-KeepOrdinalOrder

Maps columns by position rather than by name matching. The first CSV column goes to the first table column, second to second, etc.
Use this when column names don’t match but the order is correct, or when dealing with files that have inconsistent header naming.
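
For example, when headers differ from the table column names but the positions line up (names are hypothetical):

PS C:\> Import-DbaCsv -Path C:\temp\export.csv -SqlInstance sql001 -Database tempdb -Table MyTable -KeepOrdinalOrder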

Alias:
Required: False
Pipeline: false
Default Value: False
-AutoCreateTable

Creates the destination table automatically if it doesn’t exist, using nvarchar(max) for all columns.
Convenient for quick imports or testing, but for production use, create tables manually with appropriate data types, indexes, and constraints.

Alias:
Required: False
Pipeline: false
Default Value: False
-NoProgress

Disables the progress bar display during import to improve performance, especially for very large files.
Use this in automated scripts or when maximum import speed is more important than visual progress feedback.

Alias:
Required: False
Pipeline: false
Default Value: False
-NoHeaderRow

Treats the first row as data instead of column headers. Use this when your CSV file starts directly with data rows.
When enabled, columns are mapped by ordinal position and you’ll need to ensure your target table column order matches the CSV.

Alias:
Required: False
Pipeline: false
Default Value: False
-UseFileNameForSchema

Extracts the schema name from the filename using the first period as a delimiter. For example, ‘sales.customers.csv’ imports to the ‘sales’ schema.
If no period is found, defaults to ‘dbo’. The schema will be created if it doesn’t exist. This parameter is ignored if -Schema is explicitly specified.

Alias:
Required: False
Pipeline: false
Default Value: False
-Quote

Specifies the character used to quote fields containing delimiters or special characters. Defaults to double-quote (").
Change this when your CSV uses different quoting conventions, such as single quotes from certain export tools.

Alias:
Required: False
Pipeline: false
Default Value: "
-Escape

Specifies the character used to escape quote characters within quoted fields. Defaults to double-quote (").
Modify this when dealing with CSV files that use different escaping conventions, such as backslash escaping.
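
A sketch for a single-quoted, backslash-escaped export, a format some non-RFC tools produce (names are hypothetical):

PS C:\> Import-DbaCsv -Path C:\temp\export.csv -SqlInstance sql001 -Database tempdb -Table MyTable -Quote "'" -Escape '\'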

Alias:
Required: False
Pipeline: false
Default Value: "
-Comment

Specifies the character that marks comment lines to be ignored during import. Defaults to the hash character (#).
Use this when your CSV files contain comment lines with metadata or instructions that should be skipped.

Alias:
Required: False
Pipeline: false
Default Value: #
-TrimmingOption

Controls automatic whitespace removal from field values. Options are All, None, UnquotedOnly, or QuotedOnly.
Use ‘All’ to clean up data with inconsistent spacing, or ‘None’ to preserve exact formatting when whitespace is significant.
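
For example, to strip padding whitespace from a loosely formatted export (names are hypothetical):

PS C:\> Import-DbaCsv -Path C:\temp\padded.csv -SqlInstance sql001 -Database tempdb -Table MyTable -TrimmingOption All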

Alias:
Required: False
Pipeline: false
Default Value: None
Accepted Values: All, None, UnquotedOnly, QuotedOnly
-BufferSize

Sets the internal buffer size in bytes for reading the CSV file. Defaults to 4096 bytes.
Increase this value for better performance with very large files, but it will use more memory during the import process.

Alias:
Required: False
Pipeline: false
Default Value: 4096
-ParseErrorAction

Determines how to handle malformed rows during import. ‘ThrowException’ stops the import, ‘AdvanceToNextLine’ skips bad rows.
Use ‘AdvanceToNextLine’ for importing data with known quality issues where you want to load as much valid data as possible.
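
For instance, to load whatever parses cleanly and skip the rest (names are hypothetical):

PS C:\> Import-DbaCsv -Path C:\temp\dirty.csv -SqlInstance sql001 -Database tempdb -Table MyTable -ParseErrorAction AdvanceToNextLine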

Alias:
Required: False
Pipeline: false
Default Value: ThrowException
Accepted Values: AdvanceToNextLine, ThrowException
-Encoding

Specifies the text encoding of the CSV file. Defaults to UTF-8.
Change this when dealing with files from legacy systems that use different encodings like ASCII or when dealing with international character sets.

Alias:
Required: False
Pipeline: false
Default Value: UTF8
Accepted Values: ASCII, BigEndianUnicode, Byte, String, Unicode, UTF7, UTF8, Unknown
-NullValue

Specifies which text value in the CSV should be treated as SQL NULL. Common values include ‘NULL’, ‘null’, or empty strings.
Use this when your source system exports NULL values as specific text strings that need to be converted to database NULLs.
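
For example, when the source system writes the literal string NULL for missing values (names are hypothetical):

PS C:\> Import-DbaCsv -Path C:\temp\extract.csv -SqlInstance sql001 -Database tempdb -Table MyTable -NullValue "NULL"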

Alias:
Required: False
Pipeline: false
Default Value:
-MaxQuotedFieldLength

Sets the maximum allowed length in bytes for quoted fields to prevent memory issues with malformed data.
Increase this when working with legitimate large text fields, or decrease it to catch data quality issues early.

Alias:
Required: False
Pipeline: false
Default Value: 0
-SkipEmptyLine

Ignores completely empty lines in the CSV file during import.
Use this when your source files contain blank lines for formatting that should not create empty rows in your table.

Alias:
Required: False
Pipeline: false
Default Value: False
-SupportsMultiline

Allows field values to span multiple lines when properly quoted, such as addresses or comments with embedded line breaks.
Enable this when your CSV contains multi-line text data that should be preserved as single field values.
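
For instance, for a file with quoted address fields that contain embedded line breaks (names are hypothetical):

PS C:\> Import-DbaCsv -Path C:\temp\addresses.csv -SqlInstance sql001 -Database tempdb -Table Addresses -SupportsMultiline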

Alias:
Required: False
Pipeline: false
Default Value: False
-UseColumnDefault

Applies table column default values when CSV fields are missing or empty.
Use this when your CSV doesn’t include all table columns and you want defaults applied rather than NULLs or import failures.

Alias:
Required: False
Pipeline: false
Default Value: False
-NoTransaction

Disables the automatic transaction wrapper, allowing partial imports to remain committed even if the operation fails.
Use this for very large imports where you want to commit data in batches, but be aware that failed imports may leave partial data.
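
A sketch for a very large load where batch-by-batch commits are preferred over rollback safety (names and values are hypothetical):

PS C:\> Import-DbaCsv -Path C:\temp\massive.csv -SqlInstance sql001 -Database tempdb -Table MassiveData -NoTransaction -BatchSize 100000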

Alias:
Required: False
Pipeline: false
Default Value: False
-MaxDecompressedSize

Maximum size in bytes for decompressed data when reading compressed CSV files (.gz, .br, .deflate, .zlib).
This protects against decompression bomb attacks. Default is 10GB (10737418240 bytes).
Set to 0 for unlimited (not recommended for untrusted files).
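
For example, to cap decompression of an untrusted .gz upload at 1GB (the limit and names are illustrative):

PS C:\> Import-DbaCsv -Path C:\temp\upload.csv.gz -SqlInstance sql001 -Database tempdb -Table Uploads -AutoCreateTable -MaxDecompressedSize 1073741824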

Alias:
Required: False
Pipeline: false
Default Value: 10737418240
-SkipRows

Number of rows to skip at the beginning of the file before reading headers or data.
Useful for files with metadata rows before the actual CSV content.

Alias:
Required: False
Pipeline: false
Default Value: 0
-QuoteMode

Controls how quoted fields are parsed.

  • Strict: RFC 4180 compliant parsing (default)
  • Lenient: More forgiving parsing for malformed CSV files with embedded quotes

Alias:
Required: False
Pipeline: false
Default Value: Strict
Accepted Values: Strict, Lenient
-DuplicateHeaderBehavior

Controls how duplicate column headers are handled.

  • ThrowException: Throw an error (default)
  • Rename: Rename duplicates (Name_2, Name_3, etc.)
  • UseFirstOccurrence: Keep first, ignore duplicates
  • UseLastOccurrence: Keep last, rename earlier occurrences

Alias:
Required: False
Pipeline: false
Default Value: ThrowException
Accepted Values: ThrowException, Rename, UseFirstOccurrence, UseLastOccurrence
-MismatchedFieldAction

Controls what happens when a row has more or fewer fields than expected.

  • ThrowException: Throw an error (default)
  • PadWithNulls: Pad missing fields with null
  • TruncateExtra: Remove extra fields
  • PadOrTruncate: Both pad and truncate as needed

Alias:
Required: False
Pipeline: false
Default Value: ThrowException
Accepted Values: ThrowException, PadWithNulls, TruncateExtra, PadOrTruncate
-DistinguishEmptyFromNull

When specified, treats empty quoted fields ("") as empty strings and
unquoted empty fields (,,) as null values.

Alias:
Required: False
Pipeline: false
Default Value: False
-NormalizeQuotes

When specified, converts smart/curly quotes (‘ ’ “ ”) to standard ASCII quotes before parsing.
Useful when importing data exported from Microsoft Word or Excel.

Alias:
Required: False
Pipeline: false
Default Value: False
-CollectParseErrors

When specified, collects parse errors instead of throwing immediately.
Use with -MaxParseErrors to limit the number of errors collected.
Errors can be retrieved from the reader after import completes.
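
A sketch combining both parameters (names are hypothetical):

PS C:\> Import-DbaCsv -Path C:\temp\dirty.csv -SqlInstance sql001 -Database tempdb -Table MyTable -CollectParseErrors -MaxParseErrors 100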

Alias:
Required: False
Pipeline: false
Default Value: False
-MaxParseErrors

Maximum number of parse errors to collect before stopping.
Only applies when -CollectParseErrors is specified. Default is 1000.

Alias:
Required: False
Pipeline: false
Default Value: 1000
-StaticColumns

A hashtable of static column names and values to add to every row.
Useful for tagging imported data with metadata like source filename or import timestamp.
Keys are column names, values are the static values to insert.
Example: @{ SourceFile = "data.csv"; ImportDate = (Get-Date) }

Alias:
Required: False
Pipeline: false
Default Value:
-DateTimeFormats

An array of custom date/time format strings for parsing date columns.
Useful when importing data with non-standard date formats (e.g., Oracle’s dd-MMM-yyyy).
Example: @("dd-MMM-yyyy", "yyyy/MM/dd", "MM-dd-yyyy")

Alias:
Required: False
Pipeline: false
Default Value:
-Culture

The culture name to use for parsing numbers and dates (e.g., “de-DE”, “fr-FR”, “en-US”).
Useful when importing CSV files with locale-specific number formats (e.g., comma as decimal separator).
Default is InvariantCulture.

Alias:
Required: False
Pipeline: false
Default Value:
-Parallel

Enables parallel processing for improved performance on large files.
When enabled, line reading, parsing, and type conversion are performed in parallel
using a producer-consumer pipeline. This can provide 2-4x performance improvement
on multi-core systems.
Note: Parallel processing is most beneficial for large files (>100K rows) with
complex type conversions. For small files, sequential processing may be faster
due to lower overhead.
When Parallel is used, the progress bar is disabled because the progress callback
cannot run in background threads.

Alias:
Required: False
Pipeline: false
Default Value: False
-ThrottleLimit

Sets the maximum number of worker threads for parallel processing.
Default is 0, which uses the number of logical processors on the system.
Set to 1 to effectively disable parallelism while still using the pipeline architecture.
Only used when Parallel is specified.

Alias:
Required: False
Pipeline: false
Default Value: 0
-ParallelBatchSize

Sets the number of records to batch before yielding to the consumer during parallel processing.
Larger batches reduce synchronization overhead but increase memory usage and latency.
Default is 100. Minimum is 1.
Only used when Parallel is specified.

Alias:
Required: False
Pipeline: false
Default Value: 100
-EnableException

By default, when something goes wrong we try to catch it, interpret it and give you a friendly warning message.
This avoids overwhelming you with “sea of red” exceptions, but is inconvenient because it basically disables advanced scripting.
Using this switch turns this “nice by default” feature off and enables you to catch exceptions with your own try/catch.
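
For example, to handle failures yourself in a script (instance, database, and path are hypothetical):

PS C:\> try {
>>     Import-DbaCsv -Path C:\temp\data.csv -SqlInstance sql001 -Database tempdb -Table MyTable -EnableException
>> } catch {
>>     Write-Warning "Import failed: $_"
>> }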

Alias:
Required: False
Pipeline: false
Default Value: False
-WhatIf

Shows what would happen if the command were to run. No actions are actually performed.

Alias: wi
Required: False
Pipeline: false
Default Value:
-Confirm

Prompts you for confirmation before executing any changing operations within the command.

Alias: cf
Required: False
Pipeline: false
Default Value: