Import-DbaCsv
Synopsis
Imports CSV files into SQL Server tables using high-performance bulk copy operations.
Description
Import-DbaCsv uses .NET’s SqlBulkCopy class to efficiently load CSV data into SQL Server tables, handling files of any size from small datasets to multi-gigabyte imports. The function wraps the entire operation in a transaction, so any failure or interruption rolls back all changes automatically.
When the target table doesn’t exist, you can use -AutoCreateTable to create it on the fly with basic nvarchar(max) columns. For production use, create your table first with proper data types and constraints. The function intelligently maps CSV columns to table columns by name, with fallback to ordinal position when needed.
Supports various CSV formats including custom delimiters, quoted fields, gzip compression (.csv.gz files), and multi-line values within quoted fields. Column mapping lets you import specific columns or rename them during import, while schema detection can automatically place data in the correct schema based on filename patterns.
Perfect for ETL processes, data migrations, or loading reference data where you need reliable, fast imports with proper error handling and transaction safety.
Syntax
Import-DbaCsv
[[-Path] <Object[]>]
[-SqlInstance] <DbaInstanceParameter[]>
[[-SqlCredential] <PSCredential>]
[-Database] <String>
[[-Table] <String>]
[[-Schema] <String>]
[-Truncate]
[[-Delimiter] <String>]
[-SingleColumn]
[[-BatchSize] <Int32>]
[[-NotifyAfter] <Int32>]
[-TableLock]
[-CheckConstraints]
[-FireTriggers]
[-KeepIdentity]
[-KeepNulls]
[[-Column] <String[]>]
[[-ColumnMap] <Hashtable>]
[-KeepOrdinalOrder]
[-AutoCreateTable]
[-NoProgress]
[-NoHeaderRow]
[-UseFileNameForSchema]
[[-Quote] <Char>]
[[-Escape] <Char>]
[[-Comment] <Char>]
[[-TrimmingOption] <String>]
[[-BufferSize] <Int32>]
[[-ParseErrorAction] <String>]
[[-Encoding] <String>]
[[-NullValue] <String>]
[[-MaxQuotedFieldLength] <Int32>]
[-SkipEmptyLine]
[-SupportsMultiline]
[-UseColumnDefault]
[-NoTransaction]
[[-MaxDecompressedSize] <Int64>]
[[-SkipRows] <Int32>]
[[-QuoteMode] <String>]
[[-DuplicateHeaderBehavior] <String>]
[[-MismatchedFieldAction] <String>]
[-DistinguishEmptyFromNull]
[-NormalizeQuotes]
[-CollectParseErrors]
[[-MaxParseErrors] <Int32>]
[[-StaticColumns] <Hashtable>]
[[-DateTimeFormats] <String[]>]
[[-Culture] <String>]
[-Parallel]
[[-ThrottleLimit] <Int32>]
[[-ParallelBatchSize] <Int32>]
[-EnableException]
[-WhatIf]
[-Confirm]
[<CommonParameters>]
Examples
Example: 1
PS C:\> Import-DbaCsv -Path C:\temp\housing.csv -SqlInstance sql001 -Database markets
Imports the entire comma-delimited housing.csv to the SQL “markets” database on a SQL Server named sql001, using the first row as column names.
Since a table name was not specified, the table name is automatically determined from the filename as "housing".
Example: 2
PS C:\> Import-DbaCsv -Path .\housing.csv -SqlInstance sql001 -Database markets -Table housing -Delimiter "`t" -NoHeaderRow
Imports the entire tab-delimited housing.csv, including the first row, which is not used for column names, to the markets database, into the housing table, on a SQL Server named sql001.
Example: 3
PS C:\> Import-DbaCsv -Path C:\temp\huge.txt -SqlInstance sqlcluster -Database locations -Table latitudes -Delimiter "|"
Imports the entire pipe-delimited huge.txt to the locations database, into the latitudes table on a SQL Server named sqlcluster.
Example: 4
PS C:\> Import-DbaCsv -Path c:\temp\SingleColumn.csv -SqlInstance sql001 -Database markets -Table TempTable -SingleColumn
Imports the single-column CSV into TempTable.
Example: 5
PS C:\> Get-ChildItem -Path \\FileServer\csvs | Import-DbaCsv -SqlInstance sql001, sql002 -Database tempdb -AutoCreateTable
Imports every CSV in the \\FileServer\csvs path into the tempdb database on both sql001 and sql002. Each CSV will be imported into an automatically determined table name.
Example: 6
PS C:\> Get-ChildItem -Path \\FileServer\csvs | Import-DbaCsv -SqlInstance sql001, sql002 -Database tempdb -AutoCreateTable -WhatIf
Shows what would happen if the command were executed, without actually performing the import.
Example: 7
PS C:\> Import-DbaCsv -Path c:\temp\dataset.csv -SqlInstance sql2016 -Database tempdb -Column Name, Address, Mobile
Imports only the Name, Address, and Mobile columns, even if other columns exist in the CSV. All other table columns are ignored and therefore receive NULL or default values.
Example: 8
PS C:\> Import-DbaCsv -Path C:\temp\schema.data.csv -SqlInstance sql2016 -database tempdb -UseFileNameForSchema
Will import the contents of C:\temp\schema.data.csv to table 'data' in schema 'schema'.
Example: 9
PS C:\> Import-DbaCsv -Path C:\temp\schema.data.csv -SqlInstance sql2016 -database tempdb -UseFileNameForSchema -Table testtable
Will import the contents of C:\temp\schema.data.csv to table 'testtable' in schema 'schema'.
Example: 10
PS C:\> $columns = @{
>> Text = 'FirstName'
>> Number = 'PhoneNumber'
>> }
PS C:\> Import-DbaCsv -Path c:\temp\supersmall.csv -SqlInstance sql2016 -Database tempdb -ColumnMap $columns
The CSV field 'Text' is inserted into the SQL column 'FirstName', and the CSV field 'Number' is inserted into the SQL column 'PhoneNumber'. All other columns are ignored and therefore receive NULL or default values.
Example: 11
PS C:\> $columns = @{
>> 0 = 'FirstName'
>> 1 = 'PhoneNumber'
>> }
PS C:\> Import-DbaCsv -Path c:\temp\supersmall.csv -SqlInstance sql2016 -Database tempdb -NoHeaderRow -ColumnMap $columns
If the CSV has no headers, ColumnMap still works when the hashtable keys are the 0-based ordinals of the CSV columns.
In this example the first CSV field is inserted into the SQL column 'FirstName' and the second CSV field is inserted into the SQL column 'PhoneNumber'.
Example: 12
PS C:\> Import-DbaCsv -Path C:\temp\data.csv -SqlInstance sql001 -Database tempdb -Table MyTable -Delimiter "::" -AutoCreateTable
Imports a CSV file that uses a multi-character delimiter (::). The new CSV reader supports delimiters of any length,
not just single characters.
Example: 13
PS C:\> Import-DbaCsv -Path C:\temp\data.csv.gz -SqlInstance sql001 -Database tempdb -Table MyTable -AutoCreateTable
Imports a gzip-compressed CSV file. Compression is automatically detected from the .gz extension and the file
is decompressed on-the-fly during import without extracting to disk.
Example: 14
PS C:\> Import-DbaCsv -Path C:\temp\export.csv -SqlInstance sql001 -Database tempdb -Table MyTable -SkipRows 3 -AutoCreateTable
Skips the first 3 rows of the file before reading headers and data. Useful for files that have metadata rows,
comments, or report headers before the actual CSV content begins.
Example: 15
PS C:\> Import-DbaCsv -Path C:\temp\data.csv -SqlInstance sql001 -Database tempdb -Table MyTable -DuplicateHeaderBehavior Rename -AutoCreateTable
Handles CSV files with duplicate column headers by automatically renaming them (e.g., Name, Name_2, Name_3).
Without this, duplicate headers would cause an error.
Example: 16
PS C:\> Import-DbaCsv -Path C:\temp\messy.csv -SqlInstance sql001 -Database tempdb -Table MyTable -MismatchedFieldAction PadWithNulls -AutoCreateTable
Imports a CSV where some rows have fewer fields than the header row. Missing fields are padded with NULL values
instead of throwing an error. Useful for importing data from systems that omit trailing empty fields.
Example: 17
PS C:\> Import-DbaCsv -Path C:\temp\data.csv -SqlInstance sql001 -Database tempdb -Table MyTable -QuoteMode Lenient -AutoCreateTable
Uses lenient quote parsing for CSV files with improperly escaped quotes. This is useful when importing data
from systems that don’t follow RFC 4180 strictly, such as files with embedded quotes that aren’t doubled.
Example: 18
PS C:\> # Import CSV with proper type conversion to a pre-created table
PS C:\> $query = "CREATE TABLE TypedData (id INT, amount DECIMAL(10,2), active BIT, created DATETIME)"
PS C:\> Invoke-DbaQuery -SqlInstance sql001 -Database tempdb -Query $query
PS C:\> Import-DbaCsv -Path C:\temp\typed.csv -SqlInstance sql001 -Database tempdb -Table TypedData
Imports CSV data into a table with specific column types. The CSV reader automatically converts string values
to the appropriate SQL Server types (INT, DECIMAL, BIT, DATETIME, etc.) during import.
Example: 19
PS C:\> Import-DbaCsv -Path C:\temp\large.csv -SqlInstance sql001 -Database tempdb -Table BigData -AutoCreateTable -Parallel
Imports a large CSV file using parallel processing for improved performance. The -Parallel switch enables
concurrent line reading, parsing, and type conversion, which can provide 2-4x speedup on multi-core systems
for files with 100K+ rows.
Example: 20
PS C:\> Import-DbaCsv -Path C:\temp\huge.csv -SqlInstance sql001 -Database tempdb -Table HugeData -AutoCreateTable -Parallel -ThrottleLimit 4
Imports a large CSV with parallel processing limited to 4 worker threads. Use -ThrottleLimit to control
resource usage on shared systems or when you want to limit CPU consumption during the import.
Example: 21
PS C:\> Import-DbaCsv -Path C:\temp\massive.csv -SqlInstance sql001 -Database tempdb -Table MassiveData -AutoCreateTable -Parallel -ParallelBatchSize 500
Imports a very large CSV with parallel processing using larger batch sizes. Increase -ParallelBatchSize
for very large files (millions of rows) to reduce synchronization overhead. The default is 100.
Example: 22
PS C:\> Import-DbaCsv -Path C:\temp\refresh.csv -SqlInstance sql001 -Database tempdb -Table LookupData -Truncate
Performs a full data refresh by truncating the existing table before importing. The truncate and import
operations are wrapped in a transaction, so if the import fails, the original data is preserved.
Example: 23
PS C:\> $static = @{ SourceFile = "sales_2024.csv"; ImportDate = (Get-Date); Region = "EMEA" }
PS C:\> Import-DbaCsv -Path C:\temp\sales.csv -SqlInstance sql001 -Database sales -Table SalesData -StaticColumns $static -AutoCreateTable
Imports CSV data and adds three static columns (SourceFile, ImportDate, Region) to every row.
This is useful for tracking data lineage and tagging imported records with metadata.
Example: 24
PS C:\> Import-DbaCsv -Path C:\temp\oracle_export.csv -SqlInstance sql001 -Database tempdb -Table OracleData -DateTimeFormats @("dd-MMM-yyyy", "dd-MMM-yyyy HH:mm:ss") -AutoCreateTable
Imports a CSV with Oracle-style date formats (e.g., “15-Jan-2024”). The -DateTimeFormats parameter
specifies custom format strings to parse non-standard date columns correctly.
Example: 25
PS C:\> Import-DbaCsv -Path C:\temp\german_data.csv -SqlInstance sql001 -Database tempdb -Table GermanData -Culture "de-DE" -AutoCreateTable
Imports a CSV with German number formatting where comma is the decimal separator (e.g., “1.234,56”).
The -Culture parameter ensures numbers are parsed correctly according to the specified locale.
Required Parameters
-SqlInstance
The SQL Server Instance to import data into.
| Property | Value |
|---|---|
| Alias | |
| Required | True |
| Pipeline | false |
| Default Value | |
-Database
Specifies the target database for the CSV import. The database must exist on the SQL Server instance.
Use this to direct your data load to the appropriate database, whether it’s a staging, ETL, or production database.
| Property | Value |
|---|---|
| Alias | |
| Required | True |
| Pipeline | false |
| Default Value | |
Optional Parameters
-Path
Specifies the file path to CSV files for import. Supports single files, multiple files, or pipeline input from Get-ChildItem.
Accepts .csv files and compressed .csv.gz files for large datasets with automatic decompression.
| Property | Value |
|---|---|
| Alias | Csv,FullPath |
| Required | False |
| Pipeline | true (ByValue) |
| Default Value | |
-SqlCredential
Login to the target instance using alternative credentials. Accepts PowerShell credentials (Get-Credential).
Windows Authentication, SQL Server Authentication, Active Directory - Password, and Active Directory - Integrated are all supported.
For MFA support, please use Connect-DbaInstance.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | |
-Table
Specifies the destination table name. If omitted, uses the CSV filename as the table name.
The table will be created automatically with -AutoCreateTable using nvarchar(max) columns, but for production use, create the table first with proper data types and constraints.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | |
-Schema
Specifies the target schema for the table. Defaults to ‘dbo’ if not specified.
If the schema doesn’t exist, it will be created automatically when using -AutoCreateTable. This parameter takes precedence over -UseFileNameForSchema.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | |
-Truncate
Removes all existing data from the destination table before importing. The truncate operation is part of the transaction.
Use this for full data refreshes where you want to replace all existing data with the CSV contents.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-Delimiter
Sets the field separator used in the CSV file. Defaults to comma if not specified.
Common values include comma (,), tab (`t), pipe (|), semicolon (;), or space for different export formats from various systems.
Multi-character delimiters are fully supported (e.g., “::”, “||”, “\t\t”).
| Property | Value |
|---|---|
| Alias | DelimiterChar |
| Required | False |
| Pipeline | false |
| Default Value | , |
-SingleColumn
Indicates the CSV contains only one column of data without delimiters. Use this for simple lists or single-value imports.
Prevents the function from failing when no delimiter is found in the file content.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-BatchSize
Controls how many rows are sent to SQL Server in each batch during the bulk copy operation. Defaults to 50,000.
Larger batches are generally more efficient but use more memory, while smaller batches provide better granular control and error isolation.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | 50000 |
-NotifyAfter
Sets how often progress notifications are displayed during the import, measured in rows. Defaults to 50,000.
Lower values provide more frequent updates but may slow the import slightly, while higher values reduce overhead for very large files.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | 50000 |
-TableLock
Acquires an exclusive table lock for the duration of the import instead of using row-level locks.
Improves performance for large imports by reducing lock overhead, but blocks other operations on the table during the import.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
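For example, a large load into a staging table that nothing else is using might combine the lock with a bigger batch size (illustrative file, instance, and table names):
PS C:\> Import-DbaCsv -Path C:\temp\bulk.csv -SqlInstance sql001 -Database staging -Table RawData -TableLock -BatchSize 100000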
-CheckConstraints
Enforces check constraints, foreign keys, and other table constraints during the import. By default, constraints are not checked for performance.
Enable this when data integrity validation is critical, but expect slower import performance.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-FireTriggers
Executes INSERT triggers on the destination table during the bulk copy operation. By default, triggers are not fired for performance.
Use this when your triggers perform essential business logic like auditing, logging, or cascading updates that must run during import.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-KeepIdentity
Preserves identity column values from the CSV instead of generating new ones. By default, the destination assigns new identity values.
Use this when migrating data and you need to maintain existing primary key values or referential integrity.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
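For example, a migration that must preserve the identity values exported in the CSV (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\customers.csv -SqlInstance sql001 -Database crm -Table Customers -KeepIdentity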
-KeepNulls
Preserves NULL values from the CSV instead of replacing them with column default values.
Use this when your data intentionally contains NULLs that should be maintained, rather than having them replaced by table defaults.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-Column
Imports only the specified columns from the CSV file, ignoring all others. Column names must match exactly.
Use this to selectively load data when you only need certain fields, reducing import time and storage requirements.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | |
-ColumnMap
Maps CSV columns to different table column names using a hashtable. Keys are CSV column names, values are table column names.
Use this when your CSV headers don’t match your table structure or when importing from systems with different naming conventions.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | |
-KeepOrdinalOrder
Maps columns by position rather than by name matching. The first CSV column goes to the first table column, second to second, etc.
Use this when column names don’t match but the order is correct, or when dealing with files that have inconsistent header naming.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
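For example, when the CSV headers don't match the table's column names but the column order lines up (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\legacy.csv -SqlInstance sql001 -Database tempdb -Table Staging -KeepOrdinalOrder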
-AutoCreateTable
Creates the destination table automatically if it doesn’t exist, using nvarchar(max) for all columns.
Convenient for quick imports or testing, but for production use, create tables manually with appropriate data types, indexes, and constraints.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-NoProgress
Disables the progress bar display during import to improve performance, especially for very large files.
Use this in automated scripts or when maximum import speed is more important than visual progress feedback.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-NoHeaderRow
Treats the first row as data instead of column headers. Use this when your CSV file starts directly with data rows.
When enabled, columns are mapped by ordinal position and you’ll need to ensure your target table column order matches the CSV.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-UseFileNameForSchema
Extracts the schema name from the filename using the first period as a delimiter. For example, ‘sales.customers.csv’ imports to the ‘sales’ schema.
If no period is found, defaults to ‘dbo’. The schema will be created if it doesn’t exist. This parameter is ignored if -Schema is explicitly specified.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-Quote
Specifies the character used to quote fields containing delimiters or special characters. Defaults to double-quote (").
Change this when your CSV uses different quoting conventions, such as single quotes from certain export tools.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | " |
-Escape
Specifies the character used to escape quote characters within quoted fields. Defaults to double-quote (").
Modify this when dealing with CSV files that use different escaping conventions, such as backslash escaping.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | " |
-Comment
Specifies the character that marks comment lines to be ignored during import. Defaults to hashtag (#).
Use this when your CSV files contain comment lines with metadata or instructions that should be skipped.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | # |
-TrimmingOption
Controls automatic whitespace removal from field values. Options are All, None, UnquotedOnly, or QuotedOnly.
Use ‘All’ to clean up data with inconsistent spacing, or ‘None’ to preserve exact formatting when whitespace is significant.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | None |
| Accepted Values | All,None,UnquotedOnly,QuotedOnly |
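For example, to strip stray padding from unquoted fields while preserving whitespace inside quoted values (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\padded.csv -SqlInstance sql001 -Database tempdb -Table MyTable -TrimmingOption UnquotedOnly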
-BufferSize
Sets the internal buffer size in bytes for reading the CSV file. Defaults to 4096 bytes.
Increase this value for better performance with very large files, but it will use more memory during the import process.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | 4096 |
-ParseErrorAction
Determines how to handle malformed rows during import. ‘ThrowException’ stops the import, ‘AdvanceToNextLine’ skips bad rows.
Use ‘AdvanceToNextLine’ for importing data with known quality issues where you want to load as much valid data as possible.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | ThrowException |
| Accepted Values | AdvanceToNextLine,ThrowException |
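For example, to load as many valid rows as possible from a file with known bad lines (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\dirty.csv -SqlInstance sql001 -Database tempdb -Table MyTable -ParseErrorAction AdvanceToNextLine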
-Encoding
Specifies the text encoding of the CSV file. Defaults to UTF-8.
Change this when dealing with files from legacy systems that use different encodings like ASCII or when dealing with international character sets.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | UTF8 |
| Accepted Values | ASCII,BigEndianUnicode,Byte,String,Unicode,UTF7,UTF8,Unknown |
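For example, to import a UTF-16 export from a legacy Windows system (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\legacy_export.csv -SqlInstance sql001 -Database tempdb -Table MyTable -Encoding Unicode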
-NullValue
Specifies which text value in the CSV should be treated as SQL NULL. Common values include ‘NULL’, ’null’, or empty strings.
Use this when your source system exports NULL values as specific text strings that need to be converted to database NULLs.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | |
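For example, to convert the literal string NULL that many export tools emit into database NULLs (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\export.csv -SqlInstance sql001 -Database tempdb -Table MyTable -NullValue "NULL"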
-MaxQuotedFieldLength
Sets the maximum allowed length in bytes for quoted fields to prevent memory issues with malformed data.
Increase this when working with legitimate large text fields, or decrease it to catch data quality issues early.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | 0 |
-SkipEmptyLine
Ignores completely empty lines in the CSV file during import.
Use this when your source files contain blank lines for formatting that should not create empty rows in your table.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-SupportsMultiline
Allows field values to span multiple lines when properly quoted, such as addresses or comments with embedded line breaks.
Enable this when your CSV contains multi-line text data that should be preserved as single field values.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
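For example, to import a file whose quoted Comments field contains embedded line breaks (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\feedback.csv -SqlInstance sql001 -Database tempdb -Table Feedback -SupportsMultiline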
-UseColumnDefault
Applies table column default values when CSV fields are missing or empty.
Use this when your CSV doesn’t include all table columns and you want defaults applied rather than NULLs or import failures.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
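For example, to let the table's DEFAULT constraints fill in columns the CSV leaves empty (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\partial.csv -SqlInstance sql001 -Database tempdb -Table MyTable -UseColumnDefault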
-NoTransaction
Disables the automatic transaction wrapper, allowing partial imports to remain committed even if the operation fails.
Use this for very large imports where you want to commit data in batches, but be aware that failed imports may leave partial data.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
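For example, to let each batch commit as it loads during a very large import (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\huge.csv -SqlInstance sql001 -Database tempdb -Table HugeData -NoTransaction -BatchSize 100000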
-MaxDecompressedSize
Maximum size in bytes for decompressed data when reading compressed CSV files (.gz, .br, .deflate, .zlib).
This protects against decompression bomb attacks. Default is 10GB (10737418240 bytes).
Set to 0 for unlimited (not recommended for untrusted files).
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | 10737418240 |
-SkipRows
Number of rows to skip at the beginning of the file before reading headers or data.
Useful for files with metadata rows before the actual CSV content.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | 0 |
-QuoteMode
Controls how quoted fields are parsed.
- Strict: RFC 4180 compliant parsing (default)
- Lenient: More forgiving parsing for malformed CSV files with embedded quotes
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | Strict |
| Accepted Values | Strict,Lenient |
-DuplicateHeaderBehavior
Controls how duplicate column headers are handled.
- ThrowException: Throw an error (default)
- Rename: Rename duplicates (Name_2, Name_3, etc.)
- UseFirstOccurrence: Keep first, ignore duplicates
- UseLastOccurrence: Keep last, rename earlier occurrences
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | ThrowException |
| Accepted Values | ThrowException,Rename,UseFirstOccurrence,UseLastOccurrence |
-MismatchedFieldAction
Controls what happens when a row has more or fewer fields than expected.
- ThrowException: Throw an error (default)
- PadWithNulls: Pad missing fields with null
- TruncateExtra: Remove extra fields
- PadOrTruncate: Both pad and truncate as needed
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | ThrowException |
| Accepted Values | ThrowException,PadWithNulls,TruncateExtra,PadOrTruncate |
-DistinguishEmptyFromNull
When specified, treats empty quoted fields ("") as empty strings and
unquoted empty fields (,,) as null values.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
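For example, given a row like value1,"",,value4, the quoted second field becomes an empty string while the unquoted third field becomes NULL (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\data.csv -SqlInstance sql001 -Database tempdb -Table MyTable -DistinguishEmptyFromNull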
-NormalizeQuotes
When specified, converts smart/curly quotes (‘ ’ “ ”) to standard ASCII quotes before parsing.
Useful when importing data exported from Microsoft Word or Excel.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
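For example, to import a file saved from Word or Excel that uses curly quotes around fields (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\word_export.csv -SqlInstance sql001 -Database tempdb -Table MyTable -NormalizeQuotes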
-CollectParseErrors
When specified, collects parse errors instead of throwing immediately.
Use with -MaxParseErrors to limit the number of errors collected.
Errors can be retrieved from the reader after import completes.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-MaxParseErrors
Maximum number of parse errors to collect before stopping.
Only applies when -CollectParseErrors is specified. Default is 1000.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | 1000 |
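For example, to collect up to 50 parse errors rather than failing on the first bad row (illustrative names):
PS C:\> Import-DbaCsv -Path C:\temp\dirty.csv -SqlInstance sql001 -Database tempdb -Table MyTable -CollectParseErrors -MaxParseErrors 50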
-StaticColumns
A hashtable of static column names and values to add to every row.
Useful for tagging imported data with metadata like source filename or import timestamp.
Keys are column names, values are the static values to insert.
Example: @{ SourceFile = "data.csv"; ImportDate = (Get-Date) }
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | |
-DateTimeFormats
An array of custom date/time format strings for parsing date columns.
Useful when importing data with non-standard date formats (e.g., Oracle’s dd-MMM-yyyy).
Example: @("dd-MMM-yyyy", "yyyy/MM/dd", "MM-dd-yyyy")
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | |
-Culture
The culture name to use for parsing numbers and dates (e.g., “de-DE”, “fr-FR”, “en-US”).
Useful when importing CSV files with locale-specific number formats (e.g., comma as decimal separator).
Default is InvariantCulture.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | |
-Parallel
Enables parallel processing for improved performance on large files.
When enabled, line reading, parsing, and type conversion are performed in parallel
using a producer-consumer pipeline. This can provide 2-4x performance improvement
on multi-core systems.
Note: Parallel processing is most beneficial for large files (>100K rows) with
complex type conversions. For small files, sequential processing may be faster
due to lower overhead.
When Parallel is used, the progress bar is disabled because the progress callback
cannot run in background threads.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
-ThrottleLimit
Sets the maximum number of worker threads for parallel processing.
Default is 0, which uses the number of logical processors on the system.
Set to 1 to effectively disable parallelism while still using the pipeline architecture.
Only used when Parallel is specified.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | 0 |
-ParallelBatchSize
Sets the number of records to batch before yielding to the consumer during parallel processing.
Larger batches reduce synchronization overhead but increase memory usage and latency.
Default is 100. Minimum is 1.
Only used when Parallel is specified.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | 100 |
-EnableException
By default, when something goes wrong we try to catch it, interpret it and give you a friendly warning message.
This avoids overwhelming you with “sea of red” exceptions, but is inconvenient because it basically disables advanced scripting.
Using this switch turns this “nice by default” feature off and enables you to catch exceptions with your own try/catch.
| Property | Value |
|---|---|
| Alias | |
| Required | False |
| Pipeline | false |
| Default Value | False |
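For example, to handle failures yourself in a script (illustrative names):
PS C:\> try {
>> Import-DbaCsv -Path C:\temp\data.csv -SqlInstance sql001 -Database tempdb -Table MyTable -EnableException
>> } catch {
>> Write-Warning "Import failed: $($_.Exception.Message)"
>> }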
-WhatIf
Shows what would happen if the command were to run. No actions are actually performed.
| Property | Value |
|---|---|
| Alias | wi |
| Required | False |
| Pipeline | false |
| Default Value | |
-Confirm
Prompts you for confirmation before executing any changing operations within the command.
| Property | Value |
|---|---|
| Alias | cf |
| Required | False |
| Pipeline | false |
| Default Value | |