63
votes

I'm trying to import a correctly quoted CSV file, meaning data is only quoted if it contains a comma, e.g.:

41, Terminator, Black
42, "Monsters, Inc.", Blue

I observe that the first row imports correctly, but the second row errors in a manner that suggests the quoted comma was treated as a field separator.

I have seen suggestions such as this one

SQL Bulk import from CSV

to change the field terminator

FIELDTERMINATOR='","'

However, my CSV file only quotes fields that need it, so I do not believe that suggestion would work.

Can SQL Server's BULK INSERT statement import a correctly quoted CSV file? If so, how?

Can't be done. The SQL Server import methods (both BCP and BULK INSERT) do not understand quoting. – RBarryYoung

Here is a similar post with more options: stackoverflow.com/questions/782353/… – Murray Foxcroft

That isn't correctly quoted. As per RFC4180, quotes should either be around ALL fields or NO fields. Partially-quoted CSVs are invalid, and cannot be loaded with BULK INSERT. They can be opened in Excel, in PowerShell or even by OPENROWSET, but they can't be bulk imported directly. – Geoff Griswald

@GeoffGriswald I don't agree with your conclusion about the RFC4180 spec. Sections 2.5 and 2.6 show examples of mixed use of quotes and non-quotes, and the ABNF grammar seems to state that any given field can be quoted or non-quoted. However, it is solid advice where possible to always quote every field and use the answer from Dominix. – Eric J.

Eric, like many people you have misinterpreted the examples. 2.5 shows 2 examples, one fully quoted and one non-quoted. 2.6 shows 2 examples, one fully quoted which spans two lines (to demonstrate that one field contains a CRLF) and one non-quoted which spans one line. There is no partially-quoted CSV spec and RFC4180 does not provide for one. This is purely something that has come about through Excel and its historically poor handling of the CSV format (which was done deliberately, to promote ".XLS"). Excel 2019 finally provides options to handle CSV files correctly, but the damage is done. – Geoff Griswald

11 Answers

32
votes

Unfortunately, SQL Server interprets the quoted comma as a field delimiter. This applies to both BCP and BULK INSERT.

From http://msdn.microsoft.com/en-us/library/ms191485%28v=sql.100%29.aspx

If a terminator character occurs within the data, it is interpreted as a terminator, not as data, and the data after that character is interpreted as belonging to the next field or record. Therefore, choose your terminators carefully to make sure that they never appear in your data.
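
To see the failure concretely: a plain comma terminator splits the quoted field too. A sketch, assuming a hypothetical table dbo.Films and file path, loading the sample data from the question:

BULK INSERT dbo.Films
FROM 'C:\data\films.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Row 2 is split on every comma, including the one inside the quotes:
-- [42] ["Monsters] [ Inc."] [ Blue] -- four fields for a three-column table,
-- so the row errors out or the columns misalign.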

30
votes

I know this is an old topic, but this feature has been available since SQL Server 2017. The parameter you're looking for is FIELDQUOTE=, which defaults to '"'. See https://docs.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-2017 for more.
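
A minimal sketch, assuming a hypothetical table dbo.Films and file path (FORMAT = 'CSV' and FIELDQUOTE require SQL Server 2017 or later):

BULK INSERT dbo.Films
FROM 'C:\data\films.csv'
WITH (
    FORMAT = 'CSV',        -- parse the file as quoted CSV
    FIELDQUOTE = '"',      -- the default; shown for clarity
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);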

21
votes

There is another solution for this: treat the quotes as part of the field delimiters by editing the format (.fmt) file.

You can check this out for more information:

http://blogs.msdn.com/b/sqlserverfaq/archive/2010/02/04/how-to-remove-unwanted-quotation-marks-while-importing-a-data-file.aspx

An extract of the link above:

The only way to remove the quotation marks would be to modify the column delimiters specified during the import operation. The only drawback here is that if you inspect the data to be inserted, you will very quickly realize that the column delimiters are different for each column (Delimiters highlighted above).

So to specify different column delimiters for each column, you would need to use a format file if you plan to use Bulk Insert or BCP. If you generate a format file for the above table structure, it would be as follows:

9.0
3
1       SQLCHAR       0       5       "\t"     1     FName              SQL_Latin1_General_CP1_CI_AS
2       SQLCHAR       0       5       "\t"     2     LName              SQL_Latin1_General_CP1_CI_AS
3       SQLCHAR       0       50      "\r\n"   3     Company            SQL_Latin1_General_CP1_CI_AS

Modify the format file to represent the correct column delimiters for each column. The new format file to be used will look like this:

9.0
4
1       SQLCHAR       0       0       "\""        0     FIRST_QUOTE      SQL_Latin1_General_CP1_CI_AS
2       SQLCHAR       0       5       "\",\""     1     FNAME            SQL_Latin1_General_CP1_CI_AS
3       SQLCHAR       0       5       "\",\""     2     LNAME            SQL_Latin1_General_CP1_CI_AS
4       SQLCHAR       0       50      "\"\r\n"    3     COMPANY          SQL_Latin1_General_CP1_CI_AS
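
The modified format file is then supplied through the FORMATFILE option; a sketch with a hypothetical table and paths:

BULK INSERT dbo.User_tbl
FROM 'C:\data\users.csv'
WITH (FORMATFILE = 'C:\data\users.fmt');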

7
votes

Make sure you have enabled the TextQualified option and set the text qualifier to " (a double quote).

5
votes

I had this same problem, and I didn't want to have to go the SSIS route, so I found a PowerShell script that is easy to run and handles quoted fields that contain commas:

Source Code and DLL for the PowerShell Script: https://github.com/billgraziano/CsvDataReader

Here's a blog that explains the usage: http://www.sqlteam.com/article/fast-csv-import-in-powershell-to-sql-server

3
votes

You could also look at using OPENROWSET with the CSV text file data provider.

This should be possible with any version of SQL Server >= 2005, although you need to enable the Ad Hoc Distributed Queries feature first (see the sketch below).

http://social.msdn.microsoft.com/forums/en-US/sqldataaccess/thread/5869d247-f0a0-4224-80b3-ff2e414be402
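
A sketch of that approach; the paths are hypothetical and the ACE OLE DB provider must be installed on the server:

-- Ad hoc OPENROWSET queries are disabled by default
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;

-- The text driver honors quotes around fields that contain commas
SELECT *
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Text;Database=C:\data\;HDR=YES;FMT=Delimited',
    'SELECT * FROM films.csv');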

3
votes

I spent half a day on this problem. It's easiest to import using the SQL Server Import & Export Data wizard, which has a setting that solves this problem. Detailed screenshots here: https://www.mssqltips.com/sqlservertip/1316/strip-double-quotes-from-an-import-file-in-integration-services-ssis/

2
votes

I had the same problem, with data that only occasionally double-quotes some text. My solution is to let BULK INSERT import the double quotes, then run a REPLACE on the imported data.

For example:

bulk insert CodePoint_tbl from "F:\Data\Map\CodePointOpen\Data\CSV\ab.csv" with (FIRSTROW = 1, FIELDTERMINATOR = ',', ROWTERMINATOR='\n');

update CodePoint_tbl set Postcode = replace(Postcode,'"','') where charindex('"',Postcode) > 0

To make it less painful to write the REPLACE script, just copy and paste what you need from the results of something like this:

-- Collect the target table's column names into a temp table
-- (syscolumns/sysobjects are legacy views; sys.columns/sys.objects
-- are the modern equivalents)
select C.ColID, C.[name] as ColumnName into #Columns
from syscolumns C
join sysobjects T on C.id = T.id
where T.[name] = 'User_tbl'
order by 1;

declare @QUOTE char(1);
set @QUOTE = Char(39); -- Char(39) is a single quote

-- Emit one UPDATE statement per column (skipping the first two columns)
select 'Update User_tbl set '+ColumnName+'=replace('+ColumnName+','
 + @QUOTE + '"' + @QUOTE + ',' + @QUOTE + @QUOTE + ');
GO'
from #Columns
where ColID > 2
order by ColID;
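
For a column named FName, the SELECT above emits a statement like:

Update User_tbl set FName=replace(FName,'"','');
GO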

1
votes

I had the same problem; however, it worked for me with the following settings:

bulk insert schema.table
from '\\your\data\source.csv'
with (
datafiletype = 'char'
,format = 'CSV'
,firstrow = 2
,fieldterminator = '|'
,rowterminator = '\n'
,tablock
)

My CSV-File looks like this:

"col1"|"col2"
"val1"|"val2"
"val3"|"val4"

My problem was that I had rowterminator set to '0x0a' before, and it did not work. Once I changed it to '\n', it started working.

0
votes

Per the CSV format specification, I don't think it matters whether the data is fully quoted or not, as long as it adheres to the specification. Excess quotes should be handled by the parser, if it's properly implemented. FIELDTERMINATOR should be a comma and ROWTERMINATOR a line end; this denotes a standard CSV file. Did you try importing your data with those settings?

0
votes

Been stung by the same :)

I wrapped this logic into a function to clean up data that had already been imported; a sketch of such a wrapper follows the snippet below.

DECLARE @str NVARCHAR(MAX);
DECLARE @quote_identifier NVARCHAR(MAX);

SET @quote_identifier = N'"';

SET @str = N'"quoted stuff"';

-- Strip the outer quotes only when the string both starts and ends with one.
-- Note: DATALENGTH(N'"') is 2 (bytes, not characters); for a single NVARCHAR
-- quote character that happens to equal both the correct start position and
-- the total width of the two quotes, so this works, but LEN-based arithmetic
-- would be more explicit.
SELECT IIF(
           LEFT(@str, 1) = @quote_identifier
           AND RIGHT(@str, 1) = @quote_identifier,
           SUBSTRING(@str, DATALENGTH(@quote_identifier), LEN(@str) - DATALENGTH(@quote_identifier)),
           @str);
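
A possible wrapper, as a sketch (the name dbo.Unquote is hypothetical; LEN-based arithmetic replaces DATALENGTH so the logic also holds for other quote strings):

CREATE FUNCTION dbo.Unquote (@str NVARCHAR(MAX), @quote NVARCHAR(10))
RETURNS NVARCHAR(MAX)
AS
BEGIN
    -- Remove one leading and one trailing quote when both are present
    RETURN IIF(
        LEFT(@str, LEN(@quote)) = @quote AND RIGHT(@str, LEN(@quote)) = @quote,
        SUBSTRING(@str, LEN(@quote) + 1, LEN(@str) - 2 * LEN(@quote)),
        @str);
END;

Usage: SELECT dbo.Unquote(N'"quoted stuff"', N'"');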