Invalid field count in CSV input on line 1 (phpMyAdmin / CSV)

I have read many threads but can't find the right specific answer. I get this error message when I try to import additional data into an existing table. The field names are all aligned correctly, but not every row has data in every field. For example, although I have a field named middle_name, not every row has a middle name in it. During the import process, is this blank field not counted as a field and thus throwing off the field count?
I have managed to get most of the data to import by making sure I had a blank column to allow for the auto-increment of ID, as well as leaving the header row in the file but choosing 1 row to skip on the import.
Now the problem is that the last row won't import; I get the error message "Invalid format of CSV input on line 19". When I open the file in TextWrangler, the last row ends with ",,,,,". This accounts for the last 5 columns, which are blank. I need to know what the trick is to get the last row to import.
Here are the settings I have been using:

I’ve had similar problems (with a tab-separated file) after upgrading from an ancient version of phpMyAdmin. The following points might be helpful:
phpMyAdmin must be given the correct number of columns in every row. In older versions of phpMyAdmin you could get away with not supplying empty values for columns at the end of the row, but this is no longer the case.
If you export an Excel file to text and columns at the start or end of rows are completely empty, Excel will not export blanks for those columns. You need to put something in them, or leave them blank and then edit the resulting file in a text editor with regular expressions, e.g. to add a blank first column, search for ^ and replace with , (CSV file) or \t (tab file); to add two columns to the end, search for $ and replace with ,, or \t\t, etc. (See the sketch after these points for a scripted alternative.)
Add a blank line to the bottom of the file to avoid the error message referring to the last line of data. This seems to be a bug that has been fixed in newer versions.
While you're in the text editor, also check the file encoding, as Excel sometimes saves files as things like UTF-16 with a BOM, which phpMyAdmin doesn't like.
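If editing the file by hand is impractical, the padding can also be scripted. Here is a minimal sketch, assuming an input file named input.csv and a target table with 8 columns (both are placeholders; for a tab-separated file, pass delimiter="\t" to the reader and writer):

```python
import csv

EXPECTED_COLS = 8  # assumption: the target table has 8 columns

with open("input.csv", newline="", encoding="utf-8") as src, \
     open("padded.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        # Pad short rows with empty strings so every row has the same field count.
        row += [""] * (EXPECTED_COLS - len(row))
        writer.writerow(row)
```

Writing the output as plain UTF-8 also sidesteps the UTF-16/BOM issue mentioned in the last point.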

I saw the same error while trying to import a CSV file with 20,000 rows into a custom table in Drupal 7 using phpMyAdmin. My CSV file didn't have headers and had one column with many blanks in it.
What worked for me: I copied the data from Excel into Notepad (a Windows plain text editor) and then back into a new Excel spreadsheet and re-saved it as a CSV. I didn't have to add headers and left the blank rows blank. All went fine after I did that.
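The Notepad round trip essentially rewrites the file as plain text, which strips whatever hidden formatting or encoding Excel left behind. A hedged sketch of the same idea, re-reading and re-writing the CSV so the quoting, encoding, and line endings come out consistent (file names are placeholders):

```python
import csv

# Read the original export; utf-8-sig strips a BOM if Excel added one.
with open("export.csv", newline="", encoding="utf-8-sig") as src:
    rows = list(csv.reader(src))

# Rewrite with plain UTF-8, standard quoting, and uniform line endings.
with open("clean.csv", "w", newline="", encoding="utf-8") as dst:
    csv.writer(dst).writerows(rows)
```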

You'll never have this problem if you keep your first row as the header row, even if your table already has a header. You can delete the extra header row later.
This helps because MySQL then knows how many cells each row can possibly contain, so you won't have to fill in dummy data or edit the CSV or any of those things.

I had a similar problem, then I tried MySQL Workbench instead.
Its table data import wizard handled the CSV file easily, and I got my work done perfectly in MySQL Workbench.

As long as the CSV file you're importing has the proper number of empty columns, it should be no problem for phpMyAdmin (and that looks like it's probably okay based on the group of commas you pasted from your last line).
How was the CSV file generated? Can you open it in a spreadsheet program to verify the field count of row 19?
Is the file exceptionally large? (At 19 rows, I can't imagine it is, but if so you could be hitting PHP resource limits causing early termination of the phpMyAdmin import).
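One quick way to verify the field count of every row, including line 19, is to scan the file outside of phpMyAdmin. A small sketch (the file name and the expected column count are placeholders):

```python
import csv
from collections import Counter

EXPECTED_COLS = 8  # assumption: the table has 8 columns

counts = Counter()
with open("import.csv", newline="", encoding="utf-8") as f:
    for lineno, row in enumerate(csv.reader(f), start=1):
        counts[len(row)] += 1
        if len(row) != EXPECTED_COLS:
            # Print any row whose field count doesn't match the table.
            print(f"line {lineno}: {len(row)} fields -> {row}")

print("field-count distribution:", dict(counts))
```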

Make sure you are importing the table into the database, and not into the database > table.
I had this same issue, tried everything listed here, and finally realized I needed to go up a level to the database.

Related

SSIS - Exporting data with commas to a csv file

I am trying to export a list of fields to a csv file from a database.
It keeps putting all the data into one column and doesn't separate it. When checking the preview it seems to be okay, but on export it's not working. I am currently trying the following settings. Any help would be appreciated.
SSIS settings
Excel file output issue
Actually it seems to work; Excel is just too dumb to recognize the separator.
Select the whole table, then go to Data -> Text to Columns
and configure the wizard (Delimited, with semicolon as the separator).
Now the rows and cells are separated.
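If you would rather fix the file once than run Text to Columns every time, you can rewrite it as a comma-separated, fully quoted CSV that Excel splits automatically. A rough sketch, assuming the exported file is semicolon-delimited (file names are placeholders):

```python
import csv

with open("ssis_export.csv", newline="", encoding="utf-8") as src, \
     open("for_excel.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src, delimiter=";")
    # QUOTE_ALL wraps every field in quotes, so commas inside the data
    # no longer break the columns apart when Excel opens the file.
    writer = csv.writer(dst, delimiter=",", quoting=csv.QUOTE_ALL)
    for row in reader:
        writer.writerow(row)
```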

Access Export to CSV, trouble keeping leading zeros, 55-60 Million Records

Let me preface this by saying yes, I do need all of the records. It's part of a FOIL request. Not necessarily in one worksheet or file.
I'm having trouble figuring this out. I am currently able to export about 500k records at a time without timing out or exceeding the Access file size limit (I think this is due to working with the state system's legacy data) or worrying about Excel's row limit. I can preserve the column headers but lose the leading zeros in one field.
This is done using the export wizard to a text file. I change the destination file name ending from .txt to .csv, which gives me the option to keep headers. In the preview of the .csv file in the wizard, and when the file is opened in Notepad, the field shows its leading zeros correctly with double quotes around it, for example "00123456", but when it is opened in Excel it shows as 123456. If I change the field from General format to Text, the contents remain the same (the zeros do not come back).
I have tried the VBA method of DoCmd.TransferSpreadsheet, but when I try to run it I am prompted with a macros dialog box, and honestly I am less familiar with VBA than I am with SQL. Overall I consider myself a novice.
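The leading zeros are still in the CSV itself (that is what Notepad shows); they only disappear when Excel guesses that the column is numeric. If the goal is just to verify or process the export, any tool that treats the field as text keeps the zeros. A minimal sketch, with a hypothetical file name and column name:

```python
import csv

zero_padded = 0
with open("foil_export.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)              # the header row becomes the field names
    for row in reader:
        # csv reads every value as a string, so "00123456" keeps its zeros;
        # they are only lost when Excel converts the column to a number.
        if row["AccountNumber"].startswith("0"):   # hypothetical column name
            zero_padded += 1

print(f"{zero_padded} values still have their leading zeros")
```

If the data must end up in Excel, importing the CSV through Excel's text import (rather than double-clicking the file) and marking that column as Text also preserves the zeros.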

When I import a CSV to MySQL Workbench two extra columns appears

I have a CSV file with 9 columns, but when I import it via MySQL Workbench two extra columns appear with no values. A kind of solution is to deselect them, but I want an explanation and a permanent solution to avoid this. Why is this happening? I attach a screenshot to show exactly what I mean.
Sometimes when you type something into cells in Excel and then delete it, Excel still includes (or reads) those blank columns or rows when the file is saved.
Delete those two columns after gameid in the CSV file, and then import the CSV file into MySQL.
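If cleaning the file in Excel is awkward, the phantom columns can also be stripped with a short script before the Workbench import. A sketch, keeping only the 9 real columns mentioned in the question (file names are placeholders):

```python
import csv

REAL_COLS = 9  # the file is supposed to have 9 real columns

with open("with_extra_cols.csv", newline="", encoding="utf-8") as src, \
     open("fixed.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        # Keep only the first 9 fields; the empty columns Excel left behind are dropped.
        writer.writerow(row[:REAL_COLS])
```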

SSIS BCP & Flat File - Is there a limit to how many records can be in the file?

The file I am working with has about 207 million rows. In SSIS it keeps failing.
The column is too long in the data file for row 1, column 2. Verify that the field terminator and row terminator are specified correctly.
Now when I copy a chunk of rows, paste them into another txt file, and import that, I don't get the error.
I can get the rows into sql if I don't use Bulk Insert and use a regular data flow task.
There are two things you should check:
The column length definition for column 2. It's probably set to something like 100 and you are trying to import a row whose value in that column is longer than that.
Check whether your column delimiter can also occur inside the data. Imagine you have a file with ; as the delimiter; the flat file source will run into problems when a value itself contains a semicolon. (The sketch below scans for both problems.)
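Both checks can be done with a quick pass over the flat file before loading it; with 207 million rows you probably want to stop after the first few offenders. A rough sketch, assuming a pipe-delimited file, 5 columns, and a 100-character limit on column 2 (file name, delimiter, and limits are placeholders):

```python
MAX_LEN = 100          # assumed length limit for column 2
EXPECTED_FIELDS = 5    # assumed number of columns in the file
DELIM = "|"            # assumed delimiter

bad = 0
with open("bigfile.txt", encoding="utf-8", errors="replace") as f:
    for lineno, line in enumerate(f, start=1):
        fields = line.rstrip("\r\n").split(DELIM)
        # Flag rows with the wrong field count (stray delimiter in the data)
        # or with an over-length value in column 2.
        if len(fields) != EXPECTED_FIELDS or len(fields[1]) > MAX_LEN:
            col2 = len(fields[1]) if len(fields) > 1 else "n/a"
            print(f"row {lineno}: {len(fields)} fields, col 2 length {col2}")
            bad += 1
            if bad >= 20:   # stop after the first 20 problem rows
                break
```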
The file is pretty long, but I don't think that has anything to do with it, because the error would be something else.
One other thing you can do is make sure bulk insert is turned off on the OLE DB destination. On rare occasions I'll get records that don't insert with it turned on.
In fact, if someone knows why that is, I'd love to know.

When SQL Server 2008 query results are exported to CSV file extra rows are added

When I am exporting my query results from SQL Server 2008 to CSV or Tab Delimited txt format I always end up seeing extra records (that are not blank) when I open the exported file in Excel or import it into Access.
The SQL query results return 116623 rows,
but when I export to CSV and open it with Excel I see 116640 records. I tried importing the CSV file into Access and I also see the extra records.
The weird thing is that if I add up the totals in Excel up to row 116623 I get the correct totals, meaning I have the right data up to that point, but the extra 17 records after that are bad data, and I don't know how they are being added.
Does anyone know what might be causing these extra records/rows to appear at the end of my CSV file?
The way I am exporting is by right-clicking on the results and exporting to CSV (comma delimited) or TXT (tab delimited) files, and both are causing the problem.
I would bet that in that huge number of rows you have some data with a carriage return inside the record (such as an address field that includes a line break). Look for rows that have empty data in some of the columns you would expect data in. I usually reimport the file into a work table (with an identity column so you can identify which rows are near the bad ones) and then run queries on it to find the ones that are bad.
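One quick way to locate the offending records without reimporting is to count delimiters per physical line: a row that was split by an embedded line break ends up with fewer delimiters than a complete record. A rough sketch against the tab-delimited export, assuming the query returns 12 columns (the file name and column count are placeholders):

```python
EXPECTED_TABS = 11   # assumption: 12 columns -> 11 tab delimiters per complete row

suspects = []
with open("export.txt", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        if line.count("\t") != EXPECTED_TABS:
            # Likely a fragment produced by a carriage return inside a field.
            suspects.append(lineno)

print(f"{len(suspects)} suspicious lines; first few: {suspects[:10]}")
```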
Actually, there is a bug in the "export results as" feature. After exporting the results, open the CSV file in a hex editor and look up the unique key of the last record. You will find it towards the end of the file. Find the 0D 0A for that record and delete everything that follows. It's not Excel or Access; for some reason SQL Server just can't export a CSV without corrupting the end of the file.