DataGrip fails to import a CSV file when the entry count exceeds a specific threshold

I have a MariaDB table, and importing a CSV file of 36,000 entries using the DataGrip "Import" dialog succeeds with no issues at all; but when a single entry is added to the CSV file, bringing it to 36,001 entries, the import fails with the following not-so-helpful notice:
Can't rollback changes with error records. Check connection and
database settings and try again.
I'm using the latest version of DataGrip (v2022.3.2, build #DB-223.8214.62) with MariaDB (v10.4.24) and the latest stable version of the DataGrip MariaDB driver (v3.0.7). My table doesn't have any constraints (like a unique column or anything of the sort), and the issue is certainly not with the added entry (it's perfectly valid).
To me, it seems like some kind of "overflow" issue. So, how can I successfully import a larger number of CSV entries using DataGrip?
Interestingly, the same CSV file of 36,001 entries gets imported to the same table using phpMyAdmin with no problems at all. I've gone even further and successfully imported a CSV file of 100K+ entries using phpMyAdmin; but sadly, the small CSV file of 36,001 entries still doesn't get imported using DataGrip!

Please open a support ticket:
https://youtrack.jetbrains.com/newIssue?project=DBE
Attach the idea.log and activate the error log option from the Import window.
If you could additionally provide the csv file in question, that'd be great.
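If you need the data in place before the ticket is resolved, one possible workaround (not part of the reply above, and not DataGrip-specific) is to batch the inserts yourself from a small script. A minimal sketch, assuming Python with the mysql-connector-python package and using placeholder connection settings, table, and column names:

import csv

import mysql.connector  # pip install mysql-connector-python

# Placeholder connection settings and table/column names -- adjust to your setup.
conn = mysql.connector.connect(
    host="localhost", user="user", password="secret", database="mydb"
)
cur = conn.cursor()

insert_sql = "INSERT INTO my_table (col_a, col_b, col_c) VALUES (%s, %s, %s)"

with open("data.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row (drop this line if the file has none)
    batch = []
    for row in reader:
        batch.append(tuple(row))
        if len(batch) == 5000:  # commit in modest batches
            cur.executemany(insert_sql, batch)
            conn.commit()
            batch.clear()
    if batch:
        cur.executemany(insert_sql, batch)
        conn.commit()

cur.close()
conn.close()

Importing in batches also makes it easier to see exactly which chunk of the file a failure comes from, which may help narrow down the 36,001st-row behaviour.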

Related

How does one properly import large JSON files into MySQL Workbench?

I've got an extremely large JSON file (200,000 rows) that I'd like to import to MySQL Workbench for manipulation and extraction into tables. As far as I can tell, it seems the way to do this is to import the file as a single cell in a table which has the JSON data type. My multiple attempts to import this have been met with errors, as follows.
Importing via LOAD DATA INFILE returns Error Code 1290 (the secure-file-priv one), even though I've disabled that setting and placed the file in the correct location anyway just to be safe.
Importing via the Table Data Import Wizard returns "Can't analyze file. Please try to change encoding type. If that doesn't help, maybe the file is not: json, or the file is empty."
Figuring it was safe to assume that these errors had something to do with the outrageous size of this file, I attempted this with a much smaller JSON file (20 rows), and encountered the following issues:
Importing via LOAD DATA INFILE returns Error Code 1290 again, even though I've taken the steps listed above.
Importing via the Table Data Import Wizard seems to work at first, but spits out "0 Records imported" at the end, leaving the table untouched.
I'm only moderately experienced with MySQL and hardly experienced at all with handling JSON data, so I'm betting this is an obvious error caused by my inexperience in this topic. Any guidance on this issue would be greatly appreciated!
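For what it's worth, one way to sidestep both secure_file_priv and the Table Data Import Wizard is to read the file outside MySQL and insert it as a single bound parameter into a one-column JSON table, which is the "single cell" layout described above. A minimal sketch, assuming Python with the mysql-connector-python package and placeholder connection settings, table name, and path:

import mysql.connector  # pip install mysql-connector-python

# Placeholder connection settings, table name, and file path.
conn = mysql.connector.connect(
    host="localhost", user="user", password="secret", database="mydb"
)
cur = conn.cursor()

# A one-column staging table with the JSON data type.
cur.execute("CREATE TABLE IF NOT EXISTS json_staging (doc JSON)")

with open("big_file.json", encoding="utf-8") as f:
    doc = f.read()

# Passing the document as a bound parameter avoids LOAD DATA INFILE entirely,
# so secure_file_priv never comes into play; MySQL validates the JSON on insert.
# Very large documents may require raising max_allowed_packet on the server.
cur.execute("INSERT INTO json_staging (doc) VALUES (%s)", (doc,))
conn.commit()

cur.close()
conn.close()

From there the document can be unpacked into proper tables on the server side with JSON_EXTRACT (or, on MySQL 8.0, JSON_TABLE).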

Importing text file to Access database in Windows 7

I am trying to import data from a text file, delimited by vertical bars, into Access 2007 and 2010. I use the Import Text Wizard of MS Access, but when I try to import it after choosing the appropriate delimiter (the preview looks fine), I get the following error:
The changes you requested to the table were not successful because they would create duplicate values in the index, primary key, or relationship. Change the data in the field or fields that contain duplicate data, remove the index, or redefine the index to permit duplicate entries and try again.
There are no primary keys or relationships, as it is just a text file. I was able to import this text file under Windows XP, but the problem arises under Windows 7. I was able to successfully export the data into Excel. I also set Indexed to No in the Field Options of the Import Text Wizard, but that didn't help either.
Any help would be greatly appreciated, as I couldn't find any useful info anywhere.
Edit: I tried importing into new and existing tables, and even into a new database (accdb and mdb format); every time I get the same error.
Edit 2: I opened the text file in WordPad and saved it again as a txt file, and this time Access didn't generate any error. The previous txt file didn't show the new lines, but Access was still aware of where each record ends, and the preview looked fine. The new text file explicitly shows the new lines (each record on its own line). If someone has a suggestion about how to overcome this issue without creating a new txt file, please let me know.
I don't think there is anything you can do except convert the file to Windows CRLF format.
I wouldn't use WordPad for this, though; I'd use Notepad++.
It has an explicit command for this: Edit -> EOL conversion, and you can be fairly sure that it won't change anything else in your file (I wouldn't be so sure about WordPad).
Actually most text editors that are more sophisticated than Notepad have a command for this, I think. :)
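If you'd rather not depend on an editor at all, the same EOL conversion can be scripted. A minimal sketch in Python, with the file paths as placeholders and the source encoding assumed to be Latin-1:

# Convert a text file's line endings to Windows CRLF before importing into Access.
src = "vouchers.txt"       # placeholder paths
dst = "vouchers_crlf.txt"

with open(src, "r", encoding="latin-1", newline="") as f:
    text = f.read()

# Normalize whatever is there (CRLF, bare LF, or bare CR) to LF first,
# then let the writer emit everything with CRLF.
text = text.replace("\r\n", "\n").replace("\r", "\n")

with open(dst, "w", encoding="latin-1", newline="\r\n") as f:
    f.write(text)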

SSIS Package not reading the last row in flat file

I have an SSIS package which loads a .EXT file into my database table.
The package's Flat File Connection Manager Editor properties are:
Format: Ragged Right
Code Page: 1252 ANSI (Latin-I)
Text Qualifier: <None>
Header Row Delimiter: <LF>
While trying to preview the file before loading, I am able to see all the rows in the Columns and Preview tabs of the Flat File Connection Manager Editor.
But in the actual loading of the file, the last record alone is not getting imported into the table.
It was loading fine before, and the file is still processed on a daily basis.
Only for two days' files was the last record not imported. I am trying to find the root cause.
I suspected something was wrong with the file, but I cannot find any differences between the working and non-working versions of the files.
Please suggest how we can resolve this. Kindly let me know if any further information is required.
I ran into the same issue and did some research to find a solution that worked for me. Apparently the SSIS package had gone through a conversion from an earlier version at one point. When the conversion was done, the text qualifier property on the flat file connection was mangled. It had originally been <none>, but the conversion changed it to _x003C_none_x003E_. I opened the flat file connection manager and changed the text qualifier property on the general tab back to the proper value of <none>.
Credit goes to this thread for providing the answer.
I had a similar issue. My flat file didn't have any text qualifiers. When I added a text qualifier, the package ran successfully. My guess is that the file is read as text and the CRLF is not recognized on the last line.
If you can provide a sample of the data from the file, that would make it easier to pin down.
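If that guess is right and the last record simply lacks a trailing CRLF, a small pre-processing step before the package runs would rule it out. A minimal sketch in Python, with the file path as a placeholder:

# Make sure the flat file ends with CRLF so the final record has a line terminator.
path = r"C:\data\daily_feed.EXT"  # placeholder path

with open(path, "rb") as f:
    data = f.read()

if data and not data.endswith(b"\r\n"):
    with open(path, "ab") as f:
        f.write(b"\r\n")  # append the missing terminator in place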

MySQL Workbench 6.1 - Error importing recordset

I'm going to be getting a new computer soon and I don't want to lose all of the data I have entered in my tables, so I decided to test out the feature that allows you to export and import CSV files. I exported a table successfully (the data was transferred to Microsoft Excel as a CSV file), but when I opened the file in Microsoft Excel, added a few rows, and tried to import it back into MySQL Workbench, I got the following error:
"Error importing recordset
error calling Python module function
SQLIDEUtils.importRecordsetDataFromFile"
I've searched all over for info on this, but can't find any solutions. Does anyone know what I'm doing wrong?
In Workbench, open a MySQL connection and then navigate to [Server] --> [Data Export]. There are several backup options here, including saving the data as an individual file or folder. Choose the databases you want to export, and then click [Start Export].
If you ever prefer using Excel for editing and such, then use the MySQL for Excel plugin to access MySQL databases from within Excel. However, I don't think you need it here.
To export your MySQL data, use mysqldump, which will create all the schema for you.
Excel probably added some stuff to your file and now MySQL can't understand it. The best way to find out is by comparing the files before and after the change.
That error indicates a format problem. If the file is small enough, try opening it in WordPad (or the Mac equivalent) and see if there's any difference in the formatting. It could be that the delimiting got a little messed up (this can happen especially with end-of-row markers in MySQL, I've noticed; it can also happen in Mac-to-PC handoffs). If all else fails, you could try exporting in a different format (maybe TSV) and see if that makes a difference when you add new rows.
Another reason can be the line endings used. Depending on the system and editor used to work with the CSV file, the line endings might get changed. For me, MySQL supported UNIX line endings, but in the editor the line ending had been set to Mac OS 9 since I was using a Mac.
Changing it to UNIX line endings worked.
I found that it might be due to a wrong encoding of the input file.
Using Notepad++, for example (or another similar editor), change the file encoding to UTF-8.
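Both of those checks (line endings and encoding) can also be done in one scripted pass instead of an editor. A minimal sketch in Python, with the paths and the source encoding (cp1252, typical for a CSV saved by Excel on Windows) as assumptions:

# Re-encode a CSV to UTF-8 and normalize line endings to UNIX LF
# before importing it into MySQL Workbench.
src = "export_edited.csv"   # placeholder paths
dst = "export_clean.csv"

with open(src, "r", encoding="cp1252", newline="") as f:
    text = f.read()

# Collapse CRLF (Windows) and bare CR (classic Mac) endings to LF.
text = text.replace("\r\n", "\n").replace("\r", "\n")

with open(dst, "w", encoding="utf-8", newline="") as f:
    f.write(text)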

Easiest way to continually import data to MySQL from a dbf file on my local computer

I have a problem that has been annoying me for quite some time now and a few days ago I started googling for a solution, but I haven't really gotten anything to work. I've read a little about something called SSIS, but I'm not sure it does what I'm looking for or if there is something else I should research in order to accomplish my goal. This is my problem:
My accounting program produces and updates a .dbf file with information about all vouchers and places it in a folder on my local computer. Our MySQL database must continually be updated with this information. So this is what I do twice a day:
I open the .dbf file in Excel
Save it as a .csv
Close Excel
Open the file in Notepad++
Convert the encoding to UTF-8
Save
Log in to MySQL
Go to the right table
Upload the .csv
Replace the old data with the new
As this takes quite a bit of time, I feel there must be a better way to do this. It would be great if I could have this scheduled to run automatically, or if there were some kind of SQL query that could do it, because then I could use PHP to make a website where I press a button and the query runs.
So my question is: what is the simplest way to continually get the info from the .dbf file into my MySQL server?
There is a way to do this job on a schedule with DBF Commander Pro's command-line interface. Use the following command in a *.BAT file:
dbfcommander.exe -edb <dbf_file_name> <server_table_name> <connection_string>
After that, create a schedule for this BAT file using the Windows Task Scheduler.
The only remaining issue is that you need to clear the destination table in the MySQL database before the export process.
To try the export process in the app's GUI, click 'File -> Export to DBMS'. In the window that appears, click the Build button to build the connection string: select the MS OLE DB Provider for MySQL Server, then choose your server from the list, provide a login and password, select a database, and click OK.
In the Export to DBMS window, select the destination table you want to import the source DBF file into, then click Export. The command line you need can be found at the bottom of the window.
More info on importing and exporting DBF files to a database can be found here. Detailed usage of the command line is described here.
Since you mention doing it in PHP, what is stopping you from doing it there?
You could create one connection handle using the VFPOleDB provider to open the folder containing the table, then open and read the table. Then have a SECOND connection to your MySQL database open and ready to push the data there.
Then, for each row read from the VFP OleDB connection's result set, do whatever special cleansing you need to.
Then, query through the MySQL connection to see whether it's an existing entry or not, decide whether an insert or an update is necessary, and send the data accordingly.
Continue for the rest of the records from the VFP result set.
No need to open in Excel, save to CSV format, load yet another tool, etc...
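The answer above sketches the loop in PHP with the VFP OLE DB provider; since the goal is just a script that can run on a schedule, here is the same read-and-push loop sketched in Python instead, assuming the dbfread and mysql-connector-python packages and using placeholder paths, connection settings, table, and field names:

import mysql.connector   # pip install mysql-connector-python
from dbfread import DBF  # pip install dbfread

DBF_PATH = r"C:\accounting\vouchers.dbf"  # placeholder path

conn = mysql.connector.connect(
    host="localhost", user="user", password="secret", database="mydb"
)
cur = conn.cursor()

# Upsert each voucher; assumes voucher_id is the primary key of the MySQL table,
# so re-running the script refreshes existing rows instead of duplicating them.
upsert_sql = (
    "INSERT INTO vouchers (voucher_id, voucher_date, amount) "
    "VALUES (%s, %s, %s) "
    "ON DUPLICATE KEY UPDATE voucher_date = VALUES(voucher_date), "
    "amount = VALUES(amount)"
)

for record in DBF(DBF_PATH, encoding="cp1252"):  # encoding is an assumption
    # Any special cleansing of the DBF values goes here.
    cur.execute(upsert_sql, (record["ID"], record["DATE"], record["AMOUNT"]))

conn.commit()
cur.close()
conn.close()

Run by the Windows Task Scheduler twice a day (or more often), this replaces the whole Excel/Notepad++/manual-upload round trip.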