I exported a MySQL table as CSV (just "CSV", not "CSV for MS Excel") and edited it in Excel. Then I opened it with Notepad++ to change the character encoding to UTF-8. Now I want to import this file back into the MySQL table.
I don't know what to do next. I tried copying it into Excel, saving it as CSV, and importing that CSV file, but it doesn't work.
I use shared hosting. I exported the table from phpMyAdmin.
Export->Format->CSV
Please help me.
925,1,2020-04-29 20:00:00,2020-04-29 20:00:00,,ck-X{´¯nsâ ]nXmhv,,publish,closed,closed,,ck-x{´¯nsâ-]nxmhv,,,2020-04-29 20:00:00,2020-04-29 20:00:00,,0,https://example.com/questions/ck-x{´¯nsâ-]nxmhv,0,lp_question,,0
926,1,2020-04-29 20:00:00,2020-04-29 20:00:00,,B[p-\nI ck-X{´¯nsâ ]nXmhv,,publish,closed,closed,,b[p-\ni-ck-x{´¯nsâ-]nxmhv,,,2020-04-29 20:00:00,2020-04-29 20:00:00,,0,https://example.com/questions/b[p-\ni-ck-x{´¯nsâ-]nxmhv,0,lp_question,,0
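If the file is now valid UTF-8 CSV (as in the sample rows above), phpMyAdmin's Import tab (Format: CSV, character set utf-8) should accept it directly. If you would rather script the re-import, here is a minimal sketch, assuming the mysql-connector-python package; the connection details, file name, and table name my_table are placeholders, not anything from the original post:

# Minimal sketch: re-insert every row of the edited UTF-8 CSV into MySQL.
# Connection details, file name, and table name are placeholders.
import csv
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="db_user", password="db_pass", database="db_name"
)
cur = conn.cursor()

with open("table_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        placeholders = ", ".join(["%s"] * len(row))
        # REPLACE overwrites rows whose primary key (first column) already exists,
        # instead of failing with duplicate-key errors.
        cur.execute(f"REPLACE INTO my_table VALUES ({placeholders})", row)

conn.commit()
conn.close()

This assumes the CSV has one value per table column in table order, which is what a full phpMyAdmin CSV export produces.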
We have a .txt file with UTF-16 LE encoding (discussed here as well). We need to load this file into an Azure SQL database. We first tried to convert the file to CSV format using the Text Import Wizard on the Data tab of Excel 365. But if we use ^|^,^|^ as a custom delimiter, the first and last columns still end up with a ^|^ value.
Question: What are possible solutions or workarounds for converting this type of file to CSV?
Remarks: This is a huge file (1 GB) with about 150 columns. The following is just a small sample to explain the scenario in this post.
Sample of the txt file:
^|^Col0^|^,^|^Col1^|^,^|^Col2^|^,^|^Col3^|^,^|^Col4^|^,^|^Col5^|^,^|^Col6^|^,^|^Col7^|^
^|^1234^|^,^|^4600869848^|^,^|^6000.00^|^,^|^2021-12-20 10:16:19.3600000^|^,^|^False^|^,^|^^|^,^|^^|^,^|^2^|^
^|^5431^|^,^|^3425143451^|^,^|^30000.00^|^,^|^2021-12-13 10:27:44.9030000^|^,^|^False^|^,^|^^|^,^|^^|^,^|^2^|^
.....................
............................
After using ^|^,^|^ as the custom delimiter in the Excel Text Import Wizard, the first and last columns still contain the ^|^ characters.
Instead of using ^|^,^|^ as a custom delimiter, you can use a comma as the delimiter; every field then comes in wrapped in ^|^ markers.
Then, once the import is done, you can record a macro to find and replace the unwanted ^|^ characters, as described in the link below:
Create A Macro Code To Achieve Find And Replace Text In Excel
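The macro approach works, but since the file is about 1 GB, doing the same replacement outside Excel may be easier. Here is a minimal Python sketch (file names are placeholders) that streams the UTF-16 LE text file line by line, strips the ^|^ quoting, and writes a plain UTF-8 CSV:

# Minimal sketch: convert the ^|^-quoted, UTF-16 LE text file to a plain CSV.
# File names are placeholders.
import csv

MARK = "^|^"

with open("input.txt", encoding="utf-16") as src, \
     open("output.csv", "w", newline="", encoding="utf-8") as dst:
    # "utf-16" consumes the byte-order mark if present; use "utf-16-le" if the file has no BOM.
    writer = csv.writer(dst)
    for line in src:
        line = line.rstrip("\r\n")
        if not line:
            continue
        # Drop the leading/trailing markers, then split on the full ^|^,^|^ delimiter.
        if line.startswith(MARK) and line.endswith(MARK):
            line = line[len(MARK):-len(MARK)]
        writer.writerow(line.split(MARK + "," + MARK))

Splitting on the full ^|^,^|^ token (rather than on bare commas) keeps fields that themselves contain commas intact, and csv.writer re-quotes them correctly in the output.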
I have a question in connection with this problem. I am trying to import a CSV file into a MySQL table, but I get this error message: "Can't analyze file. Please try to change the encoding type. If that doesn't help, maybe the file is not CSV, or the file is empty." I set the encoding to UTF-8 in Excel when I saved the CSV file, but it doesn't work.
Thanks for your help, guys :)
These are the first three rows of the CSV:
Is it possible to convert .DBF files to any other format?
Does anybody know of a script that can be used to convert .DBF files to a MySQL query?
It would also be fine to convert the DBF files to CSV files.
I always run into problems with the codec (character encoding) of the DBF files.
Konstantin
https://www.dbase.com/Knowledgebase/faq/import_export_data.asp
Q: How do I export data from a dBASE table to a text file?
A: Exporting data from dBASE to a text file is handled through the COPY TO command.
Like the APPEND FROM command, there are a number of ways to use this command. Here we are only interested in its most basic use. Once you understand how to use this command, you can go to your on-line help for further details on what can be accomplished with the COPY TO command.
In order to export data you must first be using the table from which the data will be exported. As before, you will be employing the USE command in the command window.
USE <tablename>
For example:
USE Mytest.dbf
Once the table is in use, all you need to do is type the following command in the command window:
COPY TO <filename> TYPE DELIMITED
For example:
COPY TO Myexport.txt TYPE DELIMITED
This would result in a file being created in the current directory called Myexport.txt which would be in the DELIMITED or *.CSV format.
If we had wanted to export the data in the *.SDF format, we would have typed:
COPY TO Myexport.txt TYPE SDF
This would result in a file being created in the current directory called Myexport.txt which would be in the System Delimited or *.SDF format.
Those are the basics on how to import and export text data into a dBASE table. For further information consult the on-line help for the APPEND FROM and COPY TO commands.
I converted old (circa 1997) DBF files to CSV using Python and the dbfread module.
After installing Python, install the dbfread module from a command prompt (not the Python interpreter):
pip install dbfread
The module has many methods for reading DBF files and excellent documentation.
Then a short Python script (or typing directly into the interpreter) does the job:
# Read the DBF file and write it out as a CSV
import csv
from dbfread import DBF

table = DBF('C:/my_dbf_file.dbf', encoding='1252')

outFileName = 'C:/my_export.csv'
with open(outFileName, 'w', newline='', encoding='1252') as file:
    writer = csv.writer(file)
    writer.writerow(table.field_names)          # first row: column names
    for record in table:
        writer.writerow(list(record.values()))  # one record at a time
Note that each record in the database is read and saved one at a time, and that the first line of the CSV file holds the column names.
Encoding can be problematic (a list of encodings to try is linked here). The dbfread.DBF() method tries to guess the encoding, but it is not perfect; that is why the code specifies the encoding parameter in both DBF() and open().
PCC V10: I have exported the table (to MS Excel), identified which transactions are missing, and isolated them in the same format as the original file.
I have saved the file as .csv, with and without headers, but it won't import.
I have also saved it as a .txt file; it says the lines imported successfully, but I can't find them. Any suggestions as to the correct import procedure from CSV format into the PCC (open item file) table?
In MySQL Workbench, exporting a recordset to an external CSV file generates a comma-separated file. Is there a way to make it export a semicolon-separated CSV file instead?
I tried looking for a setting in MySQL Workbench that could make the CSV semicolon-separated but couldn't find one. Is the exported CSV file comma-separated by default with no way to change it, or is there a workaround?
Yes, you can. When you are exporting the result set, choose "CSV (; separated)" instead of "CSV" as the output format. See the related documentation (and a screenshot).
Alternatively, the "Table Data Export Wizard" also has an option to use ";" as the field separator. Get there by right-clicking on a table in the schema viewer.
1. Run the query whose output you want to export in Workbench.
2. Click the "Export recordset to an external file" button on the result grid toolbar.
3. In the save dialog, enter the file path and choose the desired CSV format (for example "CSV (; separated)").
4. Click Export. The records will be exported to the CSV file given in "File Path".
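If your Workbench version does not offer the "CSV (; separated)" format, a small post-processing script is a possible workaround. This sketch (file names are placeholders) rewrites the comma-separated export with ";" as the delimiter, using the csv module so that quoted fields containing commas survive the conversion:

# Minimal sketch: convert a comma-separated export to a semicolon-separated one.
# File names are placeholders.
import csv

with open("export.csv", newline="", encoding="utf-8") as src, \
     open("export_semicolon.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src)                 # default: comma-delimited input
    writer = csv.writer(dst, delimiter=";")  # write with ";" instead
    writer.writerows(reader)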