How to convert dBASE III files to MySQL?

Is it possible to convert .DBF files to any other format?
Does anybody know of a script that can be used to convert .DBF files to a MySQL query?
It would also be fine to convert the DBF files to CSV files.
I always have problems with the encoding of the DBF files.
Konstantin

https://www.dbase.com/Knowledgebase/faq/import_export_data.asp
Q: How do I export data from a dBASE table to a text file?
A: Exporting data from dBASE to a text file is handled through the COPY TO command.
Like the APPEND FROM command, there are a number of ways to use this command. Here we are only interested in its most basic use. Once you understand how to use this command, you can go to your on-line help for further details on what can be accomplished with the COPY TO command.
In order to export data you must first be using the table from which the data will be exported. As before, you will be employing the USE command in the command window.
USE <tablename>
For example:
USE Mytest.dbf
Once the table is in use, all you need to do is type the following command in the command window:
COPY TO <filename> TYPE DELIMITED
For example:
COPY TO Myexport.txt TYPE DELIMITED
This would result in a file being created in the current directory called Myexport.txt which would be in the DELIMITED or *.CSV format.
If we had wanted to export the data in the *.SDF format, we would have typed:
COPY TO Myexport.txt TYPE SDF
This would result in a file being created in the current directory called Myexport.txt which would be in the System Delimited or *.SDF format.
Those are the basics on how to import and export text data into a dBASE table. For further information consult the on-line help for the APPEND FROM and COPY TO commands.
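Once you have the delimited export, one way to get it into MySQL is a small script that turns each row into an INSERT statement; a minimal sketch (the input file name, the target table my_table, and the quoting of every value as a string are assumptions to adjust for your data):
import csv

# Placeholder names -- adjust the input file and target table for your data.
with open('Myexport.txt', newline='') as src, open('import.sql', 'w') as dst:
    for row in csv.reader(src):
        # Quote everything as a string and escape embedded single quotes.
        values = ', '.join("'" + value.replace("'", "''") + "'" for value in row)
        dst.write("INSERT INTO my_table VALUES ({});\n".format(values))
For large files, MySQL's LOAD DATA INFILE statement is usually a faster way to load the same delimited file directly.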

I converted old (circa 1997) DBF files to CSV using Python and the dbfread module.
After installing Python, install the dbfread module from a command prompt (not from the >>> Python interpreter):
pip install dbfread
The module has many methods for reading DBF files and excellent documentation.
Then a Python script does the job (or you can type it directly into the interpreter):
# Read the DBF file and write it back out as CSV
import csv
from dbfread import DBF

table = DBF('C:/my_dbf_file.dbf', encoding='1252')

outFileName = 'C:/my_export.csv'
with open(outFileName, 'w', newline='', encoding='1252') as file:
    writer = csv.writer(file)
    writer.writerow(table.field_names)
    for record in table:
        writer.writerow(list(record.values()))
Note that each record in the database is read and saved one at a time, and that the first line of the CSV file contains the column names.
Encoding can be problematic (there are lists of code pages worth trying): the dbfread DBF() constructor tries to guess the encoding but is not perfect, which is why the code above passes an explicit encoding to both DBF() and open().
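If the real goal is to land the data in MySQL rather than in a CSV file, the same dbfread loop can feed parameterized INSERT statements through a MySQL driver. A minimal sketch, assuming the pymysql package, a reachable server, and an already-created target table (my_table and the connection details are placeholders):
import pymysql
from dbfread import DBF

table = DBF('C:/my_dbf_file.dbf', encoding='1252')

# Connection details are placeholders -- adjust for your server and database.
conn = pymysql.connect(host='localhost', user='me', password='secret', database='mydb')

columns = ', '.join(table.field_names)
placeholders = ', '.join(['%s'] * len(table.field_names))
sql = "INSERT INTO my_table ({}) VALUES ({})".format(columns, placeholders)

with conn.cursor() as cur:
    for record in table:
        cur.execute(sql, list(record.values()))
conn.commit()
conn.close()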

Related

Loading data from a UTF-16 LE (.txt) file to Azure SQL DB

We have a .txt file with encoding UTF-16 LE (discussed here as well). We need to load this file into an Azure SQL database. We are first trying to convert this file to CSV format by using the Text Import Wizard in Excel 365. But if we use ^|^,^|^ as a custom delimiter, the first and last columns still end up with a ^|^ value.
Question: What are possible solutions/workarounds for converting this type of file to CSV?
Remarks: This is a huge file (1 GB) with about 150 columns. The following is just a sample to explain the scenario in this post.
Sample of the txt file:
^|^Col0^|^,^|^Col1^|^,^|^Col2^|^,^|^Col3^|^,^|^Col4^|^,^|^Col5^|^,^|^Col6^|^,^|^Col7^|^
^|^1234^|^,^|^4600869848^|^,^|^6000.00^|^,^|^2021-12-20 10:16:19.3600000^|^,^|^False^|^,^|^^|^,^|^^|^,^|^2^|^
^|^5431^|^,^|^3425143451^|^,^|^30000.00^|^,^|^2021-12-13 10:27:44.9030000^|^,^|^False^|^,^|^^|^,^|^^|^,^|^2^|^
.....................
............................
After using the delimiter ^|^,^|^ in the Excel Text Import Wizard:
Instead of specifying ^|^,^|^ as a custom delimiter, you can use a comma as the delimiter, which will give you a result like the one below.
Then you can record a macro to replace the leftover ^|^ characters once the import is done, as described in the link below:
Create A Macro Code To Achieve Find And Replace Text In Excel
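If the Excel wizard keeps struggling with a file this size, a short script can do the conversion outside Excel by stripping the ^|^ wrappers while rewriting the file as plain CSV. A minimal sketch, assuming the input really is UTF-16 LE and laid out as in the sample above (file names are placeholders):
import csv

# File names are placeholders; 'utf-16' also consumes the BOM if one is present.
with open('input.txt', encoding='utf-16', newline='') as src, \
        open('output.csv', 'w', encoding='utf-8', newline='') as dst:
    writer = csv.writer(dst)
    for line in src:
        row = line.rstrip('\r\n')
        # Drop the leading and trailing ^|^ wrappers, then split on the field separator.
        if row.startswith('^|^'):
            row = row[3:]
        if row.endswith('^|^'):
            row = row[:-3]
        writer.writerow(row.split('^|^,^|^'))
The resulting UTF-8 CSV can then be loaded into Azure SQL with the usual tools (for example BULK INSERT or bcp).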

My schema.ini file is being ignored when using DoCmd.TransferText() from .Net

My schema.ini file is being ignored. I get the same results whether I have a schema.ini file in the same folder as my tab file or not. All of the columns end up in a single column. I am trying to use a schema.ini because I am importing tab-delimited files. The results make perfect sense if Access is trying to import a comma-delimited file.
So my postulate is that the schema.ini file is just being ignored.
I am running Access from a .Net program using the Microsoft Access 14.0 Object Library.
I am using this command from .net:
Access.DoCmd.TransferText( Microsoft.Office.Interop.Access.AcTextTransferType.acImportDelim, , TableName, TabFile, HasFieldNames)
Here is my schema.ini file, not that it matters since it is being completely ignored:
[impacts.txt]
Format=TabDelimited
ColNameHeader=True
MaxScanRows=0
Clues? Thanks!
EDIT:
I tried running this from within an Access Module with the same results.
I tried editing the registry to change the Format value there. Same results.
Consider an action query, either append or make-table, as schema.ini files can work directly in an Access query of a text file. The examples below assume the .ini file is in the same directory as the text file.
INSERT INTO mytableName
SELECT * FROM [text;Database=C:\Path\To\Text\File].[impacts.txt]

SELECT * INTO newtableName FROM [text;Database=C:\Path\To\Text\File].[impacts.txt]

how to convert dbf to csv?

How to convert a DBF to CSV?
I need to use this library, but it gave an error: http://pythonhosted.org/dbf
import dbf
dbf.export('crop1-fx')
print 'Done'
"C:\Users\User\Anaconda2\python.exe"
"C:/Users/User/Desktop/Python/23/dbf/insertValuesDBF.py" Traceback
(most recent call last): File
"C:/Users/User/Desktop/Python/23/dbf/insertValuesDBF.py", line 3, in
dbf.export('crop1-fx') File "C:\Users\User\Anaconda2\lib\site-packages\dbf\ver_2.py", line 7824,
in export
table = source_table(table_or_records[0]) File "C:\Users\User\Anaconda2\lib\site-packages\dbf\ver_2.py", line 7956,
in source_table
table = thingie._meta.table() AttributeError: 'str' object has no attribute '_meta'
Process finished with exit code 1
You almost had it:
import dbf
db = dbf.Table('crop1-fx')
dbf.export(db)
The above will create a crop1-fx.csv file; however, I'm not sure this will work with a 24-digit numeric field in the table.
To convert a .DBF file to .CSV, download dBASE III PLUS or any other dBASE software available on the net. Please note I am referring to the 16-bit platform on DOS.
Once dBASE is downloaded, go to the dot prompt and give the following commands:
Type:
USE <the dbf file in question, without the .dbf extension>
You will see the name of the DBF file on the status bar.
Then type:
COPY TO <file name you want, limited to 8 characters> DELIMITED
Now the data in the DBF file is written to a TEXT file with the data in each field surrounded by double quotes (" ") and separated by commas (,).
This file can be used to move the data into any other DATABASE SYSTEM that has a provision for importing a .CSV or DELIMITED file into the new database.
If only a comma-separated file without the " " marks is required, a procedure can be written in dBASE to achieve that as well.

Importing data from csv files into Cratedb

I have created a table in Crate 0.38.x with columns having integer, string and timestamp data types. I want to load data into this table from delimited text files. Is there a utility to do a bulk import? Sorry, but I could not find one in the documentation or on GitHub.
In order to do bulk imports from file the COPY FROM statement can be used (see https://crate.io/docs/stable/sql/reference/copy_from.html). But there is only support for JSON formatted files so you'll probably need to convert the text files first.
Not sure if there are any plans to add support for other formats, but if you create a github issue requesting the feature you'll get feedback once it has been implemented.
There are also docs available on how to migrate from MySQL and MongoDB.
I quickly imported data from MySQL into Crate 0.40 by installing Ruby on Rails on the same server as the MySQL DB and then using the Mysql2JSON gem (see the Mysql2xxx part).
Crate requires a JSON file with one record per line, so you have to tweak the output formatting in the gem source (replacing the JSON array brackets and the ", " separators between records with newlines) so that the output looks like this:
{"id": 1, "quote": "Don't panic"}
{"id": 2, "quote": "Would it save you a lot of time if I just gave up and went mad now?"}
After exporting the MySQL data to JSON with the Mysql2JSON gem, you have to upload the file to the Crate server and run this in the Crate console:
COPY table_name FROM 'file:///tmp/import_data/quotes.json'
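If the source data is already in CSV form (as in the DBF exports above) rather than in MySQL, a short script can produce the one-record-per-line JSON that COPY FROM expects. A minimal sketch (file names are placeholders, and every value comes out as a string, so cast columns as needed):
import csv
import json

# File names are placeholders; DictReader uses the CSV header row as the JSON keys.
with open('quotes.csv', newline='') as src, open('quotes.json', 'w') as dst:
    for row in csv.DictReader(src):
        dst.write(json.dumps(row) + '\n')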
Read this:
https://crate.io/docs/crate/reference/en/latest/general/dml.html#import-and-export
Just make sure that you have created the table with the schema beforehand, and use the COPY function to import the dataset from JSON or CSV.

How to export sqlite into CSV using RSqlite?

How to export sqlite into CSV using RSqlite?
I am asking because I am not familiar with database files, so I want to convert it using R.
It may be very simple, but I haven't figured it out.
Not quite sure if you have figured this out. I am not quite sure how to do it within R either, but it seems pretty simple to export to CSV using SQLite itself, or by writing out a CSV from the data frame you have loaded into R.
In SQLite, you can do something like this at the sqlite3 prompt:
sqlite> .headers on
sqlite> .mode csv
sqlite> .output output.csv
sqlite> SELECT * FROM table_name;
sqlite> .exit
SQLite will write your table out to the output.csv file.
If the table is not too large, you can first read it into a data frame or matrix in R using dbGetQuery, or dbSendQuery and fetch. Then you can write that data frame out as a .csv file.
library(DBI)
My_conn <- dbConnect(RSQLite::SQLite(), "MyDatabase.sqlite")  # path to your SQLite file
my.data.frame <- dbGetQuery(My_conn, "SELECT * FROM My_Table")
write.csv(my.data.frame, file = "MyFileName.csv")