I have an exported SQL schema file (similar to the one here: https://livesql.oracle.com/apex/livesql/file/content_O5AEB2HE08PYEPTGCFLZU9YCV.html).
This file contains all the CREATE TABLE and INSERT statements.
I want to export the entire database to CSV or JSON format.
Is there a way to achieve this?
You appear to be asking how to convert the data from the .sql script directly to a raw data format. That would require a SQL parser capable of reading the .sql script format. This is implemented in MySQL using a combination of the mysql client and the MySQL Server SQL parser. It would be an awful lot of work to duplicate this.
Honestly, the easiest solution is to use the mysql client to import the .sql script's tables and data into a MySQL instance. Then you could dump the data in CSV format, or whatever other format you want.
You can run queries using the mysql client in batch mode to dump results to CSV format (really tab-delimited), or you could write a simple client in Python or whatever your favorite language is.
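For example, a minimal sketch of the batch-mode approach (database, table, and credentials here are placeholders):

    # --batch prints tab-separated rows with a header line; redirect to a file
    mysql --batch -u root -p -e "SELECT * FROM mytable" mydatabase > mytable.tsv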
Another alternative is to use mysqldump --tab to dump CSV files. I encourage you to read the documentation about that.
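A hedged sketch of the --tab approach (the dump directory must be writable by the MySQL server, and secure_file_priv applies; names are placeholders):

    # Writes a .sql schema file plus a .txt data file per table into /tmp/dump
    mysqldump -u root -p --tab=/tmp/dump \
      --fields-terminated-by=',' --fields-enclosed-by='"' mydatabase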
I am trying to import data from a MySQL dump .sql file into MongoDB, but I could not find any mechanism for RDBMS-to-NoSQL data migration.
I have tried converting the data into JSON and CSV, but it is not giving me the desired output in MongoDB.
I considered Apache Sqoop, but it is mostly for moving data between SQL/NoSQL stores and Hadoop.
I cannot work out how it would be possible to migrate data from MySQL to MongoDB.
Is there any approach apart from what I have tried so far?
I am hoping to hear of a better and faster solution for this type of migration.
I suggest you dump the MySQL data to a CSV file. You can also try other file formats, but make sure the format is one you can import into MongoDB easily; both MongoDB and MySQL support the CSV format very well.
You can use mysqldump or SELECT ... INTO OUTFILE to dump MySQL databases for backup. Using mysqldump may take a long time, so have a look at "How can I optimize a mysqldump of a large database?".
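For example, a hedged sketch of the OUTFILE route (the server writes the file, so the path is on the server and secure_file_priv must allow it; names are placeholders):

    # Dump one table as CSV; the output file must not already exist
    mysql -u root -p mydatabase -e \
      "SELECT * FROM mytable
       INTO OUTFILE '/var/lib/mysql-files/mytable.csv'
       FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
       LINES TERMINATED BY '\n'"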
Then use the mongoimport tool to import the data.
As far as I know, there are three ways to optimize this import (see the sketch after the list):
mongoimport --numInsertionWorkers N starts several insertion workers; N can be the number of cores.
mongod --nojournal: most of the continuous disk usage comes from the journal, so disabling journaling might be a good optimization (at the cost of durability).
Split up your file and start parallel jobs.
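A hedged example that combines the import with parallel insertion workers (database, collection, and file names are placeholders):

    # --headerline takes the field names from the first row of the CSV
    mongoimport --db mydb --collection people \
      --type csv --headerline --file people.csv \
      --numInsertionWorkers 4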
Actually, in my opinion, importing and exporting the data is not the hard part. It seems that your dataset is large, so if you don't design your document structure, the result will still be slow; automatic migrations from a relational database to MongoDB are not recommended because the resulting database performance might not be good.
So it's worth designing your data structure; you can check out Data models.
Hope this helps.
You can use Mongify, which helps you move/migrate data from SQL-based systems to MongoDB. It supports MySQL, PostgreSQL, SQLite, Oracle, SQL Server, and DB2.
It requires Ruby and RubyGems as prerequisites. Refer to the documentation to install and configure Mongify.
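As a rough sketch of the typical Mongify workflow (file names are examples; see the Mongify docs for the exact configuration format):

    gem install mongify
    mongify check database.config                          # verify both connections
    mongify translation database.config > translation.rb  # generate a draft mapping
    mongify process database.config translation.rb        # run the migration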
I have a CSV file consisting of 78,000 records. I'm using smarter_csv (https://github.com/tilo/smarter_csv) to parse the CSV file. I want to import it into a MySQL database from my Rails app. I have the following two questions:
What would be the best approach to quickly import such a large dataset into MySQL from my Rails app? Would using Resque or Sidekiq to create multiple workers be a good idea?
I need to insert this data into a given table which is present in multiple databases. In Rails, a model talks to only one database, so how can I scale the solution to talk to multiple MySQL databases from my model?
Thank You
One way would be to use the native interface of the database application itself for importing and exporting; it would be optimised for that specific purpose.
For MySQL, the mysqlimport provides that interface. Note that the import can also be done as an SQL statement and that this executable provides a much saner interface for the underlying SQL command.
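A hedged sketch of a mysqlimport invocation (credentials and paths are placeholders; note that mysqlimport derives the table name from the file name, so people.csv loads into the table people):

    mysqlimport -u root -p --local \
      --fields-terminated-by=',' --fields-enclosed-by='"' \
      --lines-terminated-by='\n' mydatabase /path/to/people.csv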
As far as implementation goes, if this is a frequent import exercise, a Sidekiq/Resque/cron job is a good approach.
[EDIT]
The SQL command referred to above is the LOAD DATA INFILE as the other answer points out.
Performance-wise, probably the best method is to use MySQL's LOAD DATA INFILE syntax and execute an import command on each database. This requires the data file to be local to each database instance.
As the other answer suggests, mysqlimport can be used to ease the import, as the LOAD DATA INFILE statement syntax is highly customisable and can deal with many data formats.
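For example, a sketch using the LOCAL variant, which reads the file from the client instead (all names are placeholders; plain LOAD DATA INFILE would need the file on each server):

    # Run the same LOAD DATA statement against each database in turn;
    # -p prompts for the password on every iteration
    for DB in db1 db2 db3; do
      mysql --local-infile=1 -u root -p "$DB" -e \
        "LOAD DATA LOCAL INFILE '/tmp/records.csv'
         INTO TABLE records
         FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
         LINES TERMINATED BY '\n'
         IGNORE 1 LINES"
    done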
So, I've got a MySQL database consisting of a bunch of tables that I want to give to my uncle.
Problem is, he doesn't know much about computers, so I can't just hook him up with the database.
Instead, I would like to extract all the data from the database into a more readable format, e.g. Excel spreadsheets.
I've tried mysqldump, but that just gives me a *.sql file which doesn't help much.
Any ideas?
If you only have the command line, this answer explains how to dump to a tab-delimited file and this answer how to dump to a comma-delimited file. You can then import the tab-delimited file into Excel.
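A hedged one-liner for the comma-delimited variant (names are placeholders; the naive substitution does not handle values that themselves contain tabs or commas):

    # Dump tab-delimited, then turn tabs into commas (GNU sed)
    mysql --batch -u root -p -e "SELECT * FROM orders" shop | sed 's/\t/,/g' > orders.csv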
Alternatively, you can use phpMyAdmin to export to *.csv if you have PHP.
phpMyAdmin allows you to export to many different formats, so why don't you access your database through that?
I have to run some queries over several Excel sheets, and I think it would be easier if I could put them in a database and run the queries with SQL.
Is there a tool that creates SQL tables from a CSV file with headers?
MySQL has built-in support for loading data from CSV files. This is probably your best option and the one that gives you the most control.
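For instance, a hedged sketch: create a table matching the CSV headers by hand, then bulk-load the file while skipping the header row (names, types, and paths are placeholders; LOCAL requires local_infile to be enabled):

    mysql --local-infile=1 -u root -p mydb -e "
      CREATE TABLE sheet1 (id INT, name VARCHAR(100), qty INT);
      LOAD DATA LOCAL INFILE '/path/to/sheet1.csv'
      INTO TABLE sheet1
      FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
      LINES TERMINATED BY '\n'
      IGNORE 1 LINES;"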
Have a look at the Data Import tool (Excel, CSV or TEXT formats) in dbForge Studio for MySQL. The import wizard allows you to create a new table and customize its fields.
I'm using Firebird database and I need to load Excel file into a database table. I need a tool that does this well. I tried some I found on Google, but all of them have some bugs.
Since the Excel data is not created by me, it would be good if the tool could scan the file, work out what kind of data is inside, and suggest a table to be created in the database.
Also, it would be nice if I could compare the file against the data that is already in the database table, so that I can pick which data to load and which to skip.
Tools that load CSV files are also fine, I can "Save as" CSV from Excel before loading.
Well, if you can use CSV, then I guess XMLWizard is the right tool for you. It can load a CSV file and compare it with the database data, and you can select the changes you wish to make to the table.
Don't let the name fool you; it does work with XML, but it also works very well with CSV files. It can also estimate the column datatypes and offer a CREATE TABLE statement for your file.
Have you tried FSQL?
It's freeware, very similar to Firebird's standard ISQL but with some extra features, like importing data from CSV files.
I've used it with DBF files and it worked fine.
There is also the EMS Data Import tool for Firebird and InterBase:
http://www.sqlmanager.net/en/products/ibfb/dataimport
It's not free, but it accepts a wide variety of formats, including CSV and Excel.
EDIT
Another similar payware tool is Firebird Data Wizard http://www.sqlmaestro.com/products/firebird/datawizard/
There are some online tools which can help you generate DDL/DML scripts from a CSV header or sample dump file; check out http://www.convertcsv.com/csv-to-sql.htm.
You can then use SQL Workbench's Data Pumper or its WbImport tool from the command line.
Orbada also has a GUI that supports importing CSV files.
The free edition of DBeaver also supports importing CSV out of the box.
BULK INSERT
Another way: in Excel, build formulas in new cells that format the data you want to export as fixed-length strings, with each field's length matching the corresponding column length in Firebird. You can then copy all those cells and paste them into a text editor, which makes it possible to use the BULK INSERT strategy in Firebird.
See more details in http://www.firebirdfaq.org/faq209/
The catch is importing BLOB or NULL data, so check whether you have those kinds of values and whether this approach works for you.
If you have the data formatted in a text file, BULK INSERT is a quick way.
Hint: you can also disable the triggers and indexes associated with your table to speed up the BULK INSERT, and re-enable them afterwards.
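A hedged illustration of that hint with Firebird's isql (index, table, and file names are placeholders; the bulk file itself is just a long script of INSERT statements):

    # Contents of bulk.sql (generated from the Excel cells), for example:
    #   ALTER INDEX idx_people_name INACTIVE;
    #   INSERT INTO people (name, age) VALUES ('Alice', 30);
    #   INSERT INTO people (name, age) VALUES ('Bob', 25);
    #   ALTER INDEX idx_people_name ACTIVE;
    #   COMMIT;
    isql -user SYSDBA -password masterkey -i bulk.sql /data/mydb.fdb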
Roberto Novakosky
I load the Excel file into a Lazarus spreadsheet and then export it to the Firebird DB. Everything is fine; the only problem is that fpspreadsheet will treat a string field containing only numbers as a number field. I can check the titles in the first row to see whether the Excel file is valid or not.
As far as I can see, all replies so far focus on tools that essentially read the Excel (or CSV) file and use SQL INSERTs to insert the records into the Firebird database. While this works, I have always found this approach painfully slow.
That's why I created a tool that reads an Excel file and writes one file in a (text) format suitable for a Firebird external table (including support for UTF8 char columns) and one DDL file to create the external table in Firebird.
I then use regular SQL to select from the external table, cast as needed, and insert into whatever normal Firebird table I want. The performance with this approach is orders of magnitude faster than SQL inserts from a client app in my experience.
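To make the idea concrete, a hedged sketch of the external-table step (all names are placeholders; the server's firebird.conf must permit external files via ExternalFileAccess):

    # ext.sql maps the generated fixed-width text file onto an external
    # table and copies it into a normal table, casting as needed:
    #   CREATE TABLE ext_people EXTERNAL FILE '/data/people.txt' (
    #     name CHAR(40), age CHAR(3), lf CHAR(1));
    #   INSERT INTO people (name, age)
    #     SELECT TRIM(name), CAST(TRIM(age) AS INTEGER) FROM ext_people;
    #   COMMIT;
    isql -user SYSDBA -password masterkey -i ext.sql /data/mydb.fdb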
I would be willing to publish the tool. It's written in C#. Let me know if there's any interest.