I'm using HBase 0.92.1-cdh4.1.2. I have almost 100 tables in HBase [size about 10 GB]. I want to export all the data to MySQL. I don't think Sqoop is a viable option, as it is mostly used to move data from SQL into NoSQL databases rather than the other way around. Another option would be exporting the HBase data as flat files to the local file system and loading the data into MySQL with the mysqlimport option.
Is there any way we can dump the data directly into MySQL rather than exporting it as flat files first?
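For reference, a rough sketch of the flat-file fallback described above, with hypothetical paths and table name (mysqlimport infers the target table from the file name, and the CSV column order has to match the MySQL table):

# copy one exported flat file out of HDFS to the local file system
hdfs dfs -get /export/mytable/part-00000 /tmp/mytable.csv

# load it into MySQL; the file name "mytable.csv" determines the target table "mytable"
mysqlimport --local --fields-terminated-by=',' --user=dbuser --password mydb /tmp/mytable.csv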
I have a PostgreSQL schema file, and I have an SQL dump from a MySQL database. I want to know how I can import my MySQL dump file into PostgreSQL, using the PostgreSQL schema file.
You can't restore a MySQL dump file directly into PostgreSQL.
You can use the conversion tools listed on the PostgreSQL wiki, or a dedicated migration tool, such as:
PostgreSQL wiki
ESF Database Migration Toolkit
Can you install another MySQL, load the dump there, and export the data in CSV format? This should allow you to load the data into PostgreSQL, using the COPY command.
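A minimal sketch of that approach, assuming a table named mytable exists with the same structure on both sides (paths are hypothetical):

-- in MySQL: write the table out as CSV (the MySQL server must be allowed to write to this path)
SELECT * INTO OUTFILE '/tmp/mytable.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  FROM mytable;

-- in PostgreSQL: load the CSV into the matching table
-- (or use \copy in psql to read the file from the client side)
COPY mytable FROM '/tmp/mytable.csv' WITH CSV;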
I am a newbie to Sqoop. As per my understanding, Sqoop commands are for importing data from a database like MySQL to HDFS and vice versa, and HDFS commands are for dealing with data in HDFS, such as getting data from HDFS to the local file system and vice versa. Can't we use Sqoop commands to deal with data in HDFS, i.e. to get data from the local file system to HDFS and vice versa? Please let me know the exact differences between Sqoop and HDFS commands. Why do we have two separate things? Why didn't they put all these commands into one set? Apologies if my question does not make sense.
Sqoop commands serve the following purposes:
1) Import/export data from any database to HDFS/Hive/HBase and vice versa; it is not restricted to plain HDFS import and export.
2) A whole database or a list of tables can be sqooped in one go.
3) Incremental imports are supported, so only the new or changed data can be imported via Sqoop commands.
4) It also requires a connection (JDBC) driver to connect to the databases.
In short, it deals with tables and databases.
HDFS commands:
1) They are only used to transfer files of any type (CSV, text, XLS) from the local file system to HDFS or vice versa. They just serve the basic functionality of moving or copying data from one system to the other, much like Unix commands (see the example commands below).
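A hedged sketch of the difference, with made-up connection details, table names, and paths (the MySQL JDBC driver must be available to Sqoop):

# Sqoop: copy a MySQL table into HDFS (works on tables, through the JDBC driver)
sqoop import --connect jdbc:mysql://dbhost/mydb \
  --username dbuser --password dbpass \
  --table employees --target-dir /user/hadoop/employees

# HDFS shell: copy plain files between the local file system and HDFS (works on files)
hdfs dfs -put /tmp/employees.csv /user/hadoop/employees.csv
hdfs dfs -get /user/hadoop/employees.csv /tmp/employees_copy.csv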
Sqoop's only functionality is importing and exporting data between an RDBMS (structured data) and Hadoop. It does not provide any other HDFS-side operations. Once you have got the data into HDFS using Sqoop, HDFS commands are used to handle it (copy, move, etc.).
For more Sqoop functionality, see http://hortonworks.com/apache/sqoop/
Yes, your understanding is correct.
Sqoop commands are for:
importing data from any relational database (like MySQL) to HDFS/Hive/HBase
exporting data from HDFS/Hive/HBase to any relational database (like MySQL)
HDFS commands are for:
copying/transferring any files (like .txt, .csv, .xls, etc.) from the local file system to HDFS or vice versa.
As for: "Why do we have two separate things? Why didn't they put all these commands into one set?"
Answer:
Sqoop commands (for copying structured data between two different systems)
HDFS commands (for copying files between the local file system and HDFS)
Using Sqoop we cannot copy files from the local file system to HDFS and vice versa, and using HDFS commands we cannot copy data from HDFS to any other external database (like MySQL) and vice versa.
I'm trying to use Percona XtraBackup to back up a MySQL database. To restore the database, according to the documentation:
rsync -avrP /data/backup/ /var/lib/mysql/
This will copy ibdata1 as well.
What if I want to restore the backup into an existing MySQL instance with some existing databases? Would this corrupt my other databases? Clearly this will overwrite the existing ibdata1.
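For context, the full-instance restore flow the documentation describes looks roughly like this (a sketch only; paths are the ones from above, and it does not address restoring into a data directory that already holds other databases):

# apply the redo log to the backup before copying anything back
# (xtrabackup --prepare is the newer equivalent of innobackupex --apply-log)
xtrabackup --prepare --target-dir=/data/backup

# the documented flow assumes mysqld is stopped and the target datadir is empty
service mysql stop
rsync -avrP /data/backup/ /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql
service mysql start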
I suppose you have a local HTTP/PHP server, so in case you don't need to batch import or export information, I suggest you use a database manager app that can import or export SQL, CSV or TSV files.
I use a web-based admin tool called Adminer and it works great (plus, it's just a single PHP file). It has options to export or import a whole database or just certain tables, and even specific records. Its usage is pretty straightforward.
I have recently switched web hosting providers and the new provider does not allow LOAD DATA INFILE commands in MySQL. However, I have many CSV files that update tables in a database weekly using that command. What is the best way to import into MySQL without the typical LOAD DATA option? I tried mysqlimport, but that seems to fail since the data isn't in SQL format, it's just standard CSV data. Thanks for your help.
Use the following process:
Convert the CSV to a MySQL SQL dump, i.e. a file of INSERT statements (see the conversion sketch after the commands below)
Upload the file to the MySQL server or to the shared hosting file system
Use one of the following commands to import it:
mysqladmin:
mysqladmin create db1
mysql db1 < dump.csv
mysql:
mysql> CREATE DATABASE IF NOT EXISTS db1;
mysql> USE db1;
mysql> source dump.csv
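For the conversion step, one hypothetical approach is a small script that turns each CSV row into an INSERT statement; the table name t1 and its three columns are assumptions, and this naive one-liner does not handle commas, quotes, or escapes inside field values:

# build dump.csv as a series of INSERT statements from data.csv
awk -F, -v q="'" '{print "INSERT INTO t1 (col1, col2, col3) VALUES (" q $1 q ", " q $2 q ", " q $3 q ");"}' data.csv > dump.csv

After that, the mysql commands above load dump.csv like any other SQL-format dump.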
References
MySQL :: MySQL 5.7 Reference Manual :: 7.4.2 Reloading SQL-Format Backups
Text::CSV::Auto - Comprehensive and automatic loading, processing, and analysis of CSV files. - metacpan.org
MySQL :: MySQL 5.7 Reference Manual :: 8.4.3 Dumping Data in Delimited-Text Format with mysqldump
Restore data from a tab delimited file to MySQL - Electric Toolbox
Using mysqldump to save data to CSV files - Electric Toolbox
Mysqldump in CSV format
I am using HBase 0.90.6. I want to export the data from HBase to MySQL. I know the two-step process: first run a MapReduce job to pull the HBase data into flat files, then export the flat-file data into MySQL.
Is there any other tool I can use to reduce this two-step process to one? Or can we use Sqoop to do the same in one step? Thanks.
I'm afraid Sqoop does not support exporting directly from HBase at the moment. Sqoop can help you with the second step of the two-step process, e.g. Sqoop can take data from HDFS and export it to MySQL.
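A rough sketch of that second step, assuming the first step has already written comma-delimited flat files under /user/hbase/export/mytable in HDFS and that a matching table already exists in MySQL (connection details and names are made up):

# push the delimited files in HDFS into the existing MySQL table "mytable"
sqoop export \
  --connect jdbc:mysql://dbhost/mydb \
  --username dbuser --password dbpass \
  --table mytable \
  --export-dir /user/hbase/export/mytable \
  --input-fields-terminated-by ','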
Yes, Sqoop is the tool that can be used for both importing and exporting your data between MySQL and HBase.
You can learn more about Sqoop at
http://sqoop.apache.org
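For the import direction (MySQL into HBase), a hedged sketch of the relevant Sqoop options, with made-up table, column family, and row key names:

# import the MySQL table "customers" directly into an HBase table with one column family
sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --username dbuser --password dbpass \
  --table customers \
  --hbase-table customers \
  --column-family cf \
  --hbase-row-key id \
  --hbase-create-table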