How many ways are there of importing data into MySQL? - mysql

I have a page on my website that users fill in to insert properties; it has 54 input boxes.
I don't want this information to go straight into my database, because at around 200 records per day it would quickly grow heavy.
What I want is to collect the data from users and confirm it; only after confirmation should it be imported.
How many ways are there to import data into MySQL?

It should be as simple as...
LOAD DATA INFILE '/tmp/mydata.txt' INTO TABLE PerformanceReport;
By default, LOAD DATA INFILE expects tab-delimited fields with one row per line, so it should take the file in just fine.
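For the confirm-before-import workflow the asker describes, one common pattern is to load submissions into a staging table and promote rows to the live table only once they are approved. A minimal sketch, assuming hypothetical properties / properties_staging tables and a confirmed flag:
-- load raw user submissions into a staging table
LOAD DATA INFILE '/tmp/mydata.txt' INTO TABLE properties_staging;
-- after review, copy only the approved rows into the live table
INSERT INTO properties (title, price, address)
SELECT title, price, address FROM properties_staging WHERE confirmed = 1;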
IMPORT
1. Make sure the database you need has already been created. If it has not, first create the database (see "How do I create a database?").
CAUTION:
If you import a backup file to a database that already has content, it will replace the existing content.
2. Use FTP to upload your SQL file to your server. You can upload it to your default FTP directory, or see Step 1 in the "Export" instructions above for another suggestion. Alternately, you can use scp to upload your file via SSH.
3. Log into your server via SSH.
4. Use the cd command to navigate into the directory where you uploaded your backup file in Step 2. If you uploaded the backup to your data directory, go here (replace 00000 with your site number):
cd /home/00000/data/
5. Import the database by executing the following command:
`mysql -h internal-db.s00000.gridserver.com -u username -p dbname < dbname.sql`
OR:
`mysql -h internal-db.s00000.gridserver.com -u username -p dbname -e 'source dbname.sql'`
Once you execute this command, you will be prompted for your database password. Type it in and hit enter. Your database will now import. It may take a few minutes if you have a large database. When the import is done, you will be returned to the command prompt.
NOTE:
Variables are the same as in Step 3 of the Export section above. Please check Step 3 in the "Export" section to make sure you are correctly replacing the example code with your own information. dbname.sql is the actual name of your SQL file.
If you have a gzipped backup of your database, you can use this line instead:
`gunzip < dbname.gz | mysql -h internal-db.s00000.gridserver.com -u username -p dbname`
You can enter your own username, database name, and backup file name, as before. dbname.gz is the name of your gzipped backup file. Use "unzip" instead of "gunzip" for zipped files.
6. Remove the SQL file from your web-accessible directory if you uploaded it to a public folder. Otherwise, anyone can download it from the web.
If you get an error that looks like this:
Got Error: 1045: Access denied for user 'db00000'@'internal-db.s00000.gridserver.com' (using password: YES) when trying to connect
then you have entered an incorrect password. Please retype it carefully, or reset your password via the AccountCenter Control Panel. See "Database users on the Grid" for instructions.
If you get an SQL error during the import, you can force it to finish by adding "-f" (for "force") to the command. For example:
`mysql -f -h internal-db.s00000.gridserver.com -u username -p dbname -e 'source dbname.sql'`
This can help you finish an import if you have a few corrupt tables but need to get the database as a whole imported before you do anything else.
http://dev.mysql.com/doc/refman/5.0/en/load-data.html
https://dev.mysql.com/doc/refman/5.0/en/loading-tables.html
https://www.mysql.com/why-mysql/windows/excel/import/
http://www.itworld.com/it-management/359857/3-ways-import-and-export-mysql-database

$file = '/path/to/csv/importtabledata.csv';
// NOTE: replace the column list placeholder below with the actual
// columns of `imports`, separated by commas
$import = "LOAD DATA LOCAL INFILE '" . $file . "' INTO TABLE `imports`
    FIELDS TERMINATED BY ','
    OPTIONALLY ENCLOSED BY '\"'
    LINES TERMINATED BY '\n'
    (column1, column2, column3);";
mysql_query($import) or die(mysql_error()); // old mysql_* API; use mysqli_query() on modern PHP

Related

How to insert a CSV into MySQL with the XAMPP shell

I am trying to upload a SQL file into the database. I am running the following code:
# mysql - u root -p idecon123 < "d:\IdeOffline\answersnew.sql"
But when I run this, it just shows the help manual.
Is there any mistake in my command?
Thanks
Go to http://localhost/ after starting Apache and MySQL. On the left side you will see a link for phpMyAdmin. Click that, then upload your .sql file through the browser. See http://wiki.phpmyadmin.net/pma/import for how to import files.
Edit: for big files, set an UploadDir per http://docs.phpmyadmin.net/en/latest/config.html. Copy the file into that directory and you'll be able to import it directly.
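As a sketch, that setting lives in phpMyAdmin's config.inc.php (the directory path below is only an example):
$cfg['UploadDir'] = '/path/to/upload'; // files copied here appear in phpMyAdmin's Import tab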
You can do that by using the mysqlimport command in the command prompt.
Here is the syntax:
mysqlimport -u root -pabc#123 --ignore-lines=1 --fields-terminated-by="," --lines-terminated-by="\n" --local upam_dev C:\Users\ukrishnan\Desktop\Generated-Players\PlayerProfile.dataSet-1.csv
--ignore-lines - skips lines at the start of the CSV, useful if the first line is something like a title row
--fields-terminated-by - specifies how each field is terminated, e.g. by a , or a | symbol
The filenames must correspond with the tables into which their data will be imported. If a filename contains one or more dots, the portion before the first dot is taken as the table name.
If you have data in multiple files, you can name them like Profile.File-1.csv and Profile.File-2.csv, so that the table name Profile is used when you import these files.
This is extremely fast compared with importing through phpMyAdmin: it took me around 3-4 seconds to import a file with 100K records.
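As a sketch of the multi-file case (the database and file names are examples), both files below load into the Profile table, because the portion before the first dot is used as the table name:
mysqlimport -u root -p --local --ignore-lines=1 --fields-terminated-by="," upam_dev Profile.File-1.csv Profile.File-2.csv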

Import *.ods file in MySQL without phpMyAdmin

I'm trying to import a *.ods file using mysqlimport or LOAD DATA instead of the phpMyAdmin interface, because I need to automate the process.
This is my first attempt:
mysqlimport --ignore-lines=1 -u root -p DATABASE /home/luca/Scrivania/lettura.ods
mysqlimport can't load the file because it contains two sheets; I need both, and I can't modify the file structure.
With phpMyAdmin I'm able to upload the file correctly: the content of the two sheets correctly fills the two tables. I know that when importing an *.ods file like this, phpMyAdmin uses each sheet name as the table name, but this is not the behavior of mysqlimport. mysqlimport uses the file name, not the sheet name.
So I've tried this:
mysql -u root -p -e "load data local infile '/home/luca/Scrivania/lettura.ods' into table TABLE_NAME" DATABASE
It returns no error, but the data in the table is totally inconsistent.
Any suggestions?
Thank you
MySQL doesn't know how to read spreadsheets. Your best bet is to use your spreadsheet program to export a CSV, then load that new CSV into the database. Specify comma-delimited loading in your query and try it:
LOAD DATA LOCAL INFILE '/home/luca/Scrivania/lettura.csv'
INTO TABLE TABLE_NAME
FIELDS TERMINATED BY ','
ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '\n'
STARTING BY ''
The documentation for file loading contains tons of possible options, if you need them.
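Since the goal is automation, the ODS-to-CSV conversion can also be scripted. A sketch using LibreOffice's headless mode (note that --convert-to csv exports only the first sheet, so you may still need one file per sheet):
soffice --headless --convert-to csv --outdir /home/luca/Scrivania /home/luca/Scrivania/lettura.ods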

Export a large MySQL table as multiple smaller files

I have a very large MySQL table on my local dev server: over 8 million rows of data. I loaded the table successfully using LOAD DATA INFILE.
I now wish to export this data and import it onto a remote host.
I tried LOAD DATA LOCAL INFILE to the remote host. However, after around 15 minutes the connection to the remote host fails. I think that the only solution is for me to export the data into a number of smaller files.
The tools at my disposal are phpMyAdmin, HeidiSQL and MySQL Workbench.
I know how to export as a single file, but not multiple files. How can I do this?
I just did an import/export of a (partitioned) table with 50 million records; it needed just 2 minutes to export it from a reasonably fast machine and 15 minutes to import it on my slower desktop. There was no need to split the file.
mysqldump is your friend, and knowing that you have a lot of data, it's better to compress it:
#host1:~ $ mysqldump -u <username> -p <database> <table> | gzip > output.sql.gz
#host1:~ $ scp output.sql.gz host2:~/
#host1:~ $ rm output.sql.gz
#host1:~ $ ssh host2
#host2:~ $ gunzip < output.sql.gz | mysql -u <username> -p <database>
#host2:~ $ rm output.sql.gz
Take a look at mysqldump.
Your lines should be (from the terminal):
Export to backupfile.sql from db_name in your MySQL:
mysqldump -u user -p db_name > backupfile.sql
Import from backupfile to db_name in your MySQL:
mysql -u user -p db_name < backupfile.sql
You have two options in order to split the information:
Split the output text file into smaller files (as many as you need; there are many tools to do this, e.g. split - see the sketch below).
Export one table at a time using the option to add a table name after the db_name, like so:
mysqldump -u user -p db_name table_name > backupfile_table_name.sql
Compressing the file(s) (a text file) is very efficient and can shrink it to about 20%-30% of its original size.
Copying the files to remote servers should be done with scp (secure copy), and interaction should take place via ssh (usually).
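A combined sketch of splitting, compressing, and copying (file names and the chunk size are examples; split works on line boundaries, and mysqldump writes each INSERT on a single line):
split -l 500000 backupfile.sql part_
gzip part_*
scp part_*.gz user@remote_host:~/
# then, on the remote host, recombine and import:
gunzip part_*.gz
cat part_* | mysql -u user -p db_name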
Good luck.
I found that the advanced options in phpMyAdmin allow me to select how many rows to export, plus the start point. This allows me to create as many dump files as required to get the table onto the remote host.
I had to adjust my php.ini settings, plus the phpMyAdmin 'ExecTimeLimit' config setting, as generating the dump files takes some time (500,000 rows in each).
I use HeidiSQL to do the imports.
As an example of the mysqldump approach for a single table:
mysqldump -u root -ppassword yourdb yourtable > table_name.sql
Importing is then as simple as:
mysql -u username -ppassword yourotherdb < table_name.sql
Use mysqldump to dump the table into a file.
Then use tar with the -z option to gzip it.
Transfer it to your remote server (with ftp, sftp, or another file transfer utility).
Then untar the file on the remote server.
Use mysql to import the file.
There is no reason to split the original file or to export into multiple files.
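A sketch of those steps (database, table, and host names are examples):
mysqldump -u user -p mydb mytable > mytable.sql
tar -czf mytable.tar.gz mytable.sql
scp mytable.tar.gz user@remote_host:~/
# on the remote server:
tar -xzf mytable.tar.gz
mysql -u user -p mydb < mytable.sql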
If you are not comfortable with using the mysqldump command line tool, here are two GUI tools that can help you with that problem, although you have to be able to upload them to the server via FTP!
Adminer is a slim and very efficient DB manager tool that is at least as powerful as phpMyAdmin and consists of ONE SINGLE FILE that has to be uploaded to the server, which makes it extremely easy to install. It works way better with large tables / DBs than PMA does.
MySQLDumper is a tool developed especially to export / import large tables / DBs, so it will have no problem with the situation you describe. The only downside is that it is a bit more tedious to install, as there are more files and folders (~350 files in ~1.5MB), but it shouldn't be a problem to upload it via FTP either, and it will definitely get the job done :)
So my advice would be to first try Adminer, and if that one also fails, go the MySQLDumper route.
How do I split a large MySQL backup file into multiple files?
You can use mysql_export_explode
https://github.com/barinascode/mysql-export-explode
<?php
# Include the class
include 'mysql_export_explode.php';
$export = new mysql_export_explode;
$export->db = 'dataBaseName'; # -- set your database name
$export->connect('host','user','password'); # -- connect to the database
$export->rows = array('Id','firstName','Telephone','Address'); # -- set which fields you want to export
$export->exportTable('myTableName',15); # -- table name and the number of parts to split the table into
?>
At the end, SQL files are created in the directory where the script was executed, in the following format:
---------------------------------------
myTableName_0.sql
myTableName_1.sql
myTableName_2.sql
...

How do I get a tab-delimited MySQL dump from a remote host?

A mysqldump command like the following:
mysqldump -u<username> -p<password> -h<remote_db_host> -T<target_directory> <db_name> --fields-terminated-by=,
will write out two files for each table (one is the schema, the other is CSV table data). To get CSV output you must specify a target directory (with -T). When -T is passed to mysqldump, it writes the data to the filesystem of the server where mysqld is running - NOT the system where the command is issued.
Is there an easy way to dump CSV files from a remote system?
Note: I am familiar with using a simple mysqldump and handling the STDOUT output, but I don't know of a way to get CSV table data that way without doing some substantial parsing. In this case I will use the -X option and dump XML.
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent > my_file.csv
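Note that --batch with --silent emits tab-separated values, which is the tab-delimited output the question asks for, despite the .csv name in the example. If commas are genuinely needed, a rough conversion - safe only when the data itself contains no tabs, commas, or embedded newlines - is:
mysql -h remote_host --batch --silent -e "SELECT * FROM my_schema.my_table" | tr '\t' ',' > my_file.csv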
I want to add to codeman's answer. It worked, but needed about 30 minutes of tweaking for my needs.
My webserver uses CentOS 6/cPanel, and the flags and sequence codeman used above did not work for me; I had to rearrange them and use different flags, etc.
Also, I used this for a local file dump; it's not just useful for remote DBs, because I had too many issues with SELinux and MySQL user permissions for SELECT INTO OUTFILE commands, etc.
What worked on my CentOS + cPanel server:
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt
Caveats
No Column Names
I can't get column names to appear at the top. I tried adding the flag:
--column-names
but it made no difference. I am still stuck on this one. I currently add it to the file after processing.
Selecting a Database
For some reason, I couldn't include the database name on the command line. I tried with
-D databasename
on the command line, but I kept getting permission errors, so I ended up using the following at the top of my query.sql:
USE database_name;
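Putting the pieces together, a sketch of the query.sql used above (the table name is an example):
USE database_name;
SELECT * FROM my_table;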
On many systems, MySQL runs as a distinct user (such as user "mysql") and your mysqldump will fail if the MySQL user does not have write permissions in the dump directory - it doesn't matter what your own write permissions are in that directory. Changing your directory (at least temporarily) to world-writable (777) will often fix your export problem.

Problem importing a database in MySQL

I have a .sql file with some database backups inside. Now I want to restore them back to MySQL. How can I do this using the MySQL command line? I found this:
mysql -u username -p -h localhost database_name < dumpfile.sql
but I don't know what username should be, what database_name should be, or how to point it at a .sql file in another folder.
You need to replace username with your database username, and it will prompt you for a password. If the dump file has the "CREATE DATABASE [name];" and "USE [name];" instructions, then you don't need to specify the database_name argument.
To pull the .sql from another folder, you just need to specify the path (/home/user/Downloads/file.sql, for example).
You could also try downloading MySQL Administrator from the MySQL website.
Check this link too:
http://www.techiecorner.com/31/how-to-restore-mysql-database-from-sql-dump-file/
Redirecting a .sql file into the MySQL CLI works because that's the format that mysqldump produces, and people usually call mysqldump to dump a whole database, so they get one file afterwards.
The username and password depend on what has been set up on the database instance you want to reload the data into. On a clean, empty install, the MySQL root user will work (and probably won't have a password). On an established install, you should find an appropriate user. The user you use will need substantial permissions, as it needs to create and write to tables.
The .sql file may have CREATE DATABASE and USE database statements near the top. If they are present, make sure that database does not exist before you pipe the file in. If not, you will need to find out what name is expected by whatever program will be using the database.
As for piping in a file from a different directory, this is simple shell notation. The < filename notation fully supports paths, so you can do < some/other/path/filename.sql or < ~/sql/filename.sql, for example. (Note that I've assumed you're using a Unix shell.)
You can also use cmd on Windows:
1. Run cmd as administrator (you'll start at C:\windows\system32>).
2. Change to MySQL's bin folder:
cd C:\xampp\mysql\bin
3. Connect to the server:
mysql -u username -p -h localhost database_name
4. At the mysql prompt, run:
use database_name;
source F:/example.sql;