MySQL OUTFILE to current working directory? - mysql

Is there a way I can make the OUTFILE statement in a .sql file point to the path of the current working directory of the .sql file itself, without manually specifying an absolute path name? As it is now, the default location is the data directory of the schema that I'm working with (i.e. C:\progra~1\mysql\etc\etc).
Thanks!

Seems like scripting would be your best bet here: use Perl or PHP to generate the query with the OUTFILE path set to the current working directory (sketched below with the shell, but the same idea applies in any scripting language).
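For instance, a shell one-liner can splice the current working directory into the statement. This is only a sketch: db1.tbl1 and output.tsv are placeholders, and INTO OUTFILE still writes on the server's filesystem, so it only helps when the client and server are the same machine (and secure_file_priv allows the target directory).
# $PWD is expanded by the shell before mysql ever sees the query
mysql -e "SELECT * FROM db1.tbl1 INTO OUTFILE '$PWD/output.tsv'"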

Not directly, but you can get similar functionality by using mysql at the command line.
First, some background: the LOAD DATA INFILE statement has a LOCAL variant, LOAD DATA LOCAL INFILE, that allows you to read data from a file on your local file system; without LOCAL, it reads from the specified location on the server.
Unfortunately, the SELECT INTO OUTFILE statement always writes to the specified location on the server's filesystem (which can include server-configured mounts such as NFS or CIFS shares); there is no LOCAL equivalent.
However, you can approximate such functionality by using the mysql client in batch mode:
mysql -e 'SELECT * FROM db1.tbl1' -B > output.tsv
The -B option causes the statement to run in batch mode, so the query results are printed tab-delimited, with no borders. Redirecting with > somefile.txt sends that output to the specified file, so you have something equivalent to what you want, without the FIELDS or LINES options of SELECT INTO OUTFILE, but close enough!
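If you also want to suppress the column-name header row, add the -N (--skip-column-names) option:
mysql -B -N -e 'SELECT * FROM db1.tbl1' > output.tsv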

Related

How to migrate large mysql database where remote server has stringent limits?

I have a local database that's about 1 GB, and my remote host is a free host that I am using for testing. I want to make sure everything works before I spend money on a paid host. The problem is the phpMyAdmin on the remote server only allows 50 MB files, which just doesn't cut it, especially since the restore usually fails due to execution time limits. Below is the list of everything I've tried.
LOCAL
phpmyadmin -----> backing up the tables no longer works; it times out even with modified php.ini settings, because of the sheer size of the db
mysqldumper -----> the program creates dumps with INSERTs; there is no option to make it create INSERT IGNOREs. I'll explain the problem below.
mysqlworkbench -----> creates the database using the database name of my local server (the problem is my remote server has a different database name, and I can't open a 1 GB .sql file to edit the database name at the very top; my computer just craps out and I have to force-quit Workbench)
sqlsplitter (Mac program) cuts up large .sql or .sql.gz files
REMOTE
phpMyAdmin with .gz/.sql files cut up into 20 MB chunks
-----> timeout. phpMyAdmin's resume function doesn't work either; it just overwrites the old data
mysqldumper -----> the process randomly ends in an error midway through my restore on the remote server, using a backup created with mysqldumper on my local computer (single file or multipart, neither works). It could be at 10% completion, could be at 50%.
bigdump -----> used single and multipart dumps from mysqldumper, same problem: it randomly quits halfway through. Some multiparts completed successfully, but when one failed and I tried the failed part again, it would give me an error saying a unique key already exists in the table. I don't want to drop all my unique keys and have to go through and delete all the duplicates later.
mysqldumper -----> does not work with a dump from MySQL Workbench
bigdump -----> gives me an SQL error, denied for creating database, when using a dump from MySQL Workbench (I cannot open a 1 GB file to delete the one line that says CREATE DATABASE)
Does anybody know of a better method to upload to my host? I have no command-line access there and only a 500 MB space limit (no limit on SQL space, though).
Thanks
Use mysqldump. Figure out what the error you're seeing is, and fix it. The mysqldump utility works. I've restored dumpfiles with hundreds of gigabytes of data to servers, and never use anything else. If it doesn't work for you, you're doing something incorrectly.
You can prevent it from writing a USE database-name; statement at the top of the file by invoking it with the database name as the last argument, without using the --databases option before it.
You can add the --insert-ignore command line option to write all the INSERT statements as INSERT IGNORE, to work around your partial-insert issues.
You can use --no-data to extract a dump file that contains table definitions, not data, and get all of the tables declared first.
You can use the --no-create-info option to extract a dump file with just the inserts, not the table definitions (see the sketch after the link below).
http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
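Putting those options together, a two-pass dump might look like this (a sketch; the connection arguments and database-name are placeholders for your own):
# pass 1: table definitions only; pass 2: data only, written as INSERT IGNORE
mysqldump -u username -p --no-data database-name > schema.sql
mysqldump -u username -p --no-create-info --insert-ignore database-name > data.sql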
You can also use a simple bash loop to extract each table into its own file, so you have smaller files to work with (the -N and -B flags suppress the column-name header and borders, so only the table names are emitted):
for TABLE in $(mysql [args] -N -B -e 'SHOW TABLES IN database-name'); do mysqldump [args] database-name $TABLE > $TABLE.sql; done
When restoring the files, add the --compress option to the mysql command line arguments for a faster transfer, and specify your (new) database name as the last argument, so the client uses the correct database before applying the file, which no longer contains the database name.
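For example (again a sketch, with new-database-name standing in for your remote database):
mysql --compress -u username -p new-database-name < TABLE.sql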

Execute INSERT statements present in a file

I have a file with many INSERT statements. How do I execute them without having to manually copy and paste them? That is quite tough to do, as the file is about 60 MB.
You can have mysql run a file both from the command prompt (the mysql executable) and from within the client itself. I think the former is easier:
mysql < file-to-import.sql
(you may need username/password, etc.)
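From within the mysql client itself, the equivalent is the source command:
mysql> source file-to-import.sql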
Create your file with the extension .sql, then put all your queries in the file. Don't forget to add ";" after each statement. After that, from a command line, execute mysql < "path-to-your-file/theFilename.sql"

How to import a csv file into MySQL workbench?

I have a CSV file. It contains 1.4 million rows of data, so I am not able to open it in Excel, whose limit is about 1 million rows.
Therefore, I want to import this file into MySQL Workbench. This CSV file contains columns like
"Service Area Code","Phone Numbers","Preferences","Opstype","Phone Type"
I am trying to create a table in MySQL workbench named as "dummy" containing columns like
ServiceAreaCodes,PhoneNumbers,Preferences,Opstyp,PhoneTyp.
The CSV file is named model.csv. My code in workbench is like this:
LOAD DATA LOCAL INFILE 'model.csv' INTO TABLE test.dummy FIELDS TERMINATED BY ',' lines terminated by '\n';
but I am getting an error like model.CSV file not found
I guess you're missing the ENCLOSED BY clause:
LOAD DATA LOCAL INFILE '/path/to/your/csv/file/model.csv'
INTO TABLE test.dummy FIELDS TERMINATED BY ','
ENCLOSED BY '"' LINES TERMINATED BY '\n';
And specify the full path to the CSV file.
Load Data Infile - MySQL documentation
If you have a smaller data set, a way to achieve this via the GUI is:
Open a query window
SELECT * FROM [table_name]
Select Import from the menu bar
Press Apply on the bottom right below the Result Grid
Reference:
http://www.youtube.com/watch?v=tnhJa_zYNVY
In the navigator under SCHEMAS, right click your schema/database and select "Table Data Import Wizard"
Works on Mac too.
You can use MySQL Table Data Import Wizard
At the moment it is not possible to import a CSV (using MySQL Workbench) on all platforms, nor is it advised if the file does not reside on the same host as the MySQL server.
However, you can use mysqlimport.
Example:
mysqlimport --local --compress --user=username --password --host=hostname \
--fields-terminated-by=',' Acme sales.part_*
In this example mysqlimport is instructed to load all of the files named "sales" with an extension starting with "part_". This is a convenient way to load all of the files created in the "split" example. Use the --compress option to minimize network traffic. The --fields-terminated-by=',' option is used for CSV files and the --local option specifies that the incoming data is located on the client. Without the --local option, MySQL will look for the data on the database host, so always specify the --local option.
There is useful information on the subject in AWS RDS documentation.
If the server resides on a remote machine, make sure the file is on the remote machine and not on your local machine.
If the file is on the same machine as the MySQL server, make sure the mysql user has permissions to read/write the file, or copy the file into the MySQL schema directory:
In my case in ubuntu it was: /var/lib/mysql/db_myschema/myfile.csv
Also, not related to this problem, but if you have problems with the newlines, use Sublime Text to change the line endings to Windows format, save the file, and retry.
This seems a little tricky, and it bothered me for a long time.
You just need to open the table (right-click it and choose "Select Rows - Limit 10000"), which opens a new window. In this new window, you will find the import icon.
https://www.convertcsv.com/csv-to-sql.htm
This helped me a lot. You upload your Excel (or .csv) file and it gives you back an .sql file with SQL statements that you can execute, even in the terminal on Linux.

How to use LOAD_FILE to load a file into a MySQL blob?

I tried to load a file into a MySQL blob (on a Mac).
My query is
INSERT INTO MyTable VALUES('7', LOAD_FILE('Dev:MonDoc.odt'))
No error appears but the file is not loaded into the blob.
The manual states the following:
LOAD_FILE(file_name)
Reads the file and returns the file contents as a string. To use this function, the file must be located on the server host, you must specify the full path name to the file, and you must have the FILE privilege. The file must be readable by all and its size less than max_allowed_packet bytes. If the secure_file_priv system variable is set to a nonempty directory name, the file to be loaded must be located in that directory.
If the file does not exist or cannot be read because one of the preceding conditions is not satisfied, the function returns NULL.
As of MySQL 5.0.19, the character_set_filesystem system variable controls interpretation of file names that are given as literal strings.
mysql> UPDATE t
SET blob_col=LOAD_FILE('/tmp/picture')
WHERE id=1;
From this, I see more than one thing that could be wrong in your case...
are you passing the full path?
are privileges set correctly?
what does the function return? NULL?
have you tried it with the query given in the manual?
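A quick way to check the last two points from the client (the path is a placeholder; adjust it to your file):
SHOW VARIABLES LIKE 'secure_file_priv';
SELECT LOAD_FILE('/full/path/to/MonDoc.odt') IS NULL AS load_failed;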
I had the same problem on Linux ...
select load_file('/tmp/data.blob');
+-----------------------------+
| load_file('/tmp/data.blob') |
+-----------------------------+
| NULL |
+-----------------------------+
Eventually I was able to load the file successfully after its user and group ownership were changed to 'mysql':
sudo chown mysql:mysql /tmp/data.blob
Double-escape the backslashes in the full path if you're on Windows, since the backslash is an escape character in MySQL string literals.
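For example, with a hypothetical Windows path:
SELECT LOAD_FILE('C:\\Users\\me\\MonDoc.odt');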
I just wanted to add one more caveat that I found in my testing:
When using SELECT LOAD_FILE('/path/to/theFile.txt');, the file that you are loading HAS to be on the machine the SQL instance is running on.
This bit me in the butt for a long time, because I use MySQL Workbench to load files into our various SQL instances all the time, and commands like LOAD DATA LOCAL INFILE 'C:/path/to/theFile.csv' INTO TABLE would easily grab the file off my local hard drive and process it into the tables regardless of where the actual SQL instance was running. However, the LOAD_FILE function doesn't behave the same way, at least for me (maybe there is a local_load_file() command I don't know about). MySQL seems to only let it look for files on the system where the SQL instance itself is running.
So if you're like me and can't figure out why LOAD_FILE always returns NULL, have no fear: upload the files to the SQL server instance and then use that path from your query browser, and all will be well.
After ensuring the other conditions were met, my solution was to change a global variable named secure_file_priv. Its default value was NULL, which means mysqld can't read/write files.
I changed its value by adding secure-file-priv= under [mysqld] in /etc/my.cnf, then restarted the MySQL service. After that, load_file() worked!
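For reference, the relevant fragment of /etc/my.cnf looks like this (an empty value disables the restriction entirely; pointing it at a dedicated directory is safer):
[mysqld]
secure-file-priv=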
Thanks.
The user that is running the MySQL server needs to OWN the file. My mistake was that I thought it just needed to be able to READ or EXECUTE the file.

How do I migrate a populated MySQL database from dev to a shared host?

The title pretty much says it all, but to elaborate: if I build a MySQL database on my local dev machine, populate it with data, and subsequently want to migrate the database to a shared host (in this case, Siteground), how do I do so in a way that keeps structure and data intact?
In this case, I don't have file access to the database server.
Use mysqldump (doc) to dump your database on your development machine (mysqldump [databasename] for a simple configuration) to a dump file, i.e. a file containing the SQL statements needed to recover both schema and data. Then import the dump on your shared host using the provided utilities (normally you get phpMyAdmin preinstalled from your hoster, which can import dumps).
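A minimal sketch, assuming a database named mydb (the user name is a placeholder):
mysqldump -u username -p mydb > dump.sql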
In addition to the response made by theomega (namely, do a dump of your development database and then insert the dump into your production database), be aware that you may need to enable large SQL insert statements if you have a lot of data. I would recommend you first FTP the file to the host and then do the insert from that file. Each host has its own way of doing it, but if you can connect to the remote server using SSH, you can likely run the import from the command line.
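For example, once dump.sql has been uploaded, from an SSH session on the host (names are placeholders):
mysql -u username -p databasename < dump.sql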
Also, in addition to theomega: most tools for MySQL have dump/execute functions for SQL files.
If you're using Navicat, for example, you're just a right-click away:
Right-click the database you want to export and choose "dump sql file". This will allow you to save the .sql file on your local drive in the folder of your choosing.
Then right-click the destination database and choose "execute batch file". Browse to the newly created .sql file, and it will execute all the SQL commands from that file in the destination database, creating a copy of the exported db.