I have an SQL file with 30,000 UPDATE lines. When I upload it through phpmyadmin it freezes at a certain point and doesn't update everything.
Is there a way to execute all 30,000 lines, without problems, all at once? Or do I have to go through and manually execute 200 lines at a time?
Line Example:
UPDATE `table` SET `value1`='Some text', `value2`=0, `value3`=1 WHERE id=500;
^ I have 30,000 lines like that.
phpMyAdmin's query parsing is slow. It's much better to log into the server via SSH and execute the command with the mysql client:
$ mysql -uUsername -pPassword DatabaseName < script.sql
If you don't have SSH access, you can upload the SQL script (via FTP, for example) and write a small PHP script that calls the command using system, exec or a similar PHP function:
<?php
system('mysql -uUsername -pPassword DatabaseName < script.sql');
Then invoke the script via browser.
Make sure you use full paths to mysql (/usr/bin/mysql usually) and your script file.
If you use a non-system character set, make sure you add the --default-character-set option as well.
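For example, with full paths and an explicit character set (the paths and the utf8mb4 charset below are assumptions; adjust them to your setup):
/usr/bin/mysql --default-character-set=utf8mb4 -uUsername -pPassword DatabaseName < /home/user/script.sql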
I've been trying to understand the following MySQL command:
framework_mariadb.sql | mysql -u username -p password -h 127.0.0.1 -P 26257 target
My guess was that the SQL statements within the SQL file get executed by mysql for the given target/database. But then I came across the source command in MySQL, i.e.
\bin\mysql -u root -p testdatabase < C:\Users\Juan\Desktop\databasebackup.sql
So my question is, does the first command and the second command essentially do the same thing? My apologies if this has already been asked, I haven't been able to find details for the first SQL command.
This is more about Linux shell capabilities than it is about MySQL.
The second form runs the mysql client, and uses the < symbol to tell it to take its input from the specified file.
The first form does essentially the same thing, but uses the pipe character | to indicate that the output of the first command should be sent to the input of the second command.
However, for the first form I'd expect the line to start with cat (as in cat framework_mariadb.sql | mysql ...) because the SQL script won't normally run as a shell command. cat is a command that reads one or more files and sends them to its output.
It is possible to set the SQL script up to run like this, but that requires a specific line (#! /bin/cat or similar) to be present at the top of the file, and the file must have the execute bit set. (At least, that's how I'd do it. There might be some other bash magic I'm not aware of; bash is not my forte.)
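A minimal sketch of that setup, assuming the script sits in the current directory (so it has to be invoked with a ./ prefix, or its directory must be on PATH): put #! /bin/cat as the first line of framework_mariadb.sql (MySQL ignores it, since # starts a comment), then:
chmod +x framework_mariadb.sql
./framework_mariadb.sql | mysql -u username -p -h 127.0.0.1 -P 26257 target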
There are many resources on the web that can teach the fundamentals of the Linux shell. You could try Microsoft's Introduction to bash, but there are many others.
I have a 5GB database that needs to be uploaded through phpMyAdmin, on a shared server where I cannot access the shell. Is there any solution that takes less time to upload? Please help me with the steps to upload the SQL file. I have searched the internet but could not find an answer.
Do not use phpmyadmin.
Assuming you have shell access, upload the file and feed it directly to the mysql command.
Your shell command will look like:
cat file.sql | mysql -uuser -ppassword database
or you can do gzipped file:
zcat file.sql.gz | mysql -uuser -ppassword database
Prior to doing this, check that:
database connection works (correct database, user and password)
database is empty :)
mysql max packet size is OK (a quick check is sketched after this list)
you have enough diskspace
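A quick way to verify the last two points from the shell (generic commands; adjust credentials and run df in the directory holding the file):
mysql -uuser -ppassword -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
df -h .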
* UPDATE *
You said you do not have shell access.
Then you have the following options:
upload the file and contact support, and let them do it for you.
feed it in remotely; cPanel has a special menu where you can get remote access, and other panels have the same ability too.
in this case the command will be executed on your computer and will look like:
cat file.sql | mysql -uroot -phipopodil -hwebsite.com
or for Windows:
/path/to/mysql -uroot -phipopodil -hwebsite.com < file.sql
do some "hack" - feed it through crontab, at or via php system() command.
If you choose "hack" option, note following:
php have max_execution_time - even if you set it to zero, there could be some limit "imposed" from hosting.
usually hosts have limited mysql updates per hour.
there could be some ulimit restrictions.
if you feed 5 GB through a shared server, the server will slow down and the administrator will check what you are doing.
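If you do go the crontab route (e.g. via the hosting panel's cron menu), the job command itself might look like this (the paths, credentials and database name are placeholders):
mysql -uuser -ppassword database < /home/user/file.sql > /home/user/import.log 2>&1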
This depends on your database; you tagged the question with three different database types: mysql, sql-server, and postgresql. I know MySQL and PostgreSQL have import features, and I'd be surprised if SQL Server didn't as well. You could import the database file via the command line instead of having to use phpMyAdmin.
Incidentally, the phpMyAdmin tool also has an import feature, but that again depends on the format of your database. If it's a compatible SQL file, you could upload it to phpMyAdmin and import it there, but I'd recommend the previous method I mentioned: upload it to your host, then use whatever database tool fits (mysqlimport for MySQL; or, if it's the result of a pg_dump command, you can just run):
psql <dbname> < <yourfile>
e.g.
psql mydatabase < inputfile.sql
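The MySQL equivalent for a plain .sql dump (for example one produced by mysqldump) would be:
mysql -u user -p mydatabase < inputfile.sql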
I am learning how to use MySQL with Cloud9. I have a script that creates a default database and tables and loads sample data into the new database. How do I use the mysql-ctl tool to execute a script file?
It connects to a database just fine and I can execute ad-hoc queries without an issue.
You cannot use mysql-ctl to execute the script (you can see the source code by running less $(which mysql-ctl)), but you can use the usual mysql client command:
mysql -h 127.0.0.1 -u username < yourfile.sql
When you are in 'mysql-ctl', just run
source file_name.sql
and your script will run.
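Putting the two answers together on Cloud9 (the start and cli subcommands of mysql-ctl are assumed from the standard Cloud9 setup; setup.sql is a placeholder for your script):
$ mysql-ctl start
$ mysql-ctl cli
mysql> source setup.sql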
I would like to import a 350MB MySQL .sql file on a Windows 7 machine. I usually do this by using
mysql -uuser -p -e "source c:/path/to/file.sql" database
since < doesn't work in Powershell.
My .sql file has an error in it this time. I'd prefer to just skip the bad row and continue the import. How can I force the import to continue on Windows?
On a unix/linux based system, I could use
mysql --force ... < file.sql
but --force doesn't seem to work with the -e "source ..." command (understandably so).
Thanks,
Mike
You're probably going to have to have PowerShell execute this in the standard console in order to use < properly. Technically you could use Get-Content and pipe the output to mysql, but I've always found that to be slow, and it somehow still keeps the file contents in the PowerShell session's memory.
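For reference, the Get-Content pipe mentioned above would look something like this (it works, but is slower and holds the file in the session's memory; the path is a placeholder):
Get-Content C:\path\to\file.sql | mysql -uuser -p --force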
This is how I would execute it from the Powershell prompt (changed file path to include spaces to demonstrate inner quotes, just in case):
cmd /C 'mysql -uuser -p --force < "C:\path\with spaces\to\file.sql"'
[GC]::Collect() would apparently clear up the memory, but you can't do that until after it's done anyway. When it comes to mysql and mysqldump, I don't bother with PowerShell. The default encoding used by > is Unicode, making dump files twice as big out of PowerShell as out of cmd, unless you remember to write | out-file dump.sql -enc ascii instead of > dump.sql.
I'd also suggest having a look at this SO answer, which takes advantage of the source SQL command.
I am trying to create a batch script that would connect to a mySQL database and issue a delete command:
#echo off
echo Resetting all assessments...
mysql -hlocalhost -urdfdev -p%1 rdf_feedback
delete from competency_question_answer;
I will run this script providing the password as a command-line argument, but all this script does is connect to the database and show the mysql> prompt. After I exit from mysql, the rest of the batch commands execute (and fail, no surprise).
How can I pass the SQL commands from the batch script to the mysql console? Is this even possible?
You need to use command-line tools. I don't know if one exists for MySQL, but for SQL Server there is SQLCMD and for Oracle there is OSQL.
What you can also do is something like this.
mysql -uuser -ppass < foo.sql
Where foo.sql contains the commands you want to execute.
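Applied to the script in the question, that approach would look like this (reset_assessments.sql is a hypothetical file holding the DELETE statement):
@echo off
echo Resetting all assessments...
mysql -hlocalhost -urdfdev -p%1 rdf_feedback < reset_assessments.sql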
You may need to connect multiple times:
#echo off
echo Resetting all assessments...
mysql -hlocalhost -urdfdev -p%1 rdf_feedback -e "delete from competency_question_answer;"
Alternatively, you should be able to put all your commands in a separate file such as input.sql and use:
mysql -hlocalhost -urdfdev -p%1 rdf_feedback <input.sql
echo "delete from competency_question_answer;" | mysql -hlocalhost -ur... etc.
Putting multiple sets of commands into .sql batch files works best, and you can execute several of these from the .bat file.