Running thousands of MySQL queries via PuTTY

Every time I try to run more than a few INSERT INTO queries on my Ubuntu MySQL database via PuTTY, I get errors from one or more rows that fail to update, and if it's hundreds or more, it usually crashes or pauses, presumably from an incomplete query. This has nothing to do with the syntax of the queries, as they run fine when I run them individually. Is there anything I can do to fix this?

I've tried Rocket's solution, but it did much the same thing (skipping rows and then hanging).
I've just noticed there are carriage returns and line feeds in the data, so after removing those it seems to be working without any errors, but it is taking absolutely ages using the BEGIN/COMMIT method. Maybe that's because it is now parsing one single really long line instead of several lines at a time.

I'm copying and pasting queries from an Excel spreadsheet onto the mysql command line in PuTTY. PuTTY is connected the whole time. It's tricky to debug as PuTTY puts a limit on the number of lines it displays. – garry 45 mins ago
Don't do this. PuTTY will drop parts of the content you paste, or place length limits on it.
Instead:
Export the queries from Excel into a text file on your PC, for example "exported_queries.sql".
Transfer that text file to your Ubuntu server, using scp.
Then open an ssh session to the Ubuntu server, and run the text file as input to the mysql program. You can do this with the source command in the mysql shell:
mysql> source exported_queries.sql
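If you prefer, you can also feed the file to mysql straight from the shell. A rough sketch of the transfer and the run, with placeholder user, host, and database names:
scp exported_queries.sql user@your-ubuntu-server:~/
mysql -u dbuser -p your_database < exported_queries.sql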
I also recommend running tmux or screen in your Ubuntu ssh window, because those programs are good for keeping your session alive even if PuTTY disconnects. If you have a long-running command in your ssh session, you can reconnect and "reattach" to the session in progress.
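A minimal tmux sketch (the session name is just an example):
tmux new -s import      # start a named session and run mysql inside it
tmux attach -t import   # reattach after reconnecting with PuTTY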

Related

Unreadable or empty strings when connecting to a MySQL database with Database Explorer App or command line

In Matlab I successfully connected the Database Explorer App to a MySQL database that I've created on a virtual machine to run some tests.
Here you can see the content of the database from the Ubuntu Virtual Machine.
The problem is that when I try to read the data from Matlab I get unreadable strings that change every time I run the query (if I use the app) or empty cells (if I use the command line). See the following screenshots.
From the command line I can even add entries to the database. These entries are correctly displayed in the Ubuntu Virtual Machine, but when I try to read them back from Matlab I have the same issue (empty or unreadable).
Insert id 12:
And reading back in Matlab.
What am I missing?
Thank you!

How to migrate large mysql database where remote server has stringent limits?

I have a local database that's about 1 GB, and my remote host is a free host that I am using for testing. I want to make sure everything works before I spend money on a paid host. The problem is that phpMyAdmin on the remote server only allows 50 MB files, which just doesn't cut it, especially since the restore usually fails due to execution time limits. Below is the list of everything I've tried.
LOCAL
phpmyadmin -----> backing up tables no longer works because of timeouts, even with modified php.ini settings, due to the sheer size of the db
mysqldumper -----> the program creates dumps with plain INSERTs; there is no option to make it create INSERT IGNOREs. I'll explain the problem below.
mysqlworkbench -----> creates the database using the database name of my local server (the problem is my remote server has a different database name, and I can't open a 1 GB .sql file to edit the database name at the very top; the computer just craps out and I have to force quit Workbench)
sqlsplitter (Mac program) -----> cuts up large .sql or .sql.gz files
REMOTE
phpmyadmin with .gz/.sql files cut up into 20 MB chunks
-----> timeout. phpMyAdmin's resume function doesn't work either; it just overwrites old data
mysqldumper -----> the process ends in an error at a random point midway through my restore on the remote server, using a backup created with mysqldumper on my local computer (single file or multipart, neither works). It could be at 10% completion, could be at 50%.
bigdump -----> used single and multipart dumps from mysqldumper, same problem: randomly quits halfway through. Some multiparts completed successfully, but when one failed and I tried the failed part again, it would give me an error saying a unique key already exists in the table. I don't want to unset all my unique key stuff and have to go through and delete all the duplicates later.
mysqldumper -----> does not work with a dump from mysqlworkbench
bigdump -----> gives me an SQL error about being denied permission to create the database when using a dump from mysqlworkbench (I cannot open up a 1 GB file to delete the one line that says CREATE DATABASE)
Does anybody know of a better method to upload to my host? I have no command-line access there and only a 500 MB space limit (no limit on SQL space, though).
Thanks
Use mysqldump. Figure out what the error you're seeing is, and fix it. The mysqldump utility works. I've restored dumpfiles with hundreds of gigabytes of data to servers, and never use anything else. If it doesn't work for you, you're doing something incorrectly.
You can prevent it from writing a USE database-name; statement at the top of the file by invoking it with the database name as the last argument, without using the --databases option before it.
You can add the --insert-ignore command line option to write all the INSERT statements as INSERT IGNORE, to work around your partial-insert issues.
You can use --no-data to extract a dump file that contains table definitions, not data, and get all of the tables declared, first.
You can use the --no-create-info option to extract a dump file with just the inserts, not the table definitions.
http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
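Put together, the dump commands might look roughly like this (the user and database names are placeholders); the database name goes last, with no --databases option, so no USE statement is written:
mysqldump -u localuser -p --no-data local_database > schema_only.sql
mysqldump -u localuser -p --no-create-info --insert-ignore local_database > data_only.sql
The first file declares the tables; the second holds only the data, as INSERT IGNORE statements.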
You can also use a simple bash loop to extract each table into its own file, so you have smaller files to work with (the -N option suppresses the column-name header, so the loop sees only table names):
for TABLE in `mysql [args] -N -e 'show tables in database-name'`; do mysqldump [args] database-name $TABLE > $TABLE.sql; done
When restoring the files, add the --compress option to the mysql command line arguments for a faster transfer, and specify your (new) database name as the last argument, so the client will use the correct database before applying the file, which no longer contains the database name.
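A restore along those lines (the remote host, user, and database name are placeholders) might look like:
mysql --compress -h remote-host -u remoteuser -p remote_database < schema_only.sql
mysql --compress -h remote-host -u remoteuser -p remote_database < data_only.sql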

Schedule periodic mysql script to run automatically every hour

I've been searching, but nothing clear came up.
I wonder if there is a way to schedule a big MySQL script, which normally takes up to 2 minutes to run (in Workbench; it can't run in phpMyAdmin because of the timeout, and the server is not in my hands), so that it runs automatically every hour.
First, just to clarify, MySQL Workbench is a native application. It cannot run in PMA or any other web application.
MySQL Workbench itself (or MySQL, for that matter) does not have a scheduler or anything like that. You can, however, use your OS's means, e.g. use AT or crontab to run MySQL Workbench and pass it the script to execute. Run MySQL Workbench from the command line with the -h (or --help) switch to get a list of possible parameters.
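For example, a crontab entry that runs the script at the top of every hour with the plain mysql command-line client (the host, credentials, and paths below are placeholders, not values from your setup) might look like:
0 * * * * mysql -h db.example.com -u appuser -p'secret' app_db < /home/me/big_script.sql >> /home/me/big_script.log 2>&1
Putting the password in the crontab is only reasonable for testing; a ~/.my.cnf file with a [client] section is the usual way to keep it off the command line.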

What is a mysql disconnect command line (why is it useful)?

I am running a mysql database and I connect to it just fine. My question is: whenever I connect to the database (to add new input via php) do I also have to include a disconnect command line?
I ask because my bandwidth usage is growing faster than I expected so I am happy thinking that I am getting traffic, but perhaps it is growing because I connect and do not "disconnect"?
From the MySQL docs:
mysql is a simple SQL shell with input line editing capabilities. It supports interactive and noninteractive use.
The fact is that the SQL shell should not be causing major load on your box. The standard practice is to just close the shell and kill the program.
Typing Control+C causes mysql to attempt to kill the current statement. If this cannot be done, or Control+C is typed again before the statement is killed, mysql exits.
When you exit the mysql command line tool, the process will end and MySQL will continue doing its thing. But the answer to your question is no, the SQL shell should not be slowing things down.
From PHP it's a good idea to close the connection when you are done using it. To check what processes are running, open up the mysql command line tool and try the following to see what is connected to your MySQL instance.
SHOW PROCESSLIST;
If SHOW PROCESSLIST isn't what you were looking for, give this a shot:
mysql> SHOW STATUS LIKE '%onn%';
Hopefully this will give you enough information to handle the traffic load.
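If you'd rather run both checks from the shell in one go, something like this (credentials are placeholders) works:
mysql -u root -p -e "SHOW PROCESSLIST; SHOW STATUS LIKE '%onn%';"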
devzone.zend.com:
"Open connections (and similar resources) are automatically destroyed at the end of script execution. However, you should still close or free all connections, result sets and statement handles as soon as they are no longer required. This will help return resources to PHP and MySQL faster."
My advice:
It is good practice to close a connection after doing the queries you wanted.

phpMyAdmin crashing the MySQL host server

I have encountered this problem a couple of times in the last few days, so it happens occasionally. I have set up MySQL on a remote machine, and there is a Java program on another machine querying the database to read and write records every few seconds.
I am using phpMyAdmin to administer my database, and at times, after running some SQL query, the MySQL server stops responding. Even pinging the host machine doesn't succeed, and I have to ask someone with physical access to the machine to boot it up again.
I checked for log files but couldn't find them in the mysql directory. Is logging disabled by default? What is missing here? And, how can I go about troubleshooting this?
EDIT:
I was able to ping the server after a while, so the server must have been temporarily busy. It's not a specific query but things like re-ordering the data of a table serially under the Browse tab.
Use a mysql client to make a connection and keep it open.
I personally use mysql from the command line.
If the server becomes unresponsive, execute
SHOW PROCESSLIST;
It will list all MySQL processes and show how long queries have been waiting/executing.
Optionally, use the KILL statement to terminate the query that is locking the tables.
KILL $pid
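As a sketch of that workflow (1234 is a placeholder process id):
mysql> SHOW PROCESSLIST;
(note the Id of the connection whose Time keeps climbing while the server is unresponsive)
mysql> KILL 1234;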
I'd highly recommend using MySQL's own GUI tools for database management, for a variety of reasons:
They have full support for InnoDB tables, including Foreign Key management
You can use database-level security to make sure only you get into your data (unlike phpMyAdmin, which at best is a root-access installation protected by a .htaccess password)
It is official and supported. No extra binaries run on the server, so you run no risk of it crashing and taking the server down with it (unless your query itself is locking it...)