I've been searching, but nothing clear came up.
I wonder if there is a way to schedule a big MySQL script, which normally takes up to 2 minutes to run (it runs fine from Workbench, but not in phpMyAdmin because of a timeout, and the server is not in my hands), so that it would automatically run every hour?
First, just to clarify, MySQL Workbench is a native application. It cannot run in PMA or any other web application.
MySQL Workbench itself (or MySQL, for that matter) does not have a scheduler or anything like that. You can, however, use your OS's own means, e.g. AT or crontab, to run MySQL Workbench and pass it the script to execute. Run MySQL Workbench from the command line with the -h (or --h on non-Windows) switch to get a list of possible parameters.
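For example, if you go the crontab route on Linux, an entry along these lines would run the script at the top of every hour. This is only a sketch: it uses the plain mysql command-line client rather than Workbench, and the paths, database name, and the .my.cnf credentials file are placeholders you would adapt.
# edit the crontab with: crontab -e
# minute hour day-of-month month day-of-week command
0 * * * * mysql --defaults-extra-file=/home/me/.my.cnf mydatabase < /home/me/big_script.sql
Using the command-line client this way avoids starting the Workbench GUI at all, which is usually preferable for an unattended job.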
This may have been answered elsewhere, but I can't seem to locate it, so please accept my sincere apologies if this is a duplicate question.
Complete newbie to CentOS command line operation of MySQL.
I'm trying to migrate 200,000,000+ records from an MSSQL database to MySQL, and the Workbench migration tool fails. I've given up trying to sort that out, so I've written a migration package in VB.Net to get all of the other 700-800 tables migrated directly, and they work great, but I have a few very large tables with around 15,000,000 records or more in each, and my migration method would take several days to complete!
So - brainwave... I have the migration program create "insert into..." SQL statements in a single sql file, ftp this to my CentOS box and execute it locally on the CentOS machine.
Works fine, using:
mysql --user=user --password=password
to log in to MySQL, then executing the script as
source mysqlscript.sql
...but I will have a lot of scripts, such as
script1.sql
script2.sql
script3.sql
...
script27.sql
Is there a way within MySQL to batch process all these SQL scripts so I can just leave it running, without having to set each of the 27 scripts off manually?
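One way to avoid kicking off each file by hand (a sketch only; run_all.sql is a made-up wrapper file name) is a single script that sources the others in order, since the mysql client's source command also works inside a sourced file:
-- run_all.sql: list every script in the order it should run
source script1.sql
source script2.sql
source script3.sql
-- ...continue the same pattern...
source script27.sql
Then a single source run_all.sql at the mysql prompt (or mysql --user=user --password=password < run_all.sql from the shell) starts the whole batch and leaves it running unattended.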
So I installed MySQL and MySQL Workbench, and now my goal is to run 20 different queries on the schema and database I installed. To install the schema and database I used File => Open SQL Script and then selected the schema, then the database, to install them. However, I seem to have an issue where the lightning-bolt icon you use to run scripts isn't highlighted, it's not allowing me to run anything, and I don't know why.
I previously used MySQL Workbench to do this, in an environment that was already set up.
How do I set up a minimal working environment to just create and join tables on my own computer? (Connections???)
More details:
I downloaded and installed MySQL Workbench, and I can't even run SELECT sysdate();. There's a red x next to it. If I try CREATE DATABASE MY_DATABASE; there's a green check, but the execute button is grey.
Doing some reading I apparently need "connections." Reading about that, I apparently need to also install MySQL Database Server. Who knows what else.
So, again, the question is how do I set up a minimally working environment to just create tables from .csv files, join them with MySQL commands, and export the results to another .csv file? (I know the syntax of the command to import a .csv file, and how to join tables.)
Thanks.
Install MySQL Workbench AND MySQL Server.
From the command line, in the directory where MySQL server is installed, execute "mysqld --initialize" (One time only.)
execute "mysqld" from the command line, after the initialization given in step 1, and after any reboots. (It runs in the background, and doesn't exit when you exit MySQL WorkBench. (It can optionally be installed as an automatically running Windows service during installation.)
Execute Database -> Connect to Database upon starting MySQL Workbench (each time you start the application). The default localhost connection works fine.
After doing File -> New Model and setting up table(s), do Database -> Forward Engineer. This will place your new database in the Schemas section on the home/main window.
Double click on the Schema you created (default name is mydb) and it changes to bold font. Now scripts you run from that main window will run against the database you created.
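To connect this back to the CSV goal from the question, a rough sketch of the statements involved might look like the following. The table names, columns, and file paths are invented, and LOAD DATA / INTO OUTFILE are subject to the server's secure_file_priv and local_infile settings:
-- create target tables (columns are placeholders)
CREATE TABLE customers (id INT, name VARCHAR(100));
CREATE TABLE orders (id INT, customer_id INT, total DECIMAL(10,2));

-- import a .csv file into a table (repeat for each file)
LOAD DATA LOCAL INFILE 'C:/data/customers.csv'
INTO TABLE customers
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;

-- join the tables and export the result to another .csv file
SELECT c.name, o.total
INTO OUTFILE 'C:/data/joined_result.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
FROM customers c
JOIN orders o ON o.customer_id = c.id;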
Every time I try to run more than a few INSERT INTO queries on my Ubuntu MySQL database via PuTTY, I get errors from one or more rows that fail to update, and if it's hundreds or more, it usually crashes or pauses, presumably from an incomplete query. This has nothing to do with the syntax of the queries, as when I run them individually they run fine. Is there anything I can do to fix this?
I've tried Rocket's solution but it did much the same thing (skipping rows and then hanging).
I've just noticed there are carriage returns and line feeds in the data, so after removing those it seems to be working without any errors, but it is taking absolutely ages using the BEGIN/COMMIT method. Maybe that is because it is only parsing one single really long line now instead of several lines at a time.
I'm copying and pasting queries from an Excel spreadsheet onto the MySQL command line in PuTTY. PuTTY is connected the whole time. It's tricky to debug as PuTTY puts a limit on the number of lines it displays. – garry
Don't do this. PuTTY will drop parts of the content you paste, or place length limits on it.
Instead:
Export the queries from Excel into a text file on your PC, for example "exported_queries.sql".
Transfer that text file to your Ubuntu server, using scp (see the example after these steps).
Then open an ssh session to the Ubuntu server, and run the text file as input to the mysql program. You can do this with the source command in the mysql shell:
mysql> source exported_queries.sql
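For example, the file transfer and a non-interactive run might look like this (the username, server address, path, and database name are placeholders, not from the original posts):
scp exported_queries.sql user@your-server:/home/user/
ssh user@your-server
mysql --user=user --password mydatabase < /home/user/exported_queries.sql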
I also recommend running tmux or screen in your Ubuntu ssh window, because those programs are good for keeping your session alive even if PuTTY disconnects. If you have a long-running command in your ssh session, you can reconnect and "reattach" to the session in progress.
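A minimal tmux workflow for that, assuming tmux is installed (the session name is arbitrary):
tmux new -s import      # start a named session, then run the mysql command inside it
# press Ctrl-b then d to detach; if PuTTY later drops the connection, reattach with:
tmux attach -t import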
I am running a MySQL database and I connect to it just fine. My question is: whenever I connect to the database (to add new input via PHP), do I also have to include a disconnect command line?
I ask because my bandwidth usage is growing faster than I expected so I am happy thinking that I am getting traffic, but perhaps it is growing because I connect and do not "disconnect"?
From the mysql docs:
mysql is a simple SQL shell with input line editing capabilities. It supports interactive and noninteractive use.
The fact is that the SQL shell should not be causing major load on your box. The standard practice is to just close the shell and kill the program.
Typing Control+C causes mysql to attempt to kill the current statement. If this cannot be done, or Control+C is typed again before the statement is killed, mysql exits.
When you exit the mysql command-line tool, that process will end and the MySQL server will continue doing its thing. But the answer to your question is no: the SQL shell should not be slowing things down.
From PHP it's a good idea to close the connection when you are done using it. To check what processes are running, open up the mysql command-line tool and try the following to see what is connected to your MySQL instance.
SHOW PROCESSLIST
If SHOW PROCESSLIST isn't what you were looking for, give this a shot:
mysql > show status like '%onn%';
Hopefully this will give you enough information to handle the traffic load.
devzone.zend.com:
"Open connections (and similar resources) are automatically destroyed at the end of script execution. However, you should still close or free all connections, result sets and statement handles as soon as they are no longer required. This will help return resources to PHP and MySQL faster."
My advice:
It is good practice to close a connection after doing the queries you wanted.
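As a hedged illustration with PHP's mysqli extension (the host, credentials, database, and table names below are placeholders, not from the original question):
<?php
// Open the connection, do the work, then free the connection explicitly.
$db = mysqli_connect('localhost', 'user', 'password', 'mydatabase');
if (!$db) {
    die('Connection failed: ' . mysqli_connect_error());
}

mysqli_query($db, "INSERT INTO entries (name) VALUES ('example')");

// Close as soon as the script no longer needs the connection.
mysqli_close($db);
PHP would also close the connection on its own at the end of the script, so the explicit close is about returning resources sooner rather than about bandwidth.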