I am executing a query remotely using the mysql utility and also through the mysqlsh utility. I see that with mysql the entire result set is loaded into memory at once, whereas mysqlsh appears to fetch a single row or a subset of rows at a time. I want to achieve the same behaviour as mysql in mysqlsh. Are any configurations available for this in mysqlsh?
mysql -h <> -p <> database < query.sql
I've dumped a table on a remote server from one database (MySQL 5.5) to a file. It took the server about 2 seconds to perform the operation. Now I'm trying to undump data from the file into another DB (same version) on the server.
The server outputs the data being processed on the screen even though I didn't specify the --verbose parameter. How can I prevent the output?
It takes the server some 10 minutes to perform the operation. Is that time acceptable or can I make it much faster? If yes, how can I do this?
Loading (undumping) is via the mysql commandline tool:
mysql -u user -p dbname < mydump.sql
I've been migrating databases (a few GB in size) in MySQL Workbench 6.1 from one MySQL server to another. Never having done this before, I thought it was 99% reliable. Instead, 2 out of 3 tries have failed.
My databases don't have complex features (triggers, stored procedures & functions, ...). The errors, though, are difficult to interpret, almost always about tables failing to export, reason unknown. There might occasionally be a duplicated key index in the source, but that shouldn't prevent an export from happening?
I've tried all the different methods available in the interface:
1) Server > Data Export > Data Import
2) Migration wizard
3) Schema transfer wizard
4) Reverse engineer
but no real difference.
Also, all the methods seem like variants of the same thing. Do these menu options rely on the same procedure internally? How different are they really?
My questions are generic:
1) Is there a foolproof method that is relaxed about errors? E.g. is mysqldbcopy from MySQL Utilities much better than the Workbench wizards?
2) Does the MySQL wizards' configuration make any difference (e.g. a checkbox that causes errors by being too demanding when the source DB has a problem)? I just want to transfer the DB, not perfection on the target server. I've switched SSL=NO, but it's still not working.
3) What is the single most important cause of errors in a migration, e.g. server overload, insufficient memory, table structure?
Thanks in advance,
There might occasionally be a duplicated key index in the source, but that shouldn't prevent an export from happening?
Yeah, it shouldn't prevent the export operation.
I've tried all the different methods available in the interface:
All the interfaces you have used might have a timeout configured, so they don't fully complete because your database is big.
So how to migrate MySQL database from one server to another?
To do it properly, I suggest you use command line like this:
Step 1: create backup file on old server
mysqldump -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
Step 2: Transfer backup file to new server (see the scp example after these steps).
Step 3: Import backup file in new server.
mysql -u [[user_name]] -p[[password]] [[db_name]] < db_backup.sql
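For step 2, if the two servers can reach each other over SSH, a simple copy could look like this (the user, host and destination path here are placeholders):
scp db_backup.sql [[user_name]]@[[new_server_ip]]:/path/on/new/server/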
Pro tip:
You can combine steps 1 & 2 if you have remote MySQL access enabled on the old server. Just execute this command on the new server and it will save the backup file in the current directory of the new server.
mysqldump -h [[xxx.xx.xxx.xxx]] -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
where [[xxx.xx.xxx.xxx]] represents ip address/hostname for old server.
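If the new server can reach the old one over MySQL like this, you could also pipe the dump straight into mysql and skip the intermediate file entirely (all names here are placeholders, and the target database must already exist on the new server):
mysqldump -h [[xxx.xx.xxx.xxx]] -u [[user_name]] -p[[password]] [[db_name]] | mysql -u [[user_name]] -p[[password]] [[db_name]]
Keep in mind that with a pipe there is no on-disk dump file to re-run if the import fails partway.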
Extra Note:
Please note that there is no space between -p and [[password]]. You can also omit [[password]] if you think it's a security issue to include the password in the command; mysqldump/mysql will then prompt for it.
If you have access to a terminal you can try using "mysqldump", and you could also try the Percona XtraBackup tool.
MySQL dump: (if your DB is very large, I suggest you run these inside a screen session)
Backup all DB : mysqldump -u root -pxxxx --all-databases > all_db_backup.sql
Backup Tables : mysqldump -u root -pxxxx DatabaseName table1 table2 > tables.sql
Backup Individual databases : mysqldump -u root -pxxx --databases DB1 DB2 > Only_DB.sql
To import: sync all the files to the other server and import as shown below.
mysql -u root -pxxxx < all_db_backup.sql (use screen for large databases)
Individual DB : mysql -u root -pxxx DBName < DB.sql
(Note: before you import, make sure your backup file already contains CREATE DATABASE IF NOT EXISTS statements, or create those databases yourself before importing; see the example below.)
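For example, one way to pre-create a database from the shell before importing (DBName is a placeholder for your actual database name):
mysql -u root -pxxxx -e "CREATE DATABASE IF NOT EXISTS DBName;"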
I am trying to use pt-upgrade from the Percona Toolkit to test running a load on a MySQL 5.1 and a MySQL 5.6 database server. I want to see if any queries I captured from a MySQL 5.1 slow log will fail on a MySQL 5.6 system. I read over the documentation at https://www.percona.com/doc/percona-toolkit/2.2/pt-upgrade.html and created the following command:
pt-upgrade h=IPADDRESS1 -uUSERNAME -pPASSWORD h=IPADDRESS2 -uUSERNAME -pPASSWORD --type='slowlog' --max-class-size=1 --max-examples=1 --run-time=1m 'slow_log_mysqld.log' 1>report.txt 2>err.txt &
I restored a copy of all the databases where the slow log was taken onto two separate servers.
My command works fine and I've set it to only run for 1 minute for testing. The problem is all I see in the report is that queries fail on both hosts over and over again.
On both hosts:
DBD::mysql::st execute failed: No database selected [for Statement "....
It appears that pt-upgrade is not changing databases.
I've reviewed the slow query log and I clearly see statements like this before each SELECT statement:
4 9640337 Query USE database1
9 9640337 Query USE database2
I have over 100 databases on the server where I got the slow log. Is there some limitation where pt-upgrade cannot switch between databases? How do I get pt-upgrade to work with multiple databases?
It seems that something is odd with the format of the slow log on my system.
I have to first "massage" my log with pt-query-digest before I can run pt-upgrade. Here is how I run the massage on my slow log using pt-query-digest:
pt-query-digest --filter '$event->{arg} =~ m/^select/i' --sample 5 --no-report --output slowlog mysql_slow.log > massaged_mysql_slow.log
Now I can run this:
pt-upgrade h=IPADDRESS1 -uUSERNAME -pPASSWORD h=IPADDRESS2 -uUSERNAME -pPASSWORD --type='slowlog' --max-class-size=1 --max-examples=1 --run-time=1m 'massaged_mysql_slow.log' 1>report.txt 2>err.txt &
select if(l1,l2
concatinate(av_prdct_l1, av_prdct_l2)) as level, count(orders) as ord, sum(price) as sales, sum(price-cost/price) as margin, 1-sum(pricevat)/1.14/sum(av_prdct.pricevat/1.14) as discrate from db.av_sales_order_items
left join av_prdct
on db.av_sales_order_items.orders=av_prdct.orders
where net=1 and order_date=currentdate()-1 and l1 is not null
group by l
order by sales desc "Example"
Take a look at the documentation for the database software you are using.
For example, if you're running MySQL via the CLI, run: USE <databaseName>, where <databaseName> is the name of the database. If you're doing this programmatically, consult the documentation for the driver you use. For example, using mysqli in PHP:
Assuming an active connection:
mysqli_select_db($connection, '<databaseName>');
again, where <databaseName> is a string containing the name of the database and $connection is the active mysqli connection mentioned above. You call this before making any queries.
Other RDBMS systems use other methods, for example, on command line DB2 you would issue
CONNECT TO <databaseName> USER <userName> USING <password>
Explicit use of the USE command is not mandatory as long as you use database qualifiers on the tables or other database-specific objects in your statements.
Looking closely at the statements you posted, the 'No database selected' error must be coming from the desc Example statement.
The first select ... statement has db qualifiers included and hence is correct, though it had some other syntax errors.
On the Example table you can try:
desc my_db_name.Example;
Replace my_db_name in the above example with your actual database name and it should work.
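The same applies to SELECT statements: as long as every table is qualified with its database name, no USE is needed. A minimal illustration (my_db_name is again a placeholder):
select count(*) from my_db_name.av_sales_order_items where net=1;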
MySQL documentation says:
USE db_name
The USE db_name statement tells MySQL to use the db_name
database as the default (current) database for subsequent statements.
The database remains the default until the end of the session or
another USE statement is issued:
USE db1;
SELECT COUNT(*) FROM mytable; # selects from db1.mytable
USE db2;
SELECT COUNT(*) FROM mytable; # selects from db2.mytable
Making a particular database the default by means of the USE statement
does not preclude you from accessing tables in other databases. The
following example accesses the author table from the db1 database and
the editor table from the db2 database:
USE db1; SELECT author_name,editor_name FROM author,db2.editor
WHERE author.editor_id = db2.editor.editor_id;
If you are connecting to MySQL from the command line, you can use the command below to connect to a DB:
mysql -u [username] -p [database_name you want to connect to]
OR
mysql -u [username] -p
--after connecting to DB--
use [database_name]
And if you are using a UI client like MySQL Workbench, you can set a default database while creating a 'New Connection', or change the database from the left pane.
Situation: our production mysql database makes a daily dump into a .sql file. I'd like to keep a shadow database that is relatively up to date.
I know that to create a mysql database from a .sql file, one uses:
mysql -u USERNAME -p DATABASENAME < FILE.SQL
For our db, this took 4-5 hours. Needless to say, I'd like to cut that down, and I'm wondering if there's a way to just update the db with what's new/changed. On Day 2, is there a way to just update my shadow database with the new .sql file dumped from the production db?
MySQL Replication is the way to go.
But in cases where that is not possible, use the following procedure:
Have a modified timestamp column in all your tables and update this value whenever a row is inserted or changed.
Use the following mysqldump options to take an incremental SQL file (this uses REPLACE commands instead of INSERT commands, so existing records will be updated in the backup database).
Keep a timestamp value stored somewhere in the file system and use it in the WHERE condition. MDFD_DATE is the column name you need to filter on. On a successful backup, update the value stored in the file (a small sketch of this bookkeeping follows below).
--skip-tz-utc prevents MySQL from automatically adjusting the timestamp values based on your timezone.
mysqldump --databases db1 db2 --user=user --password=password --no-create-info --no-tablespaces --replace --skip-tz-utc --lock-tables --add-locks --compact --where="MDFD_DATE>='2012-06-13 23:09:42'" --log-error=dump_error.txt --result-file=result.sql
Run the resulting SQL file on your shadow server.
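As a rough sketch of the file-system timestamp bookkeeping described above (the file names, credentials and database list are placeholders):
LAST_TS=$(cat /var/backups/last_dump_ts.txt)   # timestamp of the last successful incremental dump
mysqldump --databases db1 db2 --user=user --password=password --no-create-info --no-tablespaces --replace --skip-tz-utc --lock-tables --add-locks --compact --where="MDFD_DATE>='$LAST_TS'" --log-error=dump_error.txt --result-file=result.sql
if [ $? -eq 0 ]; then date '+%Y-%m-%d %H:%M:%S' > /var/backups/last_dump_ts.txt; fi   # move the marker forward only on success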
Limitations:
This method will not work if records are deleted in your database; you need to delete them from the backup database manually. Alternatively, keep a DEL_FLAG column, update it to 'Y' in production for deleted records, and use that condition to delete the records in the backup database (see the sketch below).
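A sketch of that soft-delete approach, assuming a DEL_FLAG column (the table and column names are only illustrative):
UPDATE mytable SET DEL_FLAG = 'Y', MDFD_DATE = NOW() WHERE id = 123;   -- on production, instead of deleting the row
DELETE FROM mytable WHERE DEL_FLAG = 'Y';   -- on the shadow database, after applying the incremental dump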
This problem can be solved using MySQL synchronization.
Some links to guide you:
http://www.howtoforge.com/mysql_database_replication
Free MySQL synchronization tool
https://launchpad.net/mysql-proxy
https://www.google.com.br/search?q=mysql+synchronization