Aurora MySQL - Slow CREATE TABLE statement in Stored Proc

We have been struggling with slow (never-finishing) stored procedures. We have been focusing on a particular CREATE TABLE ... SELECT ... FROM statement with some joins.
When we run this query on its own, it completes consistently in about 14 seconds. When it runs as part of the stored procedure, it won't complete even after hours.
What we found is that if we take all the code in the stored procedure and run it as a plain SQL script, it is also slow. Earlier in the stored procedure, tables are created from base data and prepared for use by the procedure.
We are running 5.7.mysql_aurora.2.07.2 and tested on 5.7.mysql_aurora.2.07.1 as well. I suspect we need to tune some database settings, but I lack experience with InnoDB and Aurora.
We migrated from MyISAM to Aurora, where we now use InnoDB, and any guidance on what could be the cause of this would be much appreciated.
EDIT:
I did try changing the statements from CREATE TABLE ... SELECT ... to a CREATE TABLE followed by INSERT INTO, which made no difference.
What seems to have worked is to use
CREATE TABLE A (PRIMARY KEY (name)) SELECT ...
for all created tables, instead of first creating the table and then using an ALTER statement to add the key.
I am stumped as to why this made it work.
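For illustration, here is a minimal sketch of the pattern that worked (base_table, other_table and the column names are hypothetical stand-ins); the key is declared inline in the CREATE TABLE ... SELECT, so no ALTER is needed afterwards:
-- the PRIMARY KEY is defined up front, and its column comes from the SELECT
CREATE TABLE A (PRIMARY KEY (name))
SELECT b.name, b.value
FROM base_table b
JOIN other_table o ON o.name = b.name;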

This sounds like a table locking issue. Typically
CREATE TABLE AS SELECT ... FROM table_name ...
results in the source table (table_name) being locked.
Try breaking apart the CREATE and the SELECT, i.e.
CREATE TABLE table_name ...;
INSERT INTO table_name SELECT ...;

Related

MySQL - Trigger or Replication: which is better?

I want to replicate certain tables from one database into another database on the same server. These tables contain exactly the same fields.
I was considering using MySQL replication to replicate those tables, but some people said it would increase IO, so I found another way: create 3 triggers (INSERT, UPDATE and DELETE) that perform exactly what I expect.
My question is: which way is better? Is using MySQL replication better even though it's on the same server, or is using triggers to replicate the data better?
Thanks.
I don't know what your goal is, but I achieved mine using the VIEW functionality.
I had two different applications with separate databases but on the same MySQL server. Application2 needed to get a few pieces of data from Application1. In general, this is a trivial situation that you can handle with USE DB1; or USE DB2; as needed, but my programming framework does not work very well with multiple DBs.
So, let's see my solution...
Here is my select query to retrieve this data:
SELECT id, name FROM DB1.customers;
So, using DB2 as the default schema, I created a VIEW:
USE DB2;
CREATE VIEW app1_customers AS SELECT id, name FROM DB1.customers;
Now I can retrieve this data in DB2 as a regular table with a regular SELECT statement.
SELECT * FROM DB2.app1_customers;
Hope it's useful. BR
Assuming you have two databases on the same server, i.e. DB1 and DB2, and a table called tbl1 sitting in DB1, you can query the table like this:
USE DB1;
SELECT * FROM tbl1;
USE DB2;
SELECT * FROM DB1.tbl1;
This way you won't need to copy the data or worry about extra space and extra code. You can query a table in another database on the same server. Replication and triggers are not your answer here. You could also create a view to encapsulate the SQL statement.
Triggers are definitely the way to go. Running another server (slave) means sparing several MB for the installation and logs, plus CPU and memory usage.
I'd use triggers to keep both tables equal. If you want to create a table with the same column definitions and data, use:
USE db2;
CREATE TABLE t1 AS SELECT * FROM db1.t1;
After that, go ahead and create the triggers for the UPDATE, INSERT and DELETE statements, for example:
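A minimal sketch of what those triggers could look like, assuming a hypothetical schema db1.t1 (id INT PRIMARY KEY, name VARCHAR(100)) mirrored into db2.t1; adjust the columns to your tables:
-- replicate inserts, updates and deletes from db1.t1 into db2.t1
CREATE TRIGGER t1_ai AFTER INSERT ON db1.t1
FOR EACH ROW INSERT INTO db2.t1 (id, name) VALUES (NEW.id, NEW.name);
CREATE TRIGGER t1_au AFTER UPDATE ON db1.t1
FOR EACH ROW UPDATE db2.t1 SET name = NEW.name WHERE id = NEW.id;
CREATE TRIGGER t1_ad AFTER DELETE ON db1.t1
FOR EACH ROW DELETE FROM db2.t1 WHERE id = OLD.id;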
You could also ALTER the new table to a different engine, like MEMORY, or add indexes to see if you can improve something.

MySQL temp tables are dropped between executions

I want to be able to debug my scripts easily in MySQL, as in MSSQL (run a chunk of the script, then verify the tables, and so on), but the temporary tables are not persisted on the server.
For example :
CREATE TEMPORARY TABLE a (i INT);
INSERT INTO a VALUES (1);
SELECT * FROM a;
If I run the whole script, it returns the right result, but if I run it statement by statement, the INSERT fails with the following error:
SQL.sql: Error (2,13): Table 'test.a' doesn't exist
I suppose this is a server configuration problem.
Temporary tables are dropped when the connection that created them is closed.
from dev.mysql:
Temporary Tables:
You can use the TEMPORARY keyword when creating a
table. A TEMPORARY table is visible only to the current connection,
and is dropped automatically when the connection is closed. This means
that two different connections can use the same temporary table name
without conflicting with each other or with an existing non-TEMPORARY
table of the same name. (The existing table is hidden until the
temporary table is dropped.) To create temporary tables, you must have
the CREATE TEMPORARY TABLES privilege.
Note CREATE TABLE does not automatically commit the current active
transaction if you use the TEMPORARY keyword.
So if your tool runs each statement on a separate connection, your temporary table won't exist any more when you run the INSERT statement. Whether the executions share one connection depends on the interface you use. That's why running the whole script returns the right result: it all runs on the same connection.
You can try to force it to run on the same transaction with:
START TRANSACTION;
<SQL QUERYS>
COMMIT;
Anyway, I recommend MySQL Workbench as an interface.
Best regards, I hope this helps you.

Transactional ALTER statements in MySQL

I'm doing an update to a MySQL database which includes MySQL scripts that issue ALTER TABLE statements, as well as DML statements (DELETE, INSERT, UPDATE).
The idea is to make a transactional update, so that if a statement fails, a rollback is performed. But if I include ALTER TABLE statements, or others specified in http://dev.mysql.com/doc/refman/5.0/en/implicit-commit.html, an implicit commit is made, so I can't do a complete rollback, because the indicated operations remain committed.
I tried using mysqldump to make a backup to restore in case of error (mysql returns a non-zero exit code), but it is too slow and can fail too.
What can I do? I need this to ensure that future updates are safe and not too slow, because the databases contain between 30 and 100 GB of data.
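To illustrate the problem with a minimal sketch (hypothetical table t and columns c, d, id; the behavior is documented at the link above): statements before the ALTER are committed implicitly, so the final ROLLBACK has nothing left to undo.
START TRANSACTION;
UPDATE t SET c = 1 WHERE id = 42;  -- could still be rolled back at this point
ALTER TABLE t ADD COLUMN d INT;    -- implicit commit: the UPDATE above is now permanent
ROLLBACK;                          -- too late, nothing is rolled back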
Dump and reload might be your best option instead of ALTER TABLE.
From the mysql prompt or from the database script:
SELECT * FROM mydb.myt INTO OUTFILE '/var/lib/mysql/mydb.myt.out';
DROP TABLE mydb.myt;
CREATE TABLE mydb.myt (your new table DDL here);
LOAD DATA INFILE '/var/lib/mysql/mydb.myt.out' INTO TABLE mydb.myt;
Check this out:
http://everythingmysql.ning.com/profiles/blogs/whats-faster-than-alter
I think it offers good guidance on "alternatives to alter".
Look at pt-online-schema-change.
You can configure it to leave the 'old' table around after the online ALTER is completed. The old table will have an underscore prefix. If bad things happen, drop the tables you altered and rename the old tables back to the originals. If everything is OK, then just drop the old tables.
http://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html
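As a sketch, a typical invocation might look like this (database, table and alter clause are hypothetical; check the documentation above for the full option list):
pt-online-schema-change --alter "ADD COLUMN d INT" --no-drop-old-table --execute D=mydb,t=myt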

Alternative for a MySQL temporary table in Oracle

I noticed that the concept of temporary tables in these two systems is different, which got me thinking. I have the following scenario in MySQL:
1. Drop temporary table 'a' if it exists
2. Create temporary table 'a'
3. Populate it with data through a stored procedure
4. Use the data in another stored procedure
How can I implement the same scenario in Oracle? Can I (preferably in one procedure) create a temporary table, populate it, and insert data into another (non-temporary) table?
I think I can use a (global) temporary table which truncates on commit, and thereby avoid steps 1 and 2, but I would like a second opinion.
In Oracle, you very rarely need a temporary table in the first place. You commonly need temporary tables in other databases because those databases do not implement multi-version read consistency and there is the potential that someone reading data from the table would be blocked while your procedure runs or that your procedure would do a dirty read if it didn't save off the data to a separate structure. You don't need global temporary tables in Oracle for either of these reasons because readers don't block writers and dirty reads are not possible.
If you just need a temporary place to store data while you perform PL/SQL computations, PL/SQL collections are more commonly used than temporary tables in Oracle. This way, you're not pushing data back and forth from the PL/SQL engine to the SQL engine and back to the PL/SQL engine.
CREATE OR REPLACE PROCEDURE do_some_processing
AS
  TYPE emp_collection_typ IS TABLE OF emp%ROWTYPE;
  l_emps emp_collection_typ;  -- must match the TYPE name declared above
  CURSOR emp_cur IS
    SELECT *
      FROM emp;
BEGIN
  OPEN emp_cur;
  LOOP
    FETCH emp_cur
      BULK COLLECT INTO l_emps
      LIMIT 100;
    EXIT WHEN l_emps.COUNT = 0;
    FOR i IN 1 .. l_emps.COUNT
    LOOP
      NULL;  -- <<do some complicated processing>>
    END LOOP;
  END LOOP;
  CLOSE emp_cur;
END;
You can create a global temporary table (outside of the procedure) and use the global temporary table inside your procedure just as you would use any other table. So you can continue to use temporary tables if you so desire. But I can count on one hand the number of times I really needed a temporary table in Oracle.
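For completeness, a minimal sketch of such a global temporary table (hypothetical columns; ON COMMIT DELETE ROWS makes the table empty again at the end of each transaction):
CREATE GLOBAL TEMPORARY TABLE gtt_a (
  id   NUMBER,
  name VARCHAR2(100)
) ON COMMIT DELETE ROWS;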
You are right, temporary tables will work for you.
If you decide to stick with regular tables, you may want to use the advice @Johan gave, along with
ALTER TABLE <table name> NOLOGGING;
to make this perform a bit faster.
I see no problem in the scheme you are using.
Note that it doesn't have to be a temp table; you can use a sort of in-memory table as well.
Do this by creating a table as usual, then do
ALTER TABLE <table_name> CACHE;
This will prioritize the table for storage in memory.
As long as you fill and empty the table in short order, you don't need to do steps 1 and 2.
Remember the CACHE modifier is just a hint: the table still ages in the cache and will eventually be pushed out of memory.
Just do:
1. Populate the cache table with data through a stored procedure.
2. Use the data in another stored procedure, but don't wait too long.
2a. Clear the data in the cache table.
In your MySQL version, I didn't see a step 5 to drop table 'a'. So, if you want (or don't mind) the data in the table to persist, you could also use a materialized view and simply refresh it on demand. With a materialized view you do not need to manage any INSERT statements; just include the SQL:
CREATE MATERIALIZED VIEW my_mv
NOCACHE -- NOCACHE/CACHE: Optional, cache places the table in the most recently used part of the LRU blocks
BUILD IMMEDIATE -- BUILD DEFERRED or BUILD IMMEDIATE
REFRESH ON DEMAND
WITH PRIMARY KEY -- Optional: creates a primary-key-based materialized view
AS
SELECT *
FROM ....;
Then in your other stored procedure, call:
BEGIN
dbms_mview.refresh ('my_mv', 'c'); -- 'c' = Complete
END;
That said, a global temporary table will work as well, but then you manage the inserts and exceptions yourself.

create an index without locking the DB

I have a table with 10+ million rows. I need to create an index on a single column; however, the index takes so long to create that I get locks against the table.
It may be important to note that the index is being created as part of a 'rake db:migrate' step... I'm not averse to creating the index manually if that will work.
UPDATE: I suppose I should have mentioned that this is a write-often table.
The MySQL NDBCLUSTER engine can create an index online without locking writes to the table. The most widely used engine, InnoDB, historically did not support this; since MySQL 5.6, InnoDB's online DDL can add a secondary index without blocking writes. Another free and open-source database, Postgres, supports CREATE INDEX CONCURRENTLY.
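If you are on MySQL 5.6 or later, a sketch of an online index build looks like this (hypothetical table and column names; MySQL refuses LOCK=NONE if the operation cannot be done without blocking):
ALTER TABLE my_table ADD INDEX idx_my_column (my_column), ALGORITHM=INPLACE, LOCK=NONE;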
You can prevent the blockage with something like this (pseudo-code):
CREATE TABLE temp LIKE my_table;
-- point the logger at temp
ALTER TABLE my_table ADD INDEX new_index (your_column);
INSERT INTO my_table SELECT * FROM temp;
-- point the logger back at my_table
DROP TABLE temp;
Here the logger is whatever adds rows/updates to your table in regular use (e.g. a PHP script). This sets up a temporary table to take the writes while the other one is being altered.
Try to make sure that the index is created before the records are inserted. That way, the index will also be filled during the population of the table. Although that will take longer, at least it will be ready to go when the rake task is done.
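For instance, a minimal sketch (hypothetical table and column names) of defining the index up front so it is maintained as rows arrive:
CREATE TABLE my_table (
  id INT PRIMARY KEY,
  my_column VARCHAR(100),
  INDEX idx_my_column (my_column)
);
-- subsequent INSERTs keep idx_my_column up to date incrementally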