I have a problem with MySQL; here are the details:
I created a new schema/database and executed (only) these queries:
create table mytable (
id varchar(50) not null,
name varchar(50) null default '',
primary key (id));
create view myview as
select id,name from mytable;
insert into mytable values ('1','aaa');
insert into mytable values ('2','bbb');
insert into mytable values ('3','ccc');
and then, if I run these queries:
select * from mytable;
select * from myview;
prepare cmd from 'select id,name from mytable where id=?';
set @param1 = '2';
execute cmd using @param1;
the queries give the correct results (3 rows, 3 rows, 1 row).
But the problem appears when I run this query:
prepare cmd from 'select id,name from myview where id=?';
set @param1 = '2';
execute cmd using @param1;
ERROR: #1615 - Prepared statement needs to be re-prepared
I've done some research and found that increasing the configuration values below "may" solve the problem (a sketch of checking/raising them follows the list):
increase table_open_cache_instances value
increase table_open_cache value
increase table_definition_cache value
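For reference, here is a hedged sketch of how those values can be checked and (with sufficient privileges, which shared hosting usually does not grant) raised; the numbers are purely illustrative:
SHOW VARIABLES LIKE 'table%cache%';
-- table_open_cache and table_definition_cache are dynamic in 5.6:
SET GLOBAL table_open_cache = 4000;
SET GLOBAL table_definition_cache = 2000;
-- table_open_cache_instances is not dynamic; it must be set in my.cnf and needs a server restart.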
As far as I know, the queries above are common, standard MySQL queries, so I don't think there is a problem with the syntax.
I'm on shared web hosting and the MySQL version is 5.6.22.
But what confuses me is that the database contains only 1 schema/database, with 1 table holding 3 short records and 1 view,
and I executed a common, standard MySQL select query,
so is increasing the values above really needed?
Has anyone with the same problem increased these values and actually solved it?
Or perhaps you have some other solution that you think may or will work for this problem?
PS: it does not happen just once or twice a day (which I would have assumed was caused by a backup or something similar), but all day long (24 hours).
Thank you.
Do you do this after each execute?
deallocate prepare cmd;
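In other words, the full cycle from the question with an explicit deallocation at the end (a sketch; whether this avoids error 1615 on shared hosting is not guaranteed):
prepare cmd from 'select id,name from myview where id=?';
set @param1 = '2';
execute cmd using @param1;
deallocate prepare cmd;  -- release the statement so its metadata cannot go stale later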
The closest guess so far is that some other users on the server don't write their code very well (because it is shared web hosting): either they run large ALTERs at the same time as large SELECTs, or they don't deallocate their prepared statements after using them, like Rick James said. (I want to mark the post as useful, but I don't have the reputation, sorry Rick.)
I cannot confirm whether increasing "table_definition_cache" works, because the system administrator still won't change the value, but in case you have the same problem and you can modify it, it is worth a try.
My current workaround is to change all the views in my query strings into non-view queries or subqueries. It works for me, but the underlying problem is still open.
e.g. from
select myview.id, myview.name
from myview
inner join other_table on ...
where myview.id=?
into
select x.id, x.name
from (select id,name from mytable) x
inner join other_table on ...
where x.id=?
My question may be stupid, but I just want to know whether it is possible to have an INSERT query inside a SELECT or a WHERE clause.
The reason I want to know is: if someone hacks a website or an application database, can the hacker insert data into the hacked database without my knowledge?
The following is an example of SQL injection I have seen on other sites:
http://www.example.com/empsummary.php?id=1 AND 1=-1 union select 1,group_concat(name,0x3a,email,0x3a,phone,0x2a),3,4,5,6,7,8,9 from employee
I know exactly what the query above does, but can the hacker insert data (use an INSERT query) into the database or into any table?
Yes, it can happen, if the database interface is configured to allow multiple statements in a query.
An INSERT can't run as part of a SELECT statement. But it's possible that the exploit of a vulnerability could finish a SELECT and then execute a separate insert.
Say you have a vulnerable statement like this:
SELECT foo FROM bar WHERE fee = '$var'
Consider the SQL text when $var contains:
1'; INSERT INTO emp (id) VALUES (999); --
The SQL text could be something like this:
SELECT foo FROM bar WHERE fee = '1'; INSERT INTO emp (id) VALUES (999); --'
If multi-statement queries are enabled in the database interface library, it's conceivable that an INSERT statement could be executed.
See: https://www.owasp.org/index.php/SQL_Injection
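For contrast, a minimal sketch (reusing the hypothetical bar/emp names above) of how a server-side prepared statement with a placeholder keeps that same text from being executed as SQL:
PREPARE stmt FROM 'SELECT foo FROM bar WHERE fee = ?';
SET @var = '1''; INSERT INTO emp (id) VALUES (999); -- ';
EXECUTE stmt USING @var;     -- the whole string is compared as data; no INSERT is ever parsed
DEALLOCATE PREPARE stmt;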
Is there a way to delete a row if any of its columns has a NULL value in it? I know I could check the columns one by one, but I would like to do this programmatically in MySQL so that it scales whether I have 4 columns or 4000 columns. I believe I could do this with PHP, but I would much rather do it in straight MySQL.
Thank you
OK, since you just mentioned you are new to MySQL, your database design is new too and most probably does not have a lot of data yet.
Why not kill the problem at the roots instead of letting it grow into a big tree and then looking for tools to cut off all the branches?
You should go ahead and use MySQL's NOT NULL option to disallow null values for these columns, since you are deleting such rows anyway. If you don't need to keep any null values, simply disallow them and they will never be saved in the first place (see the sketch below).
Queries come long after a proper database design; if your design does not match what your system requires, you can only optimize the queries to an extent. The base structure is the first thing you should learn and improve. Google and SO are both filled with thousands of articles on efficient database design and the basic concepts to get started.
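As a sketch (hypothetical table and column names/types; adjust to your schema), you would clean up the existing NULLs once and then forbid them going forward:
DELETE FROM myTable WHERE column1 IS NULL OR column2 IS NULL OR column3 IS NULL;
ALTER TABLE myTable
  MODIFY column1 VARCHAR(50) NOT NULL,
  MODIFY column2 VARCHAR(50) NOT NULL,
  MODIFY column3 VARCHAR(50) NOT NULL;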
You could delete those records without so many ORs:
DELETE FROM myTable
WHERE CONCAT(column1,column2,column3) is null
If you would rather not delete anything, you can use the same trick to copy only the rows that should be kept:
INSERT INTO NEW_TABLE
SELECT column1,column2,column3
FROM myTable
WHERE not CONCAT(column1,column2,column3) is null
I am not quite sure whether this works in MySQL because I can't test it (I adapted SQL Server code so that it may work with MySQL):
DELIMITER //
CREATE PROCEDURE myProc()
BEGIN
  DECLARE COL VARCHAR(4000);
  -- build "col1 IS NULL OR col2 IS NULL OR ..." from the catalogue
  SELECT GROUP_CONCAT(C.COLUMN_NAME, ' IS NULL' SEPARATOR ' OR ')
    INTO COL
    FROM INFORMATION_SCHEMA.COLUMNS C
   WHERE C.TABLE_NAME = 'tbl_a' AND C.TABLE_SCHEMA = DATABASE();
  SET @s = CONCAT('SELECT * FROM tbl_a WHERE ', COL);
  PREPARE stmt FROM @s;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END //
DELIMITER ;
I am using a SELECT statement so you can check the result first; you just need to change that to a DELETE.
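Under the same assumptions, the destructive version would differ only in the dynamic statement (back up or run the SELECT version first):
-- replace the SET @s line above with:
SET @s = CONCAT('DELETE FROM tbl_a WHERE ', COL);
-- then:
CALL myProc();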
I have 10 tables with the same structure except for the table name.
I have an SP (stored procedure) defined as follows:
select * from table1 where (@param1 IS NULL OR col1=@param1)
UNION ALL
select * from table2 where (@param1 IS NULL OR col1=@param1)
UNION ALL
...
...
UNION ALL
select * from table10 where (@param1 IS NULL OR col1=@param1)
I am calling the SP with the following line:
call mySP('test') // it executes in 6.836s
Then I opened a new standard query window, copied the query above, and replaced @param1 with 'test'.
This executed in 0.321s, which is about 20 times faster than the stored procedure.
I changed the parameter value repeatedly to prevent the result from being cached, but that did not change anything: the SP is about 20 times slower than the equivalent standard query.
Can you please help me figure out why this is happening?
Has anybody encountered similar issues?
I am using MySQL 5.0.51 on Windows Server 2008 R2 64-bit.
Edit: I am using Navicat for testing.
Any idea will be helpful to me.
EDIT1:
I have done some tests according to Barmar's answer.
In the end I changed the SP as below, to just one statement:
SELECT * FROM table1 WHERE col1=@param1 AND col2=@param2
Then I first executed the standard query:
SELECT * FROM table1 WHERE col1='test' AND col2='test' //Executed in 0.020s
Then I called my SP:
CALL MySp('test','test') //Executed in 0.466s
So I changed the WHERE clause entirely, but nothing changed. I also called the SP from the MySQL command line instead of Navicat, and it gave the same result. I am still stuck on it.
My SP DDL:
CREATE DEFINER = `myDbName`@`%`
PROCEDURE `MySP` (param1 VARCHAR(100), param2 VARCHAR(100))
BEGIN
  SELECT * FROM table1 WHERE col1 = param1 AND col2 = param2;
END
And col1 and col2 have a combined (composite) index.
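For reference, a combined index like the one named later in the FORCE INDEX hint would be created roughly like this (names assumed from the question):
ALTER TABLE table1 ADD INDEX col1_col2_combined_index (col1, col2);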
You might ask: why don't you just use the standard query then? My software design is not suited for that; I must use a stored procedure, so this problem is very important to me.
EDIT2:
I have obtained the query profile information. The big difference comes from the "Sending data" row in the SP profile; the Sending data step takes 99% of the query execution time. I am testing on a local database server, not connecting from a remote computer.
SP profile information
Query profile information
I tried a FORCE INDEX hint like the one below in my SP, but got the same result.
SELECT * FROM table1 FORCE INDEX (col1_col2_combined_index) WHERE col1=@param1 AND col2=@param2
I then changed the SP as below:
EXPLAIN SELECT * FROM table1 FORCE INDEX (col1_col2_combined_index) WHERE col1=param1 AND col2=param2
This gave this result:
id: 1
select_type: SIMPLE
table: table1
type: ref
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 292004
Extra: Using where
Then I executed the query below:
EXPLAIN SELECT * FROM table1 WHERE col1='test' AND col2='test'
Result is:
id: 1
select_type: SIMPLE
table: table1
type: ref
possible_keys: col1_co2_combined_index
key: col1_co2_combined_index
key_len: 76
ref: const,const
rows: 292004
Extra: Using where
I am using a FORCE INDEX hint in the SP, but it still refuses to use the index. Any ideas? I think I am close to the end :)
Just a guess:
When you run the query by hand, the expression WHERE ('test' IS NULL OR COL1 = 'test') can be optimized when the query is parsed. The parser can see that the string 'test' is not null, so it reduces the test to WHERE COL1 = 'test', and if there's an index on COL1 it will be used.
However, when you create a stored procedure, parsing occurs when the procedure is created. At that time, it doesn't know what @param will be, and it has to implement the query as a sequential scan of the table.
Try changing your procedure to:
IF @param IS NULL
THEN BEGIN
SELECT * FROM table1
UNION ALL
SELECT * FROM table2
...
END;
ELSE BEGIN
SELECT * FROM table1 WHERE col1 = @param
UNION ALL
SELECT * FROM table2 WHERE col1 = @param
...
END;
END IF;
I don't have much experience with MySQL stored procedures, so I'm not sure that's all the right syntax.
Possible character set issue? If your table character set is different from your database character set, this may be causing a problem.
See this bug report: http://bugs.mysql.com/bug.php?id=26224
[12 Nov 2007 21:32] Mark Kubacki: Still no luck with 5.1.22_rc - keys are ignored; the query takes 36 seconds within a procedure and 0.12s outside.
[12 Nov 2007 22:30] Mark Kubacki: After having changed the charsets to UTF-8 (especially for the two used), which is used for the connection anyway, keys are taken into account within the stored procedure!
The question I cannot answer is: why does the optimizer treat charset conversions differently within and outside stored procedures? (Indeed, I might be wrong to ask this.)
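If you suspect this, a hedged way to check whether the table, column and connection character sets agree (table1 is the question's placeholder name; the target charset here is only an example) is:
SHOW VARIABLES LIKE 'character_set%';
SHOW CREATE TABLE table1;
-- if they differ, converting the table is one option (take a backup first):
ALTER TABLE table1 CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;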
Interesting question, because I am fond of using stored procedures; the reasons are maintenance and the encapsulation principle.
This is information I found:
http://dev.mysql.com/doc/refman/5.1/en/query-cache-operation.html
It states that the query cache is not used for queries that:
1. are subqueries belonging to an outer query, or
2. are executed within the body of a stored procedure, trigger or event.
This implies that it works as designed.
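A quick, hedged way to see whether the query cache is even enabled and being hit at all (these variable/status names apply to MySQL 5.x; the query cache was removed in MySQL 8.0):
SHOW VARIABLES LIKE 'query_cache%';
SHOW STATUS LIKE 'Qcache%';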
I had seen this behavior, but it wasn't related to the character set.
I had a table that held self-referencing hierarchical data (a parent with children, and some children had children of their own, etc.). Since the parent_id had to reference the primary ids (and the column had a constraint to that effect), I couldn't set the parent id to NULL or 0 (zero) to disassociate a child from a parent, so I simply made it reference itself.
When I went to run a stored procedure to perform the recursive query to find all children (at all levels) of a particular parent, the query took between 30 & 40 times as long to run. I found that altering the query used by the stored procedure to make sure it excluded the top-level parent record (by specifying WHERE parent_id != id) restored the performance of the query.
The stored procedure I'm using is based on the one shown in:
https://stackoverflow.com/questions/27013093/recursive-query-emulation-in-mysql.
I am trying to combine these two queries in Twisted Python:
SELECT * FROM table WHERE group_id = 1013 and time > 100;
and:
UPDATE table SET time = 0 WHERE group_id = 1013 and time > 100
into a single query. Is it possible to do so?
I tried putting the SELECT in a subquery, but I don't think the whole query returns what I want.
Is there a way to do this? (even better, without a sub query)
Or do I just have to stick with two queries?
Thank You,
Quan
Apparently mysql does have something that might be of use, especially if you are only updating one row.
This example is from: http://lists.mysql.com/mysql/219882
UPDATE mytable SET
mycolumn = @mycolumn := mycolumn + 1
WHERE mykey = 'dante';
SELECT @mycolumn;
I've never tried this though, but do let me know how you get on.
This is really late to the party, but I had this same problem, and the solution I found most helpful was the following:
SET @uids := null;
UPDATE footable
SET foo = 'bar'
WHERE fooid > 5
AND ( SELECT @uids := CONCAT_WS(',', fooid, @uids) );
SELECT @uids;
from https://gist.github.com/PieterScheffers/189cad9510d304118c33135965e9cddb
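One hedged way to then fetch the updated rows with the collected ids, since FIND_IN_SET treats @uids as a comma-separated list:
SELECT * FROM footable WHERE FIND_IN_SET(fooid, @uids);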
You can't combine these queries directly, but you can write a stored procedure that executes both (run the SELECT first so it returns the rows before they are reset). Example:
delimiter |
create procedure upd_select(IN p_group INT, IN p_time INT)
begin
  -- GROUP and TIME clash with a reserved word and a column name, hence the p_ prefix
  SELECT * FROM `table` WHERE group_id = p_group AND `time` > p_time;
  UPDATE `table` SET `time` = 0 WHERE group_id = p_group AND `time` > p_time;
end |
delimiter ;
So what you're trying to do is reset time to zero whenever you access a row -- sort of like a trigger, but MySQL cannot do triggers after SELECT.
Probably the best way to do it with one server request from the app is to write a stored procedure that updates and then returns the row. If it's very important to have the two occur together, wrap the two statements in a transaction.
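A sketch of that idea, with the two statements wrapped in a transaction; `mytable` stands in for the real table, and the column names are taken from the question:
DELIMITER //
CREATE PROCEDURE fetch_and_reset(IN p_group_id INT, IN p_time INT)
BEGIN
  START TRANSACTION;
  -- return the matching rows first...
  SELECT * FROM mytable WHERE group_id = p_group_id AND `time` > p_time;
  -- ...then reset them, so both steps succeed or fail together
  UPDATE mytable SET `time` = 0 WHERE group_id = p_group_id AND `time` > p_time;
  COMMIT;
END //
DELIMITER ;
CALL fetch_and_reset(1013, 100);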
There is a faster and more correct way to return the updated rows when a highly loaded system runs the same query concurrently against the same database server. Note that the first form below uses SQL Server syntax (UPDLOCK, READPAST, OUTPUT) and the second uses PostgreSQL's RETURNING; MySQL supports neither.
update table_name WITH (UPDLOCK, READPAST)
SET state = 1
OUTPUT inserted.
UPDATE tab SET column=value RETURNING column1,column2...
I have an ID field that is my primary key and is just an int field.
I have fewer than 300 rows, but now every time someone signs up, the auto-increment ID comes out really high, like 11800089, 11800090, etc. Is there a way to bring it back down so it follows the order (310, 311, 312)?
Thanks!
ALTER TABLE table_name AUTO_INCREMENT=310;
Beware though, you don't want to repeat an ID. If the numbers are that high, they got that way somehow. Be very sure you don't have associated data with the lower ID numbers.
https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html
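A hedged way to pick a safe value first (table_name is the placeholder from the answer above):
SELECT MAX(id) + 1 AS lowest_safe_value FROM table_name;
ALTER TABLE table_name AUTO_INCREMENT = 310;  -- only takes effect if 310 >= lowest_safe_value
-- Note: InnoDB will not lower the counter below the current MAX(id) + 1, so the existing
-- high ids would have to be renumbered first for 310 to stick.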
There may be a quicker way, but this is how I would do it to be sure I am recreating the IDs (a condensed SQL sketch follows the list).
If you are using MySQL or some other SQL server, you will need to:
Backup your database
Drop the id column
Export the data
TRUNCATE or 'Empty' the table
Recreate the id column as auto_increment
Reimport the data
This will destroy the IDs of the existing rows, so if these are important, it is not a viable option.
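A condensed sketch of the drop-and-recreate idea above (skipping the export/reimport steps; `mytable` is a stand-in, and this is only safe if nothing references the old ids - back up first):
ALTER TABLE mytable DROP COLUMN id;
ALTER TABLE mytable ADD COLUMN id INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;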
The auto increment counter for a table can be (re)set two ways:
By executing a query, like others already explained:
ALTER TABLE <table_name> AUTO_INCREMENT = <new_value>;
Using Workbench or another visual database design tool. I am going to show how it is done in Workbench, but it shouldn't be much different in other tools. Right-click the desired table and choose Alter table from the context menu. At the bottom you can see all the available options for altering the table; choose Options.
Then just set the desired value in the Auto increment field.
This will basically execute the query shown in the first option.
Guessing that you are using MySQL because you are using PHP: you can reset the auto_increment with a statement like
alter table mytable auto_increment = 301;
Be careful though, because things will break if the auto-increment value overlaps existing IDs.
I believe that MySQL does a SELECT MAX on the id and uses the next value. Try updating the ids of your table to the desired sequence. The problem you will have is that if they are referenced from other tables, you should put ON UPDATE CASCADE on the foreign keys.
A query that comes to mind is:
UPDATE Table SET id = (SELECT max(id)+1 FROM Table WHERE id < 700) WHERE id > 0;
where 700 is something less than the 11800090 you have and close to the 300. I believe MySQL complains if you don't include a WHERE clause.
I was playing around on a similar problem and found this solution:
SET @newID = 0;
UPDATE `test` SET ID = (@newID := @newID + 1) ORDER BY ID;
SET @c = (SELECT COUNT(ID) FROM `test`);
SET @s = CONCAT("ALTER TABLE `test` AUTO_INCREMENT = ", @c);
PREPARE stmt FROM @s;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
I hope that helps someone in a similar situation!