The query below runs fine on localhost, but it hangs when executed remotely against my service provider's database (both the PHP script and the SQL query in the phpMyAdmin console hang), even though each chunk returns the expected table when run individually.
What's wrong? Any suggestions on how to make it shorter or simpler so the remote MySQL server can cope with it?
SELECT * FROM `Table1`
WHERE `Tag1` IN (
    SELECT DISTINCT `Tag1` FROM `Table2`
    WHERE `Tag1` NOT IN (
        SELECT `Tag1` FROM `Table2` WHERE `Tag2` = '$keyWord'
    )
)
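One thing worth checking: older MySQL versions (before 5.6) often execute IN (SELECT ...) as a dependent subquery that is re-evaluated for every outer row, which can make exactly this shape of query crawl on a slow remote server even though each chunk is fast on its own. A minimal join-based sketch of the same logic, assuming `Tag1` is never NULL (NOT IN behaves differently when NULLs are involved), would be:
SELECT DISTINCT t1.*
FROM `Table1` AS t1
JOIN `Table2` AS t2
  ON t2.`Tag1` = t1.`Tag1`          -- Tag1 must appear somewhere in Table2...
LEFT JOIN `Table2` AS excl
  ON excl.`Tag1` = t1.`Tag1`
 AND excl.`Tag2` = '$keyWord'       -- ...but never together with the keyword
WHERE excl.`Tag1` IS NULL;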
I get an error when I send a query to MySQL from a ClickHouse server. MySQL can't understand a query like
SELECT /*+ MAX_EXECUTION_TIME(1000) */ column1, column2
from mysql_tables.table1
when it is sent from ClickHouse through a table created with the MySQL engine.
How do I correctly apply the MAX_EXECUTION_TIME() constraint? Does it go on the MySQL engine when creating the table, like
CREATE TABLE mysql_tables.table1
(
`id` Int32,
`status` Int32
)
ENGINE = MySQL('host',
'db',
'table1',
'user',
'password')
SETTINGS [MAX_EXECUTION_TIME=1000]
or to the query itself?
Unfortunately, you can't pass a hint comment through to MySQL.
According to
https://clickhouse.com/docs/en/engines/table-engines/integrations/mysql/#read-write-timeout
you can set SETTINGS read_write_timeout=XXX.
Unfortunately, this is not a maximum execution time, and the query will still keep running on the MySQL side.
But if you use ProxySQL and set up default_query_timeout
https://proxysql.com/documentation/global-variables/mysql-variables/
it could work.
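To make that concrete, here is a sketch of the table definition from the question with the documented read_write_timeout setting applied (the value of 1000 is just carried over from the question; this limits how long ClickHouse waits on the MySQL connection, it does not stop MySQL from running the query):
CREATE TABLE mysql_tables.table1
(
    `id` Int32,
    `status` Int32
)
ENGINE = MySQL('host', 'db', 'table1', 'user', 'password')
SETTINGS read_write_timeout = 1000;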
I'm trying to delete (and update, but if I can delete then I'll be able to update) product data from a MySQL website database using SSIS, when those products have been marked as discontinued in our ERP (and in the SQL Server database used for reporting). I've tried the following:
First attempt: saving the rows to be deleted to a recordset and using a For Each loop with an Execute SQL task to delete them, as described here.
Result: partially works, but is extremely slow and fails after about 500 deletes each time. Makes me wonder if the MySQL database has some kind of hacker-protection feature.
Second attempt: converting the primary key of all rows to be deleted into a comma-separated string variable using FOR XML PATH, as described here (or rather a series of such variables, because of the 4000-character limit).
SQL Select Code (works fine)
WITH CTE (Product_sku, RowNumber) AS
(
    SELECT product_sku
         , ROW_NUMBER() OVER (ORDER BY product_sku)
    FROM product_updates
    WHERE action = 'delete'
)
SELECT
    Delete1 = CAST(
        (SELECT TOP 1
            STUFF(
                (SELECT ',''' + product_sku + '''' FROM CTE
                 WHERE CTE.RowNumber BETWEEN 1 AND 700
                 FOR XML PATH ('')),
                1, 1, ''))
        AS varchar(8000))
... and nine more of these SELECT statements go into additional variables to allow for larger delete operations.
I then use this result to delete records from MySQL using an Execute SQL task with the following code:
DELETE FROM datarepo.product
WHERE product_sku in (?)
Result: the package executed but failed to delete anything. When viewing the MySQL query log file I saw the following, which shows why nothing was deleted.
DELETE FROM datarepo.product
WHERE product_sku in ('\'')
Note that this same SSIS Execute SQL statement, when using hardcoded values (like the following), deletes just fine.
DELETE FROM datarepo.product
WHERE product_sku in ('1234','5678','abcd', etc...)
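That behaviour suggests the ? parameter is bound as a single scalar value, so the whole comma-separated string arrives as one quoted literal instead of expanding into a list. Purely as a sketch of one possible workaround on the MySQL side (the @sku_list variable is hypothetical and would have to be filled with the quoted, comma-separated SKUs built above), the list can be spliced into the statement text and run as a prepared statement:
-- @sku_list is hypothetical; it must already hold something like '1234','5678','abcd'
SET @sql = CONCAT('DELETE FROM datarepo.product WHERE product_sku IN (', @sku_list, ')');
PREPARE del_stmt FROM @sql;
EXECUTE del_stmt;
DEALLOCATE PREPARE del_stmt;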
I haven't been able to find anything else online. As Reza Rad said in the first linked post, it's hard to find material about using SSIS to perform operations on MySQL.
I have a problem with MySQL; here are the details.
I created a new schema/database and executed (only) these queries:
create table mytable (
id varchar(50) not null,
name varchar(50) null default '',
primary key (id));
create view myview as
select id,name from mytable;
insert into mytable values ('1','aaa');
insert into mytable values ('2','bbb');
insert into mytable values ('3','ccc');
and then, if I run these queries:
select * from mytable;
select * from myview;
prepare cmd from 'select id,name from mytable where id=?';
set @param1 = '2';
execute cmd using @param1;
The queries give the correct results (3 rows, 3 rows, 1 row).
But the problem appears when I run this query:
prepare cmd from 'select id,name from myview where id=?';
set @param1 = '2';
execute cmd using @param1;
ERROR: #1615 - Prepared statement needs to be re-prepared
I've done some research and found that increasing the configuration values below "may" solve the problem:
increase table_open_cache_instances value
increase table_open_cache value
increase table_definition_cache value
As far as I know, the queries above are common, standard MySQL queries, so I don't think there is a problem with the syntax.
I'm on shared web hosting and the MySQL version is 5.6.22.
But what confuses me is that the database contains only 1 schema, with 1 table holding 3 short records and 1 view,
and I executed a common, standard MySQL SELECT query,
so is increasing the values above really needed?
Has anyone with the same problem increased those values and actually solved it?
Or perhaps do you have any other solution which you think may or will work for this problem?
PS: it does not happen just once or twice a day (which I would assume was caused by a backup or something similar), but all day long (24 hours).
Thank you.
Do you do this after each execute?
deallocate prepare cmd;
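In other words, and just as a sketch built from the statements already shown in the question, the full cycle would look like:
prepare cmd from 'select id,name from mytable where id=?';
set @param1 = '2';
execute cmd using @param1;
deallocate prepare cmd;  -- release the statement once you are done with it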
The closest guess so far is that some other users on this shared web host don't write their code very well, either running a large ALTER at the same time as a large SELECT, or not deallocating their prepared statements after using them, as Rick James said. (I want to mark the post as useful, but I don't have the reputation, sorry Rick.)
I can't confirm whether increasing table_definition_cache works, because the system administrator still won't change the value, but in case you have the same problem and you can modify it, it's worth a try.
My current workaround is to change all the views in my query strings into subqueries. It works for me, but the problem itself is still unresolved.
e.g. from
select myview.id, myview.name
from myview
inner join other_table on ...
where myview.id=?
into
select x.id, x.name
from (select id,name from mytable) x
inner join other_table on ...
where x.id=?
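Tying that back to the original error, the same rewrite can be used inside the prepared statement itself; a sketch (the join condition on id is hypothetical, since the real one was elided above):
prepare cmd from
  'select x.id, x.name
   from (select id, name from mytable) x
   inner join other_table on other_table.id = x.id
   where x.id = ?';
set @param1 = '2';
execute cmd using @param1;
deallocate prepare cmd;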
I run this query
CREATE TEMPORARY TABLE usercount SELECT * FROM users
I get this message
Your SQL query has been executed successfully ( Query took 0.1471 sec )
But when I try to access the newly created table using
SELECT * FROM usercount
I get this error
#1146 - Table 'abc_site.usercount' doesn't exist
I'm not sure why. I should mention that I did a good share of googling beforehand.
My version of phpMyAdmin is 3.5.2.2 and MySQL is 5.5.27.
phpMyAdmin (or rather PHP) closes the database connection after each screen, so your temporary tables disappear.
You can put multiple SQL statements in the SQL query box in phpMyAdmin; these should be executed as one block, so the temporary table is not dropped in between.
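For the example in the question, that means putting both statements in the box together, something like:
CREATE TEMPORARY TABLE usercount SELECT * FROM users;
SELECT * FROM usercount;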
Temporary tables are temporary and are deleted after use.
For example, when inserting data into the database, we can first insert it into a temp table and then, once the transaction is complete, insert it into the main table.
EXAMPLE:
-- ------------------------------------------
CREATE TEMPORARY TABLE TEMP
(
    USERNAME VARCHAR(50) NOT NULL,
    PASSWORD VARCHAR(50) NOT NULL,
    EMAIL VARCHAR(100),
    TYPE_USER INT
);
INSERT INTO TEMP VALUES('A','A','A',1);
SELECT * FROM TEMP;
-- -----------------------------------------
This shows A, A, A, 1.
I have my log file working, but I get extraneous info on each line, like "29 Query", and I can't tell for certain, but it looks like the logged queries are how MySQL treats each query internally. Is there a way to automatically log each query as it was executed by the application, without any additional information added to the log by MySQL? Thanks!
EDIT:
As a part of offering the bounty, let me explain my situation. We're using Magento Commerce, which has an EAV database architecture. Tracking anything down, and where it is stored is an absolute nightmare. My thought was to insert a product into the database in the application, and then log every query that was executed during that process. This worked well, but the logs have a ton of other cruft around the queries. I really do just want something like this:
1.) SELECT * FROM <TABLE>;
2.) UPDATE <TABLE> SET <VALUE> = <VALUE>;
3.) ...
4.) ...
Something simple that tells me what was executed so that I don't have to go sifting through controllers and models to try and get all this. I don't need dates, times, line numbers or anything extra.
To enable the full query log, add the following to your my.cnf:
log=/var/log/mysqldquery.log
The above will log all queries to the log file.
Don't forget to restart the mysql service after making changes to the my.cnf file.
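If restarting isn't convenient and you are on MySQL 5.1 or later, the same log can usually be switched on at runtime with the general_log variables instead (this assumes you have the SUPER privilege; the file path is just an example):
SET GLOBAL general_log_file = '/var/log/mysqldquery.log';
SET GLOBAL general_log = 'ON';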
Example output from actions via SequelPro (mac client):
090721 11:06:45 51 Query ALTER TABLE `test` ADD `name` varchar(10) DEFAULT NULL
51 Query SHOW COLUMNS FROM `test`
51 Query SHOW INDEX FROM `test`
090721 11:06:57 51 Query SHOW COLUMNS FROM `test`
51 Query UPDATE `test` SET `id`='1', `name`='test' WHERE `id` = '1' AND `name` IS NULL LIMIT 1
51 Query SELECT * FROM `test` LIMIT 0,100
51 Query SELECT COUNT(1) FROM `test`
090721 11:07:00 51 Query UPDATE `test` SET `id`='2', `name`='test' WHERE `id` = '2' AND `name` IS NULL LIMIT 1
51 Query SELECT * FROM `test` LIMIT 0,100
51 Query SELECT COUNT(1) FROM `test`
On *NIX-based systems you can use grep to get started:
grep 'SELECT\|INSERT\|UPDATE' querylog.log
Or get more tricky and start doing things like:
grep 'SELECT\|INSERT\|UPDATE' querylog.log | awk '{$1="";$2="";print}'
This would give you something like this, not perfect but closer:
51 Query UPDATE `test` SET `id`='2', `name`='test' WHERE `id` = '2' AND `name` IS NULL LIMIT 1
SELECT * FROM `test` LIMIT 0,100
SELECT COUNT(1) FROM `test`
51 Query INSERT INTO `test` (`id`,`name`) VALUES ('3','testing')
SELECT * FROM `test` LIMIT 0,100
SELECT COUNT(1) FROM `test`
You could use the mysql query log file. Add this parameter when you start mysql:
--log=/var/log/mysqld.log
If you are referring to the binary log, you need to run it through mysqlbinlog to get meaningful output.
mysqlbinlog log100.log
Is this any help to you?
http://www.bigdbahead.com/?p=99
There are two solutions there; one is easier but requires MySQL 5.1+.