Select specific columns from SHOW ERRORS query - MySQL

There are three columns (Level, Code, Message) in the output when SHOW ERRORS is executed. Is there any way to select one specific column (let's say, Message) instead of all three?
The main purpose is to get the error message (the 3rd column) into a variable for further processing.
Edit:
The result of the query SHOW ERRORS after an erroneous query SELECT anything looks like this:
+-------+------+-------------------------------------------+
| Level | Code | Message                                   |
+-------+------+-------------------------------------------+
| Error | 1054 | Unknown column 'anything' in 'field list' |
+-------+------+-------------------------------------------+

I was looking for a MySQL equivalent to the T-SQL @@ERROR and came across your question.
I have used GET DIAGNOSTICS to gain access to the error information and then used it as input for inserts into error logs.
Create some example structures:
CREATE TABLE table_that_exists
(
column_that_exists INT(11) NOT NULL
, PRIMARY KEY (column_that_exists)
);
CREATE TABLE tbl_error_log
(
id INT(11) NOT NULL AUTO_INCREMENT
, err_no INT(4)
, err_msg VARCHAR(50)
, source_proc VARCHAR(50)
, PRIMARY KEY (id)
);
Run a query to produce an error and show the output of SHOW ERRORS:
SELECT anything FROM table_that_exists;
SHOW ERRORS;
Example of how to access data for use in other procedures/error management:
GET DIAGNOSTICS CONDITION 1
@P1 = MYSQL_ERRNO, @P2 = MESSAGE_TEXT;
SELECT @P1, @P2;
INSERT INTO tbl_error_log (err_no, err_msg, source_proc)
VALUES (@P1, @P2, 'sp_faulty_procedure');
SELECT * FROM tbl_error_log;
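In a stored procedure, GET DIAGNOSTICS is typically run from inside a condition handler; here is a minimal sketch of that pattern, reusing the structures above (the procedure name is just an example):
DELIMITER //
CREATE PROCEDURE sp_faulty_procedure()
BEGIN
  -- Runs whenever a statement in this procedure raises an SQL exception
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
  BEGIN
    GET DIAGNOSTICS CONDITION 1
      @P1 = MYSQL_ERRNO, @P2 = MESSAGE_TEXT;
    INSERT INTO tbl_error_log (err_no, err_msg, source_proc)
    VALUES (@P1, @P2, 'sp_faulty_procedure');
  END;
  -- Raises error 1054 (unknown column), which the handler logs
  SELECT anything FROM table_that_exists;
END //
DELIMITER ;
CALL sp_faulty_procedure(); -- the error is logged instead of aborting the call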

Related

MySQL - Select only the rows that have not been selected in the last read

Problem description
I have a table, say trans_flow:
CREATE TABLE trans_flow (
  id BIGINT(20) AUTO_INCREMENT PRIMARY KEY,
  card_no VARCHAR(50) DEFAULT NULL,
  money INT(20) DEFAULT NULL
);
New data is inserted into this table constantly.
Now, I want to fetch only the rows that have not been fetched by the previous query. For example, at 5:00, ids range from 1 to 100, and I read rows 80 - 100 and do some processing. Then, at 5:01, the maximum id has grown to 150, and I want to get exactly rows 101 - 150. Otherwise, the processing program would read in old, already-processed data. Note that such queries are issued continuously. From a certain perspective, I want to implement a "streaming process" on top of MySQL.
A tentative idea
I have a simple but maybe ugly solution. I create an auxiliary table query_cursor which stores the beginning and end ids of one query:
CREATE TABLE query_cursor (
  task_id VARCHAR(20) PRIMARY KEY COMMENT 'Specify which task is reading this table',
  first_row_id BIGINT(20) DEFAULT NULL,
  last_row_id BIGINT(20) DEFAULT NULL
);
During each query, I first update the query range stored in this table by:
UPDATE query_cursor
SET first_row_id = last_row_id + 1,
    last_row_id = (SELECT MAX(id) FROM trans_flow)
WHERE task_id = 'xxx';
And then query trans_flow using the stored cursor:
SELECT * FROM trans_flow
WHERE id BETWEEN (SELECT first_row_id FROM query_cursor WHERE task_id = 'xxx')
AND (SELECT last_row_id FROM query_cursor WHERE task_id = 'xxx')
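As a usage sketch (the task id 'xxx' is just an example), seed one cursor row per task, then run the UPDATE/SELECT pair inside a single transaction so the stored range stays consistent with respect to other tasks touching the same cursor row:
INSERT INTO query_cursor (task_id, first_row_id, last_row_id)
VALUES ('xxx', 0, 0);

START TRANSACTION;
-- 1. advance the window with the UPDATE above
-- 2. read exactly that window with the SELECT above
COMMIT;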
Question for help
Is there a simpler and more elegant implementation that achieves the same effect (ideally without an auxiliary table)? The MySQL version is 5.7.

MySQL 8 - Trigger on INSERT - duplicate AUTO_INCREMENT id for VCS

Trying to
create a trigger that is called on INSERT and sets originId = id (AUTO_INCREMENT),
I've used the SQL suggested here in the 1st block:
CREATE TRIGGER insert_example
BEFORE INSERT ON notes
FOR EACH ROW
SET NEW.originId = (
SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_NAME = 'notes'
);
Due to information_schema caching I have also set
information_schema_stats_expiry = 0
in my my.cnf file. Now the information gets updated almost instantly on every INSERT, as far as I've noticed.
But when performing "direct" INSERTs via the console at ~2min intervals, I keep getting stale AUTO_INCREMENT values in originId.
(They should be equal to the id fields.)
Explicit queries fetching AUTO_INCREMENT, on the other hand, return updated, correct values.
Thus I suspect that the result of the SELECT AUTO_INCREMENT... subquery gets somehow... what? cached?
How can one get around this?
Thank you.
Edit 1
I intended to implement a sort of VCS this way:
The user creates a new Note, the app marks it as 'new' and performs an INSERT into the MySQL table. It is the "origin" Note.
Then the user might edit this Note (completely) in the UI; the app will mark it as 'update' and INSERT it into the MySQL table as a new row, again. But this time originId should be filled with the id of the "origin" Note (by app logic). And so on.
This allows PARTITIONing by originId on SELECT, fetching only the latest versions to the UI.
initial Problem:
If originId of an "origin" Note is NULL, MySQL 8 window function(s) in the default (and only?) RESPECT NULLS mode perform(s) framing not as expected ("well, duh, it's all about the NULLs in your grouping-by column").
supposed Solution:
Set originId of "origin" Notes to id on their initial and only INSERT, expecting 2 benefits:
Easily fetch "origin" Notes via originId = id,
perform correct PARTITION by originId.
resulting Problem:
id is AUTO_INCREMENT, so there's no way (known to me) of getting its new value (for the new row) on INSERT via backend (namely, PHP).
supposed Solution:
So, I was hoping to find some MySQL mechanism to solve this (avoiding manipulations with id field) and TRIGGERs seemed a right way...
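For what it's worth, the newly generated id is retrievable per connection via LAST_INSERT_ID() (PHP exposes it as mysqli::$insert_id / PDO::lastInsertId()), so a two-statement fallback is possible; a minimal sketch against a hypothetical notes(id, originId, noteText) table:
INSERT INTO notes (noteText) VALUES ('origin note');
-- LAST_INSERT_ID() is per-connection, so this is safe under concurrency
UPDATE notes SET originId = id WHERE id = LAST_INSERT_ID();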
Edit 2
I believed that automatically duplicating the AUTO_INCREMENT id field (or any field) within MySQL would be extra fast and super easy, but it totally doesn't appear so now...
So, possibly, a better way is to have a vcsGroupId UNSIGNED INT field, responsible for "relating" a Note's versions:
On create and "origin" INSERT - fill it with MAX(vcsGroupId) + 1,
On edit and "version" INSERT - fill it with the "sibling"/"origin" vcsGroupId value (fetched with a CTE),
On view and "normal" SELECT - perform framing with a window function (PARTITION BY vcsGroupId, ORDER BY id or timestamp DESC), then just use the 1st row (or order ascending and use the last), as sketched below,
On view and "origin" SELECT - almost the same, but reversed...
It seems easier, doesn't it?
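A minimal sketch of that "latest version per group" SELECT, against a hypothetical notes(id, vcsGroupId, noteText) table:
-- Rank rows within each vcsGroupId partition by descending id,
-- then keep only the newest version of each Note.
SELECT id, vcsGroupId, noteText
FROM (
    SELECT id, vcsGroupId, noteText,
           ROW_NUMBER() OVER (PARTITION BY vcsGroupId ORDER BY id DESC) AS rn
    FROM notes
) AS ranked
WHERE rn = 1;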
What you are doing is playing with fire. I don't know exactly what can go wrong with your trigger (besides the fact that it already doesn't work for you), but I have a strong feeling that many things can and will go wrong. For example: what if you insert multiple rows in a single statement? I don't think the engine will update the information_schema for each row. And it's going to be even worse if you run an INSERT ... SELECT statement. So using the information_schema for this task is a very bad idea.
However - the first question is: why do you need it at all? If you need to save the "origin ID", then you probably plan to update the id column. That is already a bad idea. And assuming you find a way to solve your problem - what guarantees that the originId will not be changed outside the trigger?
However - the alternative is to keep the originId column blank on insert, and update it in an UPDATE trigger instead.
Assuming this is your table:
create table vcs_test(
id int auto_increment,
origin_id int null default null,
primary key (id)
);
Use the UPDATE trigger to save the origin ID, when it is changed for the first time:
delimiter //
create trigger vcs_test_before_update before update on vcs_test for each row begin
  if new.id <> old.id then
    set new.origin_id = coalesce(old.origin_id, old.id);
  end if;
end //
delimiter ;
Your SELECT query would then be something like this:
select *, coalesce(origin_id, id) as origin_id from vcs_test;
See demo on db-fiddle
You can even save the full id history with the following schema:
create table vcs_test(
id int auto_increment,
id_history text null default null,
primary key (id)
);
delimiter //
create trigger vcs_test_before_update before update on vcs_test for each row begin
  if new.id <> old.id then
    set new.id_history = concat_ws(',', old.id_history, old.id);
  end if;
end //
delimiter ;
The following test
insert into vcs_test (id) values (null), (null), (null);
update vcs_test set id = 5 where id = 2;
update vcs_test set id = 4 where id = 5;
select *, concat_ws(',', id_history, id) as full_id_history
from vcs_test;
will return
| id | id_history | full_id_history |
| --- | ---------- | --------------- |
| 1 | | 1 |
| 3 | | 3 |
| 4 | 2,5 | 2,5,4 |
View on DB Fiddle

Can't use parameters in subquery when selecting from view

System: MariaDB 10.3.15, python 3.7.2, mysql.connector python package
I'm having trouble determining the exact cause of a problem, possibly a bug in MariaDB/MySQL, when executing the query below against the described table structure. The confusing part is the error message
1356 (HY000): View 'test_project.denormalized' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
which at first seems related to the problem, but the further I dig into why this is happening, the more I get the feeling this error message is a red herring.
Steps to reproduce:
CREATE DATABASE `test_project`;
USE `test_project`;
CREATE TABLE `normalized` (
`id` INT NOT NULL AUTO_INCREMENT,
`foreign_key` INT NOT NULL,
`name` VARCHAR(45) NOT NULL,
`value` VARCHAR(45) NULL,
PRIMARY KEY (`id`));
INSERT INTO `normalized` (`foreign_key`, `name`, `value`) VALUES
(1, 'attr_1', '1'),
(1, 'attr_2', '2'),
(2, 'attr_1', '3'),
(2, 'attr_2', '4');
CREATE OR REPLACE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `denormalized` AS
select
max(`iq`.`foreign_key`) AS `foreign_key`,
max(`iq`.`attr_1`) AS `attribute_1`,
max(`iq`.`attr_2`) AS `attribute_2`
from (
select
`foreign_key` AS `foreign_key`,
if(`name` = 'attr_1',`value`,NULL) AS `attr_1`,
if(`name` = 'attr_2',`value`,NULL) AS `attr_2`
from `normalized`
) as `iq`
group by `iq`.`foreign_key`;
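As a sanity check before involving Python, selecting from the view directly should return one pivoted row per foreign_key:
SELECT * FROM denormalized;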
Using Python, connect to the database and execute the following query:
conn = mysql.connector.connect(host="somehost", user="someuser", password="somepassword", database="test_project")
cursor = conn.cursor()
query = """select * from denormalized as d
where d.`foreign_key` in
(
SELECT distinct(foreign_key)
FROM normalized
where value = %s
);"""
cursor.execute(query, ["2"])
results = cursor.fetchall()
Further information: at first I thought it was obviously a privilege issue, but even using root for everything and double-checking hosts and specific privileges didn't change anything.
Then I dug deeper into what the queries and views involved do (the test case above is a reduced version of what's actually in our database) and tested each part. Selecting from the view works. Running the query of the view works. Selecting from the view with a static subquery works. In fact, replacing the view in the problematic query with its definition works too.
I've boiled it down to selecting from the view using a subquery in the WHERE clause with parameters in that subquery. This causes the error to appear. Using a static subquery or replacing the view with its definition works just fine; it's only in this specific circumstance that it fails.
And I have no idea why.
The GROUP BY does not make sense; did you really mean one of these?
This returns one row:
select max(`foreign_key`) AS `foreign_key`,
max(if(`name` = 'attr_1', `value`,NULL)) AS `attribute_1`,
max(if(`name` = 'attr_2', `value`,NULL)) AS `attribute_2`
from `normalized`;
This uses the GROUP BY and returns one row per foreign_key:
select `foreign_key`,
max(if(`name` = 'attr_1', `value`,NULL)) AS `attribute_1`,
max(if(`name` = 'attr_2', `value`,NULL)) AS `attribute_2`
from `normalized`
group by `foreign_key`;
Your python query is probably better in either of these formulations:
select d.*
FROM ( SELECT DISTINCT foreign_key
       FROM normalized
       WHERE value = %s ) AS f
JOIN denormalized AS d ON d.foreign_key = f.foreign_key;
select d.*
FROM denormalized as d
WHERE EXISTS ( SELECT 1
FROM normalized
where foreign_key = d.foreign_key
AND value = %s )
They would benefit from INDEX(value, foreign_key).
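A sketch of adding that index (the index name is arbitrary):
ALTER TABLE normalized ADD INDEX idx_value_fk (value, foreign_key);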

INSERT INTO ... SELECT if destination column has a generated column

I have some tables:
CREATE TABLE `asource` (
`id` int(10) unsigned NOT NULL DEFAULT '0'
);
CREATE TABLE `adestination` (
`id` int(10) unsigned NOT NULL DEFAULT '0',
`generated` tinyint(1) GENERATED ALWAYS AS (id = 2) STORED NOT NULL
);
I copy a row from asource to adestination:
INSERT INTO adestination
SELECT asource.*
FROM asource;
The above generates an error:
Error Code: 1136. Column count doesn't match value count at row 1
OK, it's quite strange to require me to mention the generated column. But fine, I add that column to the query:
INSERT INTO adestination
SELECT asource.*, NULL AS `generated`
FROM asource;
This worked fine in 5.7.10. However, it generates an error in 5.7.11 (due to a fix):
Error Code: 3105. The value specified for generated column 'generated' in table 'adestination' is not allowed.
Ok, next try:
INSERT INTO adestination
SELECT asource.*, 1 AS `generated`
FROM asource;
But still the same error. I have tried 0, TRUE, and FALSE, but the error persists.
DEFAULT is stated as the only allowed value (in the specs and docs). However, the following generates a syntax error (DEFAULT is not supported there):
INSERT INTO adestination
SELECT asource.*, DEFAULT AS `generated`
FROM asource;
So, how can I copy a row from one table to another using INSERT INTO ... SELECT if the destination table adds some columns, some of which are GENERATED?
The code calling this query is generic and has no knowledge of which columns a particular table has; it just knows which extra columns the destination table has. The source table is a live table; the destination table is a historical version of the source table. It has a few extra columns, like the id of the user who made the change, the type of the change (insert, update, delete), when it was made, etc.
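One way such generic code can discover the insertable (non-generated) columns is to ask information_schema; for generated columns the EXTRA field contains 'VIRTUAL GENERATED' or 'STORED GENERATED', so a sketch like this builds the column list dynamically:
SELECT GROUP_CONCAT(COLUMN_NAME ORDER BY ORDINAL_POSITION) AS insertable_columns
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'adestination'
  AND EXTRA NOT LIKE '%GENERATED%';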
Sadly, this is just how MySQL works now, to "conform to SQL standards".
The only value that a generated column can accept in an UPDATE, INSERT, etc. is DEFAULT; the other option is to omit the column altogether.
My poor man's workaround is to just drop the generated column while I'm working with the data (like when importing a dump) and then add the generated column expression back afterwards, as sketched below:
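A sketch of that workaround against the adestination table above:
ALTER TABLE adestination DROP COLUMN generated;
-- ... bulk-load / import the data here ...
ALTER TABLE adestination
ADD COLUMN generated tinyint(1) GENERATED ALWAYS AS (id = 2) STORED NOT NULL;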
You must declare the columns and omit the generated one:
INSERT INTO adestination (id)
SELECT id
FROM asource;
It is best practice to list out the columns explicitly, using NULL for an auto-incremented id field and omitting generated columns (field1 and field2 stand in for your real columns):
INSERT INTO adestination
(id,
field1,
field2)
SELECT
NULL,
asource.field1,
asource.field2
FROM asource;

Strange behavior when querying a varchar field

I came across this strange behavior when I was hunting for a bug in a system. Consider the following.
We have a MySQL table which has a varchar(100) column. See the following SQL script.
create table user (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(100) NOT NULL,
  `username` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `user_id` (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=129 DEFAULT CHARSET=latin1;
insert into user(user_id, username) values('20120723145614834', 'user1');
insert into user(user_id, username) values('20120723151128642', 'user1');
When I execute the following query, I receive 0 results.
select * from user where user_id=20120723145614834;
But when I execute the following, I get the result (note the single quotes).
select * from user where user_id='20120723145614834';
This is expected, since the user_id field is a varchar. The strange thing is that both of the following queries yield the result.
select * from user where user_id=20120723151128642;
select * from user where user_id='20120723151128642';
Can anybody explain the reason for this strange behavior? My MySQL version is 5.1.63-0ubuntu0.11.10.1.
Check the MySQL documentation, section 12.2, Type Conversion in Expression Evaluation:
Comparisons that use floating-point numbers (or values that are converted to floating-point numbers) are approximate because such numbers are inexact. This might lead to results that appear inconsistent:
mysql> SELECT '18015376320243458' = 18015376320243458;
-> 1
mysql> SELECT '18015376320243459' = 18015376320243459;
-> 0
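The key point is that a string-to-number comparison converts both sides to DOUBLE, which holds only about 15-16 significant decimal digits, so 17-digit ids like yours may or may not survive the conversion intact; that is why one of your ids matches and the other does not. Forcing the conversion makes this visible (adding 0e0 coerces a value to floating point):
SELECT '20120723145614834' + 0e0, '20120723151128642' + 0e0;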
So it's best to always use the right data type in SQL.