I have a database with a lot of users (schemas), for example:
user_1
user_2
user_3
...
user_303
All of them have the same tables, for example:
CREATE TABLE `messages` (
`id` int(11) NOT NULL,
`content` text COLLATE utf8mb3_unicode_ci NOT NULL,
`date` datetime NOT NULL,
`viewed` int(11) NOT NULL,
`forId` int(11) NOT NULL,
`fromId` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3 COLLATE=utf8mb3_unicode_ci;
INSERT INTO `messages` (`id`, `content`, `date`, `viewed`, `forId`, `fromId`) VALUES
(1, 'Hello World', '2020-06-04 14:49:17', 1, 2106, 1842),
(2, 'Hi there', '2020-06-04 14:49:39', 1, 2106, 1842),
(3, 'test 1', '2022-01-03 11:40:43', 1, 3006, 3006),
(4, 'Test 2', '2022-01-20 12:01:52', 1, 1842, 1842);
What I want is a query like this, for example:
USE user_1, user_2, user_3;
SELECT * FROM `messages` WHERE `content` LIKE '%Hi%';
I don't know if this is possible as a single SQL query. Another option is to write a small PHP script with a foreach loop, but then I need a command that gives me a list of all user schemas: user_1 through user_303.
The users do not run from 1 to 303 without gaps; some users have been deleted, so it can happen that user_200 no longer exists.
Hope someone here can help me out
You can use the following to write the query you want.
USE information_schema;
SELECT concat("SELECT * FROM ", table_schema,".",table_name, " UNION ALL ")
FROM tables WHERE table_name = 'messages';
You will obtain something like this:
SELECT * FROM base.messages UNION ALL
SELECT * FROM c.messages UNION ALL
You can then run this query to obtain what you want.
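If you'd rather not copy the generated text and trim the trailing UNION ALL by hand, you can build the whole statement into a user variable and run it as a prepared statement. A minimal sketch, assuming every schema matches the user\_% naming pattern:
-- GROUP_CONCAT joins the per-schema SELECTs, so no trailing UNION ALL is left over
SET SESSION group_concat_max_len = 1000000;  -- the default of 1024 is too small for ~300 schemas
SELECT GROUP_CONCAT(
         CONCAT('SELECT * FROM `', table_schema, '`.`', table_name, '`')
         SEPARATOR ' UNION ALL ')
INTO @sql
FROM information_schema.tables
WHERE table_name = 'messages'
  AND table_schema LIKE 'user\_%';
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;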
In Oracle, CONCAT accepts only two arguments, so you have to nest calls to concatenate more than two strings.
Also, you should query ALL_TABLES instead of information_schema:
SELECT
concat('SELECT MESSAGE FROM ', concat(OWNER, concat('.', concat(TABLE_NAME, ' UNION ALL '))))
FROM
ALL_TABLES
WHERE
OWNER LIKE 'user_%';
Don't forget to delete the last UNION ALL from the result.
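A simpler Oracle equivalent uses the || concatenation operator instead of the nested CONCAT calls. Note the ESCAPE clause, since _ is itself a single-character wildcard in LIKE, and the assumption that the owner names are stored in uppercase, as Oracle usually does:
SELECT 'SELECT message FROM ' || owner || '.' || table_name || ' UNION ALL '
FROM all_tables
WHERE owner LIKE 'USER\_%' ESCAPE '\';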
My database consists of individual job contracts. I am updating some information to enhance the quality of the data. More precisely, I am updating the information regarding workers' residence codes. I am showing an example of my database in the following image (the .csv version can be found here).
The variables are explained below.
id -----------> "Primary key" [indexed]
worker_id ----> "Id associated to each individual/worker" [indexed]
dt_start -----> "Starting date of the job contract"
dt_end -------> "End date of the job contract"
cod_res ------> "Old residence code"
cod_res_rev --> "New residence code"
id_lag -------> "Previous id, if the 'worker_id' is the same" [indexed]
id_lead ------> "Subsequent id, if the 'worker_id' is the same" [indexed]
As you can notice, the column cod_res_rev is full of NULL values. This is because the reconstruction of the variable cod_res_rev with the updated residence values was based solely on specific contracts (those for which the worker had an actual change of residence - but this is redundant for the purposes of my question). Therefore, my goal is to fill each NULL value of the column cod_res_rev with the previous non-NULL value, carrying it forward until the next non-NULL value is reached, and to do this separately for each worker. The result should be something like this.
I attempted to achieve my goal through the following procedure.
-- The loop is performed based on the maximum number of entries per worker in the database identified through the table 'max_count'.
drop table if exists max_count;
create table max_count
as select worker_id, count(*) n
from ml_arm
group by worker_id;
alter table max_count add unique index (worker_id);
DROP PROCEDURE IF EXISTS doiterate;
delimiter //
CREATE PROCEDURE doiterate()
BEGIN
DECLARE total INT unsigned DEFAULT 0;
WHILE total <= (select MAX(n) from max_count) DO
update ml_arm a
left outer join ml_arm b on a.id_lag = b.id
set a.cod_res_rev =
case
when a.cod_res_rev is NULL and a.worker_id = b.worker_id and b.cod_res_rev is not NULL
then b.cod_res_rev
else a.cod_res_rev
end;
SET total = total + 1;
END WHILE;
END//
delimiter ;
CALL doiterate();
However, I do not believe this is the optimal way to update my table. In fact, my database consists of about 25 million rows, and the value of select MAX(n) from max_count is about 4,000. I kindly ask you for any suggestion on faster approaches to update my table. I am using MySQL 8.0.22. Thank you in advance.
Finally, here is a command to create a sample table of my database with a handful of entries.
drop table if exists ml_arm;
create table ml_arm (
id MEDIUMINT NOT NULL AUTO_INCREMENT,
worker_id int,
dt_start date,
dt_end date,
cod_res varchar(50),
cod_res_rev varchar(50),
id_lag int,
id_lead int,
PRIMARY KEY (id)
);
insert into
ml_arm(id, worker_id, dt_start, dt_end, cod_res, cod_res_rev, id_lag, id_lead)
values
('12', '20', '2014-05-02', '2014-07-08', '', 'I040', NULL, '13'),
('13', '20', '2017-01-14', '2017-01-31', '', NULL, '12', '14'),
('14', '20', '2017-11-06', '2017-12-15', 'I040', NULL, '13', NULL),
('20', '29', '2014-11-24', '2017-02-11', '', 'N.D.', NULL, NULL),
('21', '42', '2016-01-22', '2016-05-05', 'G582', 'G582', NULL, NULL),
('23', '45', '2013-08-07', '2014-04-06', 'G582', 'G582', NULL, '24'),
('24', '45', '2014-05-07', '2014-05-10', 'G582', NULL, '23', NULL),
('25', '48', '2012-08-11', '2012-08-31', 'G582', 'G582', NULL, '26'),
('26', '48', '2013-08-10', '2013-08-31', 'G582', NULL, '25', NULL),
('53', '71', '2016-12-01', '2017-05-31', '', 'N.D.', NULL, '54'),
('54', '71', '2017-06-01', '2020-05-29', '', NULL, '53', '55'),
('55', '71', '2020-06-01', '2099-01-01', '', NULL, '54', NULL)
;
Following Gordon Linoff's answer in this thread, the query below seems to work:
UPDATE ml_arm t1
JOIN (
SELECT id, #s:=IF(cod_res_rev IS NULL, #s, cod_res_rev) cod_res_rev
FROM (SELECT * FROM ml_arm ORDER BY id) r,
(SELECT #s:= NULL) t
) t2
ON t1.id = t2.id
SET t1.cod_res_rev = t2.cod_res_rev;
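A side note: on MySQL 8.0 the user-variable assignment inside SELECT is deprecated, and the same carry-forward can be expressed with window functions, partitioned per worker so one worker's value never leaks into the next. An untested sketch of that idea:
UPDATE ml_arm t1
JOIN (
    SELECT id,
           FIRST_VALUE(cod_res_rev) OVER (
               PARTITION BY worker_id, grp ORDER BY id
           ) AS new_val
    FROM (
        -- COUNT(col) skips NULLs, so grp only increases on non-NULL rows;
        -- each run of NULLs therefore shares the grp of the last non-NULL row before it
        SELECT id, worker_id, cod_res_rev,
               COUNT(cod_res_rev) OVER (PARTITION BY worker_id ORDER BY id) AS grp
        FROM ml_arm
    ) marked
) t2 ON t1.id = t2.id
SET t1.cod_res_rev = t2.new_val;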
I have a table with a json column in which I'm storing a list of ids for contract types. I want to do a join on the contract_types table so that I can get a concatenated list of contract type names for their ids.
CREATE TABLE `my_alerts` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`name` VARCHAR(255) NOT NULL,
`criteria` JSON NULL DEFAULT NULL,
PRIMARY KEY (`id`)
);
CREATE TABLE `contract_types` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`name` VARCHAR(255) NOT NULL,
PRIMARY KEY (`id`)
);
INSERT INTO contract_types (id, name) VALUES (1, 'Full time');
INSERT INTO contract_types (id, name) VALUES (2, 'Part time');
INSERT INTO my_alerts (id, name, criteria) VALUES (1, 'test', '{"contractTypes": ["1", "2"]}');
I tried the following query, but it doesn't work:
SELECT
a.id,
a.name,
GROUP_CONCAT(c.name SEPARATOR ', ') as contractTypes
FROM my_alerts a
LEFT JOIN contract_types c on JSON_CONTAINS(a.criteria, CAST(c.id as JSON), '$.contractTypes')
group by a.id
I can only get the query to work if I change the json column and store the array values without quotes:
{"contractTypes": [1, 2]}
Unfortunately I'm unable to change the way the array values are stored i.e. without quotes.
How can I get the above json_contains query to work when the json array values are stored in the following two formats:
{"contractTypes": ["1", "2"]}
or
{"contractTypes": ["1,2"]}
UPDATE:
I've figured out how I can join if json is in the second format i.e. a comma separated list as follows:
LEFT JOIN contract_types c on find_in_set(c.id, JSON_EXTRACT(a.criteria, '$.contractTypes'))
Now I only need some help with joining when the array values have double quotes.
Try this and adjust it to what you need:
mysql> SELECT
-> CAST(JSON_ARRAY(GROUP_CONCAT(`id` SEPARATOR ', ')) AS JSON)
-> FROM `contract_types`;
+-------------------------------------------------------------+
| CAST(JSON_ARRAY(GROUP_CONCAT(`id` SEPARATOR ', ')) AS JSON) |
+-------------------------------------------------------------+
| ["1, 2"] |
+-------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> SELECT
-> CAST(REPLACE(GROUP_CONCAT(JSON_ARRAY(CAST(`id` AS CHAR))), '],[', ', ') AS JSON)
-> FROM `contract_types`;
+----------------------------------------------------------------------------------+
| CAST(REPLACE(GROUP_CONCAT(JSON_ARRAY(CAST(`id` AS CHAR))), '],[', ', ') AS JSON) |
+----------------------------------------------------------------------------------+
| ["1", "2"] |
+----------------------------------------------------------------------------------+
1 row in set (0.00 sec)
I have figured out this issue.
If your json column is called 'user_ids' and it has values like below:
["1", "2", "3", "4", "5"]
You can do this
SELECT JSON_CONTAINS(user_ids, '"2"', '$') as u;
The 'u' will return 1.
The key to this solution is to wrap the number in double quotes first, and then put single quotes around that.
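Applying that idea back to the original join: JSON_QUOTE turns the id into a JSON string literal ('2' becomes '"2"'), so it matches the double-quoted array elements. A sketch against the tables above:
SELECT a.id,
       a.name,
       GROUP_CONCAT(c.name SEPARATOR ', ') AS contractTypes
FROM my_alerts a
-- JSON_QUOTE(CAST(c.id AS CHAR)) produces a JSON string literal like "2",
-- which JSON_CONTAINS can match against the quoted array elements
LEFT JOIN contract_types c
    ON JSON_CONTAINS(a.criteria, JSON_QUOTE(CAST(c.id AS CHAR)), '$.contractTypes')
GROUP BY a.id, a.name;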
CREATE TABLE myCTGlobalFootprint (
geoID INT NOT NULL AUTO_INCREMENT,
geoName VARCHAR(20) NOT NULL,
PRIMARY KEY (geoID)
);
INSERT INTO myCTGlobalFootprint
(geoName)
VALUES
('Canada'),
('United States'),
('Europe'),
('International Misc.');
It's throwing an error at line 15... any insight would be DEEPLY APPRECIATED!!
Use UNION, like this:
INSERT INTO myCTGlobalFootprint (geoName)
select 'Canada'
UNION
select 'United States'
UNION
select 'Europe'
UNION
select 'International Misc.';
Otherwise you have to write four INSERT INTO statements like:
INSERT INTO myCTGlobalFootprint(geoName) VALUES('Canada');
INSERT INTO myCTGlobalFootprint(geoName) VALUES('United States');
INSERT INTO myCTGlobalFootprint(geoName) VALUES('Europe');
INSERT INTO myCTGlobalFootprint(geoName) VALUES('International Misc.');
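One caveat: UNION removes duplicate rows, which is harmless here because the four names are distinct, but UNION ALL is the safer equivalent if duplicates must survive:
INSERT INTO myCTGlobalFootprint (geoName)
SELECT 'Canada'
UNION ALL SELECT 'United States'
UNION ALL SELECT 'Europe'
UNION ALL SELECT 'International Misc.';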
Is there any way to use a result value to specify which table to join?
I'd like to do something like
SELECT id, some_number, ... FROM sometable NATURAL JOIN someothertable_$some_number;
I know that there's nothing like this in relational algebra, so I probably won't succeed; I just wanted to ask to be sure.
I don't want to use any SQL scripts.
Runnable Example Here: http://sqlfiddle.com/#!2/5e92c/36
Code to setup tables for this example:
create table if not exists someTable
(
someTableId bigint not null auto_increment
, tableId int not null
, someOtherTableId bigint not null
, primary key (someTableId)
, index (tableId, someOtherTableId)
);
create table if not exists someOtherTable_$1
(
someOtherTableId bigint not null auto_increment
, data varchar(128) character set utf8
, primary key (someOtherTableId)
);
create table if not exists someOtherTable_$2
(
someOtherTableId bigint not null auto_increment
, data varchar(128) character set utf8
, primary key (someOtherTableId)
);
insert sometable (tableId, someOtherTableId) values (1, 1);
insert sometable (tableId, someOtherTableId) values (1, 2);
insert sometable (tableId, someOtherTableId) values (2, 2);
insert sometable (tableId, someOtherTableId) values (2, 3);
insert someothertable_$1(data) values ('table 1 row 1');
insert someothertable_$1(data) values ('table 1 row 2');
insert someothertable_$1(data) values ('table 1 row 3');
insert someothertable_$2(data) values ('table 2 row 1');
insert someothertable_$2(data) values ('table 2 row 2');
insert someothertable_$2(data) values ('table 2 row 3');
STATIC SOLUTION
Here's a solution if your tables are fixed (e.g. in the example you only have someOtherTable_$1 and _$2, and you don't need the code to change automatically as new tables are added):
select st.someTableId
, coalesce(sot1.data, sot2.data)
from someTable st
left outer join someOtherTable_$1 sot1
on st.tableId = 1
and st.someOtherTableId = sot1.someOtherTableId
left outer join someOtherTable_$2 sot2
on st.tableId = 2
and st.someOtherTableId = sot2.someOtherTableId;
DYNAMIC SOLUTION
If the number of tables may change at runtime, you'd need to write dynamic SQL. Beware: with every additional table you take a performance hit. I wouldn't recommend this for a production system, but it's a fun challenge. If you can describe your tool set and what you're hoping to achieve, we may be able to give you a few pointers on a more suitable way forward.
-- user variables (@...) hold the generated SQL fragments
select group_concat(distinct ' sot' , cast(tableId as char) , '.data ')
into @coalesceCols
from someTable;
select group_concat(distinct ' left outer join someOtherTable_$', cast(tableId as char), ' sot', cast(tableId as char), ' on st.tableId = ', cast(tableId as char), ' and st.someOtherTableId = sot', cast(tableId as char), '.someOtherTableId ' separator '')
into @tableJoins
from someTable;
set @sql = concat('select someTableId, coalesce(', @coalesceCols ,') from someTable st', @tableJoins);
prepare stmt from @sql;
execute stmt;
deallocate prepare stmt;
Suppose I have this database table (sample code below) that stores the relationship between two lists (requirements and test cases, in my case), and I want to create a table with rows showing test cases and columns showing requirements, with an indicator showing that a relationship exists.
A few limitations:
I don't have the luxury of changing the db structure as this belongs to an open source test case management system (TestLink).
It's possible to write some code for this, but I'm hoping it can be done in MySQL.
Ah, and yes, it uses MySQL, so this would have to work in that environment.
This functionality used to exist, but has been taken out because typically, this type of work brings the db to its knees when there are tens-of-thousands of testcases and requirements.
create table pivot (
req_id int(11),
testcase_id int(11)
) ;
/*Data for the table pivot */
insert into pivot(req_id,testcase_id) values (1,1);
insert into pivot(req_id,testcase_id) values (2,2);
insert into pivot(req_id,testcase_id) values (3,3);
insert into pivot(req_id,testcase_id) values (4,1);
insert into pivot(req_id,testcase_id) values (5,2);
insert into pivot(req_id,testcase_id) values (6,3);
insert into pivot(req_id,testcase_id) values (2,1);
insert into pivot(req_id,testcase_id) values (3,2);
What I want to get out of the query is a table that looks something like this:
    1  2  3  4  5  6
1   x  x     x
2      x  x     x
3         x        x
Note: the rows are the testcase_ids and the columns are the req_ids.
Anyone have a tip on how to get this with just SQL?
The structure below is a lot more efficient.
Create one table for test cases:
create table testCases(
id int(11) auto_increment,
testcase varchar(200),
primary key(id)
);
One table for requirements:
create table requirements(
id int(11) auto_increment,
requirements varchar(200),
primary key(id)
);
Then map the relationship in a third table:
create table matchRequirementsToTests(
requirement_id int(11),
testcase_id int(11),
primary key(requirement_id, testcase_id),
foreign key (requirement_id) references requirements(id),
foreign key (testcase_id) references testCases(id)
);
select testcase_id,
if(sum(req_id = 1), 'X', '') as '1',
if(sum(req_id = 2), 'X', '') as '2',
if(sum(req_id = 3), 'X', '') as '3',
if(sum(req_id = 4), 'X', '') as '4',
if(sum(req_id = 5), 'X', '') as '5',
if(sum(req_id = 6), 'X', '') as '6'
from pivot
group by testcase_id;
It's ugly, but it works:
+-------------+---+---+---+---+---+---+
| testcase_id | 1 | 2 | 3 | 4 | 5 | 6 |
+-------------+---+---+---+---+---+---+
| 1 | X | X | | X | | |
| 2 | | X | X | | X | |
| 3 | | | X | | | X |
+-------------+---+---+---+---+---+---+
3 rows in set (0.00 sec)
I now have a name for what I'm trying to accomplish. It's a 'dynamic crosstab'. Here is how I got to the solution. Thanks to http://rpbouman.blogspot.com/2005/10/creating-crosstabs-in-mysql.html for the clear instructions for getting here.
Lines 1-20 - Set up a table to use for testing.
Lines 22-29 - a 'static' crosstab query, assuming I know how many requirements I've got.
Thanks D Mac for the solution you gave :)
Lines 30-44 - A query that dynamically generates the static query above.
Lines 45-72 - This is where I’m having the problem. The intent is to create a stored procedure that returns the result of the dynamic query. MySQL is saying there is a syntax issue, but I don't see how to fix it. Any thoughts?
drop table if exists pivot;
create table `pivot` (
`req_id` int(11),
`testcase_id` int(11)
);
/*Data for the table `pivot` */
insert into `pivot`(`req_id`,`testcase_id`) values (1,4);
insert into `pivot`(`req_id`,`testcase_id`) values (2,4);
insert into `pivot`(`req_id`,`testcase_id`) values (3,4);
insert into `pivot`(`req_id`,`testcase_id`) values (4,7);
insert into `pivot`(`req_id`,`testcase_id`) values (1,7);
insert into `pivot`(`req_id`,`testcase_id`) values (2,12);
insert into `pivot`(`req_id`,`testcase_id`) values (3,12);
insert into `pivot`(`req_id`,`testcase_id`) values (4,4);
select * from pivot;
select testcase_id
, if(sum(req_id = 1), 1, 0)
, if(sum(req_id = 2), 1, 0)
, if(sum(req_id = 3), 1, 0)
, if(sum(req_id = 4), 1, 0)
from pivot
group by testcase_id;
select concat(
'select testcase_id','\n'
, group_concat(
concat(
', if(sum(req_id = ',p2.req_id,'), 1, 0)','\n'
)
order by p2.req_id
separator ''
)
, 'from pivot','\n'
, 'group by testcase_id;','\n'
) statement
from (select distinct req_id from pivot) p2;
CREATE PROCEDURE p_coverage()
LANGUAGE SQL
NOT DETERMINISTIC
CONTAINS SQL
SQL SECURITY DEFINER
begin
select concat(
'select testcase_id','\n'
, group_concat(
concat(
', if(sum(req_id = ',p2.req_id,'), 1, 0)','\n'
)
order by p2.req_id
separator ''
)
, 'from pivot','\n'
, 'group by testcase_id;','\n'
) statement
into #coverage_query
from pivot p2
order by p2.req_id;
prepare coverage from #coverage_query;
execute coverage;
deallocate prepare coverage;
end;
select * from pivot;
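For the record, two things in the procedure above look like the likely culprits (this is a reading of the code, not a verified fix): in MySQL, # starts a comment, so into #coverage_query truncates the statement (user variables take the @ prefix), and without a DELIMITER change the first ; inside the body ends the CREATE PROCEDURE prematurely. A corrected sketch:
DROP PROCEDURE IF EXISTS p_coverage;
DELIMITER //
CREATE PROCEDURE p_coverage()
NOT DETERMINISTIC
CONTAINS SQL
SQL SECURITY DEFINER
BEGIN
  SELECT concat(
           'select testcase_id', '\n'
         , group_concat(
             concat(', if(sum(req_id = ', p2.req_id, '), 1, 0)', '\n')
             order by p2.req_id
             separator ''
           )
         , 'from pivot', '\n'
         , 'group by testcase_id'  -- built without a trailing ';', which PREPARE does not need
         )
  INTO @coverage_query
  FROM (select distinct req_id from pivot) p2;  -- distinct avoids duplicated columns
  PREPARE coverage FROM @coverage_query;
  EXECUTE coverage;
  DEALLOCATE PREPARE coverage;
END//
DELIMITER ;
CALL p_coverage();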