Insert value into specific column from another table with groupby - mysql

I am using MySQL. I want to insert the result of a GROUP BY over a datetime column into a specific column of another table (using WHERE, maybe). Let's say:
I have two tables (a, b). From table a, I want to get the total number of records during one hour (using its datetime column), then insert that result into table b, but into a specific row (an ID value that already exists there).
This is my code, which produces an error:
INSERT INTO b(value)
WHERE ID=15
SELECT DAY COUNT(*)
FROM a
WHERE date >= '2015-09-19 00:00:00' AND date < '2015-09-19 00:59:59'
GROUP BY DAY(date),HOUR(date);
Is it possible to write a query for this case?
Thank you very much for any reply!

Schema
create table tA
(   id int auto_increment primary key,
    theDate datetime not null,
    -- other stuff
    key(theDate) -- make it snappy fast
);
create table tB
(   myId int primary key, -- by definition PK is not null
    someCol int not null
);
-- truncate table tA;
-- truncate table tB;
insert tA(theDate) values
('2015-09-19'),
('2015-09-19 00:24:21'),
('2015-09-19 07:24:21'),
('2015-09-20 00:00:00');
insert tB(myId,someCol) values (15,-1); -- (-1) just for the heck of it
insert tB(myId,someCol) values (16,-1); -- (-1) just for the heck of it
The Query
update tB
set someCol=(select count(*) from tA where theDate between '2015-09-19 00:00:00' and '2015-09-19 00:59:59')
where tB.myId=15;
The Results
select * from tB;
+------+---------+
| myId | someCol |
+------+---------+
|   15 |       2 |
|   16 |      -1 |
+------+---------+
Only the row with myId=15 is touched.
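If you ever need to fill several tB rows in one statement (say, one per hour bucket), here is a sketch using UPDATE ... JOIN; the hour-to-myId mapping is a made-up assumption for illustration only:
update tB
join ( select hour(theDate) as hr, count(*) as cnt
       from tA
       where theDate >= '2015-09-19' and theDate < '2015-09-20'
       group by hour(theDate)
     ) as agg
  on tB.myId = agg.hr -- hypothetical mapping from hour to row id
set tB.someCol = agg.cnt;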

Related

Create a view where every line comes from a different table

I have tables for multiple stations with data from each station: timestamp, error etc. The total number of rows in each table is the frequency of errors at that station. The name of the station is in the table name.
CREATE TABLE Station_00 (error INT, timestamp DATETIME);
INSERT INTO Station_00 VALUES (1, '2020/10/05 12-12-12'),(2,'2020/10/05 12-12-15'),(3,'2020/10/05 12-12-20'),(4,'2020/10/05 12-12-25'),(5,'2020/10/05 12-12-30'),(6,'2020/10/05 12-12-35'),(7,'2020/10/05 12-12-37'),(8,'2020/10/05 12-12-40');
CREATE TABLE Station_01 (error INT, timestamp DATETIME);
INSERT INTO Station_01 VALUES (1, '2020/10/05 12-14-12'),(2,'2020/10/05 12-14-15'),(3,'2020/10/05 12-14-20'),(4,'2020/10/05 12-14-25');
CREATE TABLE Station_02 (error INT, timestamp DATETIME);
INSERT INTO Station_02 VALUES (1, '2020/10/05 12-14-17'),(2,'2020/10/05 12-14-20'),(3,'2020/10/05 12-14-26'),(4,'2020/10/05 12-14-29'),(5,'2020/10/07 12-14-29');
CREATE TABLE Station_03 (error INT, timestamp DATETIME);
INSERT INTO Station_03 VALUES (1, '2020/10/05 12-17-12'),(2,'2020/10/05 12-17-15'),(3,'2020/10/07 12-14-20'),(4,'2020/10/07 12-14-25'),(5,'2020/10/07 12-14-30'),(6,'2020/10/07 12-16-25');
The error values are random, not necessarily in ascending order as shown here.
I want to create a VIEW with as many rows as there are stations and two columns: station (the name of each table) and frequency (the number of rows in each table). Is there any way to do that in one SELECT?
I would like something like:
+---------+-----------+
| Station | Frequency |
+---------+-----------+
| 00      | 8         |
| 01      | 4         |
| 02      | 5         |
| 03      | 6         |
+---------+-----------+
You could use union all:
create view myview as
select '00' as station, count(*) as cnt from Station_00
union all select '01', count(*) from Station_01
union all select '02', count(*) from Station_02
union all select '03', count(*) from Station_03;
Note, however, that all these tables should probably be consolidated in a single table, with an additional column that represents the station. Then, you could just do:
select station, count(*) cnt from table_station group by station;
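A minimal sketch of that consolidated table (column names borrowed from the question, table name from the answer above):
CREATE TABLE table_station (
    station   CHAR(2)  NOT NULL, -- '00', '01', ...
    error     INT      NOT NULL,
    timestamp DATETIME NOT NULL
);
-- one-off migration from the per-station tables, e.g.:
INSERT INTO table_station SELECT '00', error, timestamp FROM Station_00;
INSERT INTO table_station SELECT '01', error, timestamp FROM Station_01;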

Declare a table has a column that is the product of two columns

Table Product
+----+-------+-------+----------+-------+
| id | name  | price | quantity | total |
+----+-------+-------+----------+-------+
|  1 | food  |    50 |        1 |    50 |
|  2 | drink |    20 |        2 |    40 |
|  3 | dress |   100 |        3 |   300 |
+----+-------+-------+----------+-------+
How do I declare a table that has a column that is the product of two columns?
I have this code:
CREATE TABLE [dbo].[Orders] (
[Id] INT IDENTITY (1, 1) NOT NULL,
[ProductName] NCHAR (70) NULL,
[Price] INT NULL,
[Quantity] INT NULL,
[Total] INT NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);
Sounds like you want a VIEW.
The example in the MySQL manual is exactly what you're describing:
mysql> CREATE TABLE t (qty INT, price INT);
mysql> INSERT INTO t VALUES(3, 50);
mysql> CREATE VIEW v AS SELECT qty, price, qty*price AS value FROM t;
mysql> SELECT * FROM v;
+------+-------+-------+
| qty  | price | value |
+------+-------+-------+
|    3 |    50 |   150 |
+------+-------+-------+
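If you are on MySQL 5.7 or later, you can also store the product right in the table as a generated column; a minimal sketch, assuming that version:
-- generated column: computed from qty and price on every write
CREATE TABLE t2 (
    qty   INT,
    price INT,
    value INT AS (qty * price) STORED
);
INSERT INTO t2 (qty, price) VALUES (3, 50);
SELECT * FROM t2; -- value = 150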
You can try this mate:
DROP TRIGGER IF EXISTS trg_product_total;
DELIMITER //
-- Note: in MySQL an AFTER INSERT trigger cannot UPDATE the table it fires
-- on (error 1442), so assign the column in a BEFORE INSERT trigger instead.
CREATE TRIGGER trg_product_total BEFORE INSERT ON product
FOR EACH ROW
BEGIN
    SET NEW.total = NEW.price * NEW.quantity;
END//
DELIMITER ;
You can use this kind of approach if you don't want to compute product.total in the application before inserting it into the DB.
The trigger fires for each new record added to the table and fills in the total column automatically, so the INSERT itself does not need to supply a value for it.
But I think it would be better if you calculate it before the insert.
The flow would be like:
Application side
1. get price and quantity for the product
2. calculate for the total
3. insert values into the query
4. execute query
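In SQL terms, steps 2-4 amount to computing the product in the INSERT itself; a trivial sketch against the question's product table:
-- total computed before/within the INSERT, no trigger involved
INSERT INTO product (name, price, quantity, total)
VALUES ('food', 50, 1, 50 * 1);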
In case you want to learn more, see the MySQL reference manual's chapter on triggers and the PHP manual's section on transactions.

Getting union of records from multiple tables if either of them exists

I have datewise tables created with date as part of the table name.
ex. data_02272015, data_02282015 (name format is data_<mmddyyyy>). All the tables have the same schema.
Now, The tables have a datetime column TransactionDate. I need to get all the records by querying against this column. One table stores 24 hr data of the corresponding day. So, if I query with date 2015-02-28 xx:xx:xx, I can just query the table data_02282015. But, if I want to query with date 2015-02-27 xx:xx:xx, I have to consider both the tables data_02282015 and data_02272015.
I can get the union like this:
SELECT * FROM data_02272015
UNION
SELECT * FROM data_02282015;
But the problem is that I also need to check whether each table exists. If data_02282015 does not exist, the query fails. Is there a way to make the query return records from whichever table(s) exist?
So,
If both tables exist, it returns the union of records from both tables.
If either table does not exist, it returns records from the existing table only.
If neither table exists, an empty result set.
I tried things like:
SELECT IF( EXISTS(SELECT 1 FROM data_02282015), (SELECT * FROM data_02282015), 0)
...
But it didn't work.
If I understand the question correctly, you need a FULL JOIN:
CREATE TABLE two
( val INTEGER NOT NULL PRIMARY KEY
, txt varchar
);
INSERT INTO two(val,txt) VALUES
(0,'zero'),(2,'two'),(4,'four'),(6,'six'),(8,'eight'),(10,'ten');
CREATE TABLE three
( val INTEGER NOT NULL PRIMARY KEY
, txt varchar
);
INSERT INTO three(val,txt) VALUES
(0,'zero'),(3,'three'),(6,'six'),(9,'nine');
SELECT *
FROM two t2
FULL JOIN three t3 ON t2.val = t3.val
ORDER BY COALESCE(t2.val , t3.val)
;
Result:
CREATE TABLE
INSERT 0 6
CREATE TABLE
INSERT 0 4
 val |  txt  | val |  txt
-----+-------+-----+-------
   0 | zero  |   0 | zero
   2 | two   |     |
     |       |   3 | three
   4 | four  |     |
   6 | six   |   6 | six
   8 | eight |     |
     |       |   9 | nine
  10 | ten   |     |
(8 rows)
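Note that the demo above is PostgreSQL; MySQL has no FULL JOIN, but the usual emulation is a LEFT JOIN unioned with a RIGHT JOIN — a minimal sketch over the same tables:
-- FULL JOIN emulation for MySQL
select t2.val as val2, t2.txt as txt2, t3.val as val3, t3.txt as txt3
from two t2
left join three t3 on t2.val = t3.val
union
select t2.val, t2.txt, t3.val, t3.txt
from two t2
right join three t3 on t2.val = t3.val;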
Try this script. As a complete solution, you could embed the following in a stored procedure, replacing the id column with all the columns you need.
-- temp table that will collect results
-- (a #temp table rather than a table variable, so the dynamic SQL below can see it)
create table #tempResults (id int)
-- Your min and max dates to iterate between
declare @dateParamStart datetime
set @dateParamStart = '2015-02-25'
declare @dateParamEnd datetime
set @dateParamEnd = '2015-02-28'
-- table name using different dates
declare @currTblName nchar(13)
while @dateParamStart < @dateParamEnd
begin
    -- set table name with current date (style 101 = mm/dd/yyyy, slashes stripped)
    select @currTblName = 'data_' + replace(convert(varchar(10), @dateParamStart, 101), '/', '')
    select @currTblName -- show current table
    -- if table exists, make query to insert into temp table
    if object_id(@currTblName, N'U') is not null
    begin
        print ('table ' + @currTblName + ' exists')
        execute ('insert into #tempResults select id from ' + @currTblName)
    end
    -- set next date
    set @dateParamStart = dateadd(day, 1, @dateParamStart)
end
-- get your results.
-- Use distinct to act as a union if rows can be the same between tables.
select distinct * from #tempResults
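The script above is SQL Server (T-SQL). Since the question reads like MySQL, here is a hedged sketch of the same idea there: look the tables up in information_schema and build the UNION with a prepared statement (table names taken from the question):
-- MySQL sketch: build a UNION over only the tables that actually exist
SET @sql = (
  SELECT GROUP_CONCAT(CONCAT('SELECT * FROM ', table_name)
                      SEPARATOR ' UNION ALL ')
  FROM information_schema.tables
  WHERE table_schema = DATABASE()
    AND table_name IN ('data_02272015', 'data_02282015')
);
-- neither table exists: fall back to an empty result set
SET @sql = IFNULL(@sql, 'SELECT NULL FROM DUAL WHERE FALSE');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;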

insert into table select max(column_name)+1

I have this mysql table built like this:
CREATE TABLE `posts` (
`post_id` INT(10) NOT NULL AUTO_INCREMENT,
`post_user_id` INT(10) NOT NULL DEFAULT '0',
`gen_id` INT(10) NOT NULL DEFAULT '0',
PRIMARY KEY (`post_user_id`, `post_id`)
)
COLLATE='utf8_general_ci'
ENGINE=MyISAM;
When I do:
insert into posts (post_user_id) values (1);
insert into posts (post_user_id) values (1);
insert into posts (post_user_id) values (2);
insert into posts (post_user_id) values (1);
select * from posts;
I get:
+---------+--------------+--------+
| post_id | post_user_id | gen_id |
+---------+--------------+--------+
|       1 |            1 |      0 |
|       2 |            1 |      0 |
|       1 |            2 |      0 |
|       3 |            1 |      0 |
+---------+--------------+--------+
A unique post_id is generated for each unique user.
I need the gen_id column to be 1, 2, 3, 4, 5, 6, etc. How can I increment this column when I do an insert? I tried the query below, but it doesn't work. What's the right way to do this?
insert into posts (post_user_id,gen_id) values (1,select max(gen_id)+1 from posts);
//Select the highest gen_id and add 1 to it.
Try this:
INSERT INTO posts (post_user_id,gen_id)
SELECT 1, MAX(gen_id)+1 FROM posts;
Use a TRIGGER on your table. Note that in MySQL an AFTER INSERT trigger cannot UPDATE the table it fires on (error 1442), so assign the value in a BEFORE INSERT trigger instead. This sample code can get you started:
DELIMITER //
CREATE TRIGGER bi_trigger_name BEFORE INSERT ON posts
FOR EACH ROW
BEGIN
    -- MAX()+1 is not safe under concurrent inserts; fine for single sessions
    SET NEW.gen_id = (SELECT IFNULL(MAX(gen_id), 0) + 1 FROM posts);
END//
DELIMITER ;
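A quick check of what the trigger produces (hypothetical session, continuing the inserts from the question):
INSERT INTO posts (post_user_id) VALUES (1);
INSERT INTO posts (post_user_id) VALUES (2);
-- the new rows get gen_id 1 and 2; earlier rows keep their old values
SELECT post_id, post_user_id, gen_id FROM posts;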
In my case the first number to increment was NULL. I resolved it with
IFNULL(MAX(number), 0) + 1
or better, the query becomes
SELECT IFNULL(MAX(number), 0) + 1 FROM mytable;
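Plugged into the first answer's INSERT ... SELECT, that gives:
-- safe even when posts is still empty: IFNULL turns NULL into 0
INSERT INTO posts (post_user_id, gen_id)
SELECT 1, IFNULL(MAX(gen_id), 0) + 1 FROM posts;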
Here is the table "Autos" and the data that it contains to begin with:
+--------+------+------+---------+-------+-----+
| AutoID | Year | Make | Model   | Color | Seq |
+--------+------+------+---------+-------+-----+
|      1 | 2012 | Jeep | Liberty | Black |   1 |
|      2 | 2013 | BMW  | 330XI   | Blue  |   2 |
+--------+------+------+---------+-------+-----+
The AutoID column is an auto incrementing column so it is not necessary to include it in the insert statement.
The rest of the columns are varchars except for the Seq column which is an integer column/field.
If you want the Seq column to auto increment to 3 when you insert the next row into the table, you need to write your query as follows:
INSERT INTO Autos
(
    Seq,
    Year,
    Make,
    Model,
    Color
)
Values
(
    (SELECT MAX(Seq) FROM Autos) + 1, -- this increments the Seq column
    2013, 'Mercedes', 'S550', 'Black');
The reason that I put the Seq column first is to ensure that it will work correctly... it does not matter where you put it, but better safe than sorry.
The Seq column should now have a value of 3 along with the added values for the rest of that row in the database.
Starting from the beginning: first I created a table.
create table Cars (
AutoID int identity (1,1) Primary Key,
Year int,
Make varchar (25),
Model varchar (25),
TrimLevel varchar (30),
Color varchar (30),
CreatedDate date,
Seq int
)
Secondly I inserted some dummy values
insert into Cars values (
2013,'Ford' ,'Explorer','XLT','Brown',GETDATE(),1),
(2011,'Hyundai' ,'Sante Fe','SE','White',GETDATE(),2),
(2009,'Jeep' ,'Liberty','Jet','Blue',GETDATE(),3),
(2005,'BMW' ,'325','','Green',GETDATE(),4),
(2008,'Chevy' ,'HHR','SS','Red',GETDATE(),5);
When the insertion is complete you should have 5 rows of data.
Since the Seq column is not an auto increment column, and you want the next row's Seq to be automatically set to 6 (and subsequent rows incremented in the same way), you would need to write the following code:
INSERT INTO Cars
(
Seq,
Year,
color,
Make,
Model,
TrimLevel,
CreatedDate
)
Values
(
(SELECT MAX(Seq) FROM Cars) + 1,
2013,'Black','Mercedes','A550','AMG',GETDATE());
I have run this insert statement many times using different data just to make sure that it works correctly... hopefully this helps!

Create index to optimize slow query

There is a query that takes too long on a 250,000-row table. I need to speed it up:
create table occurrence (
occurrence_id int(11) primary key auto_increment,
client_id varchar(16) not null,
occurrence_cod varchar(50) not null,
entry_date datetime not null,
zone varchar(8) null default null
)
;
insert into occurrence (client_id, occurrence_cod, entry_date, zone)
values
('1116', 'E401', '2011-03-28 18:44', '004'),
('1116', 'R401', '2011-03-28 17:44', '004'),
('1116', 'E401', '2011-03-28 16:44', '004'),
('1338', 'R401', '2011-03-28 14:32', '001')
;
select client_id, occurrence_cod, entry_date, zone
from occurrence o
where
occurrence_cod = 'E401'
and
entry_date = (
select max(entry_date)
from occurrence
where client_id = o.client_id
)
;
+-----------+----------------+---------------------+------+
| client_id | occurrence_cod | entry_date          | zone |
+-----------+----------------+---------------------+------+
| 1116      | E401           | 2011-03-28 16:44:00 | 004  |
+-----------+----------------+---------------------+------+
1 row in set (0.00 sec)
The table structure is from a commercial application and can not be altered.
What would be the best index(es) to optimize it? Or a better query?
EDIT:
I need the last occurrence of the E401 code for each client, and only if the client's last occurrence is that code.
The ideal indexes for such a query would be:
index #1: [client_id] + [entry_date]
index #2: [occurrence_cod] + [entry_date]
Nevertheless, those indexes can be simplified if the data happen to have certain characteristics. That saves file space, and also time when data are updated (insert/delete/update).
If there is rarely more than one occurrence record for each [client_id], then index #1 can be just [client_id].
In the same way, if there is rarely more than one occurrence record for each [occurrence_cod], then index #2 can be just [occurrence_cod].
It may be more useful to turn index #2 into [entry_date] + [occurrence_cod]. That also lets the index serve criteria that are only on [entry_date]. The CREATE INDEX statements are sketched below.
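A minimal sketch of those indexes (index names are illustrative):
-- the two composite indexes described above
CREATE INDEX ix_client_date ON occurrence (client_id, entry_date);
CREATE INDEX ix_cod_date    ON occurrence (occurrence_cod, entry_date);
-- the variant discussed above, if date-only queries matter too:
-- CREATE INDEX ix_date_cod ON occurrence (entry_date, occurrence_cod);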
Unless you truly need the row with the max date per client (and only when its occurrence_cod matches), this simpler query should work:
select client_id, occurrence_cod, entry_date, zone
from occurrence o
where occurrence_cod = 'E401'
ORDER BY entry_date DESC
LIMIT 1;
It will return the most recent row with occurrence_cod='E401'
select
    a.client_id,
    a.occurrence_cod,
    a.entry_date,
    a.zone
from occurrence a
inner join (
    select client_id, occurrence_cod, max(entry_date) as entry_date
    from occurrence
    group by client_id, occurrence_cod
) as b
on
    a.client_id = b.client_id and
    a.occurrence_cod = b.occurrence_cod and
    a.entry_date = b.entry_date
where
    a.occurrence_cod = 'E401'
This approach avoids running the subselect once per row; comparing two big sets of data once should be faster than scanning a big set for each row of the other.
I'd re-write the query:
select client_id, occurrence_cod, max(entry_date), zone
from occurrence
group by client_id, occurrence_cod, zone;
(assuming the other lines are indeed identical, and entry date is the only thing that changes).
Did you try putting an index on occurrence_cod?
Try this if the other approaches are not available:
Create a new table, last_occurrence.
Every time a client gets a new occurrence, update that client's row in this last_occurrence table.
By doing this, you just need the following SQL to get your result :)
select * from last_occurrence
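A minimal sketch of that summary table and the upsert that maintains it (column types borrowed from the occurrence schema above):
-- one row per client, kept current on every new occurrence
CREATE TABLE last_occurrence (
    client_id      VARCHAR(16) PRIMARY KEY,
    occurrence_cod VARCHAR(50) NOT NULL,
    entry_date     DATETIME    NOT NULL
);
INSERT INTO last_occurrence (client_id, occurrence_cod, entry_date)
VALUES ('1116', 'E401', '2011-03-28 18:44')
ON DUPLICATE KEY UPDATE
    occurrence_cod = VALUES(occurrence_cod),
    entry_date     = VALUES(entry_date);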