So I have the following tables running, but I'm having a problem in a specific situation.
I have a network of soap dispensers whose current soap level I want to keep track of. I'm counting the number of pumps (3 milliliters each) and doing greatest(full_capacity - number_pumps * 3, 0), as seen in the view below.
But my problem is: there is a maintenance table, and one of the "descriptions" may be "refill". What I wanted was: when maintenance_description = "refill", have number_pumps in the records table set to 0 for that exact dispenser. Is this possible? I read about triggers, but couldn't really understand how to do this.
As a practical example, let's say I have soap dispenser id 1 with a max capacity of 1000ml. I then count 300 pumps, so I know I have 100ml left. I then do a refill and want the number of pumps to be set to 0. Otherwise, on the next use it will say I have 97ml available, when in reality I have 997ml, because I already made a refill.
Thank you very much in advance.
create table dispenser(
id_dispenser int not null auto_increment,
localization_disp varchar(20) not null,
full_capacity int not null,
primary key (id_dispenser));
create table records(
time_stamp DATETIME DEFAULT CURRENT_TIMESTAMP not null,
dispenser_id int not null,
number_pumps int not null,
battery_level float not null,
primary key (dispenser_id,time_stamp));
create table maintenance(
maintenance_id int not null auto_increment,
maintenance_date DATETIME DEFAULT CURRENT_TIMESTAMP not null,
employee_id int not null,
maintenance_description varchar(20) not null,
dispenser_id int not null,
primary key (maintenance_id));
CREATE VIEW left_capacity AS
SELECT max(time_stamp) AS calendar,
       d.id_dispenser AS dispenser,
       full_capacity AS capacity,
       greatest(full_capacity - number_pumps * 3, 0) AS available
FROM records r
INNER JOIN dispenser d
    ON d.id_dispenser = r.dispenser_id
GROUP BY d.id_dispenser;
If I understand correctly, you want a view with the amount remaining. This would be based on the pumps recorded since the last refill, plugged into your formula.
MySQL has had tricky issues with subqueries in views. I think the following is view-safe for MySQL (assuming each row in records logs the pumps since the previous reading):
select d.*,
       greatest(d.full_capacity -
                (select coalesce(sum(r.number_pumps), 0) * 3
                 from records r
                 where r.dispenser_id = d.id_dispenser and
                       r.time_stamp > coalesce((select max(m.maintenance_date)
                                                from maintenance m
                                                where m.dispenser_id = r.dispenser_id and
                                                      m.maintenance_description = 'refill'
                                               ), '1970-01-01')
                ),
                0) as available
from dispenser d;
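If you do want the trigger route the question mentions, here is a minimal sketch (an untested assumption, and note it zeroes number_pumps in every records row for that dispenser, so the historical pump counts are wiped):

DELIMITER $$
CREATE TRIGGER reset_pumps_after_refill
AFTER INSERT ON maintenance
FOR EACH ROW
BEGIN
    -- Only react to refills; other maintenance descriptions are ignored.
    IF NEW.maintenance_description = 'refill' THEN
        -- Zero the pump counter for the dispenser that was just refilled.
        UPDATE records
        SET number_pumps = 0
        WHERE dispenser_id = NEW.dispenser_id;
    END IF;
END$$
DELIMITER ;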
Related
I have 2 tables.
CREATE TABLE $media_table (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`options` longtext DEFAULT NULL,
`order_id` int(11) unsigned DEFAULT NULL,
`player_id` int(11) unsigned NOT NULL,
PRIMARY KEY (`id`))
CREATE TABLE $category_table (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`category` varchar(300) DEFAULT NULL,
`media_id` int(11) unsigned DEFAULT NULL,
PRIMARY KEY (`id`))
I select id, options, and category for rows matching the categories 'foo' and 'bar'. I also use LIMIT to get only x results.
SELECT mt.id, mt.options, ct.category
FROM $media_table as mt
LEFT JOIN $category_table as ct
ON mt.id = ct.media_id
WHERE mt.player_id = %d AND ct.category IN ('foo','bar')
GROUP BY ct.media_id
ORDER BY mt.order_id
LIMIT $limit
This works as intended, but I don't know how to get the total number of results.
I tried this but the count is not correct.
SELECT COUNT(mt.id), ct.category
FROM $media_table as mt
LEFT JOIN $category_table as ct
ON mt.id = ct.media_id
WHERE mt.player_id = %d AND ct.category IN ('foo','bar')
GROUP BY ct.media_id
When I select all results without the LIMIT (as in my previous query), the count is correct.
If I had only one table with primary key id, I would do this to get the count:
SELECT COUNT(id) FROM table
I don't know how to apply the same to my query.
Edit: I found my answer here: select count(*) from select
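Based on that linked answer, a minimal sketch (same placeholders as above): wrap the grouped query in a derived table and count its rows.

SELECT COUNT(*) AS total
FROM (
    SELECT ct.media_id
    FROM $media_table AS mt
    LEFT JOIN $category_table AS ct ON mt.id = ct.media_id
    WHERE mt.player_id = %d AND ct.category IN ('foo','bar')
    GROUP BY ct.media_id
) AS matched;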
Question 1: Are you looking at the raw results of the query using a tool like phpMyAdmin or MySQL Workbench, or something else?
Question 2: Will the ultimate query results be delivered to the client via a web browser, or some other way?
Answer 1: "The SUM() function returns the total sum of a numeric column."
SELECT SUM(column_name) FROM table_name WHERE condition;
Answer Possibility 2: If the results will be delivered in a web browser, you should be able to use PHP or some other server-side language like MS Active Server Pages to add up the "COUNT" field of each result.
Answer Possibility 3: Export the results to a CSV file and import it into a spreadsheet.
Maybe some of these suggestions will get the wheels turning and help you find the solution you are looking for.
I'm a data lover and have created a list of possible item combinations for a widely known mobile game. There are 21,000,000 combinations (useless combos filtered out by logic).
So what I want to do now is create a website people can access to see what they need to get the best gear, OR the best they can do with the gear they have right now.
My Item Database currently looks like this:
CREATE TABLE `items` (
`ID` int(8) unsigned NOT NULL,
`Item1` int(2) unsigned NOT NULL,
`Item2` int(2) unsigned NOT NULL,
`Item3` int(2) unsigned NOT NULL,
`Item4` int(2) unsigned NOT NULL,
`Item5` int(2) unsigned NOT NULL,
`Item6` int(2) unsigned NOT NULL,
`Item7` int(2) unsigned NOT NULL,
`Item8` int(2) unsigned NOT NULL,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB
ID range: 1 - 21,000,000
Every item is known by its number, e.g. 11. The first digit describes the category and the second digit the item within that category. For example, 34 means Item3 --> 4. It's saved like this because I also have images to show on the website later, using this number as identification (34.png).
The Stats Database looks like this right now:
CREATE TABLE stats (
Stat1 FLOAT UNSIGNED NOT NULL,
Stat2 FLOAT UNSIGNED NOT NULL,
Stat3 FLOAT UNSIGNED NOT NULL,
Stat4 FLOAT UNSIGNED NOT NULL,
Stat5 FLOAT UNSIGNED NOT NULL,
Stat6 FLOAT UNSIGNED NOT NULL,
Stat7 FLOAT UNSIGNED NOT NULL,
Stat8 FLOAT UNSIGNED NOT NULL,
ID1 INT UNSIGNED,
ID2 INT UNSIGNED,
ID3 INT UNSIGNED,
ID4 INT UNSIGNED,
ID5 INT UNSIGNED,
ID6 INT UNSIGNED,
ID7 INT UNSIGNED,
ID8 INT UNSIGNED
) ENGINE = InnoDB;
Stat* stands for stuff like Attack, Defense, Health, etc., and ID* for the ID in the items table. Some combinations have the same values over all 8 possible stats, so I grouped them together to save some entries (I don't know if that was smart yet). For example, one stat combination can have ID1, ID2 and ID3 filled, while another has just ID1 (the max is 8 IDs though, I calculated it).
Right now I'm displaying a huge table sortable by every stat, and it's working fine.
What I want in the future, though, is to let the user search for items or exclude certain items from the list. I know I can do this with some joins and WHERE clauses (where items.ID = stats.ID1 OR items.ID = stats.ID2, etc.), but I wonder if my current structure is the smartest solution for this? I'm trying to get the best performance, as I'm running this on my old Pi 2.
When you have very large data sets with only a small number of matches, the best performance often comes from using a subquery in the FROM or WHERE clause.
SELECT SP.TerritoryID,
SP.BusinessEntityID,
SP.Bonus,
TerritorySummary.AverageBonus
FROM (SELECT TerritoryID,
AVG(Bonus) AS AverageBonus
FROM Sales.SalesPerson
GROUP BY TerritoryID) AS TerritorySummary
INNER JOIN
Sales.SalesPerson AS SP
ON SP.TerritoryID = TerritorySummary.TerritoryID
Copied from here
This effectively creates a virtual table of only those rows that match, then runs the join on the virtual table - a lot like selecting the matching rows into a tmp table, then joining on the tmp table. Running a join on the entire table, although you might think it would be OK, often performs terribly.
You may also find that a subquery in the WHERE clause works:
... where items.id in (select id1 from stats union select id2 from stats)
Or select your matching stats IDs into a tmp table, then index the tmp table:
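A minimal sketch of that tmp-table idea against the stats table above (extend the UNION through ID8; the names tmp_stat_ids and idx_tmp are made up):

-- Collect every item ID referenced by stats; UNION also removes duplicates.
CREATE TEMPORARY TABLE tmp_stat_ids AS
    SELECT ID1 AS id FROM stats WHERE ID1 IS NOT NULL
    UNION
    SELECT ID2 FROM stats WHERE ID2 IS NOT NULL;

ALTER TABLE tmp_stat_ids ADD INDEX idx_tmp (id);

-- Join items against the much smaller temp table.
SELECT i.*
FROM items i
JOIN tmp_stat_ids t ON t.id = i.ID;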
It all depends quite a lot on what your other selection logic is.
It also sounds like you should get some indexes on the stats table. If you're not updating it a lot, then indexing every ID column can work OK. Just make sure the unfilled ID columns have the value NULL.
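For example (index names made up; each index costs write time but speeds up the per-slot lookups):

ALTER TABLE stats
    ADD INDEX idx_id1 (ID1),
    ADD INDEX idx_id2 (ID2),
    ADD INDEX idx_id3 (ID3); -- ... and so on through ID8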
I have a table that has over 2.5 million rows, and I would like to run the following SQL statement to get the count of pages released last month.
select count(*)
from workflow
where action_name = 'Workflow'
  and release_date >= '2019-12-01 13:24:22'
  and release_date <= '2019-12-31 13:24:22'
  and project_name = 'Web'
group by page_id, headline, release_full_name, release_date
The problem is that it takes over 2.7 seconds to return 0 rows (as expected). Is there a way to speed it up? I have 6 more SQL statements that are similar, so together they will take at least 2.7 seconds * 6 = 16.2 seconds.
Here is my table schema
CREATE TABLE workflow (
id int(11) NOT NULL AUTO_INCREMENT,
action_name varchar(100) NOT NULL,
project_name varchar(30) NOT NULL,
page_id int(11) NOT NULL,
headline varchar(200) NOT NULL,
create_full_name varchar(200) NOT NULL,
create_date datetime NOT NULL,
change_full_name varchar(200) NOT NULL,
change_date datetime NOT NULL,
release_full_name varchar(200) NOT NULL,
release_date datetime NOT NULL,
reject_full_name varchar(200) NOT NULL,
reject_date datetime NOT NULL,
PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=2948271 DEFAULT CHARSET=latin1
What I'm looking for in this query is the count of pages that were released last month and have project_name = 'Web' and action_name = 'Workflow'.
This is a bit too big for a comment.
Using GROUP BY with the COUNT function doesn't make much sense here; usually you need to count actual rows in the DB, not rows after aggregation. I'm not sure whether the grouping is an actual requirement, but the GROUP BY is what slows the query down.
Use a composite index on (project_name, release_date), as project_name seems the most selective column.
For more information, please share the EXPLAIN plan.
Assuming that you do need counts for the groups you listed, it's better to include the group fields in the SELECT, essentially like:
select page_id, headline, release_full_name, release_date, count(*)
from ...
Adding an index with (page_id, headline) would optimize well.
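Putting both suggestions together, a hedged sketch (the index name is made up; verify with EXPLAIN that it is actually used):

-- Filter columns first, then the GROUP BY columns, so the query can be
-- resolved largely from the index.
ALTER TABLE workflow
    ADD INDEX idx_release_lookup
        (project_name, action_name, release_date, page_id, headline);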
I'm new to using MySQL.
I'm trying to run an inner join query between a table of 80,000 records (this is table B) and a 40GB data set with approx. 600 million records (this is table A).
Is MySQL suitable for running this sort of query?
What sort of time should I expect it to take?
The code I tried is below. However, it failed because my DB connection dropped at 60,000 secs.
set net_read_timeout = 36000;
INSERT
INTO C
SELECT A.id, A.link_id, link_ref, network,
date_1, time_per,
veh_cls, data_source, N, av_jt
from A
inner join B
on A.link_id = B.link_id;
I'm starting to look into ways of cutting the 40GB table down to a temp table, to try and make the query more manageable. But I keep getting
Error Code: 1206. The total number of locks exceeds the lock table size 646.953 sec
Am I on the right track?
cheers!
My code for splitting the table is:
LOCK TABLES D WRITE, A READ;
INSERT
INTO D
SELECT A.id, A.link_id, A.time_per, A.av_jt
from A
where A.time_per = 34 and A.veh_cls = 1;
UNLOCK TABLES;
Perhaps my table indexes are incorrect; all I have is a simple primary key.
CREATE Table A
(
id int unsigned Not Null auto_increment,
link_id varchar(255) not Null,
link_ref int not Null,
network int not Null,
date_1 varchar(255) not Null,
#date_2 time default Null,
time_per int not null,
veh_cls int not null,
data_source int not null,
N int not null,
av_jt int not null,
sum_squ_jt int not null,
Primary Key (id)
);
Drop table if exists B;
CREATE Table B
(
id int unsigned Not Null auto_increment,
TOID varchar(255) not Null,
link_id varchar(255) not Null,
ABnode varchar(255) not Null,
#date_2 time not Null,
Primary Key (id)
);
In terms of the schema, it is just these two tables (A and B) loaded into one database.
I believe that answer has already been given in this post: The total number of locks exceeds the lock table size
i.e. use a table lock to avoid InnoDB's default row-by-row locking.
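A minimal sketch of that workaround applied to the INSERT above (the other common fix, per the linked post, is raising innodb_buffer_pool_size):

-- Explicit table locks stop InnoDB from accumulating millions of row locks.
LOCK TABLES C WRITE, A READ, B READ;

INSERT INTO C
SELECT A.id, A.link_id, link_ref, network,
       date_1, time_per,
       veh_cls, data_source, N, av_jt
FROM A
INNER JOIN B ON A.link_id = B.link_id;

UNLOCK TABLES;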
Thanks for your help.
Indexing seems to have solved the problem. I managed to reduce the query time from 700 secs to approx. 0.2 secs per record by indexing on:
A.link_id
i.e. from
from A
inner join B
on A.link_id = B.link_id;
I found this really useful post, very helpful for a newbie like myself:
http://hackmysql.com/case4
The code used to create the index was:
CREATE INDEX linkid_index ON A(link_id);
I have one main table and two tables that hold dynamic information about the first table.
The first table, called 'items', holds the main information. The two other tables (ratings and indexes) hold values per audience and time period, for a dynamic number of audiences and periods.
What I want:
When I query for those items, I want the result to have additional columns with the values from the ratings and indexes tables.
I have code like this:
SELECT items.*, ratings.val AS rating, indexes.val AS idx
FROM items,ratings,indexes
WHERE items.date>=1349902800000 AND items.date <=1349989199000
AND ratings.period_start <= items.date
AND ratings.period_end > items.date
AND ratings.auditory = 'kids'
AND indexes.period_start <= items.date
AND indexes.period_end > items.date
AND indexes.auditory = 'kids'
ORDER BY indexes.added, ratings.added DESC
The tables look something like this
items:
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(200) DEFAULT NULL,
`date` bigint(40) DEFAULT NULL
PRIMARY KEY (`id`)
ratings:
`id` bigint(50) NOT NULL AUTO_INCREMENT,
`period_start` bigint(50) DEFAULT NULL,
`period_end` bigint(50) DEFAULT NULL,
`val` float DEFAULT NULL,
`auditory` varchar(200) DEFAULT NULL,
`added` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
All dates, except the 'added' fields, which are simple TIMESTAMPs, are in BIGINT format - milliseconds since the epoch, which is what AS3's Date.getTime() returns.
So - what is the correct way to get this accomplished?
The only thing I'm not seeing is the unique correlation of any individual item to its ratings... I would think the ratings table would need an "ItemID" to link back to items. As it stands now, if you have 100 items within a given time period, say 3 months, and you just add all the ratings/reviews without associating them with the actual item, you are stuck. Put the ItemID in and add it to your WHERE condition, and you should be good to go.
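A hedged sketch of what that could look like, assuming an added ItemID column on both ratings and indexes, with explicit JOINs replacing the comma joins:

SELECT i.*, r.val AS rating, x.val AS idx
FROM items i
-- Each rating/index row now points at its item via ItemID (assumed column).
INNER JOIN ratings r
    ON r.ItemID = i.id
   AND r.auditory = 'kids'
   AND r.period_start <= i.date
   AND r.period_end > i.date
INNER JOIN indexes x
    ON x.ItemID = i.id
   AND x.auditory = 'kids'
   AND x.period_start <= i.date
   AND x.period_end > i.date
WHERE i.date >= 1349902800000
  AND i.date <= 1349989199000
ORDER BY x.added, r.added DESC;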