I am currently collecting lap times in a SQL database and am having some difficulties extracting the drivers with the fastest lap times.
The structure looks like the following:
CREATE TABLE IF NOT EXISTS `leaderboard` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`driver` varchar(50) NOT NULL,
`car` varchar(50) NOT NULL,
`best` double NOT NULL,
`guid` bigint(255) NOT NULL,
`server_name` varchar(255) NOT NULL,
`track` varchar(55) NOT NULL,
PRIMARY KEY (`id`),
KEY `driver` (`driver`),
KEY `server_name` (`server_name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1213 ;
Data example
INSERT INTO `leaderboard` (`id`, `driver`, `car`, `best`, `guid`, `server_name`, `track`) VALUES
(1, 'dave.38', 'bmw_m3_e30', 88.379, 76561198084629688, 'A++%21+A++%21+------+Saturdaynightracing.tk+-+%5BRACE-SERVER%5D+-+%5BMagione%5D+%23SNR', 'magione'),
(2, 'Gabriel Porfírio', 'bmw_m3_e30', 87.318, 76561197987062834, 'A++%21+A++%21+------+Saturdaynightracing.tk+-+%5BRACE-SERVER%5D+-+%5BMagione%5D+%23SNR', 'magione'),
(3, 'xX_VEGA_Xx', 'bmw_m3_e30', 88.23, 76561198182074333, 'A++%21+A++%21+------+Saturdaynightracing.tk+-+%5BRACE-SERVER%5D+-+%5BMagione%5D+%23SNR', 'magione'),
(4, 'dave.38', 'bmw_m3_e30', 88.379, 76561198084629688, 'A++%21+A++%21+------+Saturdaynightracing.tk+-+%5BRACE-SERVER%5D+-+%5BMagione%5D+%23SNR', 'magione'),
(5, 'Gabriel Porfírio', 'bmw_m3_e30', 87.318, 76561197987062834, 'A++%21+A++%21+------+Saturdaynightracing.tk+-+%5BRACE-SERVER%5D+-+%5BMagione%5D+%23SNR', 'magione');
Now I am trying to list the drivers with the best time using the best column with the following SQL, but it appears that some times are discarded; the combination of GROUP BY and ORDER BY does not work as I expect.
SELECT DISTINCT guid, car, best, driver FROM `leaderboard` WHERE `server_name` like '%%' AND `track` = 'magione' GROUP BY(driver) ORDER BY `best` * 1 LIMIT 10
Please help, this is driving me mad!
Some fields in your data are not very clear, so I made the following assumptions:
guid means driver's guid (because it is the same for the same driver in your data).
car is the same for the same driver.
With these assumptions you can use a simple GROUP BY to get the results that you need:
SELECT driver, car, MIN(best) as best_time, guid
FROM leaderboard
WHERE `server_name` like '%%' AND `track` = 'magione'
GROUP BY driver, car, guid
ORDER BY MIN(best)
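If you are on MySQL 8.0 or later (an assumption, since the question does not state a version), a window function gives the same "fastest lap per driver" list without leaning on GROUP BY semantics; this is just a sketch:
SELECT driver, car, best, guid
FROM (
    SELECT driver, car, best, guid,
           ROW_NUMBER() OVER (PARTITION BY driver ORDER BY best) AS rn
    FROM leaderboard
    WHERE track = 'magione'
) AS ranked
WHERE rn = 1      -- keep only each driver's single fastest lap
ORDER BY best
LIMIT 10;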
Related
MySQL is new to me, and I am struggling to select the data I expect from my loans table when I run my query.
My objective is to print all rows of the loans table that match the bank IDs, but all I get is rows 1-10, even though I have more than 13 rows in my loans table.
EDIT 1: The bank table serves as a link between all the tables. I know the problem resides in my DML query, but I am not sure what to do.
When running my query, only rows where the primary key matches the foreign key appear. That is, if the bank ID is 1 and the loans ID is 1 it shows, but when the bank ID is 1 and the loans ID is 13 it does not appear in the result.
Please save your criticism; as mentioned above, my experience is green.
My DML:
SELECT bank.bankID, bankcustomer.FirstName, bankcustomer.LastName, loans.FirstPaymentDate
FROM bank
JOIN bankcustomer ON bank.bankID = bankcustomer.customerID
JOIN loans ON loans.LoansID = bank.bankID;
Tables DDL:
CREATE TABLE bankCustomer(
CustomerID int(11) NOT NULL AUTO_INCREMENT,
FirstName varchar(20) DEFAULT NULL,
MiddleName varchar(20) DEFAULT NULL,
LastName varchar(20) DEFAULT NULL,
Address_Line1 varchar(50) DEFAULT NULL,
Address_Line2 varchar(50) DEFAULT NULL,
City varchar(20) DEFAULT NULL,
Region varchar(20) DEFAULT NULL,
PostCode varchar(20) DEFAULT NULL,
Country varchar(30) DEFAULT NULL,
DateOfBirth DATE DEFAULT NULL,
telephoneNumber int(13) DEFAULT 0,
openingAccount int CHECK (openingAccount >= 50),
PRIMARY KEY (CustomerID),
KEY CustomerID (CustomerID)
) ENGINE=INNODB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
CREATE TABLE bank(
BankID int(11) NOT NULL AUTO_INCREMENT,
customerID int,
PRIMARY KEY (BankID),
FOREIGN KEY (CustomerID) REFERENCES bankCustomer(CustomerID) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=INNODB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
CREATE TABLE loans(
LoansID int(11) NOT NULL AUTO_INCREMENT,
BankID int,
PaymentRate int(100) DEFAULT 300,
NumOfMonthlyPayments int(12) DEFAULT NULL,
FirstPaymentDate DATE DEFAULT NULL,
MonthlyDueDate DATE DEFAULT NULL,
PRIMARY KEY (LoansID),
FOREIGN KEY (BankID) REFERENCES bank(BankID) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=INNODB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
INSERT DMLs:
INSERT INTO bank (BankID, CustomerID) VALUES (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7), (8, 8), (9, 9), (10, 10);
INSERT INTO loans (LoansID, BankID, PaymentRate, NumOfMonthlyPayments, FirstPaymentDate, MonthlyDueDate) VALUES (1, 1, 400, 12, '2008-02-03', '2008-03-25'),
(11, 1, 150, 10, '2008-02-04', '2008-04-25'),
(12, 1, 150, 10, '2008-02-07', '2008-04-25'),
(2, 2, 100, 20, '2011-04-01', '2011-04-25'),
(3, 3, 85, 5, '2015-07-03', '2015-08-25')...
Thank you all for your help, I managed to resolve my issue. The problem was in my JOIN conditions.
SELECT loans.LoansID, bankcustomer.FirstName, customerbankcard.AccountNumber, loans.FirstPaymentDate
FROM bank
JOIN loans ON loans.BankID = bank.bankID
JOIN bankcustomer ON bankcustomer.customerID = bank.customerID
JOIN customerbankcard ON customerbankcard.bankID = bank.bankID
GROUP BY loans.LoansID ASC;
The result avoids the looping output that repeated wrongly assigned account numbers against customers whose IDs did not match.
If you want to select all columns from all the tables, use *. Also, you are joining the tables incorrectly.
SELECT bank.*, bankcustomer.*, loans.*
FROM bank
JOIN bankcustomer ON bank.customerID = bankcustomer.customerID -- since you want to join on customer ID, match customerID in both tables
JOIN loans ON loans.BankID = bank.bankID; -- the same fix here: join loans to bank on BankID
Good luck and happy coding!
Firstly, joins can only be used between tables when they share a common column (though the column might have different names in the different tables). That is why the concept of a foreign key is of paramount importance, as it references the column in the referenced table, as you have duly done in your DDL commands.
Secondly, try ending each statement with a semicolon (;), as it denotes the end of a command and might get you out of the looping output.
I have a simple click tracking system that consists of three tables "tracking" (which holds unique views), "views" (which holds raw views) and "products" (which holds products).
Here's how it works: each time a user clicks on a tracking link, if the hash present in the link does not exist in the database, it will be saved in the "tracking" table as an unique view and also in the "views" table as a raw view. If the hash present in the link does exist in the database, then it will be saved only in the "views" table. So basically the number of "raw views" can not be smaller than the number of "unique views" because each "unique view" also counts as a "raw view".
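For reference, a minimal sketch of the two writes per click described above, assuming the id columns are AUTO_INCREMENT and adding a UNIQUE key on tracking.hash (neither of which appears in the DDL below):
ALTER TABLE tracking ADD UNIQUE KEY uq_tracking_hash (hash);
-- every click is recorded as a raw view
INSERT INTO views (hash) VALUES ('7ddf32e17a6ac5ce04a8ecbf782ca509');
-- a unique view is recorded only the first time the hash is seen; duplicates are silently skipped
INSERT IGNORE INTO tracking (product_id, hash) VALUES (1, '7ddf32e17a6ac5ce04a8ecbf782ca509');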
I wrote a query to create reports based on products, but the number of "raw views" returned is not correct.
I've also created a fiddle which I hope will give a better overview of my problem.
Here's the table structure:
CREATE TABLE `products` (
`id` int(10) UNSIGNED NOT NULL,
`name` varchar(128) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `products` (`id`, `name`) VALUES
(1, 'Test product');
CREATE TABLE `tracking` (
`id` int(10) UNSIGNED NOT NULL,
`product_id` int(11) NOT NULL,
`hash` varchar(32) NOT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `tracking` (`id`, `product_id`, `hash`, `created`) VALUES
(1, 1, '7ddf32e17a6ac5ce04a8ecbf782ca509', '2020-02-09 18:50:19'),
(2, 1, '00bb28eaf259ba0c932d67f649d90783', '2020-02-09 18:55:34');
CREATE TABLE `views` (
`id` int(10) UNSIGNED NOT NULL,
`hash` varchar(32) NOT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `views` (`id`, `hash`, `created`) VALUES
(1, '7ddf32e17a6ac5ce04a8ecbf782ca509', '2020-02-09 18:46:30'),
(2, '7ddf32e17a6ac5ce04a8ecbf782ca509', '2020-02-09 18:46:30'),
(3, '7ddf32e17a6ac5ce04a8ecbf782ca509', '2020-02-09 18:46:35'),
(4, '7ddf32e17a6ac5ce04a8ecbf782ca509', '2020-02-09 18:46:42'),
(5, '00bb28eaf259ba0c932d67f649d90783', '2020-02-09 18:56:31'),
(6, '00bb28eaf259ba0c932d67f649d90783', '2020-02-09 18:57:01');
And here's the query I wrote so far:
SELECT products.name AS `param`,
SUM(IF(tracking.product_id<>24, 1, 0)) AS `uniques`,
IF(SUM(IF(tracking.product_id<>24, 1, 0))=0, 0,
(SELECT COUNT(`hash`)
FROM `views` WHERE tracking.hash = views.hash)) AS `views`
FROM tracking
LEFT JOIN products ON products.id = tracking.product_id
WHERE tracking.created BETWEEN '2019-01-01 00:00:00' AND '2020-02-10 00:00:00'
GROUP BY products.name
As you can see I have 2 unique views and 6 raw views (4 for one hash and 2 for the other hash).
My expectation would be for the query result to be 2 uniques and 6 raw views for this given product, but instead I'm getting 2 uniques and 4 raw views. It is as if it is counting the views only for the first hash.
The following query should solve your situation:
SELECT
products.name,
COUNT(DISTINCT `tracking`.`hash`) AS `uniques`, -- count unique hashes
COUNT(*) AS `views` -- count total
FROM `tracking`
JOIN `views` ON `views`.hash = tracking.hash
LEFT JOIN products ON products.id = tracking.product_id
WHERE tracking.created BETWEEN '2019-01-01 00:00:00' AND '2020-02-10 00:00:00'
GROUP BY products.name;
I've searched and looked a lot into the different JOIN and UNION commands for MySQL but cannot find a solution to my problem regarding the code underneath.
I have only worked with MySQL for a couple of days, trying to learn MySQL, HTML, CSS, PHP, JavaScript, Python and more to create my own servers with databases, web pages, backups, make my home interconnected, etc. All help is very much appreciated! 'Takk' (thanks) in advance! :)
create database cinema;
use cinema;
create table cinema.genre (
genre_code varchar(3) not null,
genre varchar(45) null,
primary key (genre_code) );
create table cinema.movie (
movie_id int not null auto_increment,
genre varchar(3) null,
released int(4) null,
title varchar(45) not null,
name_director varchar(45) null,
primary key(movie_id),
constraint FK_genre foreign key (genre) references cinema.genre(genre_code) );
create table movie_halls (
hall_id int not null auto_increment,
hall_name varchar(45) not null,
spaces int not null,
primary key (hall_id) );
create table overview (
cinema_per_id int not null auto_increment,
date_time datetime not null, #1000-01-01 00:00:00
hall_id int not null,
movie_id int not null,
number_people int null,
primary key (cinema_per_id) );
insert into cinema.genre values
('COM', 'Comedy'),
('ACT', 'Action'),
('DRA', 'Drama'),
('HOR', 'Horror');
insert into cinema.movie (genre, released, title, name_director) values
('COM', 2010, '2 goats', 'Speilberg, Peter'),
('ACT', 2012, 'Fast5', 'Dramaqueen'),
('DRA', 1995, 'Status quo', 'Turtle, Ninja'),
('ACT', 1950, 'Joker', 'Man, Spider');
insert into movie_halls (hall_name, spaces) values
('Darth Vader', 200),
('Princess Leila', 150),
('Yoda', 1999),
('Obi-Wan Kenobi', 1920);
This is what I would like to insert into overview (to make it easy and not have to remember all the IDs of movies and halls):
insert into overview (date_time, hall_id, movie_id, number_people) values
(2018-04-06 20:00:00, 'Darth Vader', 'Fast5', 120),
(2018-04-06 20:00:00, 'Yoda', 'Joker', 1500),
(2018-04-06 21:30:00, 'Obi-Wan Kenobi', '2 goats', 1200);
and be stored like this:
and if I now select from overview after the insertion I would like to see the following (don't mind the datetime format - just Excel mixing it up):
I'm not sure if this is possible, or if my general modeling of my database is what is at fault. Any clarification and help is very much appreciated!
As a variant, you can use subqueries for the insert:
insert into overview (date_time, hall_id, movie_id, number_people) values
('2018-04-06 20:00:00', (select hall_id from movie_halls where hall_name='Darth Vader'), (select movie_id from movie where title='Fast5'), 120),
('2018-04-06 20:00:00', (select hall_id from movie_halls where hall_name='Yoda'), (select movie_id from movie where title='Joker'), 1500),
('2018-04-06 21:30:00', (select hall_id from movie_halls where hall_name='Obi-Wan Kenobi'), (select movie_id from movie where title='2 goats'), 1200)
And then use a query with JOINs to read it back. For example:
SELECT
o.*,
h.hall_name,
m.title AS movie_title
FROM overview o
JOIN movie_halls h ON o.hall_id=h.hall_id
JOIN movie m ON o.movie_id=m.movie_id
ORDER BY o.date_time
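If you prefer, an INSERT ... SELECT with a cross join does the same lookup without scalar subqueries; just a sketch for one of the rows:
INSERT INTO overview (date_time, hall_id, movie_id, number_people)
SELECT '2018-04-06 20:00:00', h.hall_id, m.movie_id, 120
FROM movie_halls h
CROSS JOIN movie m
WHERE h.hall_name = 'Darth Vader'
  AND m.title = 'Fast5';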
Here's what I'm working with:
CREATE TABLE IF NOT EXISTS `rate` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`client_company` int(11) DEFAULT NULL,
`client_group` int(11) DEFAULT NULL,
`client_contact` int(11) DEFAULT NULL,
`role` int(11) DEFAULT NULL,
`date_from` datetime DEFAULT NULL,
`hourly_rate` decimal(18,2) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
INSERT INTO `rate` (`id`, `client_company`, `client_group`,
`client_contact`, `role`, `date_from`, `hourly_rate`)
VALUES
(4, NULL, NULL, NULL, 3, '2012-07-30 14:48:16', 115.00),
(5, 3, NULL, NULL, 3, '2012-07-30 14:51:38', 110.00),
(6, 3, NULL, NULL, 3, '2012-07-30 14:59:20', 112.00);
This table stores chargeout rates for clients; the idea being that, when looking for the correct rate for a job role, we'd first look for a rate matching the given role and client contact, then if no rate was found, would try to match the role and the client group (or 'department'), then the client company, and finally looking for a global rate for just the role itself. Fine.
Rates can change over time, so the table may contain multiple entries matching any given combination of role, company, group and client contact: I want a query that will only return me the latest one for each distinct combination.
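For completeness, this is roughly how I would look up a rate with that fallback (a sketch only; @role, @contact, @grp and @company stand in for values supplied by the application):
SELECT *
FROM rate
WHERE role = @role
  AND (client_contact = @contact OR client_contact IS NULL)
  AND (client_group = @grp OR client_group IS NULL)
  AND (client_company = @company OR client_company IS NULL)
ORDER BY (client_contact IS NOT NULL) DESC,
         (client_group IS NOT NULL) DESC,
         (client_company IS NOT NULL) DESC,
         date_from DESC
LIMIT 1;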
Given that I asked a near-identical question only days ago, and that this topic seems fairly frequent in various guises, I can only apologise for my slow-wittedness and ask once again for someone to explain why the query below is returning all three of the records above and not, as I want it to, only the records with IDs 4 and 6.
Is it something to do with my trying to join based on columns containing NULL?
SELECT
rate.*,
newest.id
FROM rate
LEFT JOIN rate AS newest ON(
rate.client_company = newest.client_company
AND rate.client_contact = newest.client_contact
AND rate.client_group = newest.client_group
AND rate.role= newest.role
AND newest.date_from > rate.date_from
)
WHERE newest.id IS NULL
FWIW, the problem WAS joining NULL columns. The vital missing ingredient was COALESCE:
SELECT
rate.*,
newest.id
FROM rate
LEFT JOIN rate AS newest ON(
COALESCE(rate.client_company,1) = COALESCE(newest.client_company,1)
AND COALESCE(rate.client_contact,1) = COALESCE(newest.client_contact,1)
AND COALESCE(rate.client_group,1) = COALESCE(newest.client_group,1)
AND COALESCE(rate.role,1) = COALESCE(newest.role,1)
AND newest.date_from > rate.date_from
)
WHERE newest.id IS NULL
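One caveat with the COALESCE trick: it can produce false matches if 1 ever shows up as a real id in one of those columns. MySQL's NULL-safe equality operator <=> avoids that; a possible variant (untested beyond the sample data above):
SELECT rate.*
FROM rate
LEFT JOIN rate AS newest ON (
        rate.client_company <=> newest.client_company
    AND rate.client_contact <=> newest.client_contact
    AND rate.client_group <=> newest.client_group
    AND rate.role <=> newest.role
    AND newest.date_from > rate.date_from
)
WHERE newest.id IS NULL;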
I have this structure in my DB:
CREATE TABLE IF NOT EXISTS `peoples` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=1 ;
For customers.
CREATE TABLE IF NOT EXISTS `peoplesaddresses` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`people_id` int(10) unsigned NOT NULL,
`phone` varchar(20) COLLATE utf8_unicode_ci NOT NULL,
`address` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=1 ;
For their addresses.
CREATE TABLE IF NOT EXISTS `peoplesphones` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`people_id` int(10) unsigned NOT NULL,
`phone` varchar(20) COLLATE utf8_unicode_ci NOT NULL,
`address` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=1 ;
For their phones.
UPD4
ALTER TABLE peoplesaddresses DISABLE KEYS;
ALTER TABLE peoplesphones DISABLE KEYS;
ALTER TABLE peoplesaddresses ADD INDEX i_phone (phone);
ALTER TABLE peoplesphones ADD INDEX i_phone (phone);
ALTER TABLE peoplesaddresses ADD INDEX i_address (address);
ALTER TABLE peoplesphones ADD INDEX i_address (address);
ALTER TABLE peoplesaddresses ENABLE KEYS;
ALTER TABLE peoplesphones ENABLE KEYS;
END UPD4
CREATE TABLE IF NOT EXISTS `order` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`people_id` int(10) unsigned NOT NULL,
`name` varchar(255) CHARACTER SET utf8 NOT NULL,
`phone` varchar(255) CHARACTER SET utf8 NOT NULL,
`adress` varchar(255) CHARACTER SET utf8 NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=12 ;
INSERT INTO `order` (`id`, `people_id`, `name`, `phone`, `adress`) VALUES
(1, 0, 'name1', 'phone1', 'address1'),
(2, 0, 'name1_1', 'phone1', 'address1_1'),
(3, 0, 'name1_1', 'phone1', 'address1_2'),
(4, 0, 'name2', 'phone2', 'address2'),
(5, 0, 'name2_1', 'phone2', 'address2_1'),
(6, 0, 'name3', 'phone3', 'address3'),
(7, 0, 'name4', 'phone4', 'address4'),
(8, 0, 'name1_1', 'phone5', 'address1_1'),
(9, 0, 'name1_1', 'phone5', 'address1_2'),
(11, 0, 'name1', 'phone1', 'address1'),
(10, 0, 'name1', 'phone1', 'address1');
The production database has over 9000 records. Is there a way to execute these 3 UPDATE queries a little faster than now (~50 min on the dev machine)?
INSERT INTO peoplesphones( phone, address )
SELECT DISTINCT `order`.phone, `order`.adress
FROM `order`
GROUP BY `order`.phone;
Fill peoplesphones table with unique phones
INSERT INTO peoplesaddresses( phone, address )
SELECT DISTINCT `order`.phone, `order`.adress
FROM `order`
GROUP BY `order`.adress;
Fill the peoplesaddresses table with unique addresses.
The next three queries are very slow:
UPDATE peoplesaddresses, peoplesphones SET peoplesaddresses.people_id = peoplesphones.id WHERE peoplesaddresses.phone = peoplesphones.phone;
UPDATE peoplesaddresses, peoplesphones SET peoplesphones.people_id = peoplesaddresses.people_id WHERE peoplesaddresses.address = peoplesphones.address;
UPDATE `order`, `peoplesphones` SET `order`.people_id = `peoplesphones`.people_id where `order`.phone = `peoplesphones`.phone;
Finally, fill the peoples table and drop the unnecessary fields.
INSERT INTO peoples( id, name )
SELECT DISTINCT `order`.people_id, `order`.name
FROM `order`
GROUP BY `order`.people_id;
ALTER TABLE `peoplesphones`
DROP `address`;
ALTER TABLE `peoplesaddresses`
DROP `phone`;
So, again: how can I make those UPDATE queries a little faster? Thanks.
UPD: I forgot to say: I only need to do this once, just to migrate phones and addresses into other tables, since one person can have more than one phone and can order pizza not only at home.
UPD2:
UPD3:
Replacing the slow UPDATE queries with these gained nothing.
UPDATE peoplesaddresses
LEFT JOIN
peoplesphones
ON peoplesaddresses.phone = peoplesphones.phone
SET peoplesaddresses.people_id = peoplesphones.id;
UPDATE peoplesphones
LEFT JOIN
`peoplesaddresses`
ON `peoplesaddresses`.address = `peoplesphones`.address
SET `peoplesphones`.people_id = `peoplesaddresses`.people_id;
UPDATE `order`
LEFT JOIN
`peoplesphones`
ON `order`.phone = `peoplesphones`.phone
SET `order`.people_id = `peoplesphones`.people_id;
UPD4: After adding the code at the top (UPD4), the script takes only a few seconds to execute. But at around the 6.5k-th query it terminates with the message: "The system cannot find the Drive specified".
Thanks to all, especially xQbert and Brent Baisley.
50 minutes for 9000 records is a bit ridiculous, even without indexes. You might as well put the 9000 records in Excel and do what you need to do. I think there is something else going on with your dev machine. Perhaps you have MySQL configured to use very little memory? Maybe you can post the results of this "query":
show variables like "%size%";
Just this morning I did an INSERT (IGNORE)/SELECT on 2 tables (one into another), both with over 400,000 records. 126,000 records were inserted into the second table; it took a total of 2 minutes 13 seconds.
I would say put indexes on any of the fields you are joining or grouping on, but this seems like a one-time job. I don't think the lack of indexes is your problem.
All write operations are slow in relational databases, and indexes in particular make them slower, since they have to be updated as well.
If you're using a WHERE in your statements, you should place an index on the fields referenced.
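For instance (just an illustration against the tables above), add an index on the join column and check the plan of the equivalent SELECT:
ALTER TABLE `order` ADD INDEX i_phone (phone);
-- EXPLAIN shows whether the join can actually use the new index
EXPLAIN SELECT `order`.id, peoplesphones.people_id
FROM `order`
LEFT JOIN peoplesphones ON `order`.phone = peoplesphones.phone;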
GROUP BY is always very slow, and so is DISTINCT, since they have to do a lot of checks that don't scale linearly. Always avoid them.
You may like to choose a different database engine for what you're doing. 9000 records in 50 minutes is very slow. Experiment with a few different engines, such as MyISAM and InnoDB. If you're using temporary tables a lot, MEMORY is really fast for those.
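To illustrate that last point, a MEMORY scratch table for intermediate migration data might look like this (purely a sketch, not part of the original setup):
-- temporary in-memory copy of the columns being migrated
CREATE TEMPORARY TABLE tmp_order_phones ENGINE=MEMORY AS
SELECT id, phone, adress FROM `order`;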
Update: Also, updating multiple tables in one statement probably shouldn't be done.