MySQL - When shouldn't I Join tables? Combinatorial Explosion of values

I am working on a database called classicmodels, which I found at: https://www.mysqltutorial.org/mysql-sample-database.aspx/
I realized that when I executed an Inner Join between 'payments' and 'orders' tables, a 'cartesian explosion' occurred. I understand that these two tables are not meant to be joined. However, I would like to know if it is possible to identify this just by looking at the relational schema or if I should check the tables one by one.
For instance, the customer number '141' appears 26 times in the 'orders table', which I found by using the following code:
SELECT
customerNumber,
COUNT(customerNumber)
FROM
orders
WHERE customerNumber=141
GROUP BY customerNumber;
And the same customer number (141) appears 13 times in the payments table:
SELECT
customerNumber,
COUNT(customerNumber)
FROM
payments
WHERE customerNumber=141
GROUP BY customerNumber;
Finally, I executed an Inner Join between 'payments' and 'orders' tables, and selected only the rows with customer number '141'. MySQL returned 338 rows, which is the result of 26*13. So, my query is multiplying the number of times this 'customer n°' appears in 'orders' table by the number of times it appears in 'payments'.
SELECT
o.customernumber,
py.amount
FROM
customers c
JOIN
orders o ON c.customerNumber=o.customerNumber
JOIN
payments py ON c.customerNumber=py.customerNumber
WHERE o.customernumber=141;
My question is the following:
1 ) Is there a way to look at the relational schema and identify if a Join can be executed (without generating a combinatorial explosion)? Or should I check the tables one by one to understand how they are related?
Important Note: I realized that there are two asterisks in the payments table's representation in the relational schema below. Maybe this means that this table has a composite primary key (customerNumber+checkNumber). The problem is that 'checkNumber' does not appear in any other table.
This is the database's relational schema provided by the 'MySQL Tutorial' website:
Thank you for your attention!

This is called "combinatorial explosion" and it happens when rows in one table each join to multiple rows in other tables.
(It's not "overestimation" or any sort of estimation. It's counting data items multiple times when it should only count them once.)
It's a notorious pitfall of summarizing data in one-to-many relationships. In your example each customer may have no orders, one order, or more than one. Independently, they may have no payments, one, or many.
The trick is this: use subqueries so your top-level query with GROUP BY avoids joining multiple one-to-many relationships one after the other, which is what's happening in the query you showed us.
You can use this subquery to get a result set with just one row per customer (try it):
SELECT customernumber,
SUM(amount) amount
FROM payments
GROUP BY customernumber
Likewise you can get the value of all orders for each customer with this
SELECT o.customerNumber,
       SUM(od.quantityOrdered * od.priceEach) amount
FROM orders o
JOIN orderdetails od ON o.orderNumber = od.orderNumber
GROUP BY o.customerNumber
This JOIN won't explode in your face: a customer can have multiple orders and each order can have multiple detail lines, but each detail line belongs to exactly one order, so it's a strict hierarchical rollup.
Now, we can use these subqueries in the main query.
SELECT c.customerNumber, p.payments, o.orders
FROM customers c
LEFT JOIN (
    SELECT o.customerNumber,
           SUM(od.quantityOrdered * od.priceEach) orders
    FROM orders o
    JOIN orderdetails od ON o.orderNumber = od.orderNumber
    GROUP BY o.customerNumber
) o ON c.customerNumber = o.customerNumber
LEFT JOIN (
    SELECT customerNumber,
           SUM(amount) payments
    FROM payments
    GROUP BY customerNumber
) p ON c.customerNumber = p.customerNumber;
Take-home tricks:
A subquery IS a table (a virtual table) that can be used wherever you might mention a table or a view.
The GROUP BY stuff in this query happens separately in two subqueries, so no combinatorial explosions.
All three participants in the toplevel JOIN have either one or zero rows per customernumber.
The LEFT JOINs are there so we can still see customers with (importantly for a business) no orders or no payments. With the ordinary inner JOIN, rows have to match both sides of the ON conditions or they're omitted from the resultset.
Pro tip Format your SQL queries fanatically carefully: they are really verbose (Adm. Grace Hopper would be proud), so they get quite long and nested, putting the Structured in Structured Query Language. If you, or anybody else, is going to reason about them in the future, they must be able to grasp the structure easily.
Pro tip 2 The data engineer who designed this database did a really good job thinking it through and documenting it. Aspire to this level of quality. (Rarely reached in the real world.)

In this particular case, what you should do depends on the accounting style the database supports, and this does not appear to be "open item" style accounting, i.e. when an order is raised for 1000 there does not need to be a payment against it for 1000. This is perhaps unusual in most consumer experience, because you will be quite familiar with open item style ordering from Amazon: you buy a 500 dollar TV and a 500 dollar games console, the order is a thousand dollars and you pay for it, the payment going against the order. However, you're also familiar with "balance forward" accounting if you paid for that order using your credit card: you make similar purchases every day for a month, and then you get a statement from your bank saying you owe 31000, and you pay a lump of money that doesn't even have to be 31k. You aren't expected to make 31 payments of 1000 to your bank at the end of the month. Your bank allocates it to the oldest items on the account (if they're nice, or the newest items if they're not) and may eventually charge you interest on unpaid transactions.
1 ) Is there a way to look at the relational schema and identify if a Join can be executed
Yes, you can tell by looking at the schema: customer has many orders and customer makes many payments, but there is no relation between the order and payment tables at all, so we can see there is no attempt to directly attach a payment to an order. You can see that customer is a parent table of both payment and order, and therefore has a relationship with each of them, but they do not relate to each other. If you had Person, Car and Address tables, a person has many addresses during their life and many cars, but that doesn't mean there is a relationship between cars and addresses.
In such a case it simply doesn't make sense to join payments to customers to orders because they do not relate that way. If you want to make such a join and not suffer a Cartesian explosion then you absolutely have to sum one side or the other (or both) to ensure that your joins are 1:1 and 1:M (or 1:1 and 1:1). You cannot arrange a join that is a pair of 1:M.
Going back to the car/person/address example: to make any meaningful joins, you have to build more information into the question and arrange the join to create the answer. Perhaps the question is "what cars did they own while they lived at ..." - this flattens the Person:Address relationship to 1:1 but leaves Person:Car as 1:M, so they might have owned many cars during their time in that house. "What was the newest car they owned while living at ..." might be 1:1 on both sides if there is a clear winner for "newest" (though if they bought two cars manufactured at identical times...).
Which side you sum in your orders case will depend on what you want to know, but here I'd say you usually want to know "which orders haven't been paid for", and that means summing all payments and computing a running sum of all orders, then looking at the point where the running sum of orders exceeds the sum of payments; those are the unpaid orders.
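Here is a minimal sketch of that idea, assuming MySQL 8.0+ (for window functions) and the classicmodels column names; allocating payments to the oldest orders first is an assumption about the business rule, not something the schema dictates:
SELECT t.*
FROM (
    SELECT o.customerNumber,
           o.orderNumber,
           SUM(od.quantityOrdered * od.priceEach) AS order_value,
           -- per-customer running total of order values, oldest order first
           SUM(SUM(od.quantityOrdered * od.priceEach))
               OVER (PARTITION BY o.customerNumber
                     ORDER BY o.orderDate, o.orderNumber) AS running_order_value,
           p.total_paid
    FROM orders o
    JOIN orderdetails od ON od.orderNumber = o.orderNumber
    LEFT JOIN (
        SELECT customerNumber, SUM(amount) AS total_paid
        FROM payments
        GROUP BY customerNumber
    ) p ON p.customerNumber = o.customerNumber
    GROUP BY o.customerNumber, o.orderNumber, o.orderDate, p.total_paid
) t
WHERE t.running_order_value > COALESCE(t.total_paid, 0);   -- orders not (fully) covered by payments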
Take a look again at your database graph (the one that was present in the first iteration of your question). See how the lines between tables have three angled legs on one end? That's the "many" end. You can start at any table in the graph and join to other tables by walking along the relationships. If you're going from the many end to the one end, and assuming you've picked out a single row in the start table (a single order), you can always walk to any other table in the many-to-one direction and not increase your row count. If you walk the other way you potentially increase your row count. If you split and walk two ways that both increase the row count, you get a Cartesian explosion. Of course, you don't have to join only along relation lines, but that's out of scope for the question.
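To make the direction rule concrete, here is a small illustration using the classicmodels tables from the question (order number 10100 is just an example value; use any order that exists in your copy):
-- Walking many -> one (order -> customer): still exactly one row.
SELECT o.orderNumber, c.customerName
FROM orders o
JOIN customers c ON c.customerNumber = o.customerNumber
WHERE o.orderNumber = 10100;

-- Walking one -> many (order -> order details): one row per detail line.
SELECT o.orderNumber, od.productCode, od.quantityOrdered
FROM orders o
JOIN orderdetails od ON od.orderNumber = o.orderNumber
WHERE o.orderNumber = 10100;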
PS: this is easier to see on the DB diagram than on the ERD in the question, because the database purely concerns itself with the columns that are foreign keyed. The ERD is saying a customer has zero or one payments with a particular check number, but the database is only concerned with "the customer ID appears once in the customer table and multiple times in the payment table", because only part of the compound primary key of payment is keyed to the customer table. In other words, the ERD is concerned with business-logic relations too, whereas the DB diagram is purely about how tables relate, and the two aren't necessarily aligned. For this reason the DB diagrams are probably easier to read when walking around them for join strategies.

After seeing the answers from Caius Jard and O.Jones (please check their replies), which kindly helped me clarify this question, I decided to write a query that identifies which customers paid for all the orders they made and which ones did not. This gives a pertinent reason to join the 'orders', 'orderdetails', 'payments' and 'customers' tables, because some orders may have been cancelled or may still be 'On Hold', as we can see in their corresponding 'status' in the 'orders' table. It also lets us execute this join without generating a 'combinatorial explosion'.
I did this by using a CASE statement, which registers when py.amount and amount_in_orders match, don't match, or are both NULL (customers who did not make any orders or payments):
SELECT
    c.customerNumber,
    py.amount,
    amount_in_orders,
    CASE
        WHEN py.amount = amount_in_orders THEN 'Match'
        WHEN py.amount IS NULL AND amount_in_orders IS NULL THEN 'NULL'
        ELSE 'Don''t Match'
    END AS Match
FROM
    customers c
    LEFT JOIN (
        SELECT
            o.customerNumber,
            SUM(od.quantityOrdered * od.priceEach) AS amount_in_orders
        FROM
            orders o
            JOIN orderdetails od ON o.orderNumber = od.orderNumber
        GROUP BY o.customerNumber
    ) o ON c.customerNumber = o.customerNumber
    LEFT JOIN (
        SELECT customerNumber, SUM(amount) AS amount
        FROM payments
        GROUP BY customerNumber
    ) py ON c.customerNumber = py.customerNumber
ORDER BY py.amount DESC;
The query returned 122 rows. The images below are fractions of the generated output, so you can visualize what happened:
For instance, we can see that the customers identified by the numbers '141', '124', '119' and '496' did not pay for all the orders they made. Maybe some of those orders were cancelled, or maybe they simply have not paid for them yet.
And this image shows some of the columns (not all of them) that are NULL:

Related

Data design best practices for customer data

I am trying to store customer attributes in a MySQL database although it could be any type of database. I have a customer table and then I have a number of attribute tables (status, product, address, etc.)
The business requirements are to be able to A) look back at a point in time to see whether a customer was active, or what address they had, on any given date, and B) let a customer service rep enter things like future vacation holds. A customer might call today and tell the rep they will be on vacation next week.
I currently have different tables for each customer attribute. For instance, the customer status table has records like this:
CustomerID | Status   | dEffectiveStart | dEffectiveEnd
1          | Active   | 2022-01-01      | 2022-05-01
1          | Vacation | 2022-05-02      | 2022-05-04
1          | Active   | 2022-05-05      | 2099-01-01
When I join these tables the sql typically looks like this:
SELECT *
FROM customers c
JOIN customerStatus cs
on cs.CustomerID = c.CustomerID
and curdate() between cs.dEffectiveStart and cs.dEffectiveEnd
While this setup does work as designed, it is slow. The query joins themselves aren't too bad, but when I try to throw an ORDER BY on, it's done. The typical client query would pull 5-20k records. There are 5-6 other tables similar to the one above that I join to a customer.
Do you have any suggestions for a better approach?
That ON clause is very hard to optimize. So, let me try to 'avoid' it.
If you are always (or usually) testing CURDATE(), then I recommend this schema design pattern. I call it History + Current.
The History table contains many rows per customer.
The Current table contains only "current" info about each customer -- one row per customer. Your SELECT would need only this table.
Your design is "proper" because the current status is not redundantly stored in two places. My design requires changing the status in both tables when it changes. This is a small extra cost when changing the "status", for a big gain in SELECT.
More
The Optimizer will probably transform that query into
SELECT *
FROM customerStatus cs
JOIN customers c
ON cs.CustomerID = c.CustomerID
WHERE curdate() >= cs.dEffectiveStart
AND curdate() <= cs.dEffectiveEnd
(Use EXPLAIN SELECT ...; SHOW WARNINGS; to find out exactly.)
In a plain JOIN, the Optimizer likes to start with the table that is most filtered. I moved the "filtering" to the WHERE clause so we could see it; I left the "relation" in the ON.
curdate() >= cs.dEffectiveStart might use an index on dEffectiveStart. Or it might use an index to help the other part.
The Optimizer would probably notice that "too much" of the table would need to be scanned with either index, and eschew both indexes and simply do a table scan.
Then it will quickly and efficiently JOIN to the other table.
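For example (a hedged illustration; the exact plan varies by MySQL version, and the Note returned by SHOW WARNINGS contains the statement as the optimizer rewrote it):
EXPLAIN
SELECT *
FROM customerStatus cs
JOIN customers c ON cs.CustomerID = c.CustomerID
WHERE CURDATE() >= cs.dEffectiveStart
  AND CURDATE() <= cs.dEffectiveEnd;

SHOW WARNINGS;   -- shows the optimizer's rewritten form of the query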

Have to enter mySQL criteria twice?

Say I have two tables:
Table: customers
Fields: customer_id, first_name, last_name
Table: customer_cars
Fields: car_id, customer_id, car_brand, car_active
Say I am trying to write a query that shows all customers with a first name of "Karl," and the brands of the **active** cars they have. Not all customers will have an active car. Some cars are active, some are inactive.
Please keep in mind that this is a representative example that I just made up, for sake of clarity and simplicity. Please don't reply with questions about why we would do it this way, that I could use table aliases, how it's possible to have an inactive car, or that my field names could be better written. It's a fake example that is intended be very simple in order to illustrate the point. It has a structure and issue that I encounter all the time.
It seems like this would be best done with a LEFT JOIN and subquery.
SELECT
    customers.customer_id,
    first_name,
    last_name,
    car_brand
FROM
    customers
LEFT JOIN
    (SELECT
         customer_cars.customer_id,
         car_brand
     FROM
         customer_cars
     INNER JOIN customers ON customer_cars.customer_id = customers.customer_id
     WHERE
         first_name = 'Karl' AND
         customer_cars.car_active = '1') car_query
    ON customers.customer_id = car_query.customer_id
WHERE
    first_name = 'Karl'
The results might look like this:
first_name last_name car_brand
Karl Johnson Dodge
Karl Johnson Jeep
Karl Smith NULL
Karl Davis Chrysler
Notice the duplication of 'Karl' in both WHERE clauses, and the INNER JOIN in the subquery that is the same table in the outer query. My understanding of mySQL is that this duplication is necessary because it processes the subquery first before processing the outer query. Therefore, the subquery must be properly limited so it doesn't scan all records, then it tries to match on the resulting records.
I am aware that removing the car_active = '1' condition would change things, but this is a requirement.
I am wondering if a query like this can be done in a different way that only causes the criteria and joins to be entered once. Is there a recommended way to prioritize the outer query first, then match to the inner one?
I am aware that two different queries could be written (find all records with Karl, then do another that finds matching cars). However, this would cause multiple connections to the database (one for every record returned) and would be very taxing and inefficient.
I am also aware of correlated subqueries, but from my understanding and experience, they are for returning one field per customer (e.g., an aggregate such as how much money Karl spent) within the field set. I am looking for a similar approach, but where one customer could be matched to multiple other records, as in the sample output above.
In your response, if you have a recommended query structure that solves this problem, it would be really helpful if you could write a clear example instead of just describing it. I really appreciate your time!
First, is a simple and straight query not enough?
Say I am trying to write a query that shows all customers with a first
name of "Karl," and the brands of the ** active ** cars they have. Not
all customers will have an active car. Some cars are active, some are
inactive.
Following this requirement, I can just do something like:
SELECT C.first_name
     , C.last_name
     , CC.car_brand
FROM customers C
LEFT JOIN customer_cars CC ON CC.customer_id = C.customer_id
                          AND CC.car_active = 1
WHERE C.first_name = 'Karl'
Take a look at the SQL Fiddle sample.

Averaging a one-to-one field while summing a one-to-many in MySQL

I put together this example to help
http://sqlfiddle.com/#!9/51db24
The idea is I have 3 tables. One is the "root" table, in this case person, which has a score (cs) attached to it. Each person then has a category I need to group by, person_cat.cat, and a one-to-many field, person_co.co.
I would like to query for the average of the score and the sum of the one-to-many field person_co.co, grouped by the category.
SELECT
person_cat.cat,
person.id,
SUM(person_co.co),
AVG(person.cs)
FROM
person
LEFT JOIN person_co USING (id)
LEFT JOIN person_cat USING (id)
GROUP BY cat;
The issue I'm currently having is the average gets thrown off due to the join for the one to many. I can accomplish this with multiple queries, which is ok if that is the answer. However it would be nice to get this as one query.
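For what it's worth, here is a sketch of the pre-aggregation trick from the first answer above applied to this case, assuming the schema in the linked fiddle (person(id, cs), person_co(id, co), person_cat(id, cat)) and that person_cat holds one category per person:
SELECT pc.cat,
       AVG(p.cs)      AS avg_score,    -- no longer inflated by the one-to-many join
       SUM(co.co_sum) AS total_co
FROM person p
LEFT JOIN (
    SELECT id, SUM(co) AS co_sum       -- collapse person_co to one row per person first
    FROM person_co
    GROUP BY id
) co USING (id)
LEFT JOIN person_cat pc USING (id)
GROUP BY pc.cat;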

Looking for the earliest history entry of a product, should I join history/product tables or store the value in the product table?

I have a MySQL database with a table for products and a table with the buying/selling history of these products. The buying and selling history of each product is basically tracked in this history table.
I am looking for the most efficient way of creating a list of these products with the earliest transaction data from the history table joined.
At the moment my SQL query selects the products with the earliest history entry like this:
SELECT p.*
     , h.transdate
     , h.sale_price
FROM products p
LEFT JOIN (
       SELECT MIN(transdate) transdate
            , product_id
       FROM history
       GROUP BY product_id
     ) hist_min ON hist_min.product_id = p.id
LEFT JOIN history h
       ON h.product_id = hist_min.product_id
      AND h.transdate = hist_min.transdate
Since this query is used very frequently and potentially with many products I am considering storing the first sale_price directly in the 'products' table. This way I wouldn't need the 2 additional JOINS at all. But this would mean I store redundant data.
For me the most important question is, which of these possibilities is the most efficient one.
I am not sure if I am allowed to ask this additionally, but if there is an even better way I would like to know about it.
EDIT: To clarify 'efficient': I am talking about tens of thousands of products with maybe 10 history records each, where I only pick 20 per page with a LIMIT statement. Saving the original price with the product would mean pulling the data straight with the record, while scanning the history table for the earliest date and then another scan to join the actual row of data would certainly require more resources, even if only because of the second table involved. The use of a primary key ID or an index over product_id and transdate would certainly speed up the second join, though.
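For reference, a minimal sketch of the composite index mentioned above (the index name is illustrative):
-- Covers both the GROUP BY product_id / MIN(transdate) subquery and the second join.
ALTER TABLE history ADD INDEX idx_product_transdate (product_id, transdate);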
What you're describing is a normalization trade-off (storing the first sale_price redundantly in 'products' would be denormalizing). The right level of normalization is not a black-and-white area, so I don't think this site is the place to get your answer, as it's primarily opinion based.
Check out these links to get started:
Database Normalization Explained in Simple English
Wikipedia (check out the 'See also' section; it describes the levels of normalization)

One-to-many SQL JOIN produces a lot of copies of the same "one", is this a good practice?

I need to make a one-to-many joined SQL query, like select all orders for all customers.
One customer can have many orders.
SELECT
    C.firstName,
    C.lastName,
    O.invoiceID,
    O.description
FROM Customers AS C
LEFT JOIN Orders AS O
    ON C.ID = O.CustomerID
Every returned row will contain a copy of customer's name, with different order data.
I do need to have the customers' names in the result, but I'm not sure if it's OK to have the names unnecessarily copied into every row. Is this a waste of memory? Would it be better to include only the customer ID in the joined select, and then make a separate select to get the customer names by customer IDs? What is the best practice?
Update: I understand the tradeoffs and that it depends. My question is whether there are some internal DBMS or driver optimizations (in the most popular DBMS'es) which prevent unnecessary memory waste in such joined selects, by sharing memory for the repeated fields.
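As a hedged sketch, the "separate select" alternative described in the question would look roughly like this, with the application stitching names to orders by customer ID afterwards; whether it is actually cheaper than the single join depends on the driver, the number of round trips, and the row counts:
-- 1) Fetch each customer's name once.
SELECT ID, firstName, lastName
FROM Customers;

-- 2) Fetch the orders separately, carrying only the customer ID.
SELECT CustomerID, invoiceID, description
FROM Orders;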