MySQL Union (or similar) query

I have some booking data from a pair of views in MySQL. Their columns match exactly; the main difference is a booking code that appears in one of the views.
The context is as follows: this is for calculating numbers for a sports camp. People are booked in, but can do extra activities.
View 1: All specialist bookings (say: a football class).
View 2: A general group.
Because of the old software, the booking process results in many people booking into the general group and then being upgraded to the specialist class. This is further complicated by some things elsewhere in the business.
To be clear: View 1 contains some people who also appear in View 2 (but View 1 is not made up exclusively of them); there's an intersection of the two groups. Obviously people can't be in two groups at once (there's only one of them!).
Finding all people who are in View 2 is of course easy... as is View 1. BUT, I need to produce a report which is basically:
"View 1" overwriting "View 2"... or put another way:
"View 1" [sort of] UNION "View 2"
However, I'm not sure of the best way to do this, as there are added complications.
Each row looks approximately as follows (with other columns omitted):
User ID   Timeslot   Activity
1         A          Football
1         A          General
2         A          General
3         A          Football
As you can see, these rows all concern timeslot A:
- User 2 does general activities.
- User 3 does football.
- User 1 does football AND general.
As these rows are all distinct, the above is exactly what a UNION (DISTINCT) returns; there are no truly duplicate rows for it to remove.
The output I need is as follows:
User ID   Timeslot   Activity
1         A          Football
2         A          General
3         A          Football
Here, Football has taken "precedence" over "general", and thus I get the picture of where people are at any time.
This UNION would need to be distinct on some fields (User ID, Timeslot) while ignoring others (Activity).
So: does anyone know how to do what amounts to:
"add two tables together and overwrite one of them if it's the same timeslot"
Or something like a:
"selective distinct on UNION DISTINCT".
Cheers
Rick

Try this:
SELECT *
FROM (
    SELECT *,
           IF(Activity = 'General', 1, 0) AS order_column
    FROM `Table1`
    ORDER BY order_column
) AS tmp
GROUP BY UserId
This adds an order_column to your original table that has value 1 if the Activity value is 'General'. We then select from this derived table ordered by that column (ascending), so all records with the General activity come after all the others. Finally we select from the result, grouping by user id. In MySQL, a GROUP BY without any aggregate function keeps the first record that matches; note that this relies on non-standard behaviour that is not guaranteed.
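If you'd rather not rely on that (it is rejected outright when MySQL's ONLY_FULL_GROUP_BY mode is enabled), here is a sketch of an anti-join that needs no GROUP BY at all, assuming the same Table1 layout:

-- Keep every non-'General' row; keep a 'General' row only when the same
-- user has no other activity in the same timeslot.
SELECT t.UserId, t.Timeslot, t.Activity
FROM `Table1` AS t
WHERE t.Activity <> 'General'
   OR NOT EXISTS (SELECT 1
                  FROM `Table1` AS x
                  WHERE x.UserId = t.UserId
                    AND x.Timeslot = t.Timeslot
                    AND x.Activity <> 'General');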
EDIT:
If you don't want to use GROUP BY without an aggregate function, this is an 'ugly' alternative:
SELECT UserId,
       Timeslot,
       SUBSTRING(MAX(CASE Activity
                       WHEN "General" THEN "00General"
                       WHEN "Football" THEN "01Football"
                       ELSE Activity
                     END), 3) AS Activity
FROM `Table1`
GROUP BY UserId, Timeslot
LIMIT 0, 30
Here we need to define each possible value for Activity.
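Another deterministic variant, sketched under the same assumed Table1 layout, avoids having to list every activity by sorting the 'General' rows last within each group and taking the first element:

-- Activity = 'General' evaluates to 1 for General rows and 0 otherwise,
-- so every other activity sorts first and SUBSTRING_INDEX picks the winner.
SELECT UserId,
       Timeslot,
       SUBSTRING_INDEX(GROUP_CONCAT(Activity ORDER BY Activity = 'General'),
                       ',', 1) AS Activity
FROM `Table1`
GROUP BY UserId, Timeslot;

GROUP_CONCAT with an ORDER BY is well-defined, so this does not depend on the indeterminate GROUP BY behaviour noted above.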

Related

Database Design for a system that has Facebook like groups

I'm creating a system that has Groups. These can be thought of like Facebook Groups. Users can create new groups. Currently I have the following types of groups:
City Group - Groups based on a certain city. For example "London Buy and Sell Group"
School Group - Groups based on schools. For example "London University Study Group"
Interest Group - Groups that are not tied to a place. For example "Over 50's Knitting Group"
In the future more group types will be added. Each group can have different types of options, but all groups have the same basic data:
An ID
A creator ID
A name
An optional description
I'm struggling on putting together a database design for this. My initial thought was to create different tables for the different groups.
For example have a single table called group. This table has an id, creator id, name, description, member count, timestamps.
Then have other tables to represent the other group types, and link them to group. So I have a city_group table that contains an id, group_id, and city_id, and the same for the other group types.
The only problem I have with this is that interest_group doesn't have any extra data compared with a normal group. But for the purpose of being able to query only Interest Groups, I thought it might make sense to create an interest_group table anyway. It would only have the following columns: id, group_id, timestamps ... which seems a bit wasteful, having a table just for this purpose.
Here's a diagram to make things easier: [diagram not included]
Are there any issues with my solution, or any better ways to solve this design problem?
I've got an idea, which is basically a workaround: have another table, group_type, in which you have id (the PK) and tablename (the full table name of the type).
Then you should have a FK from your group table linking to this group_type table.
id   tablename
--------------------
1    School Group
2    Interest Group
After all this is done, you could build your queries based on the values from this table, as an example:
JOIN (SELECT tablename FROM group_type WHERE id=group.group_type_id) ON ..
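Putting the question's layout and this lookup table together, here is a minimal DDL sketch of the supertype/subtype design being discussed (all names are illustrative, and `group` needs backticks because it is a reserved word in MySQL):

CREATE TABLE group_type (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL  -- e.g. 'City Group', 'Interest Group'
);

CREATE TABLE `group` (
    id            INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    creator_id    INT UNSIGNED NOT NULL,
    group_type_id INT UNSIGNED NOT NULL,
    name          VARCHAR(255) NOT NULL,
    description   TEXT,
    created_at    TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (group_type_id) REFERENCES group_type (id)
);

-- Subtype tables only where there is extra data to store
CREATE TABLE city_group (
    group_id INT UNSIGNED PRIMARY KEY,  -- 1:1 with `group`
    city_id  INT UNSIGNED NOT NULL,
    FOREIGN KEY (group_id) REFERENCES `group` (id)
);

With the type id on the supertype row, interest groups can be selected directly (SELECT * FROM `group` WHERE group_type_id = 2) and the near-empty interest_group table becomes unnecessary.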

Pulling different records from multiple tables as one transaction history list

I am working on an employee management/reward system and need to be able to show a single "transaction history" page that lists, in chronological order, the different events the employee has experienced. (Sort of like how on Facebook you can go to your history/action section and see a chronological list of all the things you have done and that affect you, even though they are unrelated to each other and just have you as a common user.)
I have different tables for the different events; each table has an employee_id key and an "occured" timestamp. Some table examples:
bonuses
customers
raise
complaints
feedback
So whenever an event occurs (i.e. a new customer is assigned to the employee, or the employee gets a complaint or a raise), a new row is added to the appropriate table with the employee ID it affects and a timestamp of when it occurred.
I need a single query to pull all records (up to 50, for example) that involve the employee and return a history view of that employee. The field names are different in each table (i.e. the bonus includes an amount with a note, the customer includes customer info, etc.).
I need the output to be a summary view using column names such as:
event_type = (new customer, bonus, feedback etc)
date
title (a brief worded title of the type of event, specified in sql based on the table its referencing)
description (verbiage about the action: for example, if the event_type is bonus, display the bonus amount here; if it's a complaint, show the first 50 characters of the complaint message or the ID of the user who filed the complaint. All of this would be done in SQL, using conditional logic to build the value of this field based on which table the row comes from, e.g. for the customers table something like description = CONCAT('A customer was assigned to you by ', customers.assigner_id)).
Ideally, all in one query. Is there any way to do this?
Another option I have considered is doing 5-6 different queries, pulling the records each from their own table, then using a MySQL command to mesh/interleave the results from all the queries into one list in chronological order. That would be acceptable too.
You could use a UNION query to merge all the information together and use the ORDER BY clause to order the actions chronologically. Each query must have the same number of fields. Your ORDER BY clause should be last.
The examples below assume you have a field called customer_name in the customers table and bonus_amount in the bonuses table.
It would look something like this:
SELECT 'New Customer' as event_type, date,
'New customer was assigned' as title,
CONCAT('New Customer: ', customer_name, ' was assigned') as description
FROM customers
WHERE employee_id = 1
UNION
SELECT 'Bonus' as event_type, date,
'Received a bonus' as title,
CONCAT('Received a bonus of $', FORMAT(bonus_amount, 2), '.') as description
FROM bonuses
WHERE employee_id = 1
UNION
...
ORDER BY date DESC;
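One possible refinement, sketched with the same assumed columns: rows coming from different tables can never be exact duplicates of each other, so UNION ALL skips the duplicate-elimination pass, and a trailing LIMIT caps the list at the 50 records the question mentions:

SELECT 'New Customer' AS event_type, date,
       'New customer was assigned' AS title,
       CONCAT('New Customer: ', customer_name, ' was assigned') AS description
FROM customers
WHERE employee_id = 1
UNION ALL
SELECT 'Bonus', date,
       'Received a bonus',
       CONCAT('Received a bonus of $', FORMAT(bonus_amount, 2), '.')
FROM bonuses
WHERE employee_id = 1
ORDER BY date DESC
LIMIT 50;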

Find first, second, third, and so forth record per person

I have a 1 to many relationship between people and notes about them. There can be 0 or more notes per person.
I need to bring all the notes together into a single field, and since there are not going to be many people with notes, and I plan to bring in only the first 3 notes per person, I thought I could do this using at most 3 queries to gather all my information.
My problem is in getting the MySQL query together to get the first, second, etc. note per person.
I have a query that lets me know how many notes each person has and I have that in my table. I tried something like
SELECT
f_note, f_person_id
FROM
t_person_table,
t_note_table
WHERE
t_person_table.f_number_of_notes > 0
AND t_person_table.f_person_id = t_note_table.f_person_id
GROUP BY
t_person_table.f_person_id
LIMIT 1 OFFSET 0
I had hoped to run this up to 3 times changing the OFFSET to 1 and then 2 but all I get is just one note coming back, not one note per person.
I hope this is clear, if not read on for an example:
I have 3 people in the table. One person (A) has 0 notes, one (B) with 1 and one (C) with 2.
First I would get the first note for person B and C and insert those into my person table note field.
Then I would get the second note for person C and add that to the note field in the person table.
In the end I would have notes for persons B and C, where the note field for person C would be a concatenation of their 2 notes.
Welcome to SO. The thing you're trying to do, selecting the three most recent items from a table for each person mentioned, is not easy in MySQL. But it is possible. See this question, and my answer to it:
Select number of rows for each group where two column values makes one group
Once you have a query giving you the three rows, you can use GROUP_CONCAT() ... GROUP BY to aggregate the note fields.
You can get one note per person using a nested query like this:
SELECT
    f_person_id,
    (SELECT f_note
     FROM t_note_table
     WHERE t_person_table.f_person_id = t_note_table.f_person_id
     LIMIT 1) AS note
FROM
    t_person_table
WHERE
    t_person_table.f_number_of_notes > 0
Note that tables in SQL are inherently unordered, so you should use some form of ORDER BY in the subquery. Otherwise your results might be arbitrary, and repeated runs asking for different notes might unexpectedly return the same data.
If you only aim for a concatenation of notes in any case, then you can use the GROUP_CONCAT function to combine all notes into a single column.
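A sketch of that aggregation, reusing the hypothetical table and column names from the question:

-- Concatenates every note per person; limiting this to the first three
-- notes would additionally need an ordering column on t_note_table.
SELECT p.f_person_id,
       GROUP_CONCAT(n.f_note SEPARATOR ' | ') AS notes
FROM t_person_table AS p
JOIN t_note_table AS n ON n.f_person_id = p.f_person_id
GROUP BY p.f_person_id;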

Finding and dealing with duplicate users

In a large user database with the following format and sample data, we are trying to identify duplicated people:
id   first_name   last_name   email
---------------------------------------------------
1    chris        baker
2    chris        baker       chris#gmail.com
3    chris        baker       chris#hotmail.com
4    chris        baker       crayzyguy#crazy.com
5    carl         castle      castle#npr.org
6    mike         rotch       fakeuser#sample.com
I am using the following query:
SELECT
GROUP_CONCAT(id) AS "ids",
CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
COUNT(*) AS "duplicate_count"
FROM
users
GROUP BY
name
HAVING
duplicate_count > 1
This works great; I get a list of duplicates with the id numbers of the involved rows.
We would re-assign any associated data tied to a duplicate to the actual person (set user_id = 2 where user_id = 3), then we delete the duplicating user row.
The trouble comes after we make this report the first time, as we clean up the list after manually verifying that they are indeed duplicates -- some ARE NOT duplicates. There are 2 Chris Bakers that are legitimate users.
We don't want to keep seeing Chris Baker in subsequent duplicate reports until the end of time, so I am looking for a way to flag that user id 1 and user id 4 are NOT duplicates of each other for future reports, but they could be duplicated by new users added later.
What I tried
I added a is_not_duplicate field to the user table, but then if a new duplicate "Chris Baker" gets added to the database, it will cause this situation to not show on the duplicate report; the is_not_duplicate improperly excludes one of the accounts. My HAVING statement would not meet the > 1 threshold until there are -two- duplicates of Chris Baker, plus the "real" one marked is_not_duplicate.
Question Summed Up
How can I build exceptions into the above query without looping results or multiple queries?
Sub-queries are fine, but the size of the dataset makes every query count and I'd like the solution to be as performant as possible.
Try to add the is_not_duplicate boolean field and modify your code as follows:
SELECT
GROUP_CONCAT(id) AS "ids",
CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
COUNT(*) AS "duplicate_count",
SUM(is_not_duplicate) AS "real_count"
FROM
users
GROUP BY
name
HAVING
duplicate_count > 1
AND
duplicate_count - real_count > 0
Newly added duplicates will have is_not_duplicate = 0, so the real_count for that name will be less than duplicate_count and the row will be shown.
My brain is too fried to come up with the actual query for this at the moment, but I might be able to give you a nudge in a path that should work :)
What if you added another column (or maybe a table of valid duplicated users instead? Both accomplish the same thing) and ran a subquery that counts up all of the valid duplicates, which you could then compare against the count in your current query. You would exclude any users whose counts match and pull in any with counts that are higher. Hopefully that makes sense; here is a use case:
Chris Baker with ids 1 and 4 are marked as valid_duplicates
There are 4 Chris Bakers in the system
You get a count of valid Chris Bakers
You get a count of all Chris Bakers
valid_count <> total_count, so return Chris Baker
You could probably even modify the query so that it does not list the duplicate ids at all (even when only one id is marked), rather than having to re-check which ones are valid. That would be a little more complicated; without it, at least you ignore Chris Baker until another one enters the system.
I have written up the basic query; dealing with excluding specific ids I will try to roll in tonight. But this at least solves your initial need. If you do not need the more complicated query, do let me know so that I do not waste my time on it :)
SELECT
    GROUP_CONCAT(id) AS "ids",
    CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
    COUNT(*) AS "duplicate_count"
FROM
    users
WHERE NOT EXISTS
(
    SELECT 1
    FROM
    (
        SELECT
            CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
            COUNT(*) AS "valid_duplicate_count"
        FROM
            users
        WHERE
            is_valid_duplicate = 1 -- true
        GROUP BY
            name
        HAVING
            valid_duplicate_count > 1
    ) AS duplicate_users
    WHERE
        -- users has no single name column, so compare the same expression
        duplicate_users.name = CONCAT(UPPER(users.first_name), UPPER(users.last_name))
        -- the outer alias duplicate_count is not visible here, so the
        -- group's total count has to be computed in place
        AND valid_duplicate_count = (SELECT COUNT(*)
                                     FROM users AS u
                                     WHERE CONCAT(UPPER(u.first_name), UPPER(u.last_name))
                                           = duplicate_users.name)
)
GROUP BY
    name
HAVING
    duplicate_count > 1
Below is a query that should do the same as above, but the final list will only print the ids that are not in the valid list. This actually ended up being a lot simpler than I thought, and it is mostly the same as above; the only reason I kept the version above is to preserve both options, and in case I messed this one up, as it gets complicated with many nested queries. If CTEs, or even temp tables, are available to you, it might make the query more expressive to break it up into temp tables :). Hopefully this helps and is what you are looking for.
SELECT GROUP_CONCAT(id) AS "ids",
       CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
       COUNT(*) AS "final_duplicate_count"
       -- this count could actually be 1 due to the nature of the query
FROM users
-- get the list of duplicated user names
WHERE EXISTS
(
    SELECT 1
    FROM users AS total_dup_users
    -- join inner table to outer table (users has no single name column,
    -- so the same CONCAT expression is compared on both sides)
    WHERE CONCAT(UPPER(total_dup_users.first_name), UPPER(total_dup_users.last_name))
          = CONCAT(UPPER(users.first_name), UPPER(users.last_name))
    GROUP BY CONCAT(UPPER(total_dup_users.first_name), UPPER(total_dup_users.last_name))
    HAVING COUNT(*) > 1
       -- ignore names whose valid-duplicate count still matches the total count
       AND COUNT(*) <> (SELECT COUNT(*)
                        FROM users AS valid_users
                        WHERE valid_users.is_valid_duplicate = 1 -- true
                          AND CONCAT(UPPER(valid_users.first_name), UPPER(valid_users.last_name))
                              = CONCAT(UPPER(total_dup_users.first_name), UPPER(total_dup_users.last_name)))
)
-- ignore users that are themselves marked valid when doing the actual counts
AND NOT EXISTS
(
    SELECT 1
    FROM users AS valid
    WHERE valid.id = users.id
      AND valid.is_valid_duplicate = 1 -- true
)
GROUP BY name
Since this is basically a many-to-many relationship I would add a new table not_duplicate with fields user1 and user2.
I would probably add two rows for each not_duplicate relationship such that I have one row for 2 -> 3 and a symmetric row for 3 -> 2 to ease querying, but that may introduce data inconsistencies so make sure you delete both rows at the same time (or have only one row and make the correct query in your script).
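A minimal sketch of that table, including the symmetric-row convention described above (the ids are taken from the question's example data):

CREATE TABLE not_duplicate (
    user1 INT UNSIGNED NOT NULL,
    user2 INT UNSIGNED NOT NULL,
    PRIMARY KEY (user1, user2)
);

-- store both directions so lookups never need an OR
INSERT INTO not_duplicate VALUES (2, 3), (3, 2);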
It seems to me that the is_not_duplicate column is not expressive enough to hold the information you want to store: from what I understand, you want to manually tell your detection that two distinct users are not duplicates of each other. So either you create a column like is_not_duplicate_of = other-user-id, or, if you want to keep open the possibility that one user can be manually declared not a duplicate of more than one other user, you need a separate table with two user-id columns.
The query telling you the non-overridden duplicates probably has to be a bit more complex than the one you suggested; I cannot think of one that works with GROUP BY and HAVING logic. The only thing that comes to my mind is something like
SELECT u1.*
FROM users u1
INNER JOIN users u2
        ON u1.id <> u2.id
       AND u2.first_name = u1.first_name
       AND u2.last_name = u1.last_name
WHERE NOT EXISTS (
    SELECT *
    FROM users_non_dups un
    WHERE (un.id1 = u1.id AND un.id2 = u2.id)
       OR (un.id1 = u2.id AND un.id2 = u1.id)
)
If you were to correct all duplicates each time you run the report, then a very simple solution might be to modify the query:
SELECT
GROUP_CONCAT(id) AS "ids",
MAX(id) AS "max_id",
CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
COUNT(*) AS "duplicate_count"
FROM
users
GROUP BY
name
HAVING
duplicate_count > 1
AND
max_id > MAX_ID_LAST_TIME_DUPLICATE_REPORT_WAS_GENERATED;
I would go ahead and make the "confirmed_unique" column, defaulted to "False".
To avoid the problems you mentioned, I would then select all elements that look like duplicates and have a "False" entry for "confirmed_unique".
I am not sure if this will work, but could you consider the reverse logic of adding an is_duplicate_of column? That way you can mark duplicates by entering in this column the ID of the first record, which will be greater than zero. The records you wish to retain will have a value of 0 in this field. You can set the default (unchecked records) to -1 to keep track of the validation status of each record.
Afterwards you can keep executing SQL that compares new records only with correct records having is_duplicate_of = 0.
If you are OK with a slight change to the format of the report, you could do a self-join like this:
SELECT
CONCAT(u1.id,",", u2.id) AS "ids",
CONCAT(UPPER(u1.first_name), UPPER(u1.last_name)) AS "name"
FROM
users u1, users u2
WHERE
u1.id < u2.id AND
UPPER(u1.first_name) = UPPER(u2.first_name) AND
UPPER(u1.last_name) = UPPER(u2.last_name) AND
CONCAT(u1.id,",", u2.id) NOT IN (SELECT ids from not_dupe)
which reports duplicates as follows:
ids | name
----|--------
1,2 | CHRISBAKER
1,3 | CHRISBAKER
...
And the not_dupe table would have rows like below:
ids
------
1,2
3,4
...
I think it would make sense to create a lookup table storing the ids of the pairs that are not duplicates. Confirmed non-duplicates are thus removed, and the query only has to add a small lookup for duplicates actually found in the lookup table.
For instance, in this example we would have
id_1   id_2
2      4
if crayzyguy#crazy.com and chris#gmail.com are different people.
If I were you, I would add some geolocation tables/fields to my database schema.
The probability that two end users have the same name AND live in the same place is very, very low (except in very big towns), but you can also split geolocation into smaller areas; it's a question of granularity.
Good luck.
I would suggest you create a couple of things:
A Boolean column to flag confirmed users
A String column to save ids
A trigger that checks whether the first name and last name are already there, fills in the flag, and saves in the string column all the ids of which this one is a possible duplicate (a sketch follows below)
And then build a report that looks for rows flagged as duplicates and decodes the string field to find the possible matches.
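A hedged sketch of what such a trigger could look like; the flag and string columns (is_flagged, possible_duplicate_of) and all other names here are assumptions, not the poster's schema:

DELIMITER //
CREATE TRIGGER users_duplicate_check
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
    DECLARE dup_ids TEXT;
    -- collect the ids of existing users with the same name
    SELECT GROUP_CONCAT(id) INTO dup_ids
    FROM users
    WHERE UPPER(first_name) = UPPER(NEW.first_name)
      AND UPPER(last_name) = UPPER(NEW.last_name);
    IF dup_ids IS NOT NULL THEN
        SET NEW.is_flagged = 1;                   -- assumed Boolean flag
        SET NEW.possible_duplicate_of = dup_ids;  -- assumed String column
    END IF;
END//
DELIMITER ;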
I gave Justin Pihony +1 as the 1st to suggest comparing the duplicate count with the not duplicate count, and Hrant Khachatrian +1 for being the 1st to show an efficient way of doing that.
Here is a slightly different method, plus some renaming to make everything a bit more self explanatory, plus some extra columns in the query to make it obvious which records need to be compared as potential duplicates.
I would call the new column "CONFIRMED_UNIQUE" instead of "IS_NOT_DUPLICATE". Like Hrant I would make it Boolean (tinyint(1) with 0=FALSE and 1=TRUE).
The "potential_duplicate_count" is the maximum number of records that would have to be deleted.
select
group_concat(case when not confirmed_unique then id end) as potential_duplicate_ids,
group_concat(case when confirmed_unique then id end) as confirmed_unique_ids,
concat(upper(first_name), upper(last_name)) as name,
sum( case when not confirmed_unique then 1 end ) - (not max(confirmed_unique)) as potential_duplicate_count
from
users
group by
name
having
potential_duplicate_count > 0
I see someone else has been voted down for suggesting merging, but nothing in your problem statement says the data needs to stay in place. The OP followed up with their own solution, which happens to be a pure SQL one, but that doesn't mean every solution needs to be limited to that.
The issue as I understand is around contacts having multiple, similar, but not necessarily identical records in your database, which has cost and reputational implications so you're looking to deduplicate these records.
I would write a batch job that searches for potential duplicates (this can be as complicated or as simple as you like) and then close the two records that it finds are dupes and create a new record.
To enable that you'd need four new columns:
Status, which would be either Open, Merged, Split
RelatedId, which would hold the value of who the record was merged with
ChainId, the new record Id
DateStatusChanged, obvious enough
Open would be the default status
Merged would be when the record is merged (effectively closed and replaced)
Split would be if the merge was reversed
So, as an example, go through all of the records that, for example, have the same name. Merge them in pairs. So if you have three Chris Bakers, records 1, 2 and 3, merge 1 and 2 to make record 4 and then 3 and 4 to make record 5. Your table would end up something like:
ID   NAME          STATUS   RELATEDID   CHAINID   DATESTATUSCHANGED   (other columns omitted)
1    Chris Baker   MERGED   2           4         27-AUG-2012
2    Chris Baker   MERGED   1           4         27-AUG-2012
3    Chris Baker   MERGED   4           5         28-AUG-2012
4    Chris Baker   MERGED   3           5         28-AUG-2012
5    Chris Baker   OPEN
This way you have a full record of what has happened to your data and can reverse any changes by unmerging. If, for example, contacts 1 and 2 weren't the same, you reverse the merge of 3 and 4, then reverse the merge of 1 and 2, and you'd end up with this:
ID   NAME          STATUS   RELATEDID   CHAINID   DATESTATUSCHANGED
1    Chris Baker   SPLIT    2           4         29-AUG-2012
2    Chris Baker   SPLIT    1           4         29-AUG-2012
3    Chris Baker   SPLIT    4           5         29-AUG-2012
4    Chris Baker   CLOSED   3           5         29-AUG-2012
5    Chris Baker   CLOSED                         29-AUG-2012
You could then manually merge, as you'd probably not want your job to automatically remerge split records.
Is there a good reason for not merging duplicate accounts into a single account?
From the comments, it seems the information is used mostly for contact purposes, so merging should be relatively painless and low-risk. Once you merge users they will no longer appear in your duplicate report. Furthermore, your users table will actually shrink, which could help with performance. A sketch of such a merge follows below.
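Here, orders stands in for whichever child tables reference users; the ids are the Chris Baker rows from the question:

-- Re-point child rows from the duplicate (id 3) to the surviving
-- account (id 2), then remove the duplicate row itself
START TRANSACTION;
UPDATE orders SET user_id = 2 WHERE user_id = 3;
DELETE FROM users WHERE id = 3;
COMMIT;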
Add an is_not_duplicate column of datatype bit to your table and, after setting its values, use the query below:
SELECT GROUP_CONCAT(id) AS "ids",
       CONCAT(UPPER(first_name), UPPER(last_name)) AS "name"
FROM users
GROUP BY name
HAVING COUNT(*) > SUM(CAST(is_not_duplicate AS SIGNED))
The query above compares the total number of rows per name with the number of confirmed non-duplicate rows. (AS SIGNED rather than AS INT, which older MySQL versions reject.)
Why don't you make the email column a unique identifier in this case? Then, after you cleanse your records once, you do not allow duplicates from there onwards.
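That would amount to something like the sketch below; it assumes existing duplicates have already been cleaned up, and note that a MySQL unique index still allows multiple NULL emails (such as user 1 above):

ALTER TABLE users ADD UNIQUE KEY uq_users_email (email);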

mysql: get data from two tables

I have two tables, "members" and "users".
I need one query that gets, from these two tables, all people where the condition is name LIKE '%Joy%'.
How do I join the two tables in this situation?
Tables:
users
id   name    age
1    joy     15
2    marko   26

members
id   name    level
1    peter   1
2    joyes   0
3    marko   1
Try with UNION. I added the first column so you can check later where that result came from (and create a link to the user's profile page for example).
(SELECT 'user' AS type, id, name FROM users WHERE name LIKE '%Joy%')
UNION
(SELECT 'member', id, name FROM members WHERE name LIKE '%Joy%')
It appears as though both tables essentially store information about the same kind of thing: people. I don't know what the difference is between a "user" and a "member" in your specific situation, but it sounds as though you might be better off having just one table "people" with a bit column specifying whether the person is a user or a member.
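A sketch of what that merged design could look like, with people as an illustrative table name:

-- One table for everyone; the bit column records the person's role
CREATE TABLE people (
    id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name      VARCHAR(100) NOT NULL,
    is_member TINYINT(1) NOT NULL DEFAULT 0  -- 0 = user, 1 = member
);

-- The original search then needs no UNION at all
SELECT id, name, is_member
FROM people
WHERE name LIKE '%Joy%';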