Optimize complex MySQL selects for reporting - mysql

I'm building an application in which:
a) each librarian can create a campaign
b) actions carried out as part of that campaign (page loads) are tracked in campaign_actions
In order to report on the number of actions made in each campaign, I wrote this SQL query (for MySQL) against the following database structure, with the intention of tracking the number of actions undertaken by a librarian for each campaign:
LIBRARIANS
id | status
CAMPAIGNS
id | librarian_id
CAMPAIGN_ACTIONS
id | campaign_id | name
The problems I am having are:
a) I have to specify the fields I want to count in the correlated subselects
b) The query will be quite expensive as a result
My question is, since there are multiple actions for a campaign, how can I effectively count the number of actions per campaign in a more efficient manner?
Less complex queries amount to returning a result set like so:
librarians.id | librarians.status | campaign_actions.name
1             | 3                 | pageX
1             | 3                 | pageY
1             | 3                 | pageZ
1             | 3                 | pageA
1             | 3                 | pageB
2             | 3                 | pageX
which means I'd have to parse the result set row by row in application code, which is likely to be more expensive.
I appreciate any thoughts you may have on this problem.
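For reference, the simplest shape of that per-campaign count is a single aggregate join over the three tables above (a sketch, assuming exactly the schema listed; column names are taken from it):

-- one row per campaign, with the number of tracked actions
SELECT lib.id       AS librarian_id,
       lib.status,
       c.id         AS campaign_id,
       COUNT(ca.id) AS num_actions
FROM librarians lib
JOIN campaigns c              ON c.librarian_id = lib.id
LEFT JOIN campaign_actions ca ON ca.campaign_id = c.id
GROUP BY lib.id, lib.status, c.id;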

Breaking the task into smaller tasks (views):
-- campaigns per librarian
CREATE VIEW count_librarian_campaigns AS
SELECT lib.id      AS lib_id
     , COUNT(c.id) AS num_campaigns
FROM librarians lib
LEFT JOIN campaigns c
       ON c.librarian_id = lib.id
GROUP BY lib.id;

-- campaign actions per campaign
CREATE VIEW count_campaign_actions AS
SELECT c.id                  AS c_id
     , COUNT(ca.campaign_id) AS num_actions
FROM campaigns c
LEFT JOIN campaign_actions ca
       ON ca.campaign_id = c.id
GROUP BY c.id;
So, you could have queries like this:
SELECT lib.id AS lib_id
, countlibc.num_campaigns
, c.id AS c_id
, countca.num_actions
FROM librarians lib
JOIN count_librarian_campaigns countlibc
ON countlibc.lib_id = lib.id
LEFT JOIN campaigns c
ON c.librarian_id = lib.id
JOIN count_campaign_actions countca
ON countca.c_id = c.id

Does something like this work for your purposes?
SELECT librarians_id, COUNT(librarians_id)
FROM (
    SELECT librarians.id         AS librarians_id,
           librarians.status     AS librarians_status,
           campaign_actions.name AS campaign_actions_name
    FROM campaign_actions
    INNER JOIN campaigns
            ON campaign_actions.campaign_id = campaigns.id
    INNER JOIN librarians
            ON campaigns.librarian_id = librarians.id
    GROUP BY campaign_actions.name, librarians.id, librarians.status
) AS a
GROUP BY librarians_id
Maybe I misunderstood what you're after. The inner query seems like it would return the table you represented above.

Related

Optimization of MySQL query with millions of records

OBJECTIVE: Need query to count all "distinct" leads outside of current company that do not exist in current company. The query needs to account for millions of records between multiple tables (lead_details, domains, company)
EXAMPLE:
company 1 -> domain 1 -> lead 1 lead_details records exists.
company 2 -> domain 2 -> lead 1 lead_details records exists.
company 2 -> domain 2 -> lead 2 lead_details records exists.
company 3 -> domain 3 -> lead 2 lead_details records exists.
company 3 -> domain 3 -> lead 3 lead_details records exists.
RESULT: If I run the query for the data above on company 1, the result should be a count of (2), since lead 2 & lead 3 are unique and do not exist in company 1
domain_id | domain_name | company_id | company_name | lead_id | lead_count
"2"       | "D2"        | "2"        | "C2"         | "2"     | "2"
"3"       | "D3"        | "3"        | "C3"         | "3"     | "1"
Here is my query. Please let me know if anyone has a better suggestion.
SELECT al.*
FROM (
SELECT
d.id AS domain_id,
d.name AS domain_name,
c.id AS company_id,
c.name AS company_name,
ld.lead_id,
count(ld.lead_id) as lead_count
FROM domains d
INNER JOIN company c
ON (c.id = d.company_id AND c.id != 1)
INNER JOIN lead_details ld
ON (ld.domain_id = d.id)
GROUP BY ld.lead_id
) al
LEFT JOIN (
SELECT ld.lead_id FROM domains d
INNER JOIN company c
ON (c.id = d.company_id AND c.id = 1)
INNER JOIN lead_details ld
ON (ld.domain_id = d.id)
) ccl
ON al.lead_id = ccl.lead_id
WHERE ccl.lead_id IS NULL;
I have almost a million rows, so I need to figure out a better solution.
Plan A
The pattern
FROM ( SELECT ... )
JOIN ( SELECT ... ) ON ...
is inefficient, especially in older versions of MySQL. This is because neither of the subqueries has any indexes, so (in older versions) a repeated full table scan is needed of one of the subqueries.
The better method is to try to reformulate as
FROM t1 ...
JOIN t2 ... ON ...
JOIN t3 ... ON ...
LEFT JOIN t4 ... ON ...
LEFT JOIN t5 ... ON ...
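Applied to the query in the question, one possible flattening along those lines is the following sketch (same tables and columns as above; the second derived table becomes a LEFT JOIN anti-join against the base tables):

SELECT d.id AS domain_id,
       d.name AS domain_name,
       c.id AS company_id,
       c.name AS company_name,
       ld.lead_id,
       COUNT(ld.lead_id) AS lead_count
FROM domains d
INNER JOIN company c       ON c.id = d.company_id AND c.id != 1
INNER JOIN lead_details ld ON ld.domain_id = d.id
-- anti-join: keep only leads that have no row under company 1
LEFT JOIN (domains d1 INNER JOIN lead_details ld1 ON ld1.domain_id = d1.id)
       ON d1.company_id = 1 AND ld1.lead_id = ld.lead_id
WHERE ld1.lead_id IS NULL
GROUP BY ld.lead_id;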
Plan B
This is closer to what you have...
CREATE TEMPORARY TABLE ccl
( INDEX(lead_id) )
SELECT ... -- the stuff that is after LEFT JOIN
Then replace that subquery with just ccl. This provides the index that is missing from the original query.
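Concretely, Plan B for this query might look roughly like this (a sketch using the same tables; the temporary table holds the current company's lead ids, indexed):

-- materialize and index the current company's leads once
CREATE TEMPORARY TABLE ccl ( INDEX(lead_id) )
SELECT DISTINCT ld.lead_id
FROM domains d
INNER JOIN company c       ON c.id = d.company_id AND c.id = 1
INNER JOIN lead_details ld ON ld.domain_id = d.id;

-- the original query, with the second derived table replaced by ccl
SELECT al.*
FROM (
    SELECT d.id AS domain_id, d.name AS domain_name,
           c.id AS company_id, c.name AS company_name,
           ld.lead_id, COUNT(ld.lead_id) AS lead_count
    FROM domains d
    INNER JOIN company c       ON c.id = d.company_id AND c.id != 1
    INNER JOIN lead_details ld ON ld.domain_id = d.id
    GROUP BY ld.lead_id
) al
LEFT JOIN ccl ON al.lead_id = ccl.lead_id
WHERE ccl.lead_id IS NULL;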
Plan C
Summary Table. (This may or may not be practical for your query, since you are looking for "distinct" leads that do not exist in the current company.) Every month (or week or whatever) calculate subtotals for the last month and store them into another table. Then the query against this other table will be much faster.
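As a rough illustration only, a monthly summary for this data could look like the sketch below; it assumes lead_details has a creation timestamp (called created_at here, which is not in the schema shown), and the "not in the current company" filter would still be applied at query time:

-- hypothetical summary table: distinct leads per company per month
CREATE TABLE lead_counts_monthly (
    month_start DATE NOT NULL,
    company_id  INT  NOT NULL,
    lead_count  INT  NOT NULL,
    PRIMARY KEY (month_start, company_id)
);

-- run periodically (e.g. monthly) for the previous month;
-- lead_details.created_at is an assumed column, not part of the original schema
INSERT INTO lead_counts_monthly (month_start, company_id, lead_count)
SELECT DATE_FORMAT(ld.created_at, '%Y-%m-01') AS month_start,
       d.company_id,
       COUNT(DISTINCT ld.lead_id)
FROM lead_details ld
INNER JOIN domains d ON d.id = ld.domain_id
WHERE ld.created_at >= DATE_FORMAT(CURDATE() - INTERVAL 1 MONTH, '%Y-%m-01')
  AND ld.created_at <  DATE_FORMAT(CURDATE(), '%Y-%m-01')
GROUP BY month_start, d.company_id;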

SQL Genius need .. Complex MySQL query

I am trying to optimise my php by doing as much work on the MySQL server as possible. I have this sql query which is pulling data out of a leads table, but at the same time joining two tags tables to combine the result. I am looking to add a company which is linked through a relations table.
So the table that holds the relationship between the two is relations_value, which simply contains (example data shown in parentheses):
parenttable (companies) | parentrecordid (10) | childtable (leads) | childrecordid (1)
the companies table has quite a few columns but the only two relevant are;
id (10) | companyname (my company name)
So this query currently grabs everything I need but I want to bring the companyname into the query:
SELECT leads.id,
GROUP_CONCAT(c.tag ORDER BY c.tag) AS tags,
leads.status,
leads.probability
FROM `gs_db_1002`.leads
LEFT JOIN ( SELECT *
FROM tags_module
WHERE tagid IN ( SELECT id
FROM tags
WHERE moduleid = 'leads' ) ) as b
ON leads.id = b.recordid
LEFT JOIN `gs_db_1002`.tags as c
ON b.tagid = c.id
GROUP BY leads.id,
leads.status,
leads.probability
I need to be able to go into the relations_values table and pull parenttable and parentrecordid by selecting childtable = leads and childrecordid = 1 and somehow join these so that I am able to get companyname as a column in the above query...
Is this possible?
I have created a sqlfiddle: sqlfiddle.com/#!2/023fa/2 So I am looking to add companies.companyname as column to the query.
I don't know what your primary keys and foreign keys are that link each table together. If you could give a better understanding of which IDs are linked to each other, it would make this a lot easier. However, I did something that does return the correct result, but since all of the IDs are 1 it could be incorrect.
SELECT
leads.id, GROUP_CONCAT(c.tag ORDER BY c.tag) AS tags,
leads.status, leads.probability, companyname
FROM leads
LEFT JOIN (
SELECT * FROM tags_module WHERE tagid IN (
SELECT id FROM tags WHERE moduleid = 'leads' )
) as b ON leads.id = b.recordid
LEFT JOIN tags as c ON b.tagid = c.id
LEFT JOIN relations_values rv on rv.id = b.recordid
LEFT JOIN companies c1 on c1.createdby = rv.parentrecordid
GROUP BY leads.id,leads.status, leads.probability
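Given the example data in the question (parenttable = 'companies', childtable = 'leads'), the relation joins might instead match on those columns. A sketch, assuming relations_values.childrecordid points at leads.id and parentrecordid at companies.id:

SELECT
    leads.id, GROUP_CONCAT(c.tag ORDER BY c.tag) AS tags,
    leads.status, leads.probability, c1.companyname
FROM leads
LEFT JOIN (
    SELECT * FROM tags_module WHERE tagid IN (
        SELECT id FROM tags WHERE moduleid = 'leads' )
) AS b ON leads.id = b.recordid
LEFT JOIN tags AS c ON b.tagid = c.id
-- assumed relation: child side points at leads, parent side at companies
LEFT JOIN relations_values rv
       ON rv.childtable = 'leads'
      AND rv.childrecordid = leads.id
      AND rv.parenttable = 'companies'
LEFT JOIN companies c1 ON c1.id = rv.parentrecordid
GROUP BY leads.id, leads.status, leads.probability, c1.companyname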

Make HAVING count(*) percentage based - complicated query with percentage calculations

This query suggests friendship based on how many words users have in common. in_common sets this threshold.
I was wondering if it was possible to make this query completely % based.
What I want to do is have user suggested to current user, if 30% of their words match.
current_user: 100 total words
in_common threshold: 30
some_other_user: 10 total words
3 of those 10 match current_user's list.
Since 3 is 30% of 10, this is a match for the current user.
Possible?
SELECT users.name_surname, users.avatar, t1.qty, GROUP_CONCAT(words_en.word) AS in_common, (users.id) AS friend_request_id
FROM (
SELECT c2.user_id, COUNT(*) AS qty
FROM `connections` c1
JOIN `connections` c2
ON c1.user_id <> c2.user_id
AND c1.word_id = c2.word_id
WHERE c1.user_id = :user_id
GROUP BY c2.user_id
HAVING count(*) >= :in_common) as t1
JOIN users
ON t1.user_id = users.id
JOIN connections
ON connections.user_id = t1.user_id
JOIN words_en
ON words_en.id = connections.word_id
WHERE EXISTS(SELECT *
FROM connections
WHERE connections.user_id = :user_id
AND connections.word_id = words_en.id)
GROUP BY users.id, users.name_surname, users.avatar, t1.qty
ORDER BY t1.qty DESC, users.name_surname ASC
SQL fiddle: http://www.sqlfiddle.com/#!2/c79a6/9
OK, so the issue is that "words in common" is defined as an asymmetric relation. To fix it, let's assume that the in_common percentage threshold is checked against the user with the fewest words.
Try this query (fiddle); it gives you the full list of users with at least 1 word in common and marks the friendship suggestions:
SELECT user1_id, user2_id, user1_wc, user2_wc,
count(*) AS common_wc, count(*) / least(user1_wc, user2_wc) AS common_wc_pct,
CASE WHEN count(*) / least(user1_wc, user2_wc) > 0.7 THEN 1 ELSE 0 END AS friendship_suggestion
FROM (
SELECT u1.user_id AS user1_id, u2.user_id AS user2_id,
u1.word_count AS user1_wc, u2.word_count AS user2_wc,
c1.word_id AS word1_id, c2.word_id AS word2_id
FROM connections c1
JOIN connections c2 ON (c1.user_id < c2.user_id AND c1.word_id = c2.word_id)
JOIN (SELECT user_id, count(*) AS word_count
FROM connections
GROUP BY user_id) u1 ON (c1.user_id = u1.user_id)
JOIN (SELECT user_id, count(*) AS word_count
FROM connections
GROUP BY user_id) u2 ON (c2.user_id = u2.user_id)
) AS shared_words
GROUP BY user1_id, user2_id, user1_wc, user2_wc;
friendship_suggestion is in the SELECT for clarity; you probably need to filter by it, so you may just move the condition into the HAVING clause, as in the sketch below.
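For example, keeping only the suggestions by moving the check into HAVING (a sketch based on the query above, with the same 0.7 threshold):

SELECT user1_id, user2_id, user1_wc, user2_wc,
       count(*) AS common_wc,
       count(*) / least(user1_wc, user2_wc) AS common_wc_pct
FROM (
    SELECT u1.user_id AS user1_id, u2.user_id AS user2_id,
           u1.word_count AS user1_wc, u2.word_count AS user2_wc
    FROM connections c1
    JOIN connections c2 ON (c1.user_id < c2.user_id AND c1.word_id = c2.word_id)
    JOIN (SELECT user_id, count(*) AS word_count
          FROM connections GROUP BY user_id) u1 ON (c1.user_id = u1.user_id)
    JOIN (SELECT user_id, count(*) AS word_count
          FROM connections GROUP BY user_id) u2 ON (c2.user_id = u2.user_id)
) AS shared_words
GROUP BY user1_id, user2_id, user1_wc, user2_wc
-- same condition as the friendship_suggestion flag, now filtering rows
HAVING count(*) / least(user1_wc, user2_wc) > 0.7;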
I throw this option into your querying consideration... The first part of the FROM clause does nothing but get the one user you are considering as the basis for finding all others with common words. The WHERE clause is for that one user (aliased OnePerson).
Then this is added to the FROM clause: since the OnePerson record will always be a single record, we want its total word count available. I didn't actually see how you worked your 100-to-30 threshold if another person only had 10 words to match 3... I actually think it's bloat and unnecessary, as you'll see later in the WHERE of PreQuery.
So, the next table is the connections table (aliased c2), and that has a normal INNER JOIN to the words table for each of the "other" people being considered.
This c2 is then joined again to the connections table (aliased OnesWords) based on the common word id -- AND -- the OnesWords user ID being that of the primary user_id being compared against. This OnesWords alias is joined to the words table so that IF THERE IS a match to the primary person, we can grab that "common word" as part of the group_concat().
So, now we grab the original single person's total words (still not SURE you need it), a count of ALL the words for the other person, and a count (via SUM/CASE WHEN) of all words that ARE IN COMMON with the original person, grouped by the "other" user ID. This gets them all and ends up under the alias "PreQuery".
Now, from that, we can join to the users table to get the name and avatar along with the respective counts and common words, and apply the WHERE clause comparing the total words available per "other" user to those "in common" with the first person's words (see... I didn't think you NEEDED the original query/count as the basis of the percentage consideration).
SELECT
u.name_surname,
u.avatar,
PreQuery.*
from
( SELECT
c2.user_id,
One.TotalWords,
COUNT(*) as OtherUserWords,
GROUP_CONCAT(words_en.word) AS InCommonWords,
SUM( case when OnesWords.word_id IS NULL then 0 else 1 end ) as InCommonWithOne
from
( SELECT c1.user_id,
COUNT(*) AS TotalWords
from
`connections` c1
where
c1.user_id = :PrimaryPersonBasis ) OnePerson
JOIN `connections` c2
LEFT JOIN `connections` OnesWords
ON c2.word_id = OnesWords.word_id
AND OnesWords.user_id = OnePerson.User_ID
LEFT JOIN words_en
ON OnesWords.word_id = words_en.id
where
c2.user_id <> OnePerson.User_ID
group by
c2.user_id ) PreQuery
JOIN users u
ON PreQuery.user_id = u.id
where
PreQuery.OtherUserWords * :nPercentToConsider >= PreQuery.InCommonWithOne
order by
PreQuery.InCommonWithOne DESC,
u.name_surname
Here's a revision WITHOUT the need to prequery the total original words of the first person.
SELECT
u.name_surname,
u.avatar,
PreQuery.*
from
( SELECT
c2.user_id,
COUNT(*) as OtherUserWords,
GROUP_CONCAT(words_en.word) AS InCommonWords,
SUM( case when OnesWords.word_id IS NULL then 0 else 1 end ) as InCommonWithOne
from
`connections` c2
LEFT JOIN `connections` OnesWords
ON c2.word_id = OnesWords.word_id
AND OnesWords.user_id = :PrimaryPersonBasis
LEFT JOIN words_en
ON OnesWords.word_id = words_en.id
where
c2.user_id <> :PrimaryPersonBasis
group by
c2.user_id
having
COUNT(*) * :nPercentToConsider >=
SUM( case when OnesWords.word_id IS NULL then 0 else 1 end ) ) PreQuery
JOIN users u
ON PreQuery.user_id = u.id
order by
PreQuery.InCommonWithOne DESC,
u.name_surname
There might be some tweaking needed on the query, but your original query leads me to believe you can easily fix simple things like alias or field-name typos.
Another option might be to prequery ALL users and how many respective words they have UP FRONT, then use the primary person's words to compare against everyone else explicitly ON those common words. This might be more efficient, as the multiple joins would operate on a smaller result set. What if you have 10,000 users, user A has 30 words, and only 500 other users have one or more of those words in common? Why compare against all 10,000? Having an up-front summary of each user and how many words they have should be an almost instant query basis.
SELECT
u.name_surname,
u.avatar,
PreQuery.*
from
( SELECT
OtherUser.User_ID,
AllUsers.EachUserWords,
COUNT(*) as CommonWordsCount,
group_concat( words_en.word ) as InCommonWords
from
`connections` OneUser
JOIN words_en
ON OneUser.word_id = words_en.id
JOIN `connections` OtherUser
ON OneUser.word_id = OtherUser.word_id
AND OneUser.user_id <> OtherUser.user_id
JOIN ( SELECT
c1.user_id,
COUNT(*) as EachUserWords
from
`connections` c1
group by
c1.user_id ) AllUsers
ON OtherUser.user_id = AllUsers.User_ID
where
OneUser.user_id = :nPrimaryUserToConsider
group by
OtherUser.User_id,
AllUsers.EachUserWords ) as PreQuery
JOIN users u
ON PreQuery.User_ID = u.id
where
PreQuery.EachUserWords * :nPercentToConsider >= PreQuery.CommonWordsCount
order by
PreQuery.CommonWordsCount DESC,
u.name_surname
May I suggest a different way to look at your problem?
You might look into a similarity metric, such as Cosine Similarity which will give you a much better measure of similarity between your users based on words. To understand it for your case, consider the following example. You have a vector of words A = {house, car, burger, sun} for a user u1 and another vector B = {flat, car, pizza, burger, cloud} for user u2.
Given these individual vectors you first construct another that positions them together so you can map to each user whether he/she has that word in its vector or not. Like so:
|   | house | car | burger | sun | flat | pizza | cloud |
|---|-------|-----|--------|-----|------|-------|-------|
| A |   1   |  1  |   1    |  1  |  0   |   0   |   0   |
| B |   0   |  1  |   1    |  0  |  1   |   1   |   1   |
Now you have a vector for each user where each position corresponds to the value of each word to each user. Here it represents a simple count but you can improve it using different metrics based on word frequency if that applies to your case. Take a look at the most common one, called tf-idf.
Having these two vectors, you can compute the cosine similarity between them as:
cos(A, B) = (A · B) / (|A| × |B|)
which basically is the sum of the products of the corresponding positions of the vectors above, divided by the product of their magnitudes. In our example, A · B = 2 (car and burger), |A| = √4 = 2 and |B| = √5 ≈ 2.24, giving roughly 0.45, in a range between 0 and 1 where the higher the value, the more similar the two vectors are.
If you choose to go this way, you don't need to do this calculation in the database. You compute the similarity in your code and just save the result in the database. There are several libraries that can do that for you. In Python, take a look at the numpy library. In Java, look at Weka and/or Apache Lucene.

MySQL selecting rows with a max id and matching other conditions

Using the tables below as an example and the listed query as a base query, I want to add a way to select only rows with a max id! Without having to do a second query!
TABLE VEHICLES
id vehicleName
----- --------
1 cool car
2 cool car
3 cool bus
4 cool bus
5 cool bus
6 car
7 truck
8 motorcycle
9 scooter
10 scooter
11 bus
TABLE VEHICLE NAMES
nameId vehicleName
------ -------
1 cool car
2 cool bus
3 car
4 truck
5 motorcycle
6 scooter
7 bus
TABLE VEHICLE ATTRIBUTES
nameId attribute
------ ---------
1 FAST
1 SMALL
1 SHINY
2 BIG
2 SLOW
3 EXPENSIVE
4 SHINY
5 FAST
5 SMALL
6 SHINY
6 SMALL
7 SMALL
And the base query:
select a.*
from vehicle a
join vehicle_names b using(vehicleName)
join vehicle_attribs c using(nameId)
where c.attribute in('SMALL', 'SHINY')
and a.vehicleName like '%coo%'
group by a.id
having count(distinct c.attribute) = 2;
So what I want to achieve is to select rows with certain attributes that match a name, but only one entry for each matching name: the one where the id is the highest!
So a working solution in this example would return the below rows:
id vehicleName
----- --------
2 cool car
10 scooter
if it were using some sort of max on the id.
At the moment I get all the entries for cool car and scooter.
My real-world database follows a similar structure and has tens of thousands of entries in it, so a query like the above could easily return 3000+ results. I limit the results to 100 rows to keep execution time low, as the results are used in a search on my site. The reason I have repeats of "vehicles" with the same name but a different ID is that new models are constantly added, but I keep the older ones around for those that want to dig them up! But on a search by car name I don't want to return the older cars, just the newest one, which is the one with the highest ID!
The correct answer would adapt the query I provided above that I'm currently using and have it only return rows where the name matches but has the highest id!
If this isn't possible, suggestions on how I can achieve what I want without massively increasing the execution time of a search would be appreciated!
If you want to keep your logic, here is what I would do:
select a.*
from vehicle a
left join vehicle a2 on (a.vehicleName = a2.vehicleName and a.id < a2.id)
join vehicle_names b on (a.vehicleName = b.vehicleName)
join vehicle_attribs c using(nameId)
where c.attribute in('SMALL', 'SHINY')
and a.vehicleName like '%coo%'
and a2.id is null
group by a.id
having count(distinct c.attribute) = 2;
Which yield:
+----+-------------+
| id | vehicleName |
+----+-------------+
| 2 | cool car |
| 10 | scooter |
+----+-------------+
2 rows in set (0.00 sec)
As others said, normalization could be done on a few levels:
Keeping your current vehicle_names table as the primary lookup table, I would change:
update vehicle a
inner join vehicle_names b using (vehicleName)
set a.vehicleName = b.nameId;
alter table vehicle change column vehicleName nameId int;
create table attribs (
attribId int auto_increment primary key,
attribute varchar(20),
unique key attribute (attribute)
);
insert into attribs (attribute)
select distinct attribute from vehicle_attribs;
update vehicle_attribs a
inner join attribs b using (attribute)
set a.attribute=b.attribId;
alter table vehicle_attribs change column attribute attribId int;
Which led to the following query:
select a.id, b.vehicleName
from vehicle a
left join vehicle a2 on (a.nameId = a2.nameId and a.id < a2.id)
join vehicle_names b on (a.nameId = b.nameId)
join vehicle_attribs c on (a.nameId=c.nameId)
inner join attribs d using (attribId)
where d.attribute in ('SMALL', 'SHINY')
and b.vehicleName like '%coo%'
and a2.id is null
group by a.id
having count(distinct d.attribute) = 2;
The table does not seem normalized; however, this facilitates doing this:
select max(id), vehicleName
from VEHICLES
group by vehicleName
having count(*)>=2;
I'm not sure I completely understand your model, but the following query satisfies your requirements as they stand. The first sub query finds the latest version of the vehicle. The second query satisfies your "and" condition. Then I just join the queries on vehiclename (which is the key?).
select a.id
,a.vehiclename
from (select a.vehicleName, max(id) as id
from vehicle a
where vehicleName like '%coo%'
group by vehicleName
) as a
join (select b.vehiclename
from vehicle_names b
join vehicle_attribs c using(nameId)
where c.attribute in('SMALL', 'SHINY')
group by b.vehiclename
having count(distinct c.attribute) = 2
) as b on (a.vehicleName = b.vehicleName);
If this "latest vehicle" logic is something you will need to do a lot, a small suggestion would be to create a view (see below) which returns the latest version of each vehicle. Then you could use the view instead of the find-max-query. Note that this is purely for ease-of-use, it offers no performance benefits.
select *
from vehicle a
where id = (select max(b.id)
from vehicle b
where a.vehiclename = b.vehiclename);
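For example (the view name latest_vehicle is just illustrative):

CREATE VIEW latest_vehicle AS
SELECT *
FROM vehicle a
WHERE id = (SELECT MAX(b.id)
            FROM vehicle b
            WHERE a.vehiclename = b.vehiclename);

-- the base query can then read from the view instead of the vehicle table
SELECT a.*
FROM latest_vehicle a
JOIN vehicle_names b USING (vehicleName)
JOIN vehicle_attribs c USING (nameId)
WHERE c.attribute IN ('SMALL', 'SHINY')
  AND a.vehicleName LIKE '%coo%'
GROUP BY a.id
HAVING COUNT(DISTINCT c.attribute) = 2;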
Without going into a proper redesign of your model, you could:
1) Add a column IsLatest that your application could manage.
This is not perfect but will satisfy your question (until the next problem; see the note at the end).
All you need is to issue queries such as these whenever you add a new entry:
UPDATE a
SET IsLatest = 0
WHERE IsLatest = 1
INSERT new a
UPDATE a
SET IsLatest = 1
WHERE nameId = #last_inserted_id
in a transaction or a trigger
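A sketch of that flow wrapped in a transaction, assuming an IsLatest column has been added to the vehicle table (the name values are just an example):

START TRANSACTION;

-- clear the flag on the previous latest entry for this name
UPDATE vehicle
SET IsLatest = 0
WHERE vehicleName = 'cool car'
  AND IsLatest = 1;

-- insert the new model, flagged as the latest
INSERT INTO vehicle (vehicleName, IsLatest)
VALUES ('cool car', 1);

COMMIT;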
2) Alternatively you can find out the max_id before you issue your query
SELECT MAX(nameId)
FROM a
WHERE vehicleName = #name
3) You can do it in a single SQL statement, and provided there are indexes on (vehicleName, nameId) it should actually have decent speed:
select a.*
from vehicle a
join vehicle_names b ON a.vehicleName = b.vehicleName
join vehicle_attribs c ON b.nameId = c.nameId AND c.attribute = 'SMALL'
join vehicle_attribs d ON b.nameId = d.nameId AND d.attribute = 'SHINY'
left join vehicle notmax ON a.vehicleName = notmax.vehicleName AND a.id < notmax.id
where a.vehicleName like '%coo%'
AND notmax.id IS NULL
I have removed your GROUP BY and HAVING and replaced them with another join (assuming that a given attribute can appear only once per nameId).
I have also used one of the ways to find the max per group, which is to left join the table to itself and filter out any row for which there is a record with a bigger id for the same name.
There are other ways; search SO for 'max per group sql'. Also see here, though not complete.

Mysql query in drupal database - groupwise maximum with duplicate data

I'm working on a mysql query in a Drupal database that pulls together users and two different cck content types. I know people ask for help with groupwise maximum queries all the time... I've done my best but I need help.
This is what I have so far:
# the artists
SELECT
users.uid,
users.name AS username,
n1.title AS artist_name
FROM users
LEFT JOIN users_roles ur
ON users.uid=ur.uid
INNER JOIN role r
ON ur.rid=r.rid
AND r.name='artist'
LEFT JOIN node n1
ON n1.uid = users.uid
AND n1.type = 'submission'
WHERE users.status = 1
ORDER BY users.name;
This gives me data that looks like:
uid username artist_name
1 foo Joe the Plumber
2 bar Jane Doe
3 baz The Tooth Fairy
Also, I've got this query:
# artwork
SELECT
n.nid,
n.uid,
a.field_order_value
FROM node n
LEFT JOIN content_type_artwork a
ON n.nid = a.nid
WHERE n.type = 'artwork'
ORDER BY n.uid, a.field_order_value;
Which gives me data like this:
nid uid field_order_value
1 1 1
2 1 3
3 1 2
4 2 NULL
5 3 1
6 3 1
Additional relevant info:
nid is the primary key for an Artwork
every Artist has one or more Artworks
valid data for field_order_value is NULL, 1, 2, 3, or 4
field_order_value is not necessarily unique per Artist - an Artist could have 4 Artworks all with field_order_value = 1.
What I want is the row with the minimum field_order_value from my second query joined with the artist information from the first query. In cases where the field_order_value is not valuable information (either because the Artist has used duplicate values among their Artworks or left that field NULL), I would like the row with the minimum nid from the second query.
The Solution
Using divide and conquer as a strategy and mysql views as a technique, and referencing this article about groupwise maximum queries, I solved my problem.
Create the View
# artists and artworks all in one table
CREATE VIEW artists_artwork AS
SELECT
users.uid,
users.name AS artist,
COALESCE(n1.title, 'Not Yet Entered') AS artist_name,
n2.nid,
a.field_image_fid,
COALESCE(a.field_order_value, 1) AS field_order_value
FROM users
LEFT JOIN users_roles ur
ON users.uid=ur.uid
INNER JOIN role r
ON ur.rid=r.rid
AND r.name='artist'
LEFT JOIN node n1
ON n1.uid = users.uid
AND n1.type = 'submission'
LEFT JOIN node n2
ON n2.uid = users.uid
AND n2.type = 'artwork'
LEFT JOIN content_type_artwork a ON n2.nid = a.nid
WHERE users.status = 1;
Query the View
SELECT
a2.uid,
a2.artist,
a2.artist_name,
a2.nid,
a2.field_image_fid,
a2.field_order_value
FROM (
SELECT
uid,
MIN(field_order_value) AS field_order_value
FROM artists_artwork
GROUP BY uid
) a1
JOIN artists_artwork a2
ON a2.nid = (
SELECT
nid
FROM artists_artwork a
WHERE a.uid = a1.uid
AND a.field_order_value = a1.field_order_value
ORDER BY
uid ASC, field_order_value ASC, nid ASC
LIMIT 1
)
ORDER BY artist;
A simple solution to this can be to create views in your database that can then be joined together. This is especially useful if you often want to see the intermediate data in the same way in some other place. While it is possible to mash together the one huge query, I just take the divide and conquer approach sometimes.