I have a query that should display only one row. However, it somehow displays 4 rows in its result set, even though 1) there are only three rows in the table to begin with, and 2) only one row matches the query criteria.
I am hoping someone might know what I am doing wrong with this MySQL query.
My database table structure is as below:
smsid (int, auto increment), sms_type (text), sms_status (enum 'pending','sent'),
sms_error (text), sms_message (text), sms_mp3file (varchar 50),
sms_sendon (datetime), send_sms_toid (int 5)
My table entries are as follows (in the order of the table columns above):
31 | mp3 | pending | | | helloworld.mp3 | 2013-11-20 16:16:00 | 7
30 | text | sent | | hello test | | 2013-11-18 13:12:00 | 8
29 | voice | sent | | testing 123 | | 2013-11-18 10:05:00 | 18
My query is as below:
SELECT sms_messages.*, sms_recipients.cust_profid, sms_recipients.sms_cellnumber,
       customer_smsnumbers.sms_number
FROM sms_messages, sms_recipients, customer_smsnumbers
WHERE sms_messages.sms_type = 'mp3'
  AND sms_messages.sms_sendon <= '2013-11-21'
  AND sms_messages.sms_status = 'pending'
  AND sms_messages.send_sms_toid = sms_recipients.smsuser_id
In your query, you have missed a join clause for the customer_smsnumbers table. Similar to sms_messages.send_sms_toid = sms_recipients.smsuser_id, you need a join clause that connects customer_smsnumbers either with sms_messages or with sms_recipients.
In the absence of that join clause, every row of customer_smsnumbers is paired with each matching row of the other tables (a Cartesian product), which is why unintended records are included in the result.
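For example (a sketch only; I am assuming here that customer_smsnumbers carries a cust_profid column linking it to sms_recipients, so adjust the join column to your actual schema):

SELECT sms_messages.*, sms_recipients.cust_profid, sms_recipients.sms_cellnumber,
       customer_smsnumbers.sms_number
FROM sms_messages
JOIN sms_recipients
  ON sms_messages.send_sms_toid = sms_recipients.smsuser_id
JOIN customer_smsnumbers
  ON customer_smsnumbers.cust_profid = sms_recipients.cust_profid -- assumed link column
WHERE sms_messages.sms_type = 'mp3'
  AND sms_messages.sms_sendon <= '2013-11-21'
  AND sms_messages.sms_status = 'pending';

With explicit JOIN ... ON syntax, a missing join condition like this is much harder to overlook than in a comma-separated FROM list.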
Related
I am trying to fine-tune a query which runs on an application dashboard.
I have a master table and a few transaction tables. I have to do some calculations on the transaction tables and show that output along with a few columns from the master table.
I tried a join, and that worked, but the query is not fast enough for the application (40 sec for 1k records).
I am trying a subquery instead, but maybe I am making a mistake somewhere.
Sharing dummy details below.
Master table:
| id | name  |
| 1  | Cell1 |
| 2  | Cell2 |
| 3  | Cell3 |
| 4  | Cell4 |
Transaction table 1: Session1
| id | TotalMarks |
| 1 | 21 |
| 1 | 21 |
| 2 | 23 |
| 3 | 24 |
Transaction table 2: Session2
| id | TotalMarks |
| 1 | 22 |
| 2 | 28 |
| 4 | 25 |
| 4 | 29 |
Result I want, like (for id 1, Session1 sums to 21 + 21 = 42 and Session2 to 22):
| id | Name  | ObtainMarksSession1 | totalObtainMarkSession2 |
| 1  | Cell1 | 42                  | 22                      |
I have checked indexes already, but an index won't help anyway, as I am using aggregate functions.
Join query:
SELECT m.id, m.name, SUM(s1.TotalMarks) ObtainMarksSession1, SUM(s2.TotalMarks) ObtainMarksSession2
FROM master m
JOIN session1 s1 ON m.id = s1.id AND s1.id IS NOT NULL
JOIN session2 s2 ON m.id = s2.id AND s2.id IS NOT NULL
GROUP BY m.id, m.name;
Subquery sample:
SELECT id, SUM(TotalMarks) ObtainMarksSession1 FROM session1 WHERE id IS NOT NULL GROUP BY id;
The same way, I got the result from the other table as well, but now I am unable to merge the two outputs. These single queries are very fast on their own.
I need to know how to merge the results and also get the name from the master table in the output. Any other suggestion to make this query fast would be welcome too.
P.S. id is not a primary key in the transaction tables, so there is a possibility of null values.
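One way to merge the two fast aggregates (a sketch, using the dummy table and column names above) is to compute each per-session SUM in its own derived table and then join both results to the master table, so the two sessions can never multiply each other's rows:

SELECT m.id, m.name,
       s1.marks AS ObtainMarksSession1,
       s2.marks AS totalObtainMarkSession2
FROM master m
-- aggregate each session once, then join the small result sets to the master
LEFT JOIN (SELECT id, SUM(TotalMarks) AS marks
           FROM session1 WHERE id IS NOT NULL GROUP BY id) s1 ON s1.id = m.id
LEFT JOIN (SELECT id, SUM(TotalMarks) AS marks
           FROM session2 WHERE id IS NOT NULL GROUP BY id) s2 ON s2.id = m.id;

The LEFT JOINs keep master rows that have no transactions (their sums show as NULL); use plain JOINs if you only want ids present in both sessions.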
Does anyone know how to find ranges that overlap, using MySQL? Essentially, as seen in the table below (just to illustrate the problem, as the actual table contains 1000+ ranges), I am trying to fetch all ranges that overlap inside a table.
Thanks!
RANGES
| count | Begin | End | Comment |
| 1 | 1001 | 1095 | overlaps with ranges 2, 3 |
| 2 | 1005 | 1030 | overlaps with ranges 1, 3 |
| 3 | 1017 | 1020 | overlaps with ranges 1, 2 |
| 4 | 1110 | 1125 | no overlap |
One method is a self join and aggregation:
select r1.count, r1.begin, r1.end,
group_concat(r2.count order by r2.count) as overlaps
from ranges r1 left join
ranges r2
on r1.end >= r2.begin and
r1.begin <= r2.end and
r1.count <> r2.count
group by r1.count, r1.begin, r1.end;
On a table with 1000 rows, this will not be fast, but it should be doable. You may want to validate the logic on a smaller table.
This assumes that count is really a unique identifier for each row.
Note that count and end are poor choices for column names because they are SQL keywords.
I need to return the best 5 scores in each category from a table. So far I have tried the query below, following an example from this site: selecting top n records per group
Query:
select subject_name,
       substring_index(substring_index(group_concat(exams_scores.admission_no order by exams_scores.score desc), ',', value), ',', -1) as names,
       substring_index(substring_index(group_concat(score order by score desc), ',', value), ',', -1) as orderedscore
from exams_scores, students, subjects, tinyint_asc
where tinyint_asc.value >= 1 and tinyint_asc.value <= 5
  and exam_id = 2
  and exams_scores.admission_no = students.admission_no
  and students.form_id = 1
  and exams_scores.subject_code = subjects.subject_code
group by exams_scores.subject_code, value;
I get the top n as I need, but my problem is that the query returns duplicates at random, and I don't know where they are coming from.
As you can see, English and Mathematics have duplicates which should not be there:
+--------------+-------+--------------+
| subject_name | names | orderedscore |
+--------------+-------+--------------+
| English      | 1500  | 100          |
| English      | 1500  | 100          |
| English      | 2491  | 100          |
| English      | 1501  | 99           |
| English      | 1111  | 99           |
| Mathematics  | 1004  | 100          |
| Mathematics  | 1004  | 100          |
| Mathematics  | 2722  | 99           |
| Mathematics  | 2734  | 99           |
| Mathematics  | 2712  | 99           |
+--------------+-------+--------------+
I have checked the table and no duplicates exist. To confirm there are no duplicates in the table:
select * from exams_scores
where exam_id = 2 and subject_code = 121 and admission_no = 1004;
Result:
+------+--------------+---------+--------------+-------+
| id | admission_no | exam_id | subject_code | score |
+------+--------------+---------+--------------+-------+
| 4919 | 1004 | 2 | 121 | 100 |
+------+--------------+---------+--------------+-------+
1 row in set (0.00 sec)
Same result for English.
If I run the query, say, 5 times, I sometimes end up with another field having duplicate values.
Can anyone tell me why my query is behaving this way? I tried adding distinct inside:
group_concat(distinct exams_scores.admission_no)
but that didn't work.
You're grouping by exams_scores.subject_code, value. If you add them to your selected columns (...as orderedscore, exams_scores.subject_code, value from...), you should see that all rows are distinct with respect to these two columns you grouped by, which is the correct semantics of GROUP BY.
Edit, to clarify:
First, the SQL server removes some rows according to your WHERE clause.
Afterwards, it groups the remaining rows according to your GROUP BY clause.
Finally, it selects the columns you specified, either by directly returning a column's value or by performing a GROUP_CONCAT on some of the columns and returning their accumulated value.
If you select columns not included in the GROUP BY clause, the returned results for these columns are arbitrary: the SQL server reduces all rows that are equal with respect to the GROUP BY columns to one single row, and for the remaining columns the result is pretty much undefined (hence the "randomness" you're experiencing), because what should the server choose as the value for such a column? It can only pick one of the reduced rows.
In fact, some SQL servers won't perform such a query at all and return an SQL error, since the result for those columns would be undefined, which is something you don't want to have in general. With these servers (I believe MSSQL is one of them), you more or less can only have columns in your SELECT clause which are part of your GROUP BY clause.
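MySQL itself can be switched to this stricter behaviour: with the ONLY_FULL_GROUP_BY SQL mode enabled (the default since MySQL 5.7.5), such queries are rejected with an error instead of returning arbitrary values:

-- make MySQL reject nonaggregated SELECT columns that are not in the GROUP BY clause
SET SESSION sql_mode = CONCAT(@@sql_mode, ',ONLY_FULL_GROUP_BY');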
Edit 2: This, finally, means that you have to refine your GROUP BY clause to obtain the grouping that you want.
I have two tables.
One is td_job, which has this structure:
| job_id | job_title | job_skill | job_desc |
| 1      | Job 1     | 1,2       |          |
| 2      | Job 2     | 1,3       |          |
The other table is td_skill, which is this one:
| skill_id | skill_title | skill_slug |
| 1        | PHP         | 1-PHP      |
| 2        | JQuery      | 2-JQuery   |
Now, the job_skill column in td_job is actually a list of skill_id values from td_skill. That means job_id 1 has two skills associated with it, skill_id 1 and skill_id 2.
I am writing a query, which is this one:
SELECT * FROM td_job,td_skill
WHERE td_skill.skill_id IN (SELECT td_job.job_skill FROM td_job)
AND td_skill.skill_slug LIKE '%$job_param%'
Now, when $job_param is PHP it returns one row, but when $job_param is JQuery it returns an empty result.
I want to know where the error is.
The error is that you are storing a list of ids in a column rather than in an association/junction table. You should have another table, JobSkills, with one row per job/skill combination.
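A minimal sketch of that design (td_job_skill is a name assumed here for illustration):

-- one row per job/skill combination instead of a comma-separated list
CREATE TABLE td_job_skill (
    job_id   INT NOT NULL,
    skill_id INT NOT NULL,
    PRIMARY KEY (job_id, skill_id)
);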
The second and third problems are that you don't seem to understand how joins work, nor how IN with a subquery works: IN (SELECT td_job.job_skill FROM td_job) compares skill_id against whole strings such as '1,2', which MySQL casts to the number 1, so only the first skill in a list can ever match. In any case, the query that you seem to want is more like:
SELECT *
FROM td_job j join
td_skill s
on find_in_set(s.skill_id, j.job_skill) > 0 and
s.skill_slug LIKE '%$job_param%';
Very bad database design. You should fix that if you can.
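With such a junction table in place, the lookup becomes a plain join (again a sketch, using the assumed td_job_skill table from above):

SELECT j.*
FROM td_job j
JOIN td_job_skill js ON js.job_id = j.job_id -- assumed junction table
JOIN td_skill s ON s.skill_id = js.skill_id
WHERE s.skill_slug LIKE '%$job_param%';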
I have a table with a barcode column with a unique index. The data has been loaded with additional chars (-xx) at the end of each barcode to prevent duplicates, but there will be lots of duplicates once I strip off the suffix. Here is a sample of the data:
itemnumber barcode
17912 2-14
18082 2-1
21870 2-10
29219 2-8
Then I created two temporary tables, marty and manny, both with the itemnumber and the stripped-down barcodes. So, both tables would contain
itemnumber barcode
17912 2
18082 2
21870 2
29219 2
etc
And then I tried to delete all but the first entry with barcode '2' in the marty table (and likewise for every other barcode). I hoped then to update the original table with the correct first entry, and the users could fix up the duplicates themselves in time in the application.
So, this was my query to delete all but the first entry in the marty table for each barcode:
DELETE FROM marty
WHERE itemnumber NOT IN
(SELECT MIN(itemnumber) FROM manny GROUP BY barcode)
There are 130,000 rows in marty and manny. The query took over 24 hours and then didn't finish properly. The connection to the server crashed and the query did not do all the updates.
Is there a better way to approach this that would not use the subquery, which I think is causing the delay? The GROUP BY is probably slowing things down too with so many records.
Thanks
One more variant: this one deletes the duplicates without any temporary tables. It joins marty to itself on barcode and deletes every row whose itemnumber is larger than some other row's with the same barcode, leaving only the row with the smallest itemnumber per barcode:
DELETE m1
FROM marty m1
JOIN marty m2
  ON m1.barcode = m2.barcode
  AND m1.itemnumber > m2.itemnumber;
Here is a two-stage approach that avoids use of NOT IN. It also does not use the temporary table "manny". First, join "marty" to itself to pick out rows for which itemnumber != min(itemnumber). Use UPDATE to set barcode for these rows to NULL. A second pass with DELETE then removes all rows that were flagged in the first phase.
For this example, I split the barcode column of "marty" into two columns; it could be done with the table in its original format with some modification (need to split the column values on the fly).
select * from marty;
+------------+---------+---------+
| itemnumber | barcode | subcode |
+------------+---------+---------+
| 17912 | 2 | 14 |
| 18082 | 2 | 1 |
| 21870 | 2 | 10 |
| 29219 | 2 | 8 |
| 30133 | 3 | 5 |
| 30134 | 3 | 7 |
| 30139 | 3 | 9 |
| 30142 | 3 | 12 |
+------------+---------+---------+
8 rows in set (0.00 sec)
UPDATE
(marty m1
JOIN
(SELECT barcode,
MIN(itemnumber) AS itemnumber
FROM marty
GROUP BY barcode) m2
USING(barcode))
SET m1.barcode = NULL WHERE m1.itemnumber != m2.itemnumber;
select * from marty;
+------------+---------+---------+
| itemnumber | barcode | subcode |
+------------+---------+---------+
| 17912 | 2 | 14 |
| 18082 | NULL | 1 |
| 21870 | NULL | 10 |
| 29219 | NULL | 8 |
| 30133 | 3 | 5 |
| 30134 | NULL | 7 |
| 30139 | NULL | 9 |
| 30142 | NULL | 12 |
+------------+---------+---------+
8 rows in set (0.00 sec)
DELETE FROM marty WHERE barcode IS NULL;
MySQL is notoriously slow when using IN with very large sets. A scripted alternative:
Use a script to construct a long itemnumber = x OR itemnumber = y OR itemnumber = z clause (chunk size ~1000) and INSERT the matched rows (i.e. the ones that would not have been DELETEd by your previous query) into a new table, TRUNCATE the existing one, and load the contents of the new table back into the old one with INSERT INTO marty SELECT * FROM marty_tmp.
You may want to lock the table or run in a transaction for the final TRUNCATE, INSERT.
edit:
Query SELECT MIN(itemnumber) FROM manny GROUP BY barcode from a script and store the results in a desiredItemNumbers array.
Take batches of 1000 desiredItemNumbers and construct this query: INSERT INTO marty_tmp SELECT * FROM marty WHERE itemnumber = desiredItemNumbers[0] OR itemnumber = desiredItemNumbers[1] .... Rerun this query until you've exhausted the desiredItemNumbers array (n.b. the last query will probably have fewer than 1000 desiredItemNumbers).
You now have a table with the results that you would have been left with had you DELETEd the rest, so swap the contents of the marty and marty_tmp tables:
TRUNCATE marty
INSERT INTO marty SELECT * FROM marty_tmp
If you are creating temp tables anyway, how about building your table with an INSERT INTO ... SELECT or CREATE TABLE ... AS ... based on:
SELECT MIN(itemnumber) AS itemnumber, barcode
FROM marty
GROUP BY barcode
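A minimal sketch of that approach (note that CREATE TABLE ... AS does not copy indexes, so the unique index on barcode would have to be re-created on the new table):

CREATE TABLE marty_dedup AS
SELECT MIN(itemnumber) AS itemnumber, barcode
FROM marty
GROUP BY barcode;

-- atomically swap the deduplicated table in, keeping the original as a backup
RENAME TABLE marty TO marty_old, marty_dedup TO marty;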