activerecord ruby row with max value in mysql table - mysql

I need to select from the MySQL table table1 (shown below) one record per distinct 'foreign_row_id' value, namely the one with the maximum datetime. For example, from the table below I should get the rows with id=2 and id=3. After that I have to join the result with the table containing phrase_id's.
In my project I use only Ruby and ActiveRecord without Rails.
+----+---------------------+----------------+--------------+
| id | datetime            | foreign_row_id | other_fields |
+----+---------------------+----------------+--------------+
|  1 | 2013-05-02 17:36:15 |              1 |            1 |
|  2 | 2013-05-02 17:36:53 |              1 |            1 |
|  3 | 2013-05-03 00:00:00 |              2 |            3 |
+----+---------------------+----------------+--------------+
Here is my Ruby code:
@result = Model1.joins(:foreign_row).
  where(:user_id => user_id).
  order(:datetime).
  reverse_order.
  select('table1.*, foreign_row.*').
  maximum(:datetime, :group => :foreign_row_id)
And it gives me only one record, without grouping by id or joining: {"1":"2013-05-02T17:36:53+09:00"}.
What should I change in my code to get all the rows?

I solved this in parts. First I wrote a SQL statement that solves the problem:
SELECT * FROM (SELECT * FROM `models` ORDER BY `datetime` desc) m GROUP BY `foreign_row_id`
And then I built that query with Arel:
model_table = Model1.arel_table
subquery = model_table.project(Arel.sql('*')).order('`datetime` desc').as('m')
query = model_table.project(Arel.sql('*')).from(subquery).group('`foreign_row_id`')
Finally you can run that query:
Model1.find_by_sql query.to_sql
I added some backticks because the fields I tested with were SQL reserved words; I think you can omit them.
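As an alternative, if you would rather not rely on MySQL picking rows out of the ordered subquery, a correlated subquery selects the latest row per foreign_row_id explicitly. This is only a sketch against the question's table1, and it assumes datetime is unique within each foreign_row_id:
SELECT t.*
FROM table1 t
WHERE t.datetime = (
    SELECT MAX(t2.datetime)
    FROM table1 t2
    WHERE t2.foreign_row_id = t.foreign_row_id
);
You can pass this SQL to Model1.find_by_sql just like the Arel version.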

Related

How to select both sum value of all rows and values in some specific rows?

I have a record table and its comment table, like:
| commentId | relatedRecordId | isRead |
|-----------+-----------------+--------|
|         1 |               1 | TRUE   |
|         2 |               1 | FALSE  |
|         3 |               1 | FALSE  |
Now I want to select newCommentCount and allCommentCount as a server response to the browser. Is there any way to select these two fields in one SQL query?
I've tried this:
SELECT `isRead`, count(*) AS cnt FROM comment WHERE relatedRecordId=1 GROUP BY `isRead`
| isRead | cnt |
|--------+-----|
| FALSE  |   2 |
| TRUE   |   1 |
But then I have to map it into a special data structure and sum the cnt fields of the two rows in an upper-layer programming language to get allCommentCount. I want to know if I could get the following format of data by SQL only, in one step:
| newCommentCount | allCommentCount |
|-----------------+-----------------|
|               2 |               3 |
I don't even know how to describe the question, so I couldn't find anything on Google or Stack Overflow (because of my poor English, maybe).
Use conditional aggregation:
SELECT SUM(NOT isRead) AS newCommentCount, COUNT(*) AS allCommentCount
FROM comment
WHERE relatedRecordId = 1;
If I understand correctly, you want to show the count of new comments together with the count of all comments; you can do it like this:
SELECT SUM(CASE WHEN isRead = FALSE THEN 1 ELSE 0 END) AS newComment,
       COUNT(*) AS allComments
FROM comment
WHERE relatedRecordId = 1;
You could also create a stored procedure for it; a sketch follows.
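A minimal sketch of such a stored procedure, assuming the table is named comment as in the question; the procedure and parameter names are only placeholders:
DELIMITER //
CREATE PROCEDURE comment_counts(IN p_record_id INT)
BEGIN
    -- same conditional aggregation as above, parameterised by record id
    SELECT SUM(CASE WHEN isRead = FALSE THEN 1 ELSE 0 END) AS newCommentCount,
           COUNT(*) AS allCommentCount
    FROM comment
    WHERE relatedRecordId = p_record_id;
END //
DELIMITER ;
-- call it with the record id from the question
CALL comment_counts(1);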
To place two result sets horizontally, you can simply use a subquery as an expression in the SELECT clause, as long as the number of rows in the result sets match:
select (select count(*) from c_table where isread=false and relatedRecordId=1 ) as newCommentCount,
count(*) as allCommentCount
from c_table where relatedRecordId=1;

List Last record of each item in mysql

Each item (identified by its Serial) in my table has many records and I need to get the last record of each item, so I run the code below:
SELECT ID,Calendar,Serial,MAX(ID)
FROM store
GROUP BY Serial DESC
This should show one record per item, with all column values taken from the last record of that item, but the result is like this:
+---------+--------------------------+--------+---------+
| ID      | Calendar                 | Serial | MAX(ID) |
+---------+--------------------------+--------+---------+
| 7031053 | 2016-05-14 14:05:14 79.5 | N10088 | 7031056 |
| 7053346 | 2016-05-14 15:17:28 79.8 | N10078 | 7053346 |
| 7051349 | 2016-05-14 15:21:29 86.1 | J20368 | 7051349 |
| 7059144 | 2016-05-14 15:50:27 89.6 | J20367 | 7059144 |
| 7045551 | 2016-05-14 15:15:15 89.2 | J20366 | 7045551 |
| 7056243 | 2016-05-14 15:25:34 85.2 | J20358 | 7056245 |
| 7042652 | 2016-05-14 15:18:33 83.9 | J20160 | 7042652 |
| 7039753 | 2016-05-14 11:48:16 87   | J20158 | 7039753 |
| 7036854 | 2016-05-14 15:18:35 87.5 | J20128 | 7036854 |
| 7033955 | 2016-05-14 15:20:45 83.4 | 9662   | 7033955 |
+---------+--------------------------+--------+---------+
The problem is: why, for example, does the record for Serial N10088 show ID "7031053" while MAX(ID) is "7031056"? The same happens for J20358.
Each row should show the last record of that item, but in my output it does not!
If you want the row with the max value, then you need a join or some other mechanism.
Here is a simple way using a correlated subquery:
select s.*
from store s
where s.id = (
    select max(s2.id)
    from store s2
    where s2.serial = s.serial
);
Your query uses a (mis)feature of MySQL that generates lots of confusion and is not particularly helpful: you have columns in the SELECT that are not in the GROUP BY. What value do these get?
Well, in most databases the answer is simple: the query generates an error, as the ANSI standard specifies. MySQL instead pulls the values for the additional columns from indeterminate matching rows. That is rarely what the writer of the query intends.
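If you want MySQL to reject such queries the way other databases do, one option is the ONLY_FULL_GROUP_BY SQL mode; a minimal session-level sketch (note it replaces any other modes you had set for the session):
SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';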
For performance, add an index on store(serial, id).
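For example (the index name is just a placeholder):
CREATE INDEX idx_store_serial_id ON store (serial, id);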
Try this one:
SELECT MAX(id), tbl.*
FROM store tbl
GROUP BY Serial
You can also try this:
SELECT ID, Calendar, Serial
FROM store s0
WHERE ID = (
    SELECT MAX(id)
    FROM store s1
    WHERE s1.serial = s0.serial
);

Returns distinct record in a joins query - Rails 4

I'm trying to get and display an order list including the current status.
@orders = Order.joins(order_status_details: :order_status)
               .order('id DESC, order_status_details.created_at DESC')
               .select("orders.id, order_status_details.status_id, order_statuses.name, order_status_details.created_at")
It works well, but it returns all the rows, with order ids duplicated, like this:
+----+-----------+----------------------+---------------------+
| id | status_id | name                 | created_at          |
+----+-----------+----------------------+---------------------+
|  8 |         1 | Pending              | 2016-01-31 16:33:30 |
|  7 |         3 | Shipped              | 2016-02-01 05:01:21 |
|  7 |         2 | Pending for shipping | 2016-01-31 05:01:21 |
|  7 |         1 | Pending              | 2016-01-31 04:01:21 |
+----+-----------+----------------------+---------------------+
The correct result must contain unique ids; for the example above it should be only the first and second rows.
I already tried DISTINCT in the select, .distinct, .uniq and .group, but I get an error.
Thanks.
First of all, I believe your model is "an Order has many OrderStatusDetails", so that is why you have several different names in your result.
So you can modify the query like this:
@orders = Order.joins(order_status_details: :order_status)
               .order('id DESC, order_status_details.created_at DESC')
               .where('order_status_details.id IN (SELECT MAX(id) FROM order_status_details GROUP BY order_id)')
               .select("orders.id, order_status_details.status_id, order_statuses.name, order_status_details.created_at")
The where condition selects just the latest id of order_status_details per order; I use MAX(id) here as an example, you can modify it as needed.
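For reference, the query above should produce SQL roughly like the following; this is only a sketch that assumes conventional Rails table names and that order_status_details carries order_id and status_id foreign keys:
SELECT orders.id, order_status_details.status_id, order_statuses.name, order_status_details.created_at
FROM orders
INNER JOIN order_status_details ON order_status_details.order_id = orders.id
INNER JOIN order_statuses ON order_statuses.id = order_status_details.status_id
WHERE order_status_details.id IN (SELECT MAX(id) FROM order_status_details GROUP BY order_id)
ORDER BY id DESC, order_status_details.created_at DESC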

MySQL Subselect issue

I have an issue with a mysql subselect.
**token table:**
id | token | articles
 1 | 12345 | 7,6
 2 | 45saf | 6,7,8
**items table:**
id | name                | filename
 6 | Some brilliant name | /test/something_useful.mp3
 7 | homer simpson       | /test/good-voice.mp3
**query:**
SELECT items.`filename`,items.`name` FROM rm_shop items WHERE items.`id` IN ( SELECT token.`articles` FROM rm_token token WHERE token.`token` = 'token')
I only get one of the two files (with the id 7 that is). What am I missing here?
For a column with concatenated data (like your "articles" column), you cannot use MySQL's IN() operator. Instead, use the string function FIND_IN_SET() to query such values. In your case:
SELECT items.`filename`,items.`name` FROM rm_shop items
WHERE FIND_IN_SET(items.`id`,
(SELECT token.`articles` FROM rm_token token WHERE token.`token` = 'token')) > 0
A working sqlfiddle: http://sqlfiddle.com/#!2/796998/3/0
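The reason IN matched only one row is that MySQL compares items.`id` against the single string value '7,6' returned by the subquery, which is cast to the number 7, so only id 7 qualifies. FIND_IN_SET instead returns the 1-based position of the value inside the comma-separated list, or 0 when it is absent, which is why the > 0 check matches both items. A quick illustration with the sample data:
SELECT FIND_IN_SET(7, '7,6');  -- 1 (first element)
SELECT FIND_IN_SET(6, '7,6');  -- 2 (second element)
SELECT FIND_IN_SET(8, '7,6');  -- 0 (not in the list)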

MySQL - COUNT before INSERT in one query

Hey all, I am looking for a way to query my database table only once in order to add an item and also check what the last item count was, so that I can use the next number.
strSQL = "SELECT * FROM productr"
After the code above, I add a few product values to a record like so:
ID | Product    | Price | Description   | Qty | DateSold | gcCode
------------------------------------------------------------------
 5 | The Name 1 |  5.22 | Description 1 |   2 | 09/15/10 | na
 6 | The Name 2 | 15.55 | Description 2 |   1 | 09/15/10 | 05648755
 7 | The Name 3 |  1.10 | Description 3 |   1 | 09/15/10 | na
 8 | The Name 4 |  0.24 | Description 4 |  21 | 09/15/10 | 658140
I need to count how many rows have gcCode <> 'na' so that I can add 1 and use that as the next unique number. Currently I do not know how to do this without opening another query inside this one and doing something like this:
strSQL2 = "SELECT COUNT(gcCode) as gcCount FROM productr WHERE gcCode <> 'na'"
But as I said above, I do not want to have to open another database query just to get a count.
Any help would be great! Thanks! :o)
There's no need to do everything in one query. If you're using InnoDB as a storage engine, you could wrap your COUNT query and your INSERT command in a single transaction to guarantee atomicity.
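A minimal sketch of that transaction, assuming InnoDB; the INSERT values are placeholders for whatever the next product is:
START TRANSACTION;
SELECT COUNT(*) AS gcCount
FROM productr
WHERE gcCode <> 'na';
-- build the next unique gcCode from gcCount in application code, then:
INSERT INTO productr (Product, Price, Description, Qty, DateSold, gcCode)
VALUES ('The Name 5', 9.99, 'Description 5', 1, '09/15/10', 'next-code-here');
COMMIT;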
In addition, you should probably use NULL instead of na for fields with unknown or missing values.
They're two queries; one is a subset of the other which means getting what you want in a single query will be a hack I don't recommend:
SELECT p.*,
       (SELECT COUNT(*)
        FROM productr
        WHERE gcCode != 'na') AS gcCount
FROM productr p
This will return all the rows, as it did previously. But it will include an additional column, repeating the gcCount value for every row returned. It works, but it's redundant data...