Can a row in MySQL reference itself in a sub-query?
test_table
id | field 1 | field 2 | field 3
1 | 25 | 10 | average of field 1 and 2
Is it possible to have column 3 reference the other two columns?
Thanks.
You can set field 3 equal to the average of fields 1 and 2 at any point, but that value will only reflect the values of those columns at that point in time.
If you're looking for an Excel-style field that automatically updates the column when you change a different column, then you'll want to use a trigger that will update the third column when either the first or second is updated.
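A minimal sketch of that trigger approach, assuming the table is named test_table with numeric columns field1, field2, and field3 (these identifiers are assumptions, not from the question):

```sql
-- Keep field3 equal to the average of field1 and field2.
-- One trigger for inserts, one for updates.
CREATE TRIGGER test_table_avg_insert
BEFORE INSERT ON test_table
FOR EACH ROW
SET NEW.field3 = (NEW.field1 + NEW.field2) / 2;

CREATE TRIGGER test_table_avg_update
BEFORE UPDATE ON test_table
FOR EACH ROW
SET NEW.field3 = (NEW.field1 + NEW.field2) / 2;
```

Each trigger is a single statement, so no DELIMITER change is needed.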
If you want the value returned in your SELECT statement instead, use this:
SELECT id, field1, field2, (CAST((field1 + field2) as DECIMAL)/2) AS 'field 3'
FROM tablename
Casting the sum into decimal will prevent losing precision in your average.
For a simple calculation like that, wouldn't you rather compute it in a PHP function instead? Storing it in the database is an unnecessary use of space.
Sorry, I know this is basic: a simple UPDATE using a sum on the same table. I need to get
total_tbl
+--------+--------+--------+-------+
| month1 | month2 | month3 | total |
+--------+--------+--------+-------+
|      3 |      3 |      5 |       |
|      5 |      3 |      5 |       |
|      3 |      4 |        |       |
|      5 |      5 |        |       |
+--------+--------+--------+-------+
I need to update the total column using SUM.
I have this statement so far:
UPDATE total_tbl SET total = (SELECT SUM(month1,month2,month3))
It should update even if one column doesn't have a value. Thanks!
SUM() is used to sum an expression across multiple rows, usually with GROUP BY. If you want to add expressions in the same row, you just use ordinary addition.
Use COALESCE() to provide a default value for null columns.
UPDATE total_tbl
SET total = COALESCE(month1, 0) + COALESCE(month2, 0) + COALESCE(month3, 0)
You shouldn't need to store this derived information. I would recommend a generated column:
alter table total_tbl
add column total int -- or the datatype you need
generated always as (coalesce(month1, 0) + coalesce(month2, 0) + coalesce(month3, 0)) stored
The additional column gives you an always up-to-date view of your data. You can even index it if you like, so it can be queried efficiently.
On the other hand, manually maintaining the values would require updating that column every time a value changes on the row, which can get tedious.
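A short usage sketch (same assumed column names as above): once the generated column exists, its value tracks the source columns with no extra statements:

```sql
INSERT INTO total_tbl (month1, month2, month3) VALUES (3, 3, 5);
-- total is computed automatically: 11
UPDATE total_tbl SET month3 = NULL WHERE month1 = 3;
-- total is recomputed on update: 3 + 3 + 0 = 6
```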
I have a field in a table with values like below (two records).
| field
|--------------------------------------------------------------------------------
| [{"id":"8a688d70-d881-11ea-b999-3b32356f3dce","supplierName":"t1"},{"id":"8a688d70-deeq-3221-cdee-3b32356f3dc1","supplierName":"t2"}]
|--------------------------------------------------------------------------------
| [{"id":"8a688d70-323s-11ea-2123-3b32356f1111","supplierName":"t3"}]
|--------------------------------------------------------------------------------
| ...
When I use this SQL:
select substring(field, 9, 36)
FROM table
I get the records
8a688d70-d881-11ea-b999-3b32356f3dce
8a688d70-323s-11ea-2123-3b32356f1111
Now, is there any way to get all the id values in a record?
Note that one record may have multiple id values.
Ideal result should be like below.
|----------------------------------------------------------------------------|
| 8a688d70-d881-11ea-b999-3b32356f3dce, 8a688d70-deeq-3221-cdee-3b32356f3dc1 |
|----------------------------------------------------------------------------|
| 8a688d70-323s-11ea-2123-3b32356f1111                                       |
|----------------------------------------------------------------------------|
Use MySQL's JSON_EXTRACT().
SELECT JSON_EXTRACT(field, "$[*].id") AS field FROM table
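JSON_EXTRACT() returns the ids of each record as a JSON array. If you want one row per id instead, MySQL 8.0's JSON_TABLE() can unnest the array; a sketch, reusing the table and column names from the question:

```sql
SELECT jt.id
FROM `table`,
     JSON_TABLE(field, '$[*]'
                COLUMNS (id CHAR(36) PATH '$.id')) AS jt;
```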
Suppose we have two 3-bit numbers stored together as '101100', which represents 5 and 4 combined. I want to be able to run aggregate functions like SUM() or AVG() on this column separately for each individual 3-bit field.
For instance:
'101100'
'001001'
sum(first three bits) = 6
sum(last three bits)  = 5
I have already tried the SUBSTRING() function; however, speed is an issue there, as this query will run on millions of rows regularly, and string matching slows it down.
I am also open to any new databases or technologies that may support this functionality.
You can use the function conv() to convert any part of the string to a decimal number:
select
sum(conv(left(number, 3), 2, 10)) firstpart,
sum(conv(right(number, 3), 2, 10)) secondpart
from tablename
Results:
| firstpart | secondpart |
| --------- | ---------- |
| 6 | 5 |
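If you can store the six bits as an integer instead of a string (e.g. b'101100' = 44), the same sums can be computed with shift and mask operations, which avoids string handling entirely. A sketch under that assumption, using the same column and table names as above:

```sql
SELECT SUM((number >> 3) & 7) AS firstpart,   -- high three bits
       SUM(number & 7)        AS secondpart   -- low three bits
FROM tablename;
```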
With the current understanding I have of your schema (which is next to none), the best solution would be to restructure your schema so that each data point is its own record instead of all the data points being in the same record. Doing this allows you to have a dynamic number of data points per entry. Your resulting table would look something like this:
id | data_type | value
ID is used to tie all of your data points together. If you look at your current table, this would be whatever you are using for the primary key. For this answer, I am assuming id INT NOT NULL but yours may have additional columns.
Data Type indicates what type of data is stored in that record. This would be the current table's column name. I will be using data_type_N as my values, but yours should be a more easily understood value (e.g. sensor_5).
Value is exactly what it says it is, the value of the data type for the given id. Your values appear to be all numbers under 8, so you could use a TINYINT type. If you have different storage types (VARCHAR, INT, FLOAT), I would create a separate column per type (val_varchar, val_int, val_float).
The primary key for this table now becomes a composite: PRIMARY KEY (id, data_type). Since your previously single record will become N records, the primary key will need to adjust to accommodate that.
You will also want to ensure that you have indexes that are usable by your queries.
Some sample values (using what you placed in your question) would look like:
1 | data_type_1 | 5
1 | data_type_2 | 4
2 | data_type_1 | 1
2 | data_type_2 | 1
Doing this, summing the values now becomes trivial. You would only need to ensure that data_type_N is summed with data_type_N. As an example, this would be used to sum your example values:
SELECT data_type,
SUM(value)
FROM my_table
WHERE id IN (1,2)
GROUP BY data_type
Is there a way to auto increment the id field of my database based on the values of two other columns in the inserted row?
I'd like to set up my database so that when multiple rows are inserted at the same time, they keep their tracknumber ordering. The ID field should auto increment based firstly on the automatically generated timestamp field, and then secondly the tracknumber contained within that timestamp.
Here's an example of how the database might look:
id | tracknumber | timestamp
________________________________________
1 | 1 | 2014-03-31 11:35:17
2 | 2 | 2014-03-31 11:35:17
3 | 3 | 2014-03-31 11:35:17
4 | 1 | 2014-04-01 09:10:14
5 | 2 | 2014-04-01 09:10:14
I've been reading up on triggers but I'm not sure if they're appropriate here. I feel as though I'm missing an obvious function.
This is a bit long for a comment.
There is no automatic way to do this. You can do it with triggers, if you like. Note the plural: you will need triggers for insert, update, and delete if you want the numbering to remain accurate as the data changes.
You can do this on the query side, if the goal is to enumerate the values. Here is one method using a subquery:
select t.*,
(select count(*) from table t2 where t2.timestamp = t.timestamp and t2.id <= t.id
) as tracknumber
from table t;
The performance of this might even be reasonable with an index on table(timestamp, id).
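On MySQL 8.0 and later, the same enumeration can be written with a window function, which typically performs better than a correlated subquery (a sketch, using the same table as the query above):

```sql
select t.*,
       row_number() over (partition by `timestamp` order by id) as tracknumber
from table t;
```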
If the data is being created once, you can also populate the values using an update query.
If you are inserting them in one transaction and/or script, then sort the data yourself on the server side by these two fields (assuming you create the timestamp on the server side too, which would seem logical) and insert the rows one after another. I don't think it is necessary to overthink this and look for a complicated approach in the database. The database will still insert the rows one after another, not all at once, so there is no way for it to know it needs to do some kind of sorting beforehand. It is you who has to do it.
Is there a way I can store multiple values in a single cell instead of different rows, and search for them?
Can I do:
pId | available
1 | US,UK,CA,SE
2 | US,SE
Instead of:
pId | available
1 | US
1 | UK
1 | CA
1 | SE
Then do:
select pId from table where available = 'US'
You can do that, but it makes the query inefficient. You can look for a substring in the field, but that means that the query can't make use of any index, which is a big performance issue when you have many rows in your table.
This is how you would use it in your case with two-character codes:
select pId from table where find_in_set('US', available)
Keeping the values in separate records makes every operation where you use the values, like filtering and joining, more efficient.
You can use the LIKE operator to get the result:
SELECT pId FROM table WHERE available LIKE '%US%'
Note that a bare pattern can match inside longer values; wrapping the list in delimiters makes the match exact:
SELECT pId FROM table WHERE CONCAT(',', available, ',') LIKE '%,US,%'