I'm looking for something like:
SELECT
    `foo`.*,
    (SELECT MAX(`foo`.`bar`) FROM `foo`)
FROM
    (SELECT * FROM `fuz`) AS `foo`;
but it seems that foo is not recognized in the nested query, as I get an error like
[Err] 1146 - Table 'foo' doesn't exist
I tried the query above because I think it's faster than something like
SELECT
    `fuz`.*,
    (SELECT MAX(`bar`) FROM `fuz`) AS max_bar_from_fuz
FROM `fuz`
Please give me some suggestions.
EDIT: I am looking for solutions with better performance than the second query. Please assume that my table fuz is a very, very big one, so running an additional query to get max_bar costs me a lot.
What you need for the first query (with some modification) to work is called a Common Table Expression, and MySQL does not have that feature (it was added later, in MySQL 8.0).
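For reference, here is a minimal sketch of how the first query behaves on an engine that does support CTEs (SQLite via Python's sqlite3 here; MySQL gained CTEs in 8.0). The table contents and the `name` column are invented for illustration:

```python
import sqlite3

# Hypothetical data, just to show the shape of the query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fuz (bar INTEGER, name TEXT);
    INSERT INTO fuz VALUES (1, 'x'), (2, 'y'), (3, 'z');
""")

# Naming the derived table with WITH lets the scalar subquery see it.
rows = conn.execute("""
    WITH foo AS (SELECT * FROM fuz)
    SELECT foo.*, (SELECT MAX(bar) FROM foo) AS max_bar
    FROM foo
""").fetchall()
print(rows)  # → [(1, 'x', 3), (2, 'y', 3), (3, 'z', 3)]
```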
If your second query does not perform well, you can use this:
SELECT
    fuz.*,
    fuz_grp.max_bar
FROM
    fuz
CROSS JOIN
    ( SELECT MAX(bar) AS max_bar
      FROM fuz
    ) AS fuz_grp
Aliases created in a SELECT clause can only be used to access scalar values; they are not synonyms for tables. If you want to return the max value of a column for all returned rows, you can first run a query that calculates the max value into a user variable, and then use that variable as a scalar value in your query, like:
-- create and populate a table to demonstrate concept
CREATE TABLE fuz (bar INT, col0 VARCHAR(20), col1 VARCHAR(20) );
INSERT INTO fuz(bar, col0, col1) VALUES (1, 'A', 'Airplane');
INSERT INTO fuz(bar, col0, col1) VALUES (2, 'B', 'Boat');
INSERT INTO fuz(bar, col0, col1) VALUES (3, 'C', 'Car');
-- create the scalar variable with the value of MAX(bar)
SELECT @max_foo := MAX(bar) FROM fuz;
-- use the scalar variable into the query
SELECT *, @max_foo AS `MAX_FOO`
FROM fuz;
-- result:
-- | BAR | COL0 | COL1     | MAX_FOO |
-- |-----|------|----------|---------|
-- | 1   | A    | Airplane | 3       |
-- | 2   | B    | Boat     | 3       |
-- | 3   | C    | Car      | 3       |
Just use the MAX function:
SELECT
    `fuz`.*,
    MAX(`fuz`.`bar`)
FROM
    `fuz`
or, if you select from a derived table:
SELECT
    `foo`.*,
    MAX(`foo`.`bar`)
FROM
    (SELECT * FROM `fuz` JOIN `loolse` ON (`fuz`.`field` = `loolse`.`smile`)) AS `foo`;
I'm totally new to SQL. I've never used it, and I just need a simple answer because I don't have time to learn SQL right now :(. I need to remove duplicated records from my local DB. The case looks like this:
| id | type  | ... |
--------------------
| 1  | test  | ... |
| 1  | test2 | ... |
| 1  | test  | ... |
| 1  | test  | ... |
I want to remove all duplicated records which have the same id and type, leaving only one record, like this:
| id | type  | ... |
--------------------
| 1  | test  | ... |
| 1  | test2 | ... |
Deleting by id is impossible. I have 50k records, and I want to remove all duplicated records where id and type are the same.
Please try this
First Way
SELECT id, type
FROM table_name
GROUP BY id, type
Second Way
SELECT DISTINCT id, type
FROM table_name;
A T-SQL sample that might help:
WITH tbl_alias AS
(
    SELECT id, type,
           RN = ROW_NUMBER() OVER (PARTITION BY id, type ORDER BY id)
    FROM tbl
)
DELETE FROM tbl_alias WHERE RN > 1
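The same ROW_NUMBER idea can be sketched on an engine that cannot DELETE from a CTE; here it is in SQLite (3.25+ for window functions) via Python's sqlite3, deleting every row but the first of each (id, type) group by rowid:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl (id INTEGER, type TEXT);
    INSERT INTO tbl VALUES (1, 'test'), (1, 'test2'), (1, 'test'), (1, 'test');
""")

# Number the rows inside each (id, type) group and delete everything
# after the first occurrence.
conn.execute("""
    DELETE FROM tbl WHERE rowid IN (
        SELECT rowid FROM (
            SELECT rowid,
                   ROW_NUMBER() OVER (PARTITION BY id, type ORDER BY rowid) AS rn
            FROM tbl
        )
        WHERE rn > 1
    )
""")
rows = conn.execute("SELECT id, type FROM tbl ORDER BY type").fetchall()
print(rows)  # → [(1, 'test'), (1, 'test2')]
```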
You can also try How to delete duplicates on a MySQL table?
The SELECT DISTINCT statement is used to return only distinct (different) values.
Inside a table, a column often contains many duplicate values, and sometimes you only want to list the different (distinct) values.
SELECT DISTINCT column1, column2, ...
FROM table_name;
In your table
SELECT DISTINCT id, type, ...
FROM table_name;
You just need to use the DISTINCT keyword when selecting, mate. Try it like this:
SELECT DISTINCT id, type, blah blah blah FROM your_table; -- this should take care of them
You should rebuild your table, grouping by id and type and using an aggregate function on the other fields.
You should add the definition of your table to your question and specify the rule to use for the other fields. Anyway, this is a simple solution:
-- Create a temp copy of original table with distinct values
CREATE TEMPORARY TABLE copy_table1
SELECT id, type, MIN(field3) AS field3, ...
FROM table1
GROUP BY id, type;
-- Empty original table
DELETE FROM table1;
-- Insert distinct data into original table
INSERT INTO table1 (id, type, field3, ...)
SELECT id, type, field3, ...
FROM copy_table1;
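A runnable sketch of this temp-table approach, using SQLite through Python's sqlite3 (column names and data invented; MIN stands in for whatever rule you pick for the other fields):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, type TEXT, field3 INTEGER);
    INSERT INTO table1 VALUES (1, 'test', 5), (1, 'test2', 7),
                              (1, 'test', 9), (1, 'test', 2);

    -- Distinct copy: one row per (id, type), smallest field3 kept.
    CREATE TEMPORARY TABLE copy_table1 AS
        SELECT id, type, MIN(field3) AS field3
        FROM table1
        GROUP BY id, type;

    DELETE FROM table1;

    INSERT INTO table1 SELECT id, type, field3 FROM copy_table1;
""")
rows = conn.execute("SELECT * FROM table1 ORDER BY type").fetchall()
print(rows)  # → [(1, 'test', 2), (1, 'test2', 7)]
```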
I have a Postgres table that has content similar to this:
id | data
1 | {"a":"4", "b":"5"}
2 | {"a":"6", "b":"7"}
3 | {"a":"8", "b":"9"}
The first column is an integer and the second is a json column.
I want to be able to expand out the keys and values from the json so the result looks like this:
id | key | value
1 | a | 4
1 | b | 5
2 | a | 6
2 | b | 7
3 | a | 8
3 | b | 9
Can this be achieved in Postgres SQL?
What I've tried
Given that the original table can be simulated as such:
select *
from
(
values
(1, '{"a":"4", "b":"5"}'::json),
(2, '{"a":"6", "b":"7"}'::json),
(3, '{"a":"8", "b":"9"}'::json)
) as q (id, data)
I can get just the keys using:
select id, json_object_keys(data::json)
from
(
values
(1, '{"a":"4", "b":"5"}'::json),
(2, '{"a":"6", "b":"7"}'::json),
(3, '{"a":"8", "b":"9"}'::json)
) as q (id, data)
And I can get them as record sets like this:
select id, json_each(data::json)
from
(
values
(1, '{"a":"4", "b":"5"}'::json),
(2, '{"a":"6", "b":"7"}'::json),
(3, '{"a":"8", "b":"9"}'::json)
) as q (id, data)
But I can't work out how to achieve the result with id, key and value.
Any ideas?
Note: the real json I'm working with is significantly more nested than this, but I think this example represents my underlying problem well.
SELECT q.id, d.key, d.value
FROM q
JOIN json_each_text(q.data) d ON true
ORDER BY 1, 2;
The function json_each_text() is a set-returning function, so you should use it as a row source. The output of the function is joined laterally to the table q, meaning that for each row in the table, each (key, value) pair from the data column is joined only to that row, so the relationship between the original row and the rows formed from the json object is maintained.
The table q can also be a very complicated sub-query (or a VALUES clause, like in your question). In the function, the appropriate column is used from the result of evaluating that sub-query, so you only reference the alias of the sub-query and the (alias of the) column in the sub-query.
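This lateral-join behaviour can also be sketched outside Postgres: SQLite's json1 extension provides a json_each table-valued function with the same key/value columns, usable from Python's sqlite3 (data copied from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE q (id INTEGER, data TEXT);
    INSERT INTO q VALUES
        (1, '{"a":"4", "b":"5"}'),
        (2, '{"a":"6", "b":"7"}'),
        (3, '{"a":"8", "b":"9"}');
""")

# Each row of q is paired with the (key, value) rows of its own data column.
rows = conn.execute("""
    SELECT q.id, d.key, d.value
    FROM q, json_each(q.data) AS d
    ORDER BY q.id, d.key
""").fetchall()
print(rows)
# → [(1, 'a', '4'), (1, 'b', '5'), (2, 'a', '6'),
#    (2, 'b', '7'), (3, 'a', '8'), (3, 'b', '9')]
```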
This will solve it as well:
select you_table.id , js.key, js.value
from you_table, json_each(you_table.data) as js
Another way that I think is very easy to use when you have multiple JSON objects to expand is something like:
SELECT data -> 'key' AS key,
       data -> 'value' AS value
FROM (SELECT hstore(json_each_text(data)) AS data
      FROM "your_table") t;
You can:
select js.key , js.value
from metadata, json_each(metadata.column_metadata) as js
where id='6eec';
I have an SQL query. Is it possible to somehow change this query so it performs better but returns the same result? The query works, but it is very slow, and I have no idea how to improve its performance.
SELECT keyword, query
FROM url_alias ua
JOIN product p
    ON (p.manufacturer_id =
        CONVERT(SUBSTRING_INDEX(ua.query, '=', -1), UNSIGNED INTEGER))
JOIN oc_product_to_storebk ps
    ON (p.product_id = ps.product_id)
    AND ua.query LIKE 'manufacturer_id=%'
    AND ps.store_id = '9'
GROUP BY ua.keyword
Table structure:
URL_ALIAS
+-----------------------------------------------+
| url_alias_id | query | keyword |
+--------------+---------------------+----------+
| 1 | manufacturer_id=100 | test |
+--------------+---------------------+----------+
PRODUCT
+-----------------+------------+
| manufacturer_id | product_id |
+-----------------+------------+
| 100 | 1000 |
+-----------------+------------+
OC_PRODUCT_TO_STOREBK
+------------+----------+
| product_id | store_id |
+------------+----------+
| 1000 | 9 |
+------------+----------+
I want all the keywords from the url_alias keyword column, when the following condition is met: LIKE 'manufacturer_id=%' AND ps.store_id='9'
You should avoid the CONVERT function, as it is expensive and leaves no way for you to profit from indexes on the url_alias table.
Extend your url_alias table so it has additional fields for the parts of the query. You will probably hesitate to go this way, but you will not regret it once you have done it. So your url_alias table should look like this:
create table url_alias (
    url_alias_id int,
    query varchar(200),
    keyword varchar(100),
    query_key varchar(200),
    query_value_str varchar(200),
    query_value_int int
);
If you don't want to recreate it, then add the fields as follows:
alter table url_alias add (
    query_key varchar(200),
    query_value_str varchar(200),
    query_value_int int
);
Update these new columns for the existing records with this statement (only to execute once):
update url_alias
set query_key = substring_index(query, '=', 1),
    query_value_str = substring_index(query, '=', -1),
    query_value_int = nullif(
        convert(substring_index(query, '=', -1), unsigned integer), 0);
Then create a trigger so that these 3 extra fields are updated automatically when you insert a new record:
create trigger ins_sum before insert on url_alias
for each row
set new.query_key = substring_index(new.query, '=', 1),
    new.query_value_str = substring_index(new.query, '=', -1),
    new.query_value_int = nullif(
        convert(substring_index(new.query, '=', -1), unsigned integer), 0);
Note the additional nullif() which will make sure the last field is null when the value after the equal sign is not numerical.
If ever you also update such records, then also create a similar update trigger.
With this set-up, you can still insert records like before:
insert into url_alias (url_alias_id, query, keyword)
values (1, 'manufacturer_id=100', 'test');
When you then select this record, you will see this:
+--------------+---------------------+---------+-----------------+-----------------+-----------------+
| url_alias_id | query | keyword | query_key | query_value_str | query_value_int |
+--------------+---------------------+---------+-----------------+-----------------+-----------------+
| 1 | manufacturer_id=100 | test | manufacturer_id | 100 | 100 |
+--------------+---------------------+---------+-----------------+-----------------+-----------------+
Now the work of extraction and conversion has been done once, and does not have to be repeated any more when you select records. You can rewrite your original query like this:
select ua.keyword, ua.query
from url_alias ua
join product p
    on p.manufacturer_id = ua.query_value_int
join oc_product_to_storebk ps
    on p.product_id = ps.product_id
    and ua.query_key = 'manufacturer_id'
    and ps.store_id = 9
group by ua.keyword, ua.query
And now you can improve the performance by creating indexes on both elements of the query:
create index query_key on url_alias(query_key, query_value_int, keyword);
You might need to experiment a bit to get the order of fields right in the index before it gets used by the SQL plan.
See this SQL fiddle.
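A runnable sketch of the precompute-on-insert idea, translated to SQLite via Python's sqlite3 for illustration (SQLite has no SUBSTRING_INDEX, so substr/instr stand in, and its triggers cannot assign to NEW directly, so an AFTER INSERT trigger backfills the row instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE url_alias (
        url_alias_id INTEGER,
        query        TEXT,
        keyword      TEXT,
        query_key    TEXT,
        query_value_int INTEGER
    );

    -- Backfill the derived columns whenever a row is inserted.
    CREATE TRIGGER ins_split AFTER INSERT ON url_alias
    BEGIN
        UPDATE url_alias
        SET query_key       = substr(NEW.query, 1, instr(NEW.query, '=') - 1),
            query_value_int = CAST(substr(NEW.query, instr(NEW.query, '=') + 1)
                                   AS INTEGER)
        WHERE rowid = NEW.rowid;
    END;
""")

conn.execute("""
    INSERT INTO url_alias (url_alias_id, query, keyword)
    VALUES (1, 'manufacturer_id=100', 'test')
""")
row = conn.execute("SELECT query_key, query_value_int FROM url_alias").fetchone()
print(row)  # → ('manufacturer_id', 100)
```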
I assume that you use indexes on the store_id, product_id and keyword columns?
Focus on changing your data model to avoid the CONVERT and LIKE operators. Both of them prevent the query from utilizing indexes on the relevant columns.
Also, take a good look at the data stored in the ua.query column. Possibly you need to distribute the data in that column across multiple columns so you can use indexes.
Say I have a simple query in MySQL:
SELECT SUM(Column_1)
FROM Table
WHERE Column_2 = 'Test'
If no entries in Column_2 contain the text 'Test' then this function returns NULL, while I would like it to return 0.
I'm aware that a similar question has been asked a few times here, but I haven't been able to adapt the answers to my purposes, so I'd be grateful for some help to get this sorted.
Use COALESCE to avoid that outcome.
SELECT COALESCE(SUM(column),0)
FROM table
WHERE ...
To see it in action, please see this sql fiddle: http://www.sqlfiddle.com/#!2/d1542/3/0
More Information:
Given three tables (one with all numbers, one with all nulls, and one with a mixture):
SQL Fiddle
MySQL 5.5.32 Schema Setup:
CREATE TABLE foo
(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
val INT
);
INSERT INTO foo (val) VALUES
(null),(1),(null),(2),(null),(3),(null),(4),(null),(5),(null),(6),(null);
CREATE TABLE bar
(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
val INT
);
INSERT INTO bar (val) VALUES
(1),(2),(3),(4),(5),(6);
CREATE TABLE baz
(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
val INT
);
INSERT INTO baz (val) VALUES
(null),(null),(null),(null),(null),(null);
Query 1:
SELECT 'foo' as table_name,
       'mixed null/non-null' as description,
       21 as expected_sum,
       COALESCE(SUM(val), 0) as actual_sum
FROM foo
UNION ALL
SELECT 'bar' as table_name,
       'all non-null' as description,
       21 as expected_sum,
       COALESCE(SUM(val), 0) as actual_sum
FROM bar
UNION ALL
SELECT 'baz' as table_name,
       'all null' as description,
       0 as expected_sum,
       COALESCE(SUM(val), 0) as actual_sum
FROM baz
Results:
| TABLE_NAME | DESCRIPTION         | EXPECTED_SUM | ACTUAL_SUM |
|------------|---------------------|--------------|------------|
| foo        | mixed null/non-null | 21           | 21         |
| bar        | all non-null        | 21           | 21         |
| baz        | all null            | 0            | 0          |
Use IFNULL or COALESCE:
SELECT IFNULL(SUM(Column1), 0) AS total FROM...
SELECT COALESCE(SUM(Column1), 0) AS total FROM...
The difference between them is that IFNULL is a MySQL extension that takes two arguments, and COALESCE is a standard SQL function that can take one or more arguments. When you only have two arguments using IFNULL is slightly faster, though here the difference is insignificant since it is only called once.
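Both functions can be seen side by side in a quick sketch (SQLite through Python's sqlite3, which happens to support IFNULL and COALESCE as well; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (Column_1 INTEGER, Column_2 TEXT);
    INSERT INTO t VALUES (5, 'Test'), (7, 'Other');
""")

# No row matches, so a bare SUM yields NULL; both wrappers turn it into 0.
plain    = conn.execute("SELECT SUM(Column_1) FROM t WHERE Column_2 = 'Missing'").fetchone()[0]
ifnull   = conn.execute("SELECT IFNULL(SUM(Column_1), 0) FROM t WHERE Column_2 = 'Missing'").fetchone()[0]
coalesce = conn.execute("SELECT COALESCE(SUM(Column_1), 0) FROM t WHERE Column_2 = 'Missing'").fetchone()[0]
print(plain, ifnull, coalesce)  # → None 0 0
```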
I can't tell exactly what you are asking, but if you are using the aggregate SUM function, that implies you are grouping the table.
For MySQL, the query goes like this:
SELECT IFNULL(SUM(COLUMN1), 0) AS total FROM mytable GROUP BY condition
If the sum of the column is 0, then display an empty string:
select if(sum(column) > 0, sum(column), '')
from table
I would like to run a query on a table whose content looks like this:
id | col1 | col2 | col3
-----------------------
 1 | i_11 | i_12 | i_13
 2 | i_21 | i_22 | i_23
 3 | i_31 | i_32 | i_33
.. | ...  | ...  | ...
SELECT col1 FROM table WHERE id IN
(SELECT id-1, id+1 FROM table WHERE col1='xxx' AND col2='yyy' AND col3='zzz')
The aim is to get an interval [id-1, id+1] based on the id column, returning the content stored in col1 for id-1 and id+1. The subquery works, but I guess I have a problem with the query itself, since I'm getting the error "Operand should contain only one column". I understand the error, but I don't see any other way to do it in one query.
I'm quite sure there's a pretty easy solution, but I can't figure it out for the moment, even after having carefully read other posts about multiple-column subqueries...
Thank you for any help :-)
The only way I can think to do it right now is like this:
SELECT col1
FROM table T
WHERE id BETWEEN (SELECT id FROM table WHERE col1='xxx' AND col2='yyy' AND col3='zzz') - 1
             AND (SELECT id FROM table WHERE col1='xxx' AND col2='yyy' AND col3='zzz') + 1
Your problem is that you are retrieving the two values as two columns of a single row rather than as a set of rows. IN expects a set, so it can't treat 1, 3 as a set of two items when they are presented in a single row. There may also be a cast needed.
This should work.
SELECT col1 FROM table WHERE id IN
(
    SELECT CAST(id AS int) - 1 FROM table WHERE col1='i_21'
    UNION
    SELECT CAST(id AS int) + 1 FROM table WHERE col1='i_21'
)