I have the following query on a student enrolment table, which has student_id, first_name, enrolment_date (DATETIME), and price (FLOAT) as its columns.
My problem is that when I run this query on MySQL, I get a "BLOB" value for the price column in both parts of the query, even when price has a value.
I want to extract the results to a CSV, and I do not want "NULL" appearing in the Excel sheet, hence I have used IFNULL, which seems to be the reason the "BLOB" value appears in the price column.
If I don't use IFNULL on the price column, I get the results with "NULL" in the price column.
If I change IFNULL(price, '') to IFNULL(price, 0) then things also work, but then I am artificially putting a 0 in for price when it is null, which I don't want to do. Any help?
select
student_id AS `student_id`,
first_name AS `first_name`,
enrolment_date AS `enrolment_date`,
IFNULL(price, '') AS `price`,
IFNULL(price * 0.1, '') AS `gst`,
IFNULL(price * 1.1, '') AS `price_ex_gst`
from
student_enrolment
where
student_id = 123 and
month(enrolment_date) = 10
union all
select
student_id AS `student_id`,
count(1) AS `count(1)`,
'Total' AS `Total`,
sum(`price`) AS `sum(price)`,
(sum(`price`) * 0.1) AS `gst`,
(sum(`price`) * 1.1) AS `price_ex_gst`
from
student_enrolment
where
student_id = 123 and
month(enrolment_date) = 10
I think the problem is the data types. Convert the price to a string and then use ifnull():
IFNULL(format(price, 4), '') AS `price`,
IFNULL(format(price * 0.1, 4), '') AS `gst`,
IFNULL(format(price * 1.1, 4), '') AS `price_ex_gst`
As a note: I would use coalesce() instead of ifnull(), because coalesce() is the ANSI standard function for this purpose.
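For instance, a minimal sketch of the same three expressions using coalesce() (same assumed price column as above; note that format() also inserts thousands separators, e.g. 1,234.5000):
COALESCE(FORMAT(price, 4), '') AS `price`,
COALESCE(FORMAT(price * 0.1, 4), '') AS `gst`,
COALESCE(FORMAT(price * 1.1, 4), '') AS `price_ex_gst`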
Here is an example. Suppose I have the following table:
id | list
--------+----------
10 |
| 10,20
20 | 10,20
For each id value, I'd like to count the rows whose list value contains that id value. The result would look like this:
id | count of lists
--------+----------
10 | 2
| 0
20 | 2
I suspect a window function should be used, but it seems that I can't access the id value from within such a function.
I totally agree that it is BAD design. This question is about whether it is possible at all.
Any MySQL/PostgreSQL solution is fine.
Assuming that you are using MySQL, that your table is named test, and that both columns are string types:
SELECT t1.id, COUNT(t2.list)
FROM test t1
LEFT JOIN test t2 ON
       t2.list LIKE CONCAT('%,', t1.id, ',%') -- id somewhere in the middle
    OR t2.list LIKE CONCAT('%,', t1.id)       -- id at the end
    OR t2.list LIKE CONCAT(t1.id, ',%')       -- id at the start
    OR t2.list = t1.id                        -- id is the only value in the list
GROUP BY t1.id;
Please be aware that this solution might be very slow, depending on the number of records you have and on the average length of the strings in the list field.
If you need something faster, I don't think it can be done in a single query. You would probably have to write a stored procedure or some application logic, or use additional tables or columns and a series of SQL statements.
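A shorter MySQL-specific alternative (a sketch, assuming the same test table as above) is FIND_IN_SET(), which does the comma-list matching in a single expression; note it assumes there are no spaces around the commas:
SELECT t1.id, COUNT(t2.list)
FROM test t1
LEFT JOIN test t2 ON FIND_IN_SET(t1.id, t2.list) > 0
GROUP BY t1.id;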
Before I start, as mentioned above, this is a poor design. However, this is how I would query it:
CREATE TEMPORARY TABLE Lists (id int, list varchar(500));
INSERT INTO Lists (id, list) VALUES
(10, NULL), (NULL, '10,20'), (20, '10,20');
WITH RECURSIVE cte AS (
    -- split the first value off the front of each list
    SELECT CASE WHEN list LIKE '%,%' THEN LEFT(list, INSTR(list, ',') - 1) ELSE list END AS value,
           CASE WHEN list LIKE '%,%' THEN SUBSTRING(list, INSTR(list, ',') + 1, 500) END AS list
    FROM Lists
    UNION ALL
    -- keep splitting until no commas remain
    SELECT CASE WHEN list LIKE '%,%' THEN LEFT(list, INSTR(list, ',') - 1) ELSE list END AS value,
           CASE WHEN list LIKE '%,%' THEN SUBSTRING(list, INSTR(list, ',') + 1, 500) END AS list
    FROM cte
    WHERE CHAR_LENGTH(list) > 0
)
SELECT value, COUNT(*) FROM cte GROUP BY value;
DROP TEMPORARY TABLE Lists;
This solution allows for any number of values in the list string (like '10,20,30').
Ideally, the list values should be stored in a separate table so that each record holds a single value:
CREATE TABLE BetterDesign (id int, value int);
INSERT INTO BetterDesign (id, value) VALUES
(10, NULL), (NULL, 10), (NULL, 20), (20, 10), (20, 20);
Along with a million other reasons, this is better for querying:
SELECT value, COUNT(*) FROM BetterDesign GROUP BY value;
That being said, I understand the pains of legacy systems.
I'm trying to insert some values with multiple SELECTs in the query, but it's giving me an "Unknown column 'rate' in 'where clause'" error:
INSERT INTO oc_tax_rule (tax_class_id, tax_rate_id, based, priority)
VALUES (
(SELECT tax_class_id FROM oc_tax_class WHERE title LIKE '%0%'),
(SELECT tax_rate_id FROM oc_tax_rate WHERE rate ='0'),
'store', 1)
You are probably looking for this:
INSERT INTO oc_tax_rule (tax_class_id, tax_rate_id, based, priority)
SELECT
(SELECT tax_class_id FROM oc_tax_class WHERE title LIKE '%0%' LIMIT 1),
(SELECT tax_rate_id FROM oc_tax_rate WHERE rate ='0' LIMIT 1),
'store',
1
The outer SELECT returns just one row, whose first and second columns are the results of your two subqueries. You probably need the LIMIT 1 on each subquery to make sure it returns no more than one row.
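If you prefer, roughly the same thing can be written as a join instead of two scalar subqueries (a sketch assuming the same tables and filters; the LIMIT 1 is still needed in case either filter matches several rows):
INSERT INTO oc_tax_rule (tax_class_id, tax_rate_id, based, priority)
SELECT c.tax_class_id, r.tax_rate_id, 'store', 1
FROM oc_tax_class c
JOIN oc_tax_rate r ON r.rate = '0'
WHERE c.title LIKE '%0%'
LIMIT 1;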
I am trying to select a random set of rows whose total for one column matches a specified parameter.
Say I have a table T:
T( id, data1, data2, no);
Here no is a column containing an arbitrary bunch of numbers.
I want to get a random subset of T such that the sum of no reaches a particular value.
For example, let's say I want total no = 7, given:
T(0,a,a,4);
T(1,B,B,4);
T(2,v,v,1);
T(3,d,d,2);
T(4,d,d,3);
The output of the query would be either
T(0,a,a,4);
T(4,d,d,3);
OR
T(1,B,B,4);
T(2,v,v,1);
T(3,d,d,2);
etc.
Is this possible? I can't work out the logic. The best I could think of was retrieving rows one at a time while keeping a running total of no, but that's inefficient.
(NOTE: Going over the target total is acceptable, but falling short is not.)
This will get you pretty close.
create table t ( name varchar(1), num tinyint ) ;
insert into t (name, num) values ( 'a', 4 ), ('b', 2), ('c', 3),
('d',1), ('e', 4), ('f', 1),('g', 6);
Here's the query:
select * from
  (select name, num, @cum := @cum + num as cumulate
   from (select * from t order by rand()) as t3,
        (select @cum := 0) as t1
  ) as t2
where cumulate <= 7;
Here's a fiddle. I'm sure it can be optimized, but I was curious to see how to create it.
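As an aside, on MySQL 8+ the same running total can be computed with a window function instead of user variables (assignment inside expressions is deprecated there). A sketch against the same t table; changing the cutoff to cumulate - num < 7 also keeps the row that first crosses the target, matching the questioner's note that exceeding the total is acceptable:
SELECT name, num
FROM (
    SELECT name, num,
           SUM(num) OVER (ORDER BY rnd) AS cumulate
    FROM (SELECT name, num, RAND() AS rnd FROM t) AS shuffled
) AS t2
WHERE cumulate - num < 7; -- running total before this row is still below 7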
I have the following SQL (I have removed some of the select fields):
SELECT node_revisions.title AS 'Task',
node_revisions.body AS 'Description',
Date_format(field_due_date_value, '%e/%c/%Y') AS 'Due Date',
users.name AS 'User Name',
(SELECT GROUP_CONCAT(CONCAT(CHAR(10), CONCAT_WS(' - ', name,
FROM_UNIXTIME(`timestamp`, '%e/%c/%Y')), CHAR(10), `comment`))
FROM comments
WHERE comments.nid = content_type_task.nid) AS 'Comments'
FROM content_type_task
INNER JOIN users
ON content_type_task.field_assigned_to_uid = users.uid
INNER JOIN node_revisions
ON content_type_task.vid = node_revisions.vid
ORDER BY content_type_task.nid DESC
This pulls back all my tasks and all comments associated with each task. The problem I am having is that the comments field, built with GROUP_CONCAT, is being truncated, and I don't know why or how to overcome it. (It looks to be cut off at around 341 characters.)
GROUP_CONCAT() is, by default, limited to 1024 bytes.
To work around this limitation and allow up to 100 KB of data, add group_concat_max_len=102400 to my.cnf, or configure the running server with SET GLOBAL group_concat_max_len=102400.
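For example, for a one-off export you can raise the limit for the current connection only (a sketch):
SET SESSION group_concat_max_len = 102400;
SELECT @@SESSION.group_concat_max_len; -- verify the new limit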
As Andre mentioned, the result of GROUP_CONCAT() is limited to group_concat_max_len bytes.
However, when using GROUP_CONCAT with ORDER BY, the result is further truncated to one third of group_concat_max_len. This is why your original result was being truncated to 341 (= 1024/3) bytes. If you remove the ORDER BY clause, this should return up to the full 1024 bytes for Comments. For example:
CREATE TABLE MyTable
(
`Id` INTEGER,
`Type` VARCHAR(10),
`Data` TEXT
);
INSERT INTO MyTable VALUES
(0, 'Alpha', 'ABCDEF'),
(1, 'Alpha', 'GHIJKL'),
(2, 'Alpha', 'MNOPQR'),
(3, 'Alpha', 'STUVWX'),
(4, 'Alpha', 'YZ'),
(5, 'Numeric', '12345'),
(6, 'Numeric', '67890');
SET SESSION group_concat_max_len = 26;
-- Returns 26 bytes of data
SELECT Type, GROUP_CONCAT(Data SEPARATOR '') AS AllData_Unordered
FROM MyTable
GROUP BY Type;
-- Returns 26/3 = 8 bytes of data
SELECT Type, GROUP_CONCAT(Data SEPARATOR '') AS AllData_Ordered
FROM MyTable
GROUP BY Type
ORDER BY Id;
DROP TABLE MyTable;
These queries will return:
Type AllData_Unordered
Alpha ABCDEFGHIJKLMNOPQRSTUVWXYZ
Numeric 1234567890
Type AllData_Ordered
Alpha ABCDEFGH
Numeric 12345678
I have not found this interaction between GROUP_CONCAT() and ORDER BY mentioned in the MySQL Manual, but it affects at least MySQL Server 5.1.
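If the ORDER BY has to stay, a possible workaround (a sketch extrapolating from the one-third behaviour shown above, not from anything documented) is to budget three times the bytes you actually need:
SET SESSION group_concat_max_len = 1024 * 3; -- leaves roughly 1024 usable bytes under ORDER BY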
I need some help with a query.
I want to select some values from a table, but the values depend on the value of another cell. After I select them, I need to sort them.
Can I use ELECT column FROM table WHERE one=two ORDER BY ...?
Thanks,
Sebastian
Yes you can, as long as you spell SELECT correctly.
Here is an example you can copy and paste into your MySQL Query Browser to see a query of this type in action:
CREATE TABLE table1 (
id INT NOT NULL,
name1 VARCHAR(100) NOT NULL,
name2 VARCHAR(100) NOT NULL,
sortorder INT NOT NULL
);
INSERT INTO table1 (id, name1, name2, sortorder) VALUES
(1, 'Foo', 'Foo', 4),
(2, 'Boo', 'Unknown', 2),
(3, 'Bar', 'Bar', 3),
(4, 'Baz', 'Baz', 1);
SELECT id
FROM table1
WHERE name1 = name2
ORDER BY sortorder;
Result:
4
3
1
Maybe some working examples will help:
This returns over 8100 records from one of my databases:
SELECT * FROM fax_logs WHERE fee = service_charge
This returns over 2700 records from my data:
SELECT * FROM fax_logs WHERE fee = service_charge + 5
This returns over 6900 records:
SELECT * FROM fax_logs WHERE fee = service_charge + copies
I might have misunderstood your question, but I think you are trying to compare the values of the first and second columns. In MySQL, you can refer to columns by position rather than by name, but only inside the ORDER BY clause:
SELECT * FROM table ORDER BY 1 (order by the first column). You cannot use a column index in the WHERE clause.
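To illustrate with the table1 from the answer above (a sketch):
SELECT id, name1 FROM table1 ORDER BY 2; -- sorts by the second selected column, name1
SELECT id, name1 FROM table1 WHERE 1 = 2; -- compares the literals 1 and 2, so it always returns no rows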