date1 tran_1 date2 tran_2 date3 tran_3 ..... date80 tran_80
may01 24 jun02 32 aug18 56 ..... sep10 44
Sep01 24 Nov08 32 Jan18 56 ..... Jun18 44
Now the output should be as below. How do I write a dynamic query for this? I have written a procedure that takes parameters, but for the 80 variables above I would have to call it about 40 times. Please help.
date tran type
may01 24 tran_1
Sep01 24 tran_1
jun02 32 tran_2
Nov08 32 tran_2
aug18 56 tran_3
Jan18 56 tran_3
........................
........................
sep10 44 tran_80
Jun18 44 tran_80
One method is to just use union all:
select date, tran_1 as tran, 'tran_1' as type from t union all
select date, tran_2 as tran, 'tran_2' as type from t union all
select date, tran_3 as tran, 'tran_3' as type from t union all
. . .
My recommendation would be to generate the code in a spreadsheet. Just generate the numbers 1 to 80 and use spreadsheet functions. Alternatively, you could generate dynamic SQL, if you don't want to type all the column names in.
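For the dynamic-SQL route, here is a minimal sketch. It assumes MySQL, a source table actually named t, a single date column, and transaction columns named tran_1 through tran_80 (the same assumptions as the query above); group_concat_max_len usually needs to be raised so the generated statement is not truncated:
-- Sketch: build the UNION ALL statement from information_schema and run it.
-- Assumptions: MySQL, table `t`, columns `date` and tran_1 .. tran_80.
SET SESSION group_concat_max_len = 1000000;  -- the generated SQL is long

SELECT GROUP_CONCAT(
         CONCAT('SELECT date, ', column_name, ' AS tran, ''',
                column_name, ''' AS type FROM t')
         SEPARATOR ' UNION ALL ')
INTO @sql
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND table_name = 't'
  AND column_name LIKE 'tran\_%';

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;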
If performance is an issue and you have lots and lots of data, there are other methods. However, this type of query is often run only once, and a more efficient query is more difficult to construct.
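One example of such a method (not spelled out in the answer, just a common alternative) is to read the table once and cross join it to a derived list of type names, picking the matching column with CASE. A sketch for the first three columns, under the same t and date assumptions as above:
-- Sketch: single-scan unpivot via CROSS JOIN + CASE (extend the pattern to tran_80).
SELECT t.date,
       CASE x.type
         WHEN 'tran_1' THEN t.tran_1
         WHEN 'tran_2' THEN t.tran_2
         WHEN 'tran_3' THEN t.tran_3
       END AS tran,
       x.type
FROM t
CROSS JOIN (SELECT 'tran_1' AS type UNION ALL
            SELECT 'tran_2' UNION ALL
            SELECT 'tran_3') x;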
In the past, due to a lack of knowledge, I entered my data month-wise into separate month-named tables. I can now filter them easily, but my database has 6 monthly tables (Oct - Feb) with the same columns. Can I put all the data into one table? Doing it manually in a file is a little difficult for me because of the id column,
so please suggest an easy way to do it.
For example, the October 2018 table is this:
id user_name date nota veet tree location
1 milon 10/10/2018 43 12 111 bandel
2 kadir 11/10/2018 12 34 76 katwa
3 javed 22/10/2018 33 56 92 sirampur
4 milon 29/10/2018 55 21 78 salar
The November 2018 table is:
id user_name date nota veet tree location
1 milon 10/11/2018 13 12 71 Rampurhat
2 kadir 11/11/2018 12 24 76 katwa
3 javed 12/11/2018 53 30 62 kandi
4 milon 24/11/2018 55 27 58 salar
Now I want a single SQL table like this:
id user_name date nota veet tree location
1 milon 10/10/2018 43 12 111 bandel
2 kadir 11/10/2018 12 34 76 katwa
3 javed 22/10/2018 33 56 92 sirampur
4 milon 29/10/2018 55 21 78 salar
5 milon 10/11/2018 13 12 71 Rampurhat
6 kadir 11/11/2018 12 24 76 katwa
7 javed 12/11/2018 53 30 62 kandi
8 milon 24/11/2018 55 27 58 salar
You can create a new table (for example user_locations). Then you can copy the data from the first table, and then copy the data from the second table. If the ID column is auto-incremented and you do not list it in the SELECT, the IDs will be assigned automatically and there will be no collisions. Example SQL for selecting from one table and inserting into another:
INSERT INTO user_locations (user_name, date, nota, veet, tree, location)
SELECT user_name, date, nota, veet, tree, location FROM user_locations_november_2018
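For completeness, a sketch of the full move, reusing the user_locations_november_2018 name from the statement above and guessing user_locations_october_2018 for the other month; the column types are also guesses, so adjust both to your actual schema:
-- Sketch: target table with an auto-increment id, then one INSERT ... SELECT per monthly table.
CREATE TABLE user_locations (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_name VARCHAR(50),
  date DATE,            -- type is an assumption; match your monthly tables
  nota INT,
  veet INT,
  tree INT,
  location VARCHAR(50)
);

INSERT INTO user_locations (user_name, date, nota, veet, tree, location)
SELECT user_name, date, nota, veet, tree, location FROM user_locations_october_2018;

INSERT INTO user_locations (user_name, date, nota, veet, tree, location)
SELECT user_name, date, nota, veet, tree, location FROM user_locations_november_2018;

-- Repeat the INSERT ... SELECT for each remaining monthly table.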
However, I am not sure that I understand you correctly. What exactly are you trying to do? Move data from all tables into one table (which is what I answered)? Just select data from all the tables (in which case the UNION from the other answer is correct)? Or select all the data and put it into a text file?
Use the SQL UNION operator:
SELECT * FROM table1 UNION SELECT * FROM table2
I have parsing queries based on the references below:
link1 - SET and Select Query combine Run in a Single MySql Query to pass result in pentaho
link2
The input is shown in Col1 below. In #input in the reference link above, I considered only one record and applied the parsing logic to each cell; the issue now is handling multiple rows (n rows) and combining the results with the parsing logic.
Col1
--------------
22:4,33:4
33:6,89:7,69:2,63:2
78:6
blank record
22:6,63:1
I want to create a single query for this, like the one I asked about in the reference link.
Expected Output
xyz count
------------
22 10
33 10
89 7
69 2
63 3
78 6
I tried the following approaches:
a WHERE condition passing values one by one: col1 in (my query)
MAX(col1)
group_concat
but I am not getting the expected output when trying to fit all of this into a single query.
I finally found a solution to my question: group_concat worked for this.
#input= (select group_concat(Col1) from (select Col1 from table limit 10)s);
group_concat merges all the rows of Col1 into one comma-separated string:
22:4,33:4,33:6,89:7,69:2,63:2,78:6,blank record,22:6,63:1
Now that we have a single string, we can apply the same logic as shown in link1.
The blank record can be stripped out with the REPLACE function and ignored.
Output after applying the logic from link1:
xyz count
------------
22 4
33 4
33 6
89 7
69 2
63 2
78 6
22 6
63 1
Then just use GROUP BY:
select xyz, sum(count) from (select link1 output) s group by xyz;
which gives the final output:
xyz count
------------
22 10
33 10
89 7
69 2
63 3
78 6
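For reference, the whole split-and-sum can also be done in one statement without the #input variable. This is a sketch assuming MySQL, a table named t with column Col1, and at most 10 pairs per row (the names and the limit are assumptions):
-- Sketch: split each 'xyz:count' pair with SUBSTRING_INDEX, then sum per xyz.
SELECT SUBSTRING_INDEX(pair, ':', 1)           AS xyz,
       SUM(SUBSTRING_INDEX(pair, ':', -1) + 0) AS count
FROM (
    SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(t.Col1, ',', n.n), ',', -1) AS pair
    FROM t
    JOIN (SELECT 1 AS n UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL
          SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL
          SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10) n
      ON n.n <= 1 + LENGTH(t.Col1) - LENGTH(REPLACE(t.Col1, ',', ''))
    WHERE t.Col1 LIKE '%:%'   -- skips the blank records
) pairs
GROUP BY xyz;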
I'm stuck with this issue at the moment and I'm not 100% sure how to deal with it.
I have a table where I'm aggregating data by week:
select week(create_date),count(*)
from user
where create_date > '2015-02-01'
and id_customer between 9 and 17
group by week(create_date);
The results I'm getting have missing weeks, as shown below:
5 334
6 376
7 394
8 405
9 504
10 569
11 709
12 679
13 802
14 936
15 1081
16 559
21 1
24 9
25 22
26 1
32 3
34 1
35 1
For example, between 16 and 21 there are obviously 4 weeks missing; I would like these weeks to be included with a count of 0. I want this because the weeks need to line up with other metrics, as we output them to an Excel file for internal analysis.
Any help would be greatly appreciated.
The problem is that an SQL query cannot really produce data that is not there at all.
You have 3 options:
If you have data for each week in your entire table for the period you are querying, then you can use a self join to get the missing weeks:
select week(t1.create_date), count(t2.id_customer)
from user t1
left join user t2 on t1.id_customer = t2.id_customer and t1.create_date = t2.create_date and t2.id_customer between 9 and 17
where t1.create_date > '2015-02-01'
group by week(t1.create_date)
If there are missing weeks in the user table as a whole, then create a helper table that contains the week numbers from 0 or 1 (depending on the MySQL configuration) to 53, and do a left join against this helper table.
Use a stored procedure that loops through the results of your original query, inserts the missing rows using a temporary table, and then returns the extended dataset as the result.
The problem is that there is no data matching your criteria for the missing weeks. A solution is to join from a table that has all the week numbers. For example, if you create a table weeknumbers with one field weeknumber containing all the numbers from 0 to 53, you can use something like this:
select weeknumber,count(user.create_date)
from weeknumbers left join user on (weeknumbers.weeknumber=week(user.create_date)
and user.create_date > '2015-02-01'
and user.id_customer between 9 and 17)
group by weeknumber;
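The query above assumes the weeknumbers helper table already exists. A sketch of creating and filling it with 0 through 53 (the fill trick shown is just one option):
-- Sketch: helper table holding every week number MySQL's WEEK() can return.
CREATE TABLE weeknumbers (
  weeknumber INT NOT NULL PRIMARY KEY
);

INSERT INTO weeknumbers (weeknumber)
SELECT t.n * 10 + u.n
FROM (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL
      SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5) t
CROSS JOIN (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL
            SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL
            SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) u
WHERE t.n * 10 + u.n <= 53;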
Additionally, you might want to filter out the week numbers you do not want to see.
The other way is to do it in the application.
I have these sample course_numbers
cmsc 11
cmsc 2
cmsc 56
cmsc 21
cmsc 128
I use this query
SELECT * FROM subject ORDER BY LENGTH(`course_number`)
to naturally sort the results, and it worked. But when I add these course_numbers:
it 1
it 256
it 20
the ordering gets messed up.
What query should I use to order them like this?
cmsc 2
cmsc 11
cmsc 21
cmsc 56
cmsc 128
it 1
it 11
it 20
it 100
it 256
I've searched and seen 'case' used in SELECT statements, but I do not know how to use it here.
You should consider splitting the course number into its two parts, since "cmsc"/"it" is one part (even with variable length) and the actual number is the other. If you store the number in a numeric column (int), you can order by it easily.
so it would be
SELECT concat(course_type, " ", course_subnumber) as course_number, ...
from subject
order by course_type, course_subnumber
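A sketch of restructuring an existing subject table that way, assuming course_number is always '<prefix> <number>' with a single space (the delimiter assumption is mine):
-- Sketch: add the two split columns and backfill them from course_number.
ALTER TABLE subject
  ADD COLUMN course_type VARCHAR(10),
  ADD COLUMN course_subnumber INT;

UPDATE subject
SET course_type      = SUBSTRING_INDEX(course_number, ' ', 1),
    course_subnumber = CAST(SUBSTRING_INDEX(course_number, ' ', -1) AS UNSIGNED);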
If you are a bit lucky, you could try the following with your current structure:
SELECT * from subject
order by left(course_number, 2), length(course_number), course_number
You only get into trouble if different course types start with the same two letters.
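If you'd rather avoid that limitation without changing the schema, here is a sketch that splits on the space at query time instead (same single-space assumption as above):
-- Sketch: natural sort without restructuring, splitting course_number per row.
SELECT *
FROM subject
ORDER BY SUBSTRING_INDEX(course_number, ' ', 1),                      -- text prefix
         CAST(SUBSTRING_INDEX(course_number, ' ', -1) AS UNSIGNED);   -- numeric part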
Let's say I want to store several datasets, e.g.
78 94 33 22 14 55 18 10 11
44 59 69 79 39 49 29 19 39
And later on I would like to be able to run queries that determine the frequency of a certain number. What would be the best way to do this? What table structure would make for a fast query?
Please be as specific as you can.
To get the counts, you can run a query such as:
SELECT value, COUNT(*) from table_of_values GROUP BY value
Placing an index on the single integer value column is pretty much all you can do to speed that up.
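A sketch of that layout, with the table and column names taken from the query above and the type being an assumption:
-- Sketch: one row per observed number, indexed so GROUP BY and lookups stay fast.
CREATE TABLE table_of_values (
  value INT NOT NULL,
  INDEX idx_value (value)
);

-- Frequency of one particular number:
SELECT COUNT(*) FROM table_of_values WHERE value = 39;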
You could of course also just keep a table with every two-digit value and a count. You will have to pre-fill the table with zero counts for every value.
Then increment the count instead of inserting:
UPDATE table_of_values SET count = count + 1 WHERE value = (whatever)
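A sketch of that counter-table variant; to keep it distinct from table_of_values above it uses a hypothetical name, value_counts, and it assumes two-digit data (10 to 99):
-- Sketch: pre-aggregated counts, one row per possible value.
CREATE TABLE value_counts (
  value INT NOT NULL PRIMARY KEY,
  count INT NOT NULL DEFAULT 0
);

-- Pre-fill a zero count for every possible value (generate 10..99 however you like).
INSERT INTO value_counts (value) VALUES (10), (11), (12) /* ... through (99) */;

-- Then increment instead of inserting each observation:
UPDATE value_counts SET count = count + 1 WHERE value = 78;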