+---------------------+-----------+-----------+-----------+-----------+
| dt | val1 | val2 | val3 | total |
+---------------------+-----------+-----------+-----------+-----------+
| 2020-07-02 12:00:17 | 123317117 | 109962378 | 105746677 | 339026172 |
| 2020-07-02 12:01:18 | 123317269 | 109962533 | 105746841 | 339026643 |
| 2020-07-02 12:02:19 | 123317422 | 109962688 | 105747005 | 339027115 |
| 2020-07-02 12:03:20 | 123317574 | 109962843 | 105747169 | 339027586 |
| 2020-07-02 12:04:21 | 123317726 | 109962999 | 105747331 | 339028056 |
| 2020-07-02 12:05:22 | 123317877 | 109963153 | 105747492 | 339028522 |
| 2020-07-02 12:06:23 | 123318030 | 109963308 | 105747656 | 339028994 |
| 2020-07-02 12:07:23 | 123318182 | 109963464 | 105747820 | 339029466 |
| 2020-07-02 12:08:24 | 123318335 | 109963619 | 105747987 | 339029941 |
| 2020-07-02 12:09:25 | 123318487 | 109963774 | 105748153 | 339030414 |
| 2020-07-02 12:10:26 | 123318640 | 109963929 | 105748318 | 339030887 |
| 2020-07-02 12:11:26 | 123318792 | 109964085 | 105748482 | 339031359 |
| 2020-07-02 12:12:27 | 123318944 | 109964240 | 105748646 | 339031830 |
| 2020-07-02 12:13:28 | 123319096 | 109964395 | 105748808 | 339032299 |
| 2020-07-02 12:14:29 | 123319248 | 109964550 | 105748971 | 339032769 |
| 2020-07-02 12:15:30 | 123319400 | 109964705 | 105749134 | 339033239 |
| 2020-07-02 12:16:30 | 123319552 | 109964860 | 105749300 | 339033712 |
| 2020-07-02 12:17:31 | 123319704 | 109965015 | 105749466 | 339034185 |
| 2020-07-02 12:18:32 | 123319857 | 109965170 | 105749631 | 339034658 |
| 2020-07-02 12:19:33 | 123320009 | 109965325 | 105749795 | 339035129 |
| 2020-07-02 12:20:34 | 123320153 | 109965473 | 105749952 | 339035578 |
| 2020-07-02 12:21:34 | 123320305 | 109965627 | 105750114 | 339036046 |
| 2020-07-02 12:22:35 | 123320457 | 109965782 | 105750276 | 339036515 |
| 2020-07-02 12:23:36 | 123320609 | 109965937 | 105750438 | 339036984 |
| 2020-07-02 12:24:37 | 123320761 | 109966092 | 105750602 | 339037455 |
| 2020-07-02 12:25:38 | 123320913 | 109966246 | 105750768 | 339037927 |
| 2020-07-02 12:26:39 | 123321065 | 109966401 | 105750934 | 339038400 |
| 2020-07-02 12:27:39 | 123321218 | 109966556 | 105751098 | 339038872 |
| 2020-07-02 12:28:40 | 123321370 | 109966711 | 105751263 | 339039344 |
| 2020-07-02 12:29:41 | 123321522 | 109966867 | 105751426 | 339039815 |
| 2020-07-02 12:30:42 | 123321674 | 109967022 | 105751588 | 339040284 |
| 2020-07-02 12:31:42 | 123321827 | 109967176 | 105751751 | 339040754 |
| 2020-07-02 12:32:43 | 123321979 | 109967331 | 105751915 | 339041225 |
| 2020-07-02 12:33:44 | 123322130 | 109967487 | 105752079 | 339041696 |
| 2020-07-02 12:34:45 | 123322283 | 109967642 | 105752245 | 339042170 |
| 2020-07-02 12:35:45 | 123322435 | 109967797 | 105752411 | 339042643 |
| 2020-07-02 12:36:46 | 123322587 | 109967952 | 105752576 | 339043115 |
| 2020-07-02 12:37:47 | 123322739 | 109968108 | 105752741 | 339043588 |
| 2020-07-02 12:38:48 | 123322891 | 109968263 | 105752905 | 339044059 |
| 2020-07-02 12:39:49 | 123323043 | 109968419 | 105753067 | 339044529 |
| 2020-07-02 12:40:49 | 123323196 | 109968574 | 105753231 | 339045001 |
| 2020-07-02 12:41:50 | 123323348 | 109968729 | 105753395 | 339045472 |
| 2020-07-02 12:42:51 | 123323501 | 109968884 | 105753561 | 339045946 |
| 2020-07-02 12:43:52 | 123323653 | 109969040 | 105753727 | 339046420 |
| 2020-07-02 12:44:53 | 123323805 | 109969195 | 105753892 | 339046892 |
| 2020-07-02 12:45:53 | 123323957 | 109969350 | 105754056 | 339047363 |
| 2020-07-02 12:46:54 | 123324109 | 109969505 | 105754220 | 339047834 |
| 2020-07-02 12:47:55 | 123324261 | 109969660 | 105754381 | 339048302 |
| 2020-07-02 12:48:56 | 123324413 | 109969815 | 105754544 | 339048772 |
| 2020-07-02 12:49:56 | 123324565 | 109969970 | 105754708 | 339049243 |
| 2020-07-02 12:50:57 | 123324717 | 109970126 | 105754872 | 339049715 |
| 2020-07-02 12:51:58 | 123324869 | 109970281 | 105755038 | 339050188 |
| 2020-07-02 12:52:59 | 123325022 | 109970437 | 105755205 | 339050664 |
| 2020-07-02 12:54:00 | 123325174 | 109970592 | 105755370 | 339051136 |
| 2020-07-02 12:55:00 | 123325327 | 109970749 | 105755536 | 339051612 |
| 2020-07-02 12:56:01 | 123325478 | 109970904 | 105755700 | 339052082 |
| 2020-07-02 12:57:02 | 123325630 | 109971060 | 105755863 | 339052553 |
| 2020-07-02 12:58:03 | 123325783 | 109971216 | 105756027 | 339053026 |
| 2020-07-02 12:59:04 | 123325935 | 109971372 | 105756191 | 339053498 |
| 2020-07-02 13:00:04 | 123326087 | 109971527 | 105756357 | 339053971 |
+---------------------+-----------+-----------+-----------+-----------+
Hello,
I have the table above; rows are added to MySQL roughly every minute, though I might have to add them faster later.
I would like to know whether it is possible to create, and how to go about creating:
A table that takes each value of the total column, subtracts the previous value from it (2nd − 1st, 3rd − 2nd, and so on), and divides the result by 10.
+---------------------+-----------+---------------------+-----------+---------+
| date1 | t1 | date2 | t2 | diff |
+---------------------+-----------+---------------------+-----------+---------+
| 2020-07-02 12:01:18 | 339026643 | 2020-07-02 12:00:17 | 339026172 | 47.1000 |
+---------------------+-----------+---------------------+-----------+---------+
A table that takes the first entry of each hour and subtracts the first entry of the previous hour from it (basically 01:00 − 00:00, 02:00 − 01:00 ... 24:00 − 23:00), divided by 10, restarting from 00:00 when the day changes.
+---------------------+-----------+---------------------+-----------+-----------+
| date1 | t1 | date2 | t2 | diff |
+---------------------+-----------+---------------------+-----------+-----------+
| 2020-07-02 13:00:04 | 339053971 | 2020-07-02 12:00:17 | 339026172 | 2779.9000 |
+---------------------+-----------+---------------------+-----------+-----------+
It would be great if both tables grew as the main table adds data.
Thank you!
I found a way to do it, and also managed to create a view for it.
I got help from this post:
Here is the code I used:
CREATE VIEW total_diff AS
SELECT
    t1.dt AS t1_dt,
    t1.total AS t1_total,
    t2.dt AS t2_dt,
    t2.total AS t2_total,
    -- divide by 10 per the original requirement
    (t1.total - COALESCE(t2.total, t1.total)) / 10 AS diff
FROM `values` t1          -- backticks: VALUES is a reserved word in recent MySQL
LEFT JOIN `values` t2
    ON t1.id = t2.id + 1  -- assumes ids are consecutive with no gaps
ORDER BY
    t1.dt;
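On MySQL 8.0+, the same idea can be sketched without relying on consecutive id values by using the LAG() window function; this also covers the second request (hourly differences). Table and column names are assumed to match the view above, and the view names are made up for illustration:

```sql
-- Per-row difference using LAG(), so gaps in id don't matter (MySQL 8.0+).
CREATE VIEW total_diff_lag AS
SELECT
    dt                                           AS t1_dt,
    total                                        AS t1_total,
    LAG(dt)    OVER (ORDER BY dt)                AS t2_dt,
    LAG(total) OVER (ORDER BY dt)                AS t2_total,
    (total - LAG(total) OVER (ORDER BY dt)) / 10 AS diff
FROM `values`;

-- Hourly difference: keep the first row of each hour, then diff consecutive hours.
CREATE VIEW hourly_diff AS
WITH firsts AS (
    SELECT dt, total,
           ROW_NUMBER() OVER (PARTITION BY DATE(dt), HOUR(dt) ORDER BY dt) AS rn
    FROM `values`
)
SELECT
    dt                                           AS t1_dt,
    total                                        AS t1_total,
    LAG(dt)    OVER (ORDER BY dt)                AS t2_dt,
    LAG(total) OVER (ORDER BY dt)                AS t2_total,
    (total - LAG(total) OVER (ORDER BY dt)) / 10 AS diff
FROM firsts
WHERE rn = 1;
```

Because these are views, both result sets grow automatically as rows are inserted into the base table.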
Thank you for the advice on views! Nifty trick.
After i found out how they work, i made a bunch of them, for different situations.
+---------+----------------+--------+
| aid | fn | col_no |
+---------+----------------+--------+
| 2011768 | ABDUL | 5 |
| 2011499 | ABDULLA | 4 |
| 2011198 | ADNAN | 3 |
| 2011590 | AKSHAYA PRAISY | 2 |
| 2011749 | AMIR | 1 |
| 2011213 | AMOGHA | 5 |
| 2011027 | ANU | 4 |
| 2011046 | ANUDEV D | 3 |
| 2011435 | B S SAHANA | 2 |
| 2011112 | BENAKA | 1 |
+---------+----------------+--------+
How do I sort col_no as 1 2 3 4 5, then repeat 1 2 3 4 5 again?
I need output like this:
+---------+----------------+--------+
| aid | fn | col_no |
+---------+----------------+--------+
| 2011749 | AMIR | 1 |
| 2011590 | AKSHAYA PRAISY | 2 |
| 2011198 | ADNAN | 3 |
| 2011499 | ABDULLA | 4 |
| 2011768 | ABDUL | 5 |
| 2011112 | BENAKA | 1 |
| 2011435 | B S SAHANA | 2 |
| 2011046 | ANUDEV D | 3 |
| 2011027 | ANU | 4 |
| 2011213 | AMOGHA | 5 |
+---------+----------------+--------+
You can sort with row_number(), partitioned by col_no:
select t.*
from t
order by row_number() over (partition by col_no order by fn),
col_no;
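To see why the ordering works, it can help to expose the computed row number as a column (a sketch against the same table, here named t; the alias rnd is made up):

```sql
-- rnd numbers the members of each col_no group alphabetically by fn.
-- Sorting by rnd first emits one full "round" of col_no 1..5 at a time.
SELECT t.*,
       ROW_NUMBER() OVER (PARTITION BY col_no ORDER BY fn) AS rnd
FROM t
ORDER BY rnd, col_no;
```

Every row with rnd = 1 is the first member of its col_no group, so the output cycles 1 2 3 4 5, then moves on to the rnd = 2 members, and so on.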
I'm importing data from a csv file into a table that looks like the following:
+------------+------------------------+----------------+
| date_1 | tournament_1 | misc_1 |
+------------+------------------------+----------------+
| 01/01/2020 | ATP-ACAPULCO | random_data_1 |
| 01/01/2020 | ATP-ACAPULCO | random_data_2 |
| 02/01/2020 | ATP-ACAPULCO | random_data_3 |
| 01/01/2020 | CALGARY-CHALLENGER-MEN | random_data_4 |
| 02/01/2020 | CALGARY-CHALLENGER-MEN | random_data_5 |
| 02/01/2020 | CALGARY-CHALLENGER-MEN | random_data_6 |
| 03/01/2020 | CALGARY-CHALLENGER-MEN | random_data_7 |
| 03/01/2020 | CALGARY-CHALLENGER-MEN | random_data_8 |
| 01/01/2021 | ATP-ACAPULCO | random_data_9 |
| 01/01/2021 | ATP-ACAPULCO | random_data_10 |
| 02/01/2021 | ATP-ACAPULCO | random_data_11 |
| 02/01/2021 | CALGARY-CHALLENGER-MEN | random_data_12 |
| 03/01/2021 | CALGARY-CHALLENGER-MEN | random_data_13 |
+------------+------------------------+----------------+
I need to be able to link this table to another linked table which has a list of tournaments from another system:
+------+----------------------+
| id_2 | tournament_2 |
+------+----------------------+
| 123 | Mexico-Acapulco-2020 |
| 456 | Canada-Calgary-2020 |
| 789 | Mexico-Acapulco-2021 |
| 1011 | Canada-Calgary-2021 |
+------+----------------------+
As the second table is linked then my plan was to build a third 'cross-reference' table:
+------+----------------------+------------------------+
| id_2 | tournament_2 | tournament_1 |
+------+----------------------+------------------------+
| 123 | Mexico-Acapulco-2020 | ATP-ACAPULCO |
| 456 | Canada-Calgary-2020 | CALGARY-CHALLENGER-MEN |
| 789 | Mexico-Acapulco-2021 | ATP-ACAPULCO |
| 1011 | Canada-Calgary-2021 | CALGARY-CHALLENGER-MEN |
+------+----------------------+------------------------+
However, I need unique pairs of tournaments, and I don't have them: the same tournament_1 name repeats across years.
To get around this I could build a primary key for the first table as follows:
+------------+------------------------+----------------+-----------------------------+
| date_1 | tournament_1 | misc_1 | tournament+year_1 |
+------------+------------------------+----------------+-----------------------------+
| 01/01/2020 | ATP-ACAPULCO | random_data_1 | ATP-ACAPULCO-2020 |
| 01/01/2020 | ATP-ACAPULCO | random_data_2 | ATP-ACAPULCO-2020 |
| 02/01/2020 | ATP-ACAPULCO | random_data_3 | ATP-ACAPULCO-2020 |
| 01/01/2020 | CALGARY-CHALLENGER-MEN | random_data_4 | CALGARY-CHALLENGER-MEN-2020 |
| 02/01/2020 | CALGARY-CHALLENGER-MEN | random_data_5 | CALGARY-CHALLENGER-MEN-2020 |
| 02/01/2020 | CALGARY-CHALLENGER-MEN | random_data_6 | CALGARY-CHALLENGER-MEN-2020 |
| 03/01/2020 | CALGARY-CHALLENGER-MEN | random_data_7 | CALGARY-CHALLENGER-MEN-2020 |
| 03/01/2020 | CALGARY-CHALLENGER-MEN | random_data_8 | CALGARY-CHALLENGER-MEN-2020 |
| 01/01/2021 | ATP-ACAPULCO | random_data_9 | ATP-ACAPULCO-2021 |
| 01/01/2021 | ATP-ACAPULCO | random_data_10 | ATP-ACAPULCO-2021 |
| 02/01/2021 | ATP-ACAPULCO | random_data_11 | ATP-ACAPULCO-2021 |
| 02/01/2021 | CALGARY-CHALLENGER-MEN | random_data_12 | CALGARY-CHALLENGER-MEN-2021 |
| 03/01/2021 | CALGARY-CHALLENGER-MEN | random_data_13 | CALGARY-CHALLENGER-MEN-2021 |
+------------+------------------------+----------------+-----------------------------+
I could then create the one to one pairing as follows:
+------+----------------------+-----------------------------+
| id_2 | tournament_2 | tournament_1 + year_1 |
+------+----------------------+-----------------------------+
| 123 | Mexico-Acapulco-2020 | ATP-ACAPULCO-2020 |
| 456 | Canada-Calgary-2020 | CALGARY-CHALLENGER-MEN-2020 |
| 789 | Mexico-Acapulco-2021 | ATP-ACAPULCO-2021 |
| 1011 | Canada-Calgary-2021 | CALGARY-CHALLENGER-MEN-2021 |
+------+----------------------+-----------------------------+
However, I can't create the key via a calculated field because I can't link on one, so I'd have to fall back on some VBA to create it on import.
Am I missing a simpler or more elegant solution here?
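If the backend is MySQL (the VBA mention suggests Access linked tables, where this may not apply), a stored generated column is computed on write, so unlike an ad-hoc calculated field it can be indexed and joined on. A sketch, assuming the first table is named results and date_1 is stored as a DD/MM/YYYY string:

```sql
-- Stored generated column: persisted on insert/update, indexable, joinable.
ALTER TABLE results
    ADD COLUMN tournament_year VARCHAR(100)
        AS (CONCAT(tournament_1, '-',
                   YEAR(STR_TO_DATE(date_1, '%d/%m/%Y')))) STORED,
    ADD INDEX idx_tournament_year (tournament_year);
```

The cross-reference table can then hold a foreign key to tournament_year with no import-time VBA involved.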
I would like to transpose the rows to columns in sql.
My Table looks like this:
+-------+--------+--------------+---------+--------------+---------+----------------+---------+---------+---------+---------+---------+
| ID | Desk | Reason1 | Amount1 | Reason2 | Amount2 | Reason3 | Amount3 | Reason4 | Amount4 | Reason5 | Amount5 |
+-------+--------+--------------+---------+--------------+---------+----------------+---------+---------+---------+---------+---------+
| 34850 | Desk1 | nktp | 2 | sectors | 1 | auc | 1 | thr | -13 | other | -3 |
| 34851 | Desk2 | TOC Reb | 5 | SG & HK ETF | 5 | | 0 | | 0 | | 0 |
| 34853 | Desk3 | China | -5 | HK | 0 | CNH | 0 | HK2 | 35 | | 0 |
| 34854 | Desk4 | ETFs | 2 | KSTA Opening | 6 | KSTA Rebalance | 14 | | 0 | | 0 |
| 34855 | Desk5 | BTC | 5 | | 0 | | 0 | | 0 | | 0 |
| 34856 | Desk6 | Sales | 10 | Delta | 5 | | 0 | | 0 | | 0 |
| 34857 | Desk7 | ES | 1 | HSI | 0 | | 0 | | 0 | | 0 |
| 34858 | Desk8 | OTC | 10 | SPREADS | 10 | | 0 | | 0 | | 0 |
| 34859 | Desk9 | MES/ZTW | 10 | O/N Spreads | -20 | | 0 | | 0 | | 0 |
| 34860 | Desk10 | CBBC TENCENT | 4 | CBBC HSI | 1 | | 0 | | 0 | | 0 |
+-------+--------+--------------+---------+--------------+---------+----------------+---------+---------+---------+---------+---------+
How do I transpose the table in SQL where the reasons are the rows and the desk are columns?
Output wanted:
+----------------+---------+--------+-------------+--------+-------+--------+
| | Desk1 | Amount | Desk2 | Amount | Desk3 | Amount |
+----------------+---------+--------+-------------+--------+-------+--------+
| Reason1 | nktp | 2 | TOC Reb | 5 | China | -5 |
| Reason2 | sectors | 1 | SG & HK ETF | 5 | HK | 0 |
| Reason3 | auc | 1 | | | CNH | 0 |
| Reason4 | thr | -13 | | | HK2 | 35 |
| Reason5 | other | -3 | | | | |
| General_Remark | | | | | | |
+----------------+---------+--------+-------------+--------+-------+--------+
A normalized design might look something like this:
reasons
+-----------+---------+----------------+--------+
| reason_id | desk_id | reason | amount |
+-----------+---------+----------------+--------+
| 1 | 34850 | nktp | 2 |
| 2 | 34851 | TOC Reb | 5 |
| 3 | 34853 | China | -5 |
| 4 | 34854 | ETFs | 2 |
| 5 | 34855 | BTC | 5 |
| 6 | 34856 | Sales | 10 |
| 7 | 34857 | ES | 1 |
| 8 | 34858 | OTC | 10 |
| 9 | 34859 | MES/ZTW | 10 |
| 10 | 34860 | CBBC TENCENT | 4 |
| 11 | 34850 | sectors | 1 |
| 12 | 34851 | SG & HK ETF | 5 |
| 13 | 34853 | HK | 0 |
| 14 | 34854 | KSTA Opening | 6 |
| 15 | 34856 | Delta | 5 |
| 16 | 34857 | HSI | 0 |
| 17 | 34858 | SPREADS | 10 |
| 18 | 34859 | O/N Spreads | -20 |
| 19 | 34860 | CBBC HSI | 1 |
| 20 | 34850 | auc | 1 |
| 21 | 34853 | CNH | 0 |
| 22 | 34854 | KSTA Rebalance | 14 |
| 23 | 34850 | thr | -13 |
| 24 | 34853 | HK2 | 35 |
| 25 | 34850 | other | -3 |
+-----------+---------+----------------+--------+
desks
+---------+------------+
| desk_id | Desk_name |
+---------+------------+
| 34850 | Desk1 |
| 34851 | Desk2 |
| 34853 | Desk3 |
| 34854 | Desk4 |
| 34855 | Desk5 |
| 34856 | Desk6 |
| 34857 | Desk7 |
| 34858 | Desk8 |
| 34859 | Desk9 |
| 34860 | Desk10 |
+---------+------------+
If it were me, I'd start from here.
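Getting from the wide table into the reasons table is an unpivot; a sketch, assuming the wide table is named wide and empty reasons should be skipped (one UNION ALL branch per Reason/Amount pair):

```sql
-- Unpivot the five Reason/Amount column pairs into one row each.
INSERT INTO reasons (desk_id, reason, amount)
SELECT ID, Reason1, Amount1 FROM wide WHERE Reason1 <> ''
UNION ALL
SELECT ID, Reason2, Amount2 FROM wide WHERE Reason2 <> ''
UNION ALL
SELECT ID, Reason3, Amount3 FROM wide WHERE Reason3 <> ''
UNION ALL
SELECT ID, Reason4, Amount4 FROM wide WHERE Reason4 <> ''
UNION ALL
SELECT ID, Reason5, Amount5 FROM wide WHERE Reason5 <> '';
```

From the normalized shape, the desired "reasons as rows, desks as columns" report becomes an ordinary pivot rather than a fight against the storage layout.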
I have a query_table and want to join it with match_table on the nearest matching string. If it were the other way around, LIKE would work, but I have no idea how to do this.
query_table
+----+------------------+
| id | string |
+----+------------------+
| 1 | fcc9e8796feb |
| 2 | fcdbd7ebcf89 |
| 3 | fccc87896feb |
| 4 | fcc7c7896fef |
| 5 | fcced777aaaf |
+----+------------------+
match_table
+----+-----------+
| id | match_code|
+----+-----------+
| 1 | fcff |
| 2 | fcccc |
| 3 | fccc8 |
| 4 | fccc9 |
| 5 | fccdb |
| 6 | fccdc |
| 7 | fccd8 |
| 8 | fcce |
| 9 | fcced |
| 10 | fccee |
| 11 | fcce6 |
| 12 | fcc7b |
| 13 | fcc7c |
| 14 | fcc8e |
| 15 | fcc87 |
| 16 | fcc88 |
| 17 | fcc9e |
| 18 | fcdbb |
| 19 | fcdbc |
| 20 | fcdbd |
+----+-----------+
I expect
result
+----+------------------+----+----------------+
| id | string | id | match_code |
+----+------------------+----+----------------
| 1 | fcc9e8796feb | 17 | fcc9e |
| 2 | fcdbd7ebcf89 | 20 | fcdbd |
| 3 | fccc87896feb | 3 | fccc8 |
| 4 | fcc7c7896fef | 13 | fcc7c |
| 5 | fcced777aaaf | 9 | fcced |
+----+------------------+----+----------------+
How can I get, for each grp_id, two rows:
the nearest row before and the nearest row after a given unix timestamp?
+---------+--------------------+------------+-------+
| id | grp_id | utimes | value |
+---------+--------------------+------------+-------+
| 4156187 | 5282 | 1455663600 | 15897 |
| 4159888 | 5282 | 1455630000 | 26998 |*
| 4156190 | 5282 | 1455676200 | 28497 |
| 4156186 | 5282 | 1455661800 | 14097 |
| 4156183 | 5282 | 1455652800 | 5097 |
| 4156184 | 5282 | 1455656400 | 8697 |
| 4156185 | 5282 | 1455660000 | 12297 |
| 4156182 | 5282 | 1455651000 | 3297 |*
| 4163311 | 7216 | 1455693000 | 45297 |
| 4163275 | 7203 | 1455681600 | 33897 |
| 4163309 | 7214 | 1455697800 | 50097 |
| 4163308 | 7214 | 1455696000 | 48297 |
| 4163307 | 7214 | 1455694200 | 46497 |
| 4163306 | 7214 | 1455692400 | 44697 |
| 4163305 | 7214 | 1455690600 | 42897 |
| 4163304 | 7214 | 1455688800 | 41097 |
| 4151121 | 4356 | 1455703200 | 55497 |
| 4163271 | 7205 | 1455685500 | 37797 |
| 4163272 | 7205 | 1455687000 | 39297 |
| 4163269 | 7205 | 1455684900 | 37197 |
| 4163273 | 7205 | 1455687300 | 39597 |
| 4163264 | 7206 | 1455674400 | 26697 |
| 4163270 | 7205 | 1455685200 | 37497 |
+---------+--------------------+------------+-------+
Example:
unix-timestamp : 1455647703
+---------+--------------------+------------+-------+
| id | grp_id | utimes | value |
+---------+--------------------+------------+-------+
| 4159888 | 5282 | 1455630000 | 26998 |
| 4156190 | 5282 | 1455651000 | 28497 |
| 4159889 | XYZ | 1455630000 | 26998 |
| 4156191 | XYZ | 1455651000 | 28497 |
| 4159883 | ABC | 1455630000 | 26998 |
| 4156195 | ABC | 1455651000 | 28497 |
+---------+--------------------+------------+-------+
Thank you!
You can do it with a LEFT JOIN operation. The ON clause contains the 'business logic':
SELECT t1.*
FROM mytable AS t1
LEFT JOIN mytable AS t2
ON t1.grp_id = t2.grp_id
AND
((t1.utimes < 1455647703 AND t2.utimes < 1455647703 AND t2.utimes > t1.utimes)
OR
(t1.utimes > 1455647703 AND t2.utimes > 1455647703 AND t2.utimes < t1.utimes))
WHERE t1.grp_id = 5282 AND t2.id IS NULL
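The t2.id IS NULL test makes this an anti-join: a t1 row survives only if no other row in the same group lies between it and the timestamp on the same side. On MySQL 8.0+ the same result can be sketched with window functions, for all groups at once (table name mytable as above):

```sql
-- Split each group into a "before" and an "after" side of the timestamp,
-- then keep the row closest to the timestamp on each side.
WITH ranked AS (
    SELECT id, grp_id, utimes, value,
           ROW_NUMBER() OVER (
               PARTITION BY grp_id, (utimes < 1455647703)
               ORDER BY ABS(utimes - 1455647703)
           ) AS rn
    FROM mytable
)
SELECT id, grp_id, utimes, value
FROM ranked
WHERE rn = 1
ORDER BY grp_id, utimes;
```

Each (grp_id, side) partition contributes its nearest row, so every group yields at most two rows: one before and one after.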