I have an Oracle table where one column contains JSON data. I need to extract two elements from that data and display them as columns.
I am adding the JSON data as a code sample for testing:
create table TEST_TABLE (
  id number,
  importdata clob
);
insert into TEST_TABLE values (100,'{"ClassId":30074,"Attributes":[{"Name":"TYPE-SPEC","Value":"SJ;3;1"},{"Name":"HREF","Value":"-1"},{"Name":"HMETHOD","Value":"96"},{"Name":"GEO_METHOD","Value":"96"},{"Name":"HPRCSN","Value":2.7676}]}');
insert into TEST_TABLE values (101,'{"ClassId":30074,"Attributes":[{"Name":"TYPE-SPEC","Value":"SJ;3;1"},{"Name":"HREF","Value":"-1"},{"Name":"HMETHOD","Value":"96"},{"Name":"HPRCSN","Value":3.04}]}');
insert into TEST_TABLE values (102,'{"ClassId":30074,"Attributes":[{"Name":"TYPE-SPEC","Value":"SJ;3;1"},{"Name":"HREF","Value":"-1"},{"Name":"HMETHOD","Value":"96"},{"Name":"GEO_METHOD","Value":"96"},{"Name":"HPRCSN","Value":77.1814}]}');
insert into TEST_TABLE values (103,'{"ClassId":30074,"Attributes":[{"Name":"TYPE-SPEC","Value":"SJ;3;1"},{"Name":"HREF","Value":"-1"},{"Name":"HMETHOD","Value":"96"},{"Name":"GEO_METHOD","Value":"-1"},{"Name":"HPRCSN","Value":3.1121}]}');
insert into TEST_TABLE values (105,'{"ClassId":32000,"Attributes":[{"Name":"ID","Value":"69804"},{"Name":"HREF","Value":"-1"},{"Name":"HPRCSN","Value":"5"}]},{"Name":"HMETHOD","Value":"96"} ');
insert into TEST_TABLE values (106,'{"ClassId":32000,"Attributes":[{"Name":"ID","Value":"73576"},{"Name":"HREF","Value":"-1"},{"Name":"HPRCSN","Value":"5"}]},{"Name":"HMETHOD","Value":"96"}]} ');
insert into TEST_TABLE values (107,'{"ClassId":32000,"Attributes":[{"Name":"ID","Value":"73589"},{"Name":"HREF","Value":"-1"},{"Name":"HPRCSN","Value":"5"}]},{"Name":"HMETHOD","Value":"96"}]} ');
insert into TEST_TABLE values (108,'{"ClassId":32000,"Attributes":[{"Name":"ID","Value":"74015"},{"Name":"HREF","Value":"-1"},{"Name":"HPRCSN","Value":"5"}]},{"Name":"HMETHOD","Value":"96"}]} ');
commit;
Now, my actual plan was to get two elements out of this data: HMETHOD and HPRCSN.
I want to write a SQL query that gives me that output, but I ran into two problems:
1. The position of each element is not the same in every row, so I cannot use a fixed position with SUBSTR.
2. If the value of HPRCSN is a whole number it is enclosed in double quotes, but if it is a decimal it comes without them, so all decimal output comes out as rounded integers.
We wrote some code that partly works, but not 100%, because of the element positions and the decimal values. If anyone has a suggestion to fix this SQL, it would be very helpful.
select t1.id
,to_number(regexp_substr(replace(regexp_replace(importdata, '[^,[:digit:]]',''),',,',','),'[^,]+',15)) as HMETH
,to_number(regexp_substr(replace(regexp_replace(importdata, '[^,[:digit:]]',''),',,',','),'[^,]+',18)) as HPRCSN
from TEST_TABLE t1;
The output I get is wrong for some rows because of the element positions.
Never use regular expressions to parse JSON; use a proper parser.
You can use JSON_TABLE to extract the name-value pairs:
select id,
       classid,
       name,
       value
from   TEST_TABLE t1
       CROSS APPLY JSON_TABLE(
           t1.importdata,
           '$'
           COLUMNS (
               classid NUMBER PATH '$.ClassId',
               NESTED PATH '$.Attributes[*]' COLUMNS (
                   name  VARCHAR2(20) PATH '$.Name',
                   value VARCHAR2(20) PATH '$.Value'
               )
           )
       );
Which, for your sample data, outputs:
ID  | CLASSID | NAME       | VALUE
----+---------+------------+--------
100 | 30074   | TYPE-SPEC  | SJ;3;1
100 | 30074   | HREF       | -1
100 | 30074   | HMETHOD    | 96
100 | 30074   | GEO_METHOD | 96
100 | 30074   | HPRCSN     | 2.7676
101 | 30074   | TYPE-SPEC  | SJ;3;1
101 | 30074   | HREF       | -1
101 | 30074   | HMETHOD    | 96
101 | 30074   | HPRCSN     | 3.04
102 | 30074   | TYPE-SPEC  | SJ;3;1
102 | 30074   | HREF       | -1
102 | 30074   | HMETHOD    | 96
102 | 30074   | GEO_METHOD | 96
102 | 30074   | HPRCSN     | 77.1814
103 | 30074   | TYPE-SPEC  | SJ;3;1
103 | 30074   | HREF       | -1
103 | 30074   | HMETHOD    | 96
103 | 30074   | GEO_METHOD | -1
103 | 30074   | HPRCSN     | 3.1121
105 | 32000   | ID         | 69804
105 | 32000   | HREF       | -1
105 | 32000   | HPRCSN     | 5
106 | 32000   | ID         | 73576
106 | 32000   | HREF       | -1
106 | 32000   | HPRCSN     | 5
107 | 32000   | ID         | 73589
107 | 32000   | HREF       | -1
107 | 32000   | HPRCSN     | 5
108 | 32000   | ID         | 74015
108 | 32000   | HREF       | -1
108 | 32000   | HPRCSN     | 5
If you want the values in columns (instead of rows) then use PIVOT:
SELECT *
FROM (
    select id,
           classid,
           name,
           value
    from   TEST_TABLE t1
           CROSS APPLY JSON_TABLE(
               t1.importdata,
               '$'
               COLUMNS (
                   classid NUMBER PATH '$.ClassId',
                   NESTED PATH '$.Attributes[*]' COLUMNS (
                       name  VARCHAR2(20) PATH '$.Name',
                       value VARCHAR2(20) PATH '$.Value'
                   )
               )
           ) j
)
PIVOT (
    MAX(value) FOR name IN (
        'ID'         AS idvalue,
        'HREF'       AS href,
        'GEO_METHOD' AS geomethod,
        'HPRCSN'     AS hprcsn,
        'HMETHOD'    AS hmethod
    )
);
Which outputs:
ID  | CLASSID | IDVALUE | HREF | GEOMETHOD | HPRCSN  | HMETHOD
----+---------+---------+------+-----------+---------+--------
100 | 30074   | null    | -1   | 96        | 2.7676  | 96
107 | 32000   | 73589   | -1   | null      | 5       | null
108 | 32000   | 74015   | -1   | null      | 5       | null
101 | 30074   | null    | -1   | null      | 3.04    | 96
106 | 32000   | 73576   | -1   | null      | 5       | null
103 | 30074   | null    | -1   | -1        | 3.1121  | 96
105 | 32000   | 69804   | -1   | null      | 5       | null
102 | 30074   | null    | -1   | 96        | 77.1814 | 96
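If you only need HMETHOD and HPRCSN, and want HPRCSN back as a number, you could also replace the PIVOT with conditional aggregation. This is a sketch of that idea, not a definitive implementation:

select id,
       MAX(CASE WHEN name = 'HMETHOD' THEN value END) AS hmethod,
       TO_NUMBER(MAX(CASE WHEN name = 'HPRCSN' THEN value END)) AS hprcsn  -- JSON_TABLE returns text; convert explicitly
from   TEST_TABLE t1
       CROSS APPLY JSON_TABLE(
           t1.importdata,
           '$'
           COLUMNS (
               NESTED PATH '$.Attributes[*]' COLUMNS (
                   name  VARCHAR2(20) PATH '$.Name',
                   value VARCHAR2(20) PATH '$.Value'
               )
           )
       )
group by id;

Because JSON_TABLE extracts the Value as text whether or not it was quoted in the JSON, the quoted-integer versus unquoted-decimal inconsistency in the data stops mattering.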
I have a table named three_current. This table gets 3 new rows every minute from another application, so it keeps growing. These three new rows always have channel numbers 350, 351, and 352. I want a trigger that inserts each of these three rows into three separate tables, so that each table contains the data for one channel number.
The three_current table is as follows:

datetime            | channel_number | Value | Status
--------------------+----------------+-------+-------
01/06/2021 22:45:00 | 350            | 100   | 1
01/06/2021 22:45:00 | 351            | 120   | 1
01/06/2021 22:45:00 | 352            | 110   | 1
01/06/2021 22:46:00 | 350            | 95    | 1
01/06/2021 22:46:00 | 351            | 105   | 1
01/06/2021 22:46:00 | 352            | 150   | 1
01/06/2021 22:47:00 | 350            | 195   | 1
01/06/2021 22:47:00 | 351            | 205   | 1
01/06/2021 22:47:00 | 352            | 250   | 1
I also have three other tables named red_current, yellow_current, and blue_current. I am trying, so far without success, to create a trigger that updates these three tables based on the channel_number of the three_current table, so that:

the red_current table will be

datetime            | channel_number | Value | Status
--------------------+----------------+-------+-------
01/06/2021 22:45:00 | 350            | 100   | 1
01/06/2021 22:46:00 | 350            | 95    | 1
01/06/2021 22:47:00 | 350            | 195   | 1

the yellow_current table will be

datetime            | channel_number | Value | Status
--------------------+----------------+-------+-------
01/06/2021 22:45:00 | 351            | 120   | 1
01/06/2021 22:46:00 | 351            | 105   | 1
01/06/2021 22:47:00 | 351            | 205   | 1

the blue_current table will be

datetime            | channel_number | Value | Status
--------------------+----------------+-------+-------
01/06/2021 22:45:00 | 352            | 110   | 1
01/06/2021 22:46:00 | 352            | 150   | 1
01/06/2021 22:47:00 | 352            | 250   | 1
But after executing my code, the red_current, yellow_current, and blue_current tables all get inserted with rows whose channel number is 350. Only the red_current table is correct; the other two are duplicates of it. (I suspect my code only picks up the first row of each batch received by the three_current table, and that is the row with channel number 350.)
My code is as follows:
DELIMITER //
CREATE TRIGGER `add` AFTER INSERT ON `three_current`
FOR EACH ROW
BEGIN
    DECLARE new_datetime datetime;       -- choose the datatypes
    DECLARE new_channel_number int;
    DECLARE new_value double;
    DECLARE new_status smallint;
    SET new_datetime = new.datetime;
    SET new_channel_number = new.channel_number;
    SET new_value = new.value;
    SET new_status = new.status;
    INSERT INTO red_current (datetime, channel_number, value, status)
    SELECT new.datetime, new.channel_number, new.value, new.status
    FROM three_current WHERE channel_number = '350'
    ON DUPLICATE KEY UPDATE status = new.status;
    INSERT INTO yellow_current (datetime, channel_number, value, status)
    SELECT new.datetime, new.channel_number, new.value, new.status
    FROM three_current WHERE channel_number = '351'
    ON DUPLICATE KEY UPDATE status = new.status;
    INSERT INTO blue_current (datetime, channel_number, value, status)
    SELECT new.datetime, new.channel_number, new.value, new.status
    FROM three_current WHERE channel_number = '352'
    ON DUPLICATE KEY UPDATE status = new.status;
END
//
DELIMITER ;
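The likely cause: the trigger fires once per inserted row, but each INSERT ... SELECT reads from three_current with only a channel_number filter, so every firing copies the NEW values (the first of which is the 350 row) into all three tables. A minimal sketch of a fix, branching on NEW.channel_number directly instead of selecting from the table (assuming the same table definitions):

DELIMITER //
CREATE TRIGGER `add` AFTER INSERT ON `three_current`
FOR EACH ROW
BEGIN
    -- Route the row that fired the trigger to exactly one target table.
    IF NEW.channel_number = 350 THEN
        INSERT INTO red_current (datetime, channel_number, value, status)
        VALUES (NEW.datetime, NEW.channel_number, NEW.value, NEW.status)
        ON DUPLICATE KEY UPDATE status = NEW.status;
    ELSEIF NEW.channel_number = 351 THEN
        INSERT INTO yellow_current (datetime, channel_number, value, status)
        VALUES (NEW.datetime, NEW.channel_number, NEW.value, NEW.status)
        ON DUPLICATE KEY UPDATE status = NEW.status;
    ELSEIF NEW.channel_number = 352 THEN
        INSERT INTO blue_current (datetime, channel_number, value, status)
        VALUES (NEW.datetime, NEW.channel_number, NEW.value, NEW.status)
        ON DUPLICATE KEY UPDATE status = NEW.status;
    END IF;
END
//
DELIMITER ;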
Hi, I need a complex query.
My table structure is:

attribute_id | value    | entity_id
-------------+----------+----------
188          | 48,51,94 | 1
188          | 43,22    | 2
188          | 43,22    | 3
188          | 43,22    | 6
190          | 33,11    | 10
190          | 90,61    | 12
190          | 90,61    | 15
I need the count of each value, like:

attribute_id | value | count
-------------+-------+------
188          | 48    | 1
188          | 43    | 3
188          | 51    | 1
188          | 94    | 1
188          | 22    | 3
190          | 33    | 1
190          | 11    | 1
190          | 90    | 2
190          | 61    | 2
I have searched a lot on Google for something like this, but unfortunately without success. Please suggest how I can achieve this.
I use a UDF for things like this, if that could work for you:
CREATE FUNCTION [dbo].[UDF_StringDelimiter]
/*********************************************************
** Takes Parameter "LIST" and transforms it for use    **
** to select individual values or ranges of values.    **
**                                                     **
** EX: 'This,is,a,test' = 'This' 'Is' 'A' 'Test'       **
*********************************************************/
(
    @LIST      VARCHAR(8000)
   ,@DELIMITER VARCHAR(255)
)
RETURNS @TABLE TABLE
(
    [RowID] INT IDENTITY
   ,[Value] VARCHAR(255)
)
WITH SCHEMABINDING
AS
BEGIN
    DECLARE
        @LISTLENGTH AS SMALLINT
       ,@LISTCURSOR AS SMALLINT
       ,@VALUE      AS VARCHAR(255)
    ;
    SELECT
        @LISTLENGTH = LEN(@LIST) - LEN(REPLACE(@LIST,@DELIMITER,'')) + 1
       ,@LISTCURSOR = 1
       ,@VALUE      = ''
    ;
    WHILE @LISTCURSOR <= @LISTLENGTH
    BEGIN
        INSERT INTO @TABLE (Value)
        SELECT
            CASE
                WHEN @LISTCURSOR < @LISTLENGTH
                    THEN SUBSTRING(@LIST,1,PATINDEX('%' + @DELIMITER + '%',@LIST) - 1)
                ELSE SUBSTRING(@LIST,1,LEN(@LIST))
            END
        ;
        SET @LIST = STUFF(@LIST,1,PATINDEX('%' + @DELIMITER + '%',@LIST),'')
        ;
        SET @LISTCURSOR = @LISTCURSOR + 1
        ;
    END
    ;
    RETURN
    ;
END
;
The UDF takes two parameters: a string to be split, and the delimiter to split by. I've been using it for all sorts of things over the years, because sometimes you need to split by a comma, sometimes by a space, sometimes by a whole string.
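For example, splitting a short list (results shown as comments):

SELECT RowID, Value
FROM dbo.UDF_StringDelimiter('This,is,a,test', ',');
-- RowID | Value
-- 1     | This
-- 2     | is
-- 3     | a
-- 4     | test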
Once you have that UDF, you can just do this:
DECLARE @TABLE TABLE
(
    Attribute_ID INT
   ,Value        VARCHAR(55)
   ,Entity_ID    INT
);
INSERT INTO @TABLE VALUES (188, '48,51,94', 1);
INSERT INTO @TABLE VALUES (188, '43,22', 2);
INSERT INTO @TABLE VALUES (188, '43,22', 3);
INSERT INTO @TABLE VALUES (188, '43,22', 6);
INSERT INTO @TABLE VALUES (190, '33,11', 10);
INSERT INTO @TABLE VALUES (190, '90,61', 12);
INSERT INTO @TABLE VALUES (190, '90,61', 15);
SELECT
    T1.Attribute_ID
   ,T2.Value
   ,COUNT(T2.Value) AS Counter
FROM @TABLE T1
CROSS APPLY dbo.UDF_StringDelimiter(T1.Value,',') T2
GROUP BY T1.Attribute_ID, T2.Value
ORDER BY T1.Attribute_ID ASC, Counter DESC
;
I used ORDER BY Attribute_ID ascending and then Counter descending, so you get each Attribute_ID with its most common repeating values first. You could change that, of course.
Returns this:
Attribute_ID Value Counter
-----------------------------------
188 43 3
188 22 3
188 94 1
188 48 1
188 51 1
190 61 2
190 90 2
190 11 1
190 33 1
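As an aside, on SQL Server 2016 and later you could skip the UDF entirely and use the built-in STRING_SPLIT (a sketch against the same @TABLE as above):

SELECT
    T1.Attribute_ID
   ,S.value AS Value
   ,COUNT(*) AS Counter
FROM @TABLE T1
CROSS APPLY STRING_SPLIT(T1.Value, ',') S   -- returns one row per delimited item, in a column named "value"
GROUP BY T1.Attribute_ID, S.value
ORDER BY T1.Attribute_ID ASC, Counter DESC;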
My table has 80 date/transaction column pairs:

dt1   | transaction_1 | dt2   | transaction_2 | dt3   | transaction_3 | ... | dt80 | transaction_80
------+---------------+-------+---------------+-------+---------------+-----+------+---------------
may01 | 22            | jun01 | 56            | Aug09 | 73            | ... |      |
sep02 | 49            | feb12 | 53            | dec23 | 80            | ... |      |

The output should be like below:

dt    | transaction | type
------+-------------+---------------
may01 | 22          | transaction_1
sep02 | 49          | transaction_1
jun01 | 56          | transaction_2
feb12 | 53          | transaction_2
Aug09 | 73          | transaction_3
dec23 | 80          | transaction_3
...   | ...         | ...
...   | ...         | transaction_80

Please, can anyone provide a query for this in MySQL?
Thanks.
Use the function CONCAT(string1, string2).
Example:
SELECT CONCAT(dt, transaction) FROM table_name
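For the unpivoting itself, the usual MySQL approach is one SELECT per column pair combined with UNION ALL. A sketch, where wide_table stands in for your table name:

SELECT dt1 AS dt, transaction_1 AS `transaction`, 'transaction_1' AS type FROM wide_table
UNION ALL
SELECT dt2, transaction_2, 'transaction_2' FROM wide_table
UNION ALL
SELECT dt3, transaction_3, 'transaction_3' FROM wide_table
-- ...repeat for each pair up to dt80 / transaction_80...
;

The 80 repeated SELECTs can be generated with a query against information_schema.columns (or a spreadsheet) rather than typed by hand.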
I have a table of all defined Unicode characters (the character column) and their associated Unicode points (the id column). I have the following query:
SELECT id FROM unicode WHERE `character` IN ('A', 'B', 'C')
While this query should return only 3 rows (id = 65, 66, 67), it instead returns 129 rows including the following IDs:
65 66 67 97 98 99 129 141 143 144 157 160 193 205 207 208 221 224 257
269 271 272 285 288 321 333 335 336 349 352 449 461 463 464 477 480
2049 2061 2063 2064 2077 2080 4161 4173 4175 4176 4189 4192 4929 4941
4943 4944 4957 4960 5057 5069 5071 5072 5085 5088 5121 5133 5135 5136
5149 5152 5953 5965 5967 5968 5984 6145 6157 6160 6176 8257 8269 8271
8272 8285 8288 9025 9037 9039 9040 9053 9056 9153 9165 9167 9168 9181
9184 9217 9229 9231 9232 9245 9248 10049 10061 10063 10064 10077 10080
10241 10253 10255 10256 10269 10272 12353 12365 12367 12368 12381
12384 13121 13133 13135 13136 13149 13152 13249 13261 13263 13264
13277 13280
I'm sure this must have something to do with multi-byte characters but I'm not sure how to fix it. Any ideas what's going on here?
String equality and ordering are governed by a collation. By default the collation used is determined from the column, but you can set the collation per query with the COLLATE clause. For example, if your column is declared with charset utf8, you could use utf8_bin to get a binary collation that considers A and à different:
SELECT id FROM unicode WHERE `character` COLLATE utf8_bin IN ('A', 'B', 'C')
Alternatively you could use the BINARY operator to convert character into a "binary string" which forces the use of a binary comparison, which is almost but not quite the same as binary collation:
SELECT id FROM unicode WHERE BINARY `character` IN ('A', 'B', 'C')
Update: I thought that the following should be equivalent, but it's not because a column has lower "coercibility" than the constants. The binary string constants would be converted into non-binary and then compared.
SELECT id FROM unicode WHERE `character` IN (_binary'A', _binary'B', _binary'C')
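A quick way to see the difference between a case/accent-insensitive collation and the binary one (a sketch; assumes a utf8 connection):

SELECT _utf8'A' = _utf8'à' COLLATE utf8_general_ci AS ci_equal,   -- 1: equal under the insensitive collation
       _utf8'A' = _utf8'à' COLLATE utf8_bin        AS bin_equal;  -- 0: different code points, so not equal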
You can try:
SELECT id FROM unicode WHERE `character` IN (_utf8'A',_utf8'B',_utf8'C')
I'm looking for an elegant way (in terms of syntax, not necessarily efficiency) to get the frequency distribution over decimal ranges.
For example, I have a table with a rating column which can be negative or positive, and I want the count of rows whose rating falls within each range:
- ...
- [-140.00 to -130.00): 5
- [-130.00 to -120.00): 2
- [-120.00 to -110.00): 1
- ...
- [120.00 to 130.00): 17
- and so on.
[i to j) means i inclusive to j exclusive.
Thanks in advance.
You could get pretty close using 'select floor(rating / 10), count(*) from (table) group by 1'
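Spelled out a little more (a sketch assuming a ratings table with a rating column), multiplying the bucket index back out gives the range bounds:

SELECT FLOOR(rating / 10) * 10      AS bucket_start,  -- inclusive lower bound; FLOOR also buckets negatives correctly
       FLOOR(rating / 10) * 10 + 10 AS bucket_end,    -- exclusive upper bound
       COUNT(*)                     AS freq
FROM ratings
GROUP BY bucket_start, bucket_end
ORDER BY bucket_start;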
I was thinking of something that could do many levels, like:
DELIMITER $$
CREATE PROCEDURE populate_stats()
BEGIN
  DECLARE range_loop INT DEFAULT 500;
  DECLARE the_next INT;
  simple_loop: LOOP
    SET the_next = range_loop - 10;
    -- "ratings"/"rating" are placeholders for your table and column
    SELECT SUM(CASE WHEN rating >= the_next AND rating < range_loop THEN 1 ELSE 0 END)
    FROM ratings;
    IF the_next = -500 THEN
      LEAVE simple_loop;
    END IF;
    SET range_loop = the_next;  -- step down to the next bucket
  END LOOP simple_loop;
END $$
usage: call populate_stats();
This would handle 100 ranges: 500-490, 490-480, ..., -480 to -490, -490 to -500.
Assuming a finite number of ranges, you could also do it in a single query:
SELECT
  SUM(CASE WHEN val >= -140 AND val < -130 THEN 1 ELSE 0 END) AS sum_neg140_to_neg130,
  SUM(CASE WHEN val >= -130 AND val < -120 THEN 1 ELSE 0 END) AS sum_neg130_to_neg120,
  ...
FROM your_table
And if not, you could use dynamic SQL to generate the SELECT, allowing any number of ranges, though you may run into a column limit.
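A rough sketch of that dynamic approach (hypothetical names: a ranges table with zmin/zmax columns, a vals table with a val column):

-- Build one SUM(CASE ...) per range, then prepare and run the generated SELECT.
SET @sql = (
  SELECT CONCAT(
           'SELECT ',
           GROUP_CONCAT(
             CONCAT('SUM(CASE WHEN val >= ', zmin, ' AND val < ', zmax,
                    ' THEN 1 ELSE 0 END) AS `', zmin, '_to_', zmax, '`')
             SEPARATOR ', '),
           ' FROM vals')
  FROM ranges);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

Note that GROUP_CONCAT is capped by group_concat_max_len (1024 bytes by default), so raise it for a long range list.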
Just put your desired ranges into a table, and use that to discriminate the values.
-- SET search_path='tmp';
DROP TABLE measurements;
CREATE TABLE measurements
( zval INTEGER NOT NULL PRIMARY KEY
);
INSERT INTO measurements (zval)
SELECT generate_series(1,1000);
DELETE FROM measurements WHERE random() < 0.20 ;
DROP TABLE ranges;
CREATE TABLE ranges
( zmin INTEGER NOT NULL PRIMARY KEY
, zmax INTEGER NOT NULL
);
INSERT INTO ranges(zmin,zmax) VALUES
(0, 100), (100, 200), (200, 300), (300, 400), (400, 500),
(500, 600), (600, 700), (700, 800), (800, 900), (900, 1000)
;
SELECT ra.zmin,ra.zmax
, COUNT(*) AS zcount
FROM ranges ra
JOIN measurements me
ON me.zval >= ra.zmin AND me.zval < ra.zmax
GROUP BY ra.zmin,ra.zmax
ORDER BY ra.zmin
;
Results:
zmin | zmax | zcount
------+------+--------
0 | 100 | 89
100 | 200 | 76
200 | 300 | 76
300 | 400 | 74
400 | 500 | 86
500 | 600 | 78
600 | 700 | 75
700 | 800 | 75
800 | 900 | 80
900 | 1000 | 82
(10 rows)
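If you also want empty ranges to show up with a count of zero, a LEFT JOIN variant of the same query works (counting me.zval, which is NULL for unmatched ranges):

SELECT ra.zmin, ra.zmax
     , COUNT(me.zval) AS zcount   -- counts only matched measurements; 0 for empty ranges
FROM ranges ra
LEFT JOIN measurements me
  ON me.zval >= ra.zmin AND me.zval < ra.zmax
GROUP BY ra.zmin, ra.zmax
ORDER BY ra.zmin
;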