I have this select:
select regexp_replace(regexp_substr('[{"date": "01_2016", "val":"100_22"},{"date": "02_2016","val": "200.10"}]'
,'"val":\s*("(\w| )*")', 1, level)
,'"val":\s*"((\w| )*)"', '\1', 1, 1) val
from dual
connect by regexp_substr('[{"date": "01_2016", "val":"100_22"},{"date": "02_2016","val": "200.10"}]', '"val":\s*("(\w| )*")', 1, level) is not null
;
If my value has the format 100_10, it works. But I want 100.10, and this select does not support that. How should I write the regexp_replace?
Use (\d+)_(\d+) to match only the numeric values separated by an underscore:
SELECT REGEXP_REPLACE(
'[{"date": "01_2016", "val":"100_22"},{"date": "02_2016","val": "200.10"}]',
'"val":"(\d+)_(\d+)"',
'"val":"\1.\2"'
)
FROM DUAL;
Thanks, everybody. I found the solution:
select regexp_replace(regexp_substr('[{"date": "01-2016", "val":"100.22"},{"date": "02-2016","val": "200.10"},{"date": "03-2016","val": "200.15"}]','"val":\s*("(\w|[..])*")', 1, level),'"val":\s*"((\w|[..])*)"', '\1', 1, 1) val, regexp_replace(regexp_substr('[{"date": "01-2016", "val":"100.22"},{"date": "02-2016","val": "200.10"},{"date": "03-2016","val": "200.15"}]' ,'"date":\s*("(\w|[-])*")', 1, level) ,'"date":\s*"((\w|[-])*)"', '\1', 1, 1) date_period from dual connect by regexp_substr('[{"date": "01-2016", "val":"100.22"},{"date": "02-2016","val": "200.10"},{"date": "03-2016","val": "200.15"}]', '"val":\s*("(\w|[..])*")', 1, level) is not null
Related
I am trying to use MySQL to solve the question below.
Any idea how I should make it work? Thank you.
I tried the code below, but it extracts duplicate strings into two columns and it is hard-coded, so it does not work.
SELECT itemid, SUBSTRING_INDEX(SUBSTRING_INDEX(item_variation, ',', 1), ',', -1) 'type one',
SUBSTRING_INDEX(SUBSTRING_INDEX(item_variation, ',', 2), ',', -1) 'type two',
SUBSTRING_INDEX(SUBSTRING_INDEX(item_variation, ',', 3), ',', -1) 'type three',
SUBSTRING_INDEX(SUBSTRING_INDEX(item_variation, ',', 4), ',', -1) 'type four'
FROM data
Question:
To extract items with more than 3 types
|itemid|shopid|item_name|item_type|price|stock|creation_date|
|1|10000|clothes|{}|5|100|27/1/2018|
|2|10000|dress|{Pink: 20, Black: 20, Grey: 20}|20|100|20/2/2018|
|3|10001|t-shirt|{S: 2, M: 2, L: 2, XL: 2}|2|50|1/1/2018|
|4|10002|socks|{us5.5: 1, us9: 1, us4.5: 1, us10: 1, us7: 1, us6: 1, us5: 1}|1|1000|4/1/2018|
|5|10002|Gloves|{S: 2, M: 2}|2|500|6/1/2018|
Expected result
|itemid |item_name |item_type|
|3 |t-shirt |{S: 2, M: 2, L: 2, XL: 2}|
|4 |socks |{us5.5: 1, us9: 1, us4.5: 1, us10: 1, us7: 1, us6: 1, us5: 1}|
This ought to do:
select itemid, item_name, item_type
from t
where length(item_type) - length(replace(item_type, ',', '')) >= 3;
You need a special case to tell 0 and 1 types apart (an empty {} and a single type both contain zero commas). It would also not work if item_type contained ',' inside a key or value of the JSON-like field (the strings are missing quotes, so it is not valid JSON).
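A minimal sketch of that special case, assuming the same table t as above: count the commas + 1, but treat an empty {} as zero types.
-- Sketch: type_count is commas + 1, except an empty '{}' which has zero types.
select itemid, item_name, item_type,
       case
         when trim(replace(replace(item_type, '{', ''), '}', '')) = '' then 0
         else length(item_type) - length(replace(item_type, ',', '')) + 1
       end as type_count
from t;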
You just need to check that the number of commas (,) is >= 3. Try the code below:
SELECT itemid, item_name, item_type
FROM myjson
WHERE ROUND(
        (LENGTH(item_type) - LENGTH(REPLACE(item_type, ",", "")))
        / LENGTH(",")
      ) >= 3
I have a JSON array [1, 2, 3, 3, 3], and I want to find out how many times the element 3 appears.
For example,
json_search('[1, 2, 3, 3, 3]', 'all', 3) returns null;
json_search('["1", "2", "3", "3", "3"]', 'all', '3') returns ["$[2]", "$[3]", "$[4]"];
Therefore,
json_length(json_search('[1, 2, 3, 3, 3]', 'all', 3)) returns null;
I want it to return 3.
I've been looking all day, but I haven't found a solution, so I'm asking for help.
One option here, assuming you have just a single top-level array of JSON integers, would be to use a regex replacement trick to count the number of 3's:
WITH yourTable AS (
SELECT '[1, 2, 3, 3, 3]' AS array
)
SELECT
LENGTH(array) - LENGTH(REGEXP_REPLACE(array, '\\b3\\b', '')) AS num_3
FROM yourTable;
This returns 3, which is the correct count: three single-character 3's are removed, so the length difference is 3.
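If you are on MySQL 8.0, another option (a sketch assuming JSON_TABLE is available to you) is to expand the array into rows and count the matches directly:
-- Sketch, MySQL 8.0+: turn each array element into a row, then count the 3's.
SELECT COUNT(*) AS num_3
FROM JSON_TABLE('[1, 2, 3, 3, 3]',
                '$[*]' COLUMNS (val INT PATH '$')) AS jt
WHERE jt.val = 3;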
I have a month value like "22018" in my column, and I need it as Feb-2018 in MySQL Workbench.
You need to first extract the month from the date (considering it will have one or two digits), e.g.:
SELECT LPAD(SUBSTRING('22018', 1, LENGTH('22018') - 4), 2, '0');
This will give you 02. Now, you can extract the year with similar logic, e.g.:
SELECT SUBSTRING('22018', LENGTH('22018') - 4 + 1, LENGTH('22018'));
Finally, you can concatenate all these to get a string like 2018-02-01:
SELECT CONCAT(SUBSTRING('22018', LENGTH('22018') - 4 + 1, LENGTH('22018')),
'-',
LPAD(SUBSTRING('22018', 1, LENGTH('22018') - 4), 2, '0'), '-01');
Once this is done, you can use the DATE_FORMAT function to get the required output (%b gives the abbreviated month name, e.g. Feb):
SELECT DATE_FORMAT(CONCAT(SUBSTRING('22018', LENGTH('22018') - 4 + 1, LENGTH('22018')),
                          '-',
                          LPAD(SUBSTRING('22018', 1, LENGTH('22018') - 4), 2, '0'),
                          '-01'),
                   '%b-%Y');
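A shorter alternative sketch, assuming STR_TO_DATE is acceptable: left-pad the value to a fixed MMYYYY shape, append a dummy day, parse it, and format the result.
-- Sketch: '22018' -> '022018' -> '02201801' -> 2018-02-01 -> 'Feb-2018'.
SELECT DATE_FORMAT(
         STR_TO_DATE(CONCAT(LPAD('22018', 6, '0'), '01'), '%m%Y%d'),
         '%b-%Y') AS month_label;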
I have a JSON string in one column of an Oracle 10g database, like:
[{"id":"1","contactBy":"Rajesh Kumar"},{"id":"2","contactBy":"Rakesh Kumar"}]
I have to get the value of contactBy from that column for one of the reports.
Is there any built-in function to parse the JSON string in Oracle 10g, or any user-defined function to parse the string?
As Jens said in the comments, JSON support is only available from 12c, but you can use regular expressions as a workaround to get what you want:
select regexp_replace(regexp_substr('[{"id": "1", "contactBy":"Rajesh Kumar"},{"id": "2","contactBy": "Emmanuel Test"}]',
'"contactBy":\s*("(\w| )*")', 1, level),
'"contactBy":\s*"((\w| )*)"', '\1', 1, 1) contact
from dual
connect by regexp_substr('[{"id": "1","contactBy":"Rajesh Kumar"},{"id": "2","contactBy": "Emmanuel Test"}]', '"contactBy":\s*("(\w| )*")', 1, level) is not null
;
EDIT: query modified to handle special characters and to display the answers in a single row:
select listagg(contact, ', ') within group (order by lev)
from
(
select regexp_replace(regexp_substr('[{"id": "1", "contactBy":"Rajesh Kumar"},{"id": "2","contactBy": "Emmanuel Test+-"}]',
'"contactBy":\s*(".*?")', 1, level),
'"contactBy":\s*"(.*?)"', '\1', 1, 1) contact, level lev
from dual
connect by regexp_substr('[{"id": "1","contactBy":"Rajesh Kumar"},{"id": "2","contactBy": "Emmanuel Test+-"}]', '"contactBy":\s*(".*?")', 1, level) is not null
)
;
Emmanuel, your code really helped a lot, thank you very much. But your query was taking too much time, so I changed it to a function, which returns the required values.
CREATE OR REPLACE FUNCTION SFGETCRCONTACTBY(INCRID NUMBER) RETURN VARCHAR2 AS
  TEMPINT NUMBER := 0;
  OUTPUT  VARCHAR2(10000);
  TEMPVAR VARCHAR2(1000);
BEGIN
  -- Count how many "contactBy" entries the JSON-like column contains.
  SELECT REGEXP_COUNT(CR_CONTACT_BY, '"contactBy":\s*(".*?")')
    INTO TEMPINT
    FROM T_LOAN_REQUEST_MARKET
   WHERE CR_ID = INCRID;

  -- Walk through the occurrences and append each value to the comma-separated result.
  WHILE TEMPINT > 0
  LOOP
    SELECT REGEXP_REPLACE(REGEXP_SUBSTR(CR_CONTACT_BY, '"contactBy":\s*(".*?")', 1, TEMPINT),
                          '"contactBy":\s*"(.*?)"', '\1', 1, 1)
      INTO TEMPVAR
      FROM T_LOAN_REQUEST_MARKET
     WHERE CR_ID = INCRID;

    IF OUTPUT IS NULL THEN
      OUTPUT := TEMPVAR;
    ELSE
      OUTPUT := OUTPUT || ',' || TEMPVAR;
    END IF;

    TEMPINT := TEMPINT - 1;
  END LOOP;

  RETURN OUTPUT;
END;
/
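A quick usage sketch, assuming the same table and column names as in the function:
-- Sketch: call the function per row to get the comma-separated contact list.
SELECT CR_ID, SFGETCRCONTACTBY(CR_ID) AS CONTACTS
FROM T_LOAN_REQUEST_MARKET;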
I have a column with values like this:
01709100011
I need to transform it to:
017.091.0001-1
The values always have the same number of characters.
Both columns are varchar
Thanks in advance for any help.
SELECT CONCAT(SUBSTRING(test, 1, 3), '.',
              SUBSTRING(test, 4, 3), '.',
              SUBSTRING(test, 7, 4), '-',
              SUBSTRING(test, 11, 1))
FROM test;
In the example above, I used the table test and values from the column test.
SELECT CONCAT_WS("-",
                 CONCAT_WS(".", SUBSTRING(foo, 1, 3), SUBSTRING(foo, 4, 3), SUBSTRING(foo, 7, 4)),
                 SUBSTRING(foo, 11, 1))
FROM bar
WHERE 1=1;
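If the goal is to rewrite the stored values rather than just select them, here is a sketch assuming the same hypothetical table bar and column foo, and that every value is exactly 11 digits:
-- Sketch: persist the formatted value; the REGEXP guard skips rows that are not 11 digits.
UPDATE bar
SET foo = CONCAT(SUBSTRING(foo, 1, 3), '.',
                 SUBSTRING(foo, 4, 3), '.',
                 SUBSTRING(foo, 7, 4), '-',
                 SUBSTRING(foo, 11, 1))
WHERE foo REGEXP '^[0-9]{11}$';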