I am looking for a way to find rows where a given element of a JSON column matches a pattern.
Let's start with the MySQL table:
CREATE TABLE `person` (
`attributes` json DEFAULT NULL
);
INSERT INTO `person` (`attributes`)
VALUES ('[{"scores": 1, "name": "John"},{"scores": 1, "name": "Adam"}]');
INSERT INTO `person` (`attributes`)
VALUES ('[{"scores": 1, "name": "Johny"}]');
INSERT INTO `person` (`attributes`)
VALUES ('[{"scores": 1, "name": "Peter"}]');
How do I find all records where any attributes[*].name matches the John* pattern?
In the John* case the query should return 2 records (the ones with John and Johny).
SELECT DISTINCT person.*
FROM person
CROSS JOIN JSON_TABLE(person.attributes, '$[*]' COLUMNS (name TEXT PATH '$.name')) parsed
WHERE parsed.name LIKE 'John%';
https://sqlize.online/sql/mysql80/c9e4a3ffa159c4be8c761d696e06d946/
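An alternative sketch that avoids JSON_TABLE: JSON_SEARCH accepts LIKE-style wildcards (% and _) in its search string, so the same filter can be written as a single predicate (assuming the person table above):

```sql
-- JSON_SEARCH returns the path of the first match, or NULL if nothing matches,
-- so a non-NULL result means some element's name matches 'John%'
SELECT *
FROM person
WHERE JSON_SEARCH(attributes, 'one', 'John%', NULL, '$[*].name') IS NOT NULL;
```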
I have a table in which a column holds a JSON array.
id   data
1    ["a", "b"]
2    ["a", "b", "c"]
I am using the query given below:
select JSON_EXTRACT(t.data, '$') as id from table1 t where t.id = 1;
This returns the complete array; if I change the path to '$[0]' I get the value at index 0.
How can I get the result as follows, i.e. all the array values in separate rows?
result
"a"
"b"
Are you looking for this:
CREATE TABLE mytable (
id INT PRIMARY KEY,
data JSON
);
INSERT INTO mytable (id, data) VALUES (1, '["a", "b"]');
INSERT INTO mytable (id, data) VALUES (2, '["a", "b", "c"]');
SELECT t.id,
       parsed.value
FROM mytable t
CROSS JOIN JSON_TABLE(
    t.data,
    '$[*]'
    COLUMNS (value VARCHAR(50) PATH '$')
) parsed;
The requirement is to generate JSON from a CLOB column.
Environment: Oracle 12.2.
I have a table with columns id (NUMBER) and details (CLOB), like below:
ID - details
100 - 134332:10.0, 1481422:1.976, 1483734:1.688, 2835036:1.371
101 - 134331:0.742, 319892:0.734, 1558987:0.7, 2132090:0.697
Example output:
{
"pId":100,
"cid":[
{
"cId":134332,
"wt":"10.0"
},
{
"cId":1481422,
"wt":"1.976"
},
{
"cId":1483734,
"wt":"1.688"
},
{
"cId":2835036,
"wt":"1.371"
}
]
}
Please help with an Oracle SQL query that generates this output.
Below I set up a table with a few input rows for testing; then I show one way you can solve your problem, and the output from that query. I didn't try to write the most efficient (fastest) query; rather, I hope this will show you how this can be done. Then if speed is a problem you can work on that. (In that case, it would be best to reconsider the inputs first, which break First Normal Form.)
I added a couple of input rows for testing, to see how nulls are handled. You can decide if that is the desired handling. (It is possible that no nulls occur in your data, in which case you should have said so when you asked the question.)
Setting up the test table:
create table input_tbl (id number primary key, details clob);
insert into input_tbl (id, details) values
(100, to_clob('134332:10.0, 1481422:1.976, 1483734:1.688, 2835036:1.371'));
insert into input_tbl (id, details) values
(101, '134331:0.742, 319892:0.734, 1558987:0.7, 2132090:0.697');
insert into input_tbl (id, details) values
(102, null);
insert into input_tbl (id, details) values
(103, '2332042: ');
commit;
Query:
with
tokenized (pid, ord, cid, wt) as (
select i.id, q.ord, q.cid, q.wt
from input_tbl i cross apply
(
select level as ord,
regexp_substr(details, '(, |^)([^:]+):', 1, level, null, 2)
as cid,
regexp_substr(details, ':([^,]*)', 1, level, null, 1) as wt
from dual
connect by level <= regexp_count(details, ':')
) q
)
, arrayed (pid, json_arr) as (
select pid, json_arrayagg(json_object(key 'cId' value to_number(trim(cid)),
key 'wt' value to_number(trim(wt)))
)
from tokenized
group by pid
)
select pid, json_object(key 'pId' value pid, key 'cid' value json_arr) as json
from arrayed
;
Output:
PID JSON
---- -----------------------------------------------------------------------------------------------------------------------------
100 {"pId":100,"cid":[{"cId":134332,"wt":10},{"cId":2835036,"wt":1.371},{"cId":1483734,"wt":1.688},{"cId":1481422,"wt":1.976}]}
101 {"pId":101,"cid":[{"cId":134331,"wt":0.742},{"cId":2132090,"wt":0.697},{"cId":1558987,"wt":0.7},{"cId":319892,"wt":0.734}]}
102 {"pId":102,"cid":[{"cId":null,"wt":null}]}
103 {"pId":103,"cid":[{"cId":2332042,"wt":null}]}
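One detail: the requested output had wt as a string ("wt":"10.0"), while the query above emits numbers. If the string form is required, a sketch of the changed aggregation (reusing the tokenized subquery from the query above):

```sql
-- keep wt as a string by skipping the to_number() conversion on it
select pid, json_arrayagg(json_object(key 'cId' value to_number(trim(cid)),
                                      key 'wt'  value trim(wt)))
from tokenized
group by pid;
```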
I am trying to search comma-separated values in a database table column that stores a comma-separated string.
My table:
id interest status
------------------------
1 1,2,3 1
2 4 1
My search combination can contain 1,2, 3,2, 3, 1,4, etc. Any combination can occur.
I want to show all the ids that contain any digit from the comma-separated search combination.
For example, search for 1,4 should return
id
--
1
2
For example, search for 3,2 should return
id
--
1
I have tried using IN and FIND_IN_SET but neither achieved my result. Is there any other option?
SELECT * FROM `tbl_test` WHERE interest IN (3)
The above query returns an empty set.
Like Jens has pointed out in the comments, it is highly recommended to normalize your schema.
If you wish to continue with comma-separated values stored in a string, you should then look at complex regex matching (which I leave to you to explore).
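For completeness, a sketch of what that regex matching could look like in MySQL (assuming the tbl_test table from the question, and a search combination of 1 or 4):

```sql
-- match 1 or 4 only as a whole list element: preceded by start-of-string or a
-- comma, and followed by end-of-string or a comma
SELECT id
FROM tbl_test
WHERE interest REGEXP '(^|,)(1|4)(,|$)';
```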
However, one more alternative is to convert your interest column to the JSON datatype. MySQL 5.7 and above supports it.
CREATE TABLE IF NOT EXISTS `tbl` (
`id` int(6) unsigned NOT NULL,
`interest` JSON DEFAULT NULL,
`status` int(1) NOT NULL,
PRIMARY KEY (`id`)
) DEFAULT CHARSET=utf8;
INSERT INTO `tbl` (`id`, `interest`,`status`) VALUES
(1, '[1,2,3,4]',1),
(2, '[1,2]',1),
(3, '[3]',1);
And then query it as follows:
select id from tbl where JSON_CONTAINS( interest ,'[1,2]')
select id from tbl where JSON_CONTAINS( interest ,'[3,4]');
...
You can see it in action in this SQL fiddle.
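One caveat: JSON_CONTAINS(interest, '[1,2]') matches only rows whose array contains all of the listed values. Since the question asks for rows containing any of them, MySQL 8.0.17 and above also offers JSON_OVERLAPS, which is true when the two arrays share at least one element:

```sql
-- true when interest shares at least one element with the search array
select id from tbl where JSON_OVERLAPS(interest, '[1,2]');
```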
I have a table that contains a bunch of numbers separated by commas.
I would like to retrieve the rows where an exact number, not a partial match, occurs within the string.
EXAMPLE:
CREATE TABLE IF NOT EXISTS `teams` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`uids` text NOT NULL,
`islive` tinyint(1) NOT NULL DEFAULT '1',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=5 ;
INSERT INTO `teams` (`id`, `name`, `uids`, `islive`) VALUES
(1, 'Test Team', '1,2,8', 1),
(3, 'Test Team 2', '14,18,19', 1),
(4, 'Another Team', '1,8,20,23', 1);
I would like to search for rows where 1 is within the string.
At present, if I use CONTAINS or LIKE, it brings back all rows containing a 1, but 18, 19, etc. are not 1 even though they have a 1 within them.
I have set up a sqlfiddle here.
Do I need to use a regex?
You only need one condition. Wrapping uids in a leading and trailing comma means every id, including the first and last in the list, ends up surrounded by commas, so ',1,' can only match the exact number:
select *
from teams
where concat(',', uids, ',') like '%,1,%'
I would search for all four possible locations of the ID you are searching for:
As the only element of the list.
As the first element of the list.
As the last element of the list.
As an inner element of the list.
The query would look like:
select *
from teams
where uids = '1' -- only
or uids like '1,%' -- first
or uids like '%,1' -- last
or uids like '%,1,%' -- inner
You could probably catch them all with an OR:
SELECT ...
WHERE uids LIKE '1,%'
OR uids LIKE '%,1'
OR uids LIKE '%, 1'
OR uids LIKE '%,1,%'
OR uids = '1'
You didn't specify which version of SQL Server you're using, but if you're using 2016+ you have access to the STRING_SPLIT function which you can use in this case. Here is an example:
CREATE TABLE #T
(
id int,
string varchar(20)
)
INSERT INTO #T
SELECT 1, '1,2,8' UNION
SELECT 2, '14,18,19' UNION
SELECT 3, '1,8,20,23'
SELECT * FROM #T
CROSS APPLY STRING_SPLIT(string, ',')
WHERE value = '1' -- compare as a string; the split values are varchar
Your SQL Fiddle is using MySQL and your syntax is consistent with MySQL. There is a built-in function to use:
select t.*
from teams t
where find_in_set(1, uids) > 0;
Having said that, FIX YOUR DATA MODEL SO YOU ARE NOT STORING LISTS IN A SINGLE COLUMN. Sorry that came out so loudly, it is just an important principle of database design.
You should have a table called teamUsers with one row per team and per user on that team. There are numerous reasons why your method of storing the data is bad:
Numbers should be stored as numbers, not strings.
Columns should contain a single value.
Foreign key relationships should be properly declared.
SQL (in general) has lousy string handling functions.
The resulting queries cannot be optimized.
Simple things like listing the uids in order or removing duplicates are unnecessarily hard.
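A sketch of that normalized design (teamUsers as a junction table; the names are illustrative):

```sql
CREATE TABLE teamUsers (
  team_id int NOT NULL,
  user_id int NOT NULL,
  PRIMARY KEY (team_id, user_id),
  FOREIGN KEY (team_id) REFERENCES teams (id)
);

-- the original search then becomes a plain, indexable lookup
SELECT t.*
FROM teams t
JOIN teamUsers tu ON tu.team_id = t.id
WHERE tu.user_id = 1;
```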
Say I have a column in my database called attributes which has this value as an example:
{"pages":["Page1"]}
How can I write a WHERE clause to filter down to the rows that have "Page1" in that array?
select JSON_QUERY(Attributes, '$.pages')
from Table
where JSON_QUERY(Attributes, '$.pages') in ('Page1')
Edit:
From the docs it seems like this might work, though it seems very complicated for what it is doing.
select count(*)
from T c
cross apply Openjson(c.Attributes)
with (pages nvarchar(max) '$.pages' as json)
outer apply openjson(pages)
with ([page] nvarchar(100) '$')
where [page] = 'Page1'
Something like this:
use tempdb
create table T(id int, Attributes nvarchar(max))
insert into T(id,Attributes) values (1, '{"pages":["Page1"]}')
insert into T(id,Attributes) values (2, '{"pages":["Page3","Page4"]}')
insert into T(id,Attributes) values (3, '{"pages":["Page3","Page1"]}')
select *
from T
where exists
(
select *
from openjson(T.Attributes,'$.pages')
where value = 'Page1'
)
returns
id Attributes
----------- ---------------------------
1 {"pages":["Page1"]}
3 {"pages":["Page3","Page1"]}
(2 rows affected)