This question already has answers here:
Using row_to_json() with nested joins
(3 answers)
Return multiple columns of the same row as JSON array of objects
(2 answers)
Closed 8 years ago.
I have a table:
CREATE TABLE test (
item_id INTEGER NOT NULL,
item_name VARCHAR(255) NOT NULL,
mal_item_name VARCHAR(255),
active CHAR(1) NOT NULL,
data_needed CHAR(1) NOT NULL,
parent_id INTEGER);
The query:
select array_to_json(array_agg(row_to_json(t)))
from (select item_id as id,
item_name as text,
parent_id as parent,
(mal_item_name,data_needed) as data
from test) t
produces result:
[{"id":1,"text":"Materials","parent":0, "data": {"f1":null,"f2":"N"}},
{"id":2,"text":"Bricks","parent":1, "data":{"f1":null,"f2":"N"}},
{"id":3,"text":"Class(high)","parent":2, "data":{"f1":null,"f2":"Y"}},
{"id":4,"text":"Class(low)","parent":2, "data":{"f1":null,"f2":"Y"}}]
The original field names mal_item_name and data_needed are replaced with f1 and f2.
How can I get JSON that keeps the original field names? The documentation says to create a composite type for these two fields. Is there an alternative?
Use json_build_object() in Postgres 9.4 or later:
SELECT json_agg(t) AS js
FROM (SELECT item_id AS id
, item_name AS text
, parent_id AS parent
, json_build_object('mal_item_name', mal_item_name
,'data_needed', data_needed) AS data
FROM test) t;
Also use json_agg(...) instead of array_to_json(array_agg(row_to_json(...))); it produces the same JSON array in one step. With the sample data above, data now comes out as {"mal_item_name":null,"data_needed":"N"} instead of {"f1":null,"f2":"N"}.
For Postgres 9.3:
SELECT json_agg(t) AS js
FROM (SELECT item_id AS id
, item_name AS text
, parent_id AS parent
, (SELECT t FROM (SELECT mal_item_name, data_needed)
AS t(mal_item_name, data_needed)) AS data
FROM test) t;
Details:
Return multiple columns of the same row as JSON array of objects
Using row_to_json() with nested joins
PostgreSQL : select columns inside json_agg
This question already has answers here:
Convert JSON array in MySQL to rows
(8 answers)
Closed last month.
I have a table in which a column holds a JSON array.
id | data
---|-----------------
1  | ["a", "b"]
2  | ["a", "b", "c"]
I am using the query given below.
select JSON_EXTRACT(t.data, '$') as data from table1 t where t.id = 1;
This returns the complete array; if I change the path to '$[0]', I get the value at index 0.
How can I get the result as follows, i.e. all the array values in separate rows:

result
------
"a"
"b"
Are you looking for this:
CREATE TABLE mytable (
id INT PRIMARY KEY,
data JSON
);
INSERT INTO mytable (id, data) VALUES (1, '["a", "b"]');
INSERT INTO mytable (id, data) VALUES (2, '["a", "b", "c"]');
SELECT T.id,
       data.value
FROM mytable T
INNER JOIN JSON_TABLE(
    T.data,
    "$[*]"
    COLUMNS (
        value VARCHAR(50) PATH "$"
    )
) AS data;
Note that JSON_TABLE requires MySQL 8.0.4 or later, and the ON clause can be omitted here because JSON_TABLE is a lateral table function evaluated once per row of mytable.
I am transferring data from MySQL to SQL Server using SSIS, and there are around 200 tables.
So I wrote a dynamic ETL that only takes the name of a table and handles the rest.
But since I had to have fixed table metadata, I used JSON_ARRAY in MySQL to combine all of the columns except the ID into a single column, something like this:
select id
,JSON_ARRAY(name,cellphone) as JSON
from table
Because I know the schema of the data, I wanted to reduce the JSON size, so I removed the column names (the schema) from the JSON.
The created JSON_ARRAY looks like this:
["hooman", "12345"]
After moving to SQL Server, I know I can use CROSS APPLY OPENJSON(t.json) to read it, but then I have to pivot the result, and that's not efficient at all!
I can see how to open a normal JSON object so that no pivot is needed, but I can't find anything for the array type.
In an ideal world I want something like this:
CROSS APPLY OPENJSON(t.json) with(
name varchar(255) '$[0]' ,
cellphone int '$[1]' )
so that, as a result, I have 2 columns and don't need to pivot my table anymore.
declare @json nvarchar(max) = N'["hooman", "12345"]';

-- Option 1: read the array positions directly with JSON_VALUE
select json_value(@json, '$[0]') as name, json_value(@json, '$[1]') as cellphone;

-- Option 2: wrap the array in an object, then use OPENJSON ... WITH
select *
from openjson(concat('{"x":', @json, '}'))
with
(
    name varchar(255) '$.x[0]',
    cellphone int '$.x[1]'
);

-- Option 3: wrap the array in another array; each inner array becomes one row
select *
from openjson(concat('[', @json, ']'))
with
(
    name varchar(255) '$[0]',
    cellphone int '$[1]'
);
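Applied per row of a table rather than a variable, option 1 needs no APPLY at all. A minimal sketch, assuming a hypothetical table mytable with an id column and an nvarchar(max) column named json holding arrays like the one above:

```sql
-- Hypothetical table and column names; JSON_VALUE extracts one scalar
-- per path, so each array slot becomes its own output column.
SELECT t.id,
       json_value(t.json, '$[0]') AS name,
       json_value(t.json, '$[1]') AS cellphone
FROM mytable AS t;
```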
This question already has answers here:
Query with multiple values in a column
(4 answers)
Closed 6 years ago.
I am having a problem with my SQL query.
I have a column of type SET, let's call it test_set, and there are multiple values a row can hold, say test1 and test2.
Let's say one row has test1,
and another has both test1 and test2.
How can I select all rows that have test1 (this should return both rows)?
And what about all rows that have test2 (this should return only the second row)?
As of right now, I know you can do SELECT * ... WHERE test_set = 'test1',
but this only returns rows whose value is exactly test1, not the ones that have both test1 and test2.
Thanks!
If I am understanding you correctly, you have a VARCHAR column containing comma-delimited values.
In that case, a LIKE will work for you:
SELECT * WHERE test_set LIKE '%test1%'
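Note that a bare LIKE '%test1%' also matches members such as test10 or xtest1. For a MySQL SET (or any comma-separated) column, FIND_IN_SET matches whole members only; a minimal sketch, assuming a hypothetical table my_table with the test_set column above:

```sql
-- Matches rows whose SET value contains the member 'test1' exactly,
-- e.g. 'test1' and 'test1,test2', but not 'test10'.
SELECT *
FROM my_table
WHERE FIND_IN_SET('test1', test_set) > 0;
```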
You might want to consider changing the database schema if you can, though - for example, have a separate SETS table that references your original table.
Ex.
CREATE TABLE MY_DATA (ID INT NOT NULL, NAME VARCHAR(255) NULL)
CREATE TABLE SETS (ID INT NOT NULL, MY_DATA_ID INT NOT NULL, SET_ITEM VARCHAR(50) NOT NULL)
SELECT *
FROM MY_DATA D
JOIN SETS S
ON S.MY_DATA_ID = D.ID
WHERE S.SET_ITEM = 'test1'
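To illustrate the normalized layout, a few hypothetical rows (the names and values are made up) and the matching behavior:

```sql
INSERT INTO MY_DATA (ID, NAME) VALUES (1, 'first row'), (2, 'second row');
INSERT INTO SETS (ID, MY_DATA_ID, SET_ITEM) VALUES
    (1, 1, 'test1'),   -- first row has test1
    (2, 2, 'test1'),   -- second row has test1 and test2
    (3, 2, 'test2');

-- With this data, SET_ITEM = 'test1' returns both rows,
-- and SET_ITEM = 'test2' returns only the second row.
```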
I tried this:
INSERT INTO event_log_tracker_table
SELECT * FROM event_tracker_table WHERE eventid = '560'
However I get this error:
Error Code: 1136. Column count doesn't match value count at row 1
The columns match exactly, except for one thing:
I added one more column (eventlogid) to event_log_tracker_table to be the primary key. How can I insert a row from another table and have the primary key generated in the new table?
Below is a structure of the tables.
event_log_tracker_table (24 columns)
-----------------------
eventlogid - PK
eventid - INT
//
// 22 other columns
//
event_tracker_table (23 columns)
-----------------------
eventid - PK
//
// 22 other columns
//
I have also tried this:
INSERT INTO event_log_tracker_table
SELECT null, * FROM event_tracker_table WHERE eventid = '560'
As documented under SELECT Syntax:
Use of an unqualified * with other items in the select list may produce a parse error. To avoid this problem, use a qualified tbl_name.* reference
SELECT AVG(score), t1.* FROM t1 ...
Therefore, instead of SELECT NULL, *, you should qualify the wildcard:
INSERT INTO event_log_tracker_table
SELECT NULL, event_tracker_table.*
FROM event_tracker_table
WHERE eventid = '560'
This question already has answers here:
Recursive MySQL Query with relational innoDB
(2 answers)
Closed 9 years ago.
I have a MySQL table which has the following format:
CREATE TABLE IF NOT EXISTS `Company` (
`CompanyId` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
`Name` VARCHAR(45) NULL ,
`Address` VARCHAR(45) NULL ,
`ParentCompanyId` INT UNSIGNED NULL ,
PRIMARY KEY (`CompanyId`) ,
INDEX `fk_Company_Company_idx` (`ParentCompanyId` ASC) ,
CONSTRAINT `fk_Company_Company`
FOREIGN KEY (`ParentCompanyId` )
REFERENCES `Company` (`CompanyId` )
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
So to clarify, I have companies which can have a parent company. This could result in the following example table contents:
CompanyId Name Address ParentCompanyId
1 Foo Somestreet 3 NULL
2 Bar Somelane 4 1
3 McD Someway 1337 1
4 KFC Somewhere 12 2
5 Pub Someplace 2 4
Now comes my question.
I want to retrieve all children of CompanyId 2 recursively, so the following result set should appear:
CompanyId Name Address ParentCompanyId
4 KFC Somewhere 12 2
5 Pub Someplace 2 4
I thought of using the WITH ... AS ... statement, but it is not supported by MySQL. Another solution I thought of was a procedure or function that returns a result set and unions it with the recursive call of that function, but MySQL only supports scalar column types as function return values.
The last possible solution I thought about was to create a table with two fields: CompanyId and HasChildId. I could then write a procedure that loops recursively through the companies and fills the table with all recursive children by a companyid. In this case I could write a query which joins this table:
SELECT C.CompanyId, C.Name, C.Address
FROM Company C -- The child
INNER JOIN CompanyChildMappingTable M
ON M.HasChildId = C.CompanyId
INNER JOIN Company P -- The parent
ON P.CompanyId = M.CompanyId
WHERE P.CompanyId = 2;
This option should be fast if I call the procedure every 24 hours and fill the table on the fly when new records are inserted into Company. But that could be very tricky, and I would have to do it by writing triggers on the Company table.
I would like to hear your advice.
Solution: I've built the following procedure to fill my table (for now it just returns the SELECT result).
DELIMITER $$

DROP PROCEDURE IF EXISTS CompanyFillWithSubCompaniesByCompanyId$$

CREATE PROCEDURE CompanyFillWithSubCompaniesByCompanyId(IN V_CompanyId BIGINT UNSIGNED, IN V_TableName VARCHAR(100))
BEGIN
    DECLARE V_CONCAT_IDS VARCHAR(9999) DEFAULT '';
    DECLARE V_CURRENT_CONCAT VARCHAR(9999) DEFAULT '';

    -- Seed with the direct children (or all companies when V_CompanyId is NULL).
    SET V_CONCAT_IDS = (SELECT GROUP_CONCAT(CompanyId) FROM Company
                        WHERE V_CompanyId IS NULL OR ParentCompanyId = V_CompanyId);
    SET V_CURRENT_CONCAT = V_CONCAT_IDS;

    IF V_CompanyId IS NOT NULL THEN
        companyLoop: LOOP
            IF V_CURRENT_CONCAT IS NULL THEN
                LEAVE companyLoop;
            END IF;
            -- Descend one level: children of the ids collected in the previous pass.
            SET V_CURRENT_CONCAT = (SELECT GROUP_CONCAT(CompanyId) FROM Company
                                    WHERE FIND_IN_SET(ParentCompanyId, V_CURRENT_CONCAT));
            SET V_CONCAT_IDS = CONCAT_WS(',', V_CONCAT_IDS, V_CURRENT_CONCAT);
        END LOOP;
    END IF;

    SELECT * FROM Company WHERE FIND_IN_SET(CompanyId, V_CONCAT_IDS);
END$$
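For completeness: MySQL 8.0 and later do support recursive CTEs, which replace the loop above entirely. A sketch against the Company table from the question:

```sql
-- MySQL 8.0+: all descendants of company 2.
WITH RECURSIVE descendants AS (
    SELECT CompanyId, Name, Address, ParentCompanyId
    FROM Company
    WHERE ParentCompanyId = 2        -- direct children of company 2
    UNION ALL
    SELECT c.CompanyId, c.Name, c.Address, c.ParentCompanyId
    FROM Company c
    JOIN descendants d ON c.ParentCompanyId = d.CompanyId
)
SELECT * FROM descendants;
```

With the sample contents above, this returns companies 4 (KFC) and 5 (Pub), matching the expected result set.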
Refer to:
Recursive MySQL Query with relational innoDB
and
How to find all child rows in MySQL?
They should give an idea of how such a data structure can be handled in MySQL.
One quick way to search is to use company id values based on powers of 2: companyId = parentId * 2, then query the database like select * from company where (CompanyId % parentId) = 0.
I tried this code; it's quick, but the problem is that it creates each child's id as parentId * 2, and as the depth of the tree grows, the INT or FLOAT values go out of range. So I re-created my whole program.