I'm working with Snowflake.
I'm executing these statements:
create table test(
src varchar
);
insert into test
values ('{"value":
{"evaluation_forms":
[ {"evaluations":
[ {"channel_meta":
{"after_call_work_time": [],
"agent_first_name": ["KATRINA"],
"agent_hung_up": [],
"agent_last_name": ["COX"],
"agent_unique_id": ["LO_00130604"],
"agent_username": [],
"alternate_call_id": [],
"total_time": []
}
} ]
} ]
}
}'
);
Issuing this statement:
SELECT
cm.*
FROM
(select parse_json(src) src from test) t
, LATERAL FLATTEN(INPUT => SRC:value) v
, LATERAL FLATTEN(INPUT => v.value) vv
, LATERAL FLATTEN(INPUT => vv.value) ev
, LATERAL FLATTEN(INPUT => ev.value) cm
returns rows, including the JSON in the value column.
Issuing this statement:
SELECT
cm.channel_meta.agent_first_name[0],
cm.*
FROM
(select parse_json(src) src from test) t
, LATERAL FLATTEN(INPUT => SRC:value) v
, LATERAL FLATTEN(INPUT => v.value) vv
, LATERAL FLATTEN(INPUT => vv.value) ev
, LATERAL FLATTEN(INPUT => ev.value) cm
gets me an invalid identifier error, and all sorts of variations produce the same error.
How do I extract the agent_first_name from this json?
Thanks, --sw
You have to reference the flattened element through '.value' in the SELECT clause:
SELECT
cm.value:channel_meta.agent_first_name[0],
cm.*
FROM
(select parse_json(src) src from test) t
, LATERAL FLATTEN(INPUT => SRC:value) v
, LATERAL FLATTEN(INPUT => v.value) vv
, LATERAL FLATTEN(INPUT => vv.value) ev
, LATERAL FLATTEN(INPUT => ev.value) cm
If the agent_first_name array contains more than one value and you want to display them all, specify cm.value:channel_meta.agent_first_name without the array position [0].
If you are looking for other elements such as agent_username or agent_last_name, specify them in the SELECT clause as cm.value:channel_meta.agent_username, cm.value:channel_meta.agent_last_name, and so on.
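For intuition, here is a rough Python analogue of what the FLATTEN chain does, using the standard json module and the question's document abbreviated to the relevant keys. Each LATERAL FLATTEN peels one layer and exposes the peeled element through a `value` column, which is why the SELECT has to go through cm.value rather than cm directly:

```python
import json

# The question's JSON document, abbreviated to the keys that matter here.
src = json.loads('''
{"value": {"evaluation_forms": [
    {"evaluations": [
        {"channel_meta": {"agent_first_name": ["KATRINA"],
                          "agent_last_name": ["COX"]}}
    ]}
]}}
''')

# Each LATERAL FLATTEN step exposes the current element via its `value` column:
forms = src["value"]["evaluation_forms"]     # after FLATTEN(INPUT => SRC:value)
evaluation = forms[0]["evaluations"][0]      # after the next two FLATTENs
channel_meta = evaluation["channel_meta"]    # what cm.value:channel_meta reaches
print(channel_meta["agent_first_name"][0])   # prints KATRINA
```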
I have the query below, and with only about 200k records in the table it has started taking too long to execute, roughly 30 seconds.
I am not sure where the problem is or what is causing it.
I have other databases with more than 2 million records and no speed issues.
But somehow, for some reason, this query is causing problems on one site.
select p.pid, p.other_fields, c.user_name,
group_concat( t.tag ) as tags
from post_table as p, user_table as c, tag_table as t
where p.userID= c.userID
and p.stat=1
and p.mainID=0
and c.stat='y'
and t.pid=p.pid
group by p.pid
order by p.pid desc
limit 0, 20
This is the same query in proper JOIN format; it makes no difference, still slow.
This is actually what I had earlier, but I changed it to the older comma-join format above just to see whether it made any difference.
select p.pid, p.other_fields, c.user_name, group_concat( t.tag ) as tags
from post_table as p
LEFT JOIN user_table as c on p.userID = c.userID
LEFT JOIN tag_table as t on p.pid = t.pid
where p.stat=1
and p.mainID=0
and c.stat='y'
group by p.pid
order by p.pid desc
limit 0, 20
Structures and indexes on these tables:
post_table:
pid, userID, stat, mainID, title, other_fields...
index( userID, stat, mainID, title )
user_table:
userID, stat, user_name, pass_word, etc...
index( user_name, pass_word )
index( stat )
tag_table:
id, pid, tag
index( pid, tag )
I think I am using all the indexes properly, but the query still takes a lot of time to execute, and I don't know why.
Can someone please tell me what the reason could be?
Thanks
Below is the output of the EXPLAIN statement of this query above:
I am not sure how to read this, but I think that for some reason it's ignoring the "stat" index on both user_table and post_table.
Array
(
[0] => Array
(
[id] => 1
[select_type] => SIMPLE
[table] => c
[type] => ALL
[possible_keys] => PRIMARY,id,id_2, userStat
[key] =>
[key_len] =>
[ref] =>
[rows] => 8
[Extra] => Using where; Using temporary; Using filesort
)
[1] => Array
(
[id] => 1
[select_type] => SIMPLE
[table] => p
[type] => ref
[possible_keys] => PRIMARY,id,id_2, userID, postmainID
[key] => userID
[key_len] => 27
[ref] =>
[rows] => 15091
[Extra] =>
)
[2] => Array
(
[id] => 1
[select_type] => SIMPLE
[table] => t
[type] => ref
[possible_keys] => pid
[key] => pid
[key_len] => 777
[ref] =>
[rows] => 1
[Extra] => Using where; Using index
)
)
select p.pid, p.other_fields, c.user_name,
( SELECT group_concat( t.tag ) FROM tag_table AS t
WHERE t.pid = p.pid ) as tags
FROM post_table as p
JOIN user_table as c ON p.userID = c.userID
where p.stat = 1
and p.mainID = 0
and c.stat = 'y'
order by p.pid desc
limit 0, 20
p: INDEX(stat, mainID, pid, userID, other_fields)
c: INDEX(userID, stat, user_name)
t: INDEX(pid, tag)
The GROUP BY p.pid is probably redundant now; put it back in if you need it.
There is no performance difference between the old comma-join and the new JOIN ... ON. There is a possible semantic difference between JOIN/comma-join and LEFT JOIN. My reformulation for tags is equivalent to a LEFT JOIN. The presence of c.stat = ... forces the other LEFT JOIN to behave as an inner JOIN, so there is no semantic difference.
pid is key_len=777? Please provide SHOW CREATE TABLE so I can understand. Ditto for userID and 27.
There are a lot of possible reasons for the Optimizer to avoid a given index. It will probably use my indexes in preference to all others.
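As a sanity check of the correlated-subquery reformulation, here is a minimal sqlite3 sketch. Table and column names follow the question, but the data is invented, and sqlite stands in for MySQL here (its planner differs, so this only demonstrates the result shape, not the speedup):

```python
import sqlite3

# GROUP_CONCAT moved into a correlated subquery, so the outer query no longer
# multiplies post rows per tag and needs no GROUP BY.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE post_table (pid INTEGER, userID INTEGER, stat INTEGER, mainID INTEGER);
CREATE TABLE user_table (userID INTEGER, stat TEXT, user_name TEXT);
CREATE TABLE tag_table  (id INTEGER, pid INTEGER, tag TEXT);
INSERT INTO post_table VALUES (1, 10, 1, 0), (2, 10, 1, 0);
INSERT INTO user_table VALUES (10, 'y', 'alice');
INSERT INTO tag_table  VALUES (1, 1, 'sql'), (2, 1, 'mysql'), (3, 2, 'php');
""")
rows = conn.execute("""
SELECT p.pid, c.user_name,
       (SELECT group_concat(t.tag) FROM tag_table AS t
         WHERE t.pid = p.pid) AS tags
  FROM post_table AS p
  JOIN user_table AS c ON p.userID = c.userID
 WHERE p.stat = 1 AND p.mainID = 0 AND c.stat = 'y'
 ORDER BY p.pid DESC
 LIMIT 20
""").fetchall()
print(rows)  # one row per post, tags collapsed to a comma-separated list
```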
How do I combine these two queries into a single one without looping?
$today_date = mktime(0, 0, 0, $mon, $day-1, $year);
SELECT *
FROM `lead_follow_up`
LEFT JOIN `leads` ON `leads`.`id` = `lead_follow_up`.`lead_id`
WHERE `date` <= $today_date
GROUP BY `lead_follow_up`.`lead_id`
ORDER BY `lead_follow_up`.`date` DESC
From the above query I get the array $previou:
$previou= Array
(
[0] => stdClass Object
(
[id] => 1
[lead_id] => 75943
[date] => 1438930800
[updated_on] => 1438884890
)
[1] => stdClass Object
(
[id] => 2
[lead_id] => 75943
[date] => 1416459600
[updated_on] => 1415901523
),
[2] => stdClass Object
(
[id] => 3
[lead_id] => 75943
[date] => 1416459600
[updated_on] => 1415901523
),....etc
);
foreach($previou as $key => $p):
$q = "SELECT `id` FROM (`lead_follow_up`) WHERE `lead_id` = '".$p->id."' AND `date` > '".$p->date."' ORDER BY `updated_on` DESC ";
if(!$this->db->query($q)){
$previouData[$key] = $p;
$pCount++;
}
endforeach;
Your queries don't make much sense. For a start your first query has a GROUP BY lead_follow_up.lead_id but no aggregate functions. So in MySQL that will return one row for each value of lead_id (which row it returns is not defined).
Yet your array of sample data has multiple rows per lead_id so cannot have come from the query.
You are also LEFT OUTER JOINing the leads table, yet it doesn't seem to make sense to have a lead_follow_up which doesn't relate to a lead. As such you may as well use an INNER JOIN.
I am going to assume that what you want is a list of leads / lead_follow_ups and, for each one, a count of all the follow-ups after that particular follow-up. That would give you something like this (making loads of assumptions, as I do not know your table structure):
SELECT leads.id AS lead_id,
       lead_follow_up.id,
       lead_follow_up.`date`,
       lead_follow_up.updated_on,
       COUNT(lead_follow_up_future.id) AS future_lead_count
FROM leads
INNER JOIN lead_follow_up ON leads.id = lead_follow_up.lead_id
LEFT OUTER JOIN lead_follow_up AS lead_follow_up_future
       ON leads.id = lead_follow_up_future.lead_id
      AND lead_follow_up_future.`date` > lead_follow_up.`date`
WHERE lead_follow_up.`date` <= $today_date
GROUP BY leads.id,
         lead_follow_up.id,
         lead_follow_up.`date`,
         lead_follow_up.updated_on
ORDER BY lead_follow_up.`date` DESC
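A minimal sqlite3 sketch of the self-join idea, with invented data (sqlite stands in for MySQL here): each follow-up row is LEFT JOINed to the later follow-ups of the same lead, and COUNT over the joined ids yields the number of future follow-ups, 0 when there are none.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE leads (id INTEGER PRIMARY KEY);
CREATE TABLE lead_follow_up (id INTEGER, lead_id INTEGER, date INTEGER);
INSERT INTO leads VALUES (75943);
INSERT INTO lead_follow_up VALUES (1, 75943, 1438930800),
                                  (2, 75943, 1416459600),
                                  (3, 75943, 1416459601);
""")
rows = conn.execute("""
SELECT f.id, f.date, COUNT(fut.id) AS future_lead_count
  FROM leads
  JOIN lead_follow_up AS f ON leads.id = f.lead_id
  LEFT JOIN lead_follow_up AS fut
         ON fut.lead_id = f.lead_id AND fut.date > f.date
 GROUP BY f.id, f.date
 ORDER BY f.date DESC
""").fetchall()
print(rows)  # [(1, 1438930800, 0), (3, 1416459601, 1), (2, 1416459600, 2)]
```

The rows with future_lead_count = 0 are exactly the ones the PHP loop was keeping, so the loop can be replaced by adding HAVING future_lead_count = 0.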
I'm working on a language integrated query library in Scala (http://github.com/getquill/quill) and there's one type of monad composition that I'm struggling to generate the corresponding SQL query for.
It's possible to generate queries for these cases:
t1.flatMap(a => t2.filter(b => b.s == a.s).map(b => b.s))
SELECT t2.s FROM t1, t2 WHERE t2.s = t1.s
t1.flatMap(a => t2.map(b => b.s).take(10))
SELECT x.s FROM t1, (SELECT * FROM t2 LIMIT 10) x
But I can't figure out how to express this other one:
t1.flatMap(a => t2.filter(b => b.s == a.s).map(b => b.s).take(10))
Is it possible? The question also could be phrased as: is there a way to express this kind of data dependency in monadic compositions using applicative joins in SQL?
I'm looking for a generic solution so it could be used for other compositions like these ones:
t1.flatMap(a => t2.filter(b => b.s == a.s).sortBy(b => b.s % a.s).map(b => b.s).take(10))
t1.flatMap(a => t2.filter(b => b.s == a.s).map(b => b.s).take(10).flatMap(b => t3.filter(c => c.s == b.s/a.s)))
I'm working on dialects for MySQL, Postgres and H2.
Once you need to filter the inner set by the existence of a match in the outer one, you need to push the join down. Something like this, maybe:
SELECT *
FROM t1, (
SELECT t2.s
FROM t2, t1 AS t1_inner
WHERE t1_inner.s = t2.s
LIMIT 10
) AS x
Or, alternatively:
SELECT *
FROM t1, (
SELECT t2.s
FROM t2
WHERE EXISTS (SELECT * FROM t1 t1_inner WHERE t1_inner.s = t2.s)
LIMIT 10
) AS x
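Here is a minimal sqlite3 sketch of the first variant (the single column s and the data are invented): the correlation with t1 is rebuilt inside the derived table as t1_inner, so the LIMIT applies before the outer cross join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (s INTEGER);
CREATE TABLE t2 (s INTEGER);
INSERT INTO t1 VALUES (1), (2);
INSERT INTO t2 VALUES (1), (1), (3);
""")
rows = conn.execute("""
SELECT *
  FROM t1, (SELECT t2.s
              FROM t2, t1 AS t1_inner
             WHERE t1_inner.s = t2.s
             LIMIT 10) AS x
""").fetchall()
# The derived table keeps only the t2 rows that match SOME t1 row
# (s = 1 twice; s = 3 has no match), then the outer cross join runs.
print(rows)
```

Note that the LIMIT here caps the derived table globally, not per value of a, so this matches the take(10) semantics only if take is meant to apply to the whole inner set.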
I have a working SQL statement, but there is one issue in it I can't solve.
When I LEFT JOIN my table sites_photos there can be multiple matches on sp.sites_id = s.id, but I want only one row returned per site. Is this possible?
SELECT s.*, sp.photo
FROM sites s
LEFT JOIN sites_photos sp
ON sp.sites_id = s.id
My output: id 30 appears twice, with different photo paths. I only want one row returned for that id, or both photos bundled into one field.
Array
(
[0] => Array
(
[id] => 30
[url] => www.test.nl
[name] => Aleve
[date] => 2014-08-16
[cms_active] => Y
[archive] => N
[photo] => 2014080812365920120214103601number_1.jpg
)
[1] => Array
(
[id] => 30
[url] => www.test.nl
[name] => Aleve
[date] => 2014-08-16
[cms_active] => Y
[archive] => N
[photo] => 20140811021102news.jpg
)
)
You can do so by using GROUP_CONCAT, which concatenates all the photos per site into a comma-separated list, and then applying SUBSTRING_INDEX over the result of GROUP_CONCAT to pick one photo. You can also impose an order inside GROUP_CONCAT, e.g. GROUP_CONCAT(sp.photo ORDER BY sp.id DESC).
SELECT s.*, SUBSTRING_INDEX(GROUP_CONCAT(sp.photo),',',1) photo
FROM sites s
LEFT JOIN sites_photos sp
ON sp.sites_id = s.id
GROUP BY s.id
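A small sqlite3 sketch of the idea, with invented data: GROUP_CONCAT collapses the photo rows to one comma-separated string per site. sqlite has no SUBSTRING_INDEX, so the final "take the first entry" step is done in Python here; in MySQL it would be SUBSTRING_INDEX(..., ',', 1).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sites (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE sites_photos (id INTEGER, sites_id INTEGER, photo TEXT);
INSERT INTO sites VALUES (30, 'Aleve');
INSERT INTO sites_photos VALUES (1, 30, 'number_1.jpg'), (2, 30, 'news.jpg');
""")
rows = conn.execute("""
SELECT s.id, s.name, GROUP_CONCAT(sp.photo) AS photos
  FROM sites AS s
  LEFT JOIN sites_photos AS sp ON sp.sites_id = s.id
 GROUP BY s.id
""").fetchall()
# One row per site; SUBSTRING_INDEX(photos, ',', 1) would keep just the
# first entry of the concatenated list:
first_photo = rows[0][2].split(',')[0]
print(rows[0][0], first_photo)
```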
I have a join that works exactly as expected, except that any and all fields selected from the 'right' table come back blank even though they are definitely populated.
SELECT score.recipient, score.amount, u.* FROM score
LEFT JOIN `users` AS u ON score.recipient = u.id AND u.team_id = ?
WHERE UNIX_TIMESTAMP(score.date) > ?
I don't actually need the entire users table, only users.email - but no fields work. The result set looks like this (sample):
[0] => stdClass Object ( [recipient] => 1 [amount] => 1 [id] => [fname] => [lname] => [nickname] => [email] => [phone] => [reg_key] => )
[1] => stdClass Object ( [recipient] => 103 [amount] => -1 [id] => [fname] => [lname] => [nickname] => [email] => [phone] => [reg_key] => )
All of the fields listed are in fact populated.
Any help would be appreciated! I'm at a loss.
Your join condition / WHERE clause is broken if replacing the LEFT JOIN with an INNER JOIN returns an empty result set.
Try this (without bind variables and their conditions) and see if it returns any values:
SELECT score.recipient, score.amount, u.* FROM score
LEFT JOIN `users` AS u ON score.recipient = u.id
If that's the case, then look at the team_id / score.date values you get; I bet you're using a combination of bind values that simply does not exist in your tables.
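One way this symptom arises, consistent with that suspicion, is the extra condition inside the LEFT JOIN's ON clause: when u.team_id = ? matches no user row, the score row is still returned, but every u.* column comes back NULL. A minimal sqlite3 sketch (schema and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE score (recipient INTEGER, amount INTEGER);
CREATE TABLE users (id INTEGER, team_id INTEGER, email TEXT);
INSERT INTO score VALUES (1, 1);
INSERT INTO users VALUES (1, 7, 'a@example.com');
""")
# team_id = 2 matches no user: the LEFT JOIN keeps the score row
# but blanks out the user columns.
wrong = conn.execute("""
SELECT score.recipient, u.email FROM score
  LEFT JOIN users AS u ON score.recipient = u.id AND u.team_id = 2
""").fetchall()
# Without the team_id condition the user columns come through.
right = conn.execute("""
SELECT score.recipient, u.email FROM score
  LEFT JOIN users AS u ON score.recipient = u.id
""").fetchall()
print(wrong)  # [(1, None)]
print(right)  # [(1, 'a@example.com')]
```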