Postgres searching through arrays within JSON

Many similar questions exist, but unfortunately none helped me solve my problem. I tried && and #> and similar operators, but with no success.
I have a Postgres DB with a table that has a "value" column of type "json". All rows share the same basic structure: a simple JSON object whose attribute "value" holds an array of strings:
{
  "value": ["one", "two", "three"]
}
I need a query that accepts an array of strings and returns all rows in which the value array and the passed array have at least one common element.
Following the example above, if I send ['one', 'four'], it should return the row with value ['one', 'two', 'three'], since there is an intersection: 'one'.
If I send the array ['four', 'five', 'six'], it will not return this row.

You can use the ?| operator for that. But as you are using json and not the recommended jsonb type, you need to cast your column:
select *
from the_table
where (value::jsonb -> 'value') ?| array['one', 'four']
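As a quick sanity check, here is a minimal sketch you can run yourself (table and column names taken from the answer, data taken from the question):
create table the_table (value json);
insert into the_table (value)
values ('{"value": ["one", "two", "three"]}');

-- returns the row: 'one' is a common element
select *
from the_table
where (value::jsonb -> 'value') ?| array['one', 'four'];

-- returns nothing: no common element
select *
from the_table
where (value::jsonb -> 'value') ?| array['four', 'five', 'six'];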

Related

Google Sheet Scripts -> ss.getRange() -> JSON.Stringify removing the brackets at the element level for 1 dimension arrays

If you want to stringify columns A, B, C for a few rows, it makes sense that JSON.stringify returns something like [ ["1a","2a","3a"], ["1b","2b","3b"] ].
However, if you are using just one column, i.e. a one-dimensional array, then what JSON.stringify does is terrible: [ ["1a"], ["1b"] ]
What my API expects is ["1a","1b"]
What am I missing? How can I tell it to format the output properly?
From the question
However, if you are using just one column, i.e. a one-dimensional array, then what JSON.stringify does is terrible: [ ["1a"], ["1b"] ]
It looks like you have a misconception: getValues() returns a two-dimensional array no matter whether the range refers to a single row or a single column. Anyway, one way to convert the two-dimensional array into a one-dimensional array is by using Array.prototype.flat().
let column = [[1],[2],[3]]
console.log(column.flat())
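Applied to the original scenario, the call might look like this (a sketch; the sheet and range are placeholder names, and flat() requires the V8 runtime in Apps Script):
const values = sheet.getRange('A1:A3').getValues(); // [["1a"], ["1b"], ["1c"]]
const payload = JSON.stringify(values.flat());      // '["1a","1b","1c"]'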

jq join on common key

I'm very new to jq and this post is a result of not understanding the mechanics behind jq.
I could write a bash script that does what I want, but jq and its JSON super-powers have intrigued me, and I'd like to learn it by applying it to real-world scenarios. Here's one...
BTW, I've tried to make use of the existing jq related SO solutions for merging/joining JSONs but have failed.
The closest I came to what I needed was to use INDEX and a concatenation of $x + ., however I was only getting the LAST item from my second (c2) JSON.
So, my problem is as follows:
There are Two JSON files:
JSON #1 will have unique "id" and "type" keys - among other key/value pairs, which I've removed for better clarity of my post.
JSON #2 will contain multiples/non-unique "type" keys, which I'd like to match these two JSON files on. This JSON #2 will also contain other key/value pairs, which are expected to be contained in the resultant output.
My output requirements are:
I'd like to obtain a list (one object per line, or a single array) of all combinations of matching key/value pairs between the c1 and c2 arrays where the value of the "type" key (a string) matches exactly between c1 and c2.
One more question, how much more difficult would it be to scale the solution to perform similar matching/joining between three JSON files at once - again on the same value of a particular key?
Any assistance or even just hints on how to solve and understand how to solve this would be greatly appreciated!
1st input file: JSON #1, Array c1 (collection 1)
{ "c1":
[
{ "c1id":1, "type":"alpha" },
{ "c1id":2, "type":"beta" }
]
}
2nd input file: JSON #2, Array c2 (collection 2)
{
  "c2": [
    { "c2id": 1, "type": "alpha", "serial": "DDBB001" },
    { "c2id": 2, "type": "beta",  "serial": "DDBB007" },
    { "c2id": 3, "type": "alpha", "serial": "DDTT005" },
    { "c2id": 4, "type": "beta",  "serial": "DDAA002" },
    { "c2id": 5, "type": "yotta", "serial": "DDCC017" }
  ]
}
Expected output:
{"c1id":1,"type":"alpha","c2id":1,"serial":"DDBB001"}
{"c1id":1,"type":"alpha","c2id":3,"serial":"DDTT005"}
{"c1id":2,"type":"beta","c2id":2,"serial":"DDBB007"}
{"c1id":2,"type":"beta","c2id":4,"serial":"DDAA002"}
You will notice that type "yotta" from c2 is not included in the output. This is expected: only "types" which exist in c1 and match in c2 should appear in the results. I guess this is implied by this being a matching/joining exercise; I added it just for clarity.
Here's an example of using INDEX and JOIN:
jq --compact-output --slurpfile c1 c1.json '
  INDEX(
    $c1[0].c1[];
    .type
  ) as $index |
  JOIN(
    $index;
    .c2[];
    .type;
    reverse | add
  )
' c2.json
The first argument to INDEX needs to produce a stream of items, which is why we apply [] to get the items from the array individually. The second argument selects our index key.
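For the sample data, the resulting $index is an object keyed by type, looking like this (a sketch of the intermediate value):
{
  "alpha": { "c1id": 1, "type": "alpha" },
  "beta": { "c1id": 2, "type": "beta" }
}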
We use the four argument version of JOIN. The first argument is the index itself, the second is a stream of objects to be joined to the index, the third argument selects the lookup key from the streamed objects, and the fourth argument is an expression to assemble the join object. The input to that expression is a stream of two-item arrays, each looking something like this:
[{"c2id":1,"type":"alpha","serial":"DDBB001"},{"c1id":1,"type":"alpha"}]
Since we want to combine all the keys and values from both objects, we simply use add, but we first reverse the array to nicely arrange the c1 fields before the c2 fields. The end result is as you hoped:
{"c1id":1,"type":"alpha","c2id":1,"serial":"DDBB001"}
{"c1id":2,"type":"beta","c2id":2,"serial":"DDBB007"}
{"c1id":1,"type":"alpha","c2id":3,"serial":"DDTT005"}
{"c1id":2,"type":"beta","c2id":4,"serial":"DDAA002"}

Combine multiple JSON rows into one JSON object in PostgreSQL

I want to combine the following JSON from multiple rows into a single JSON object in one row.
{"Salary": ""}
{"what is your name?": ""}
{"what is your lastname": ""}
Expected output
{
  "Salary": "",
  "what is your name?": "",
  "what is your lastname": ""
}
With only built-in functions, you need to expand the rows into key/value pairs and aggregate that back into a single JSON value:
select jsonb_object_agg(t.k, t.v)
from the_table, jsonb_each(ob) as t(k,v);
If your column is of type json rather than jsonb you need to cast it:
select jsonb_object_agg(t.k, t.v)
from the_table, jsonb_each(ob::jsonb) as t(k,v);
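As a minimal sketch reproducing the question's data (assuming the column is named ob, as in the queries above):
create table the_table (ob json);
insert into the_table (ob)
values ('{"Salary": ""}'),
       ('{"what is your name?": ""}'),
       ('{"what is your lastname": ""}');

select jsonb_object_agg(t.k, t.v)
from the_table, jsonb_each(ob::jsonb) as t(k, v);
-- {"Salary": "", "what is your name?": "", "what is your lastname": ""}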
A slightly more elegant solution is to define a new aggregate that does that:
CREATE AGGREGATE jsonb_combine(jsonb)
(
  SFUNC = jsonb_concat(jsonb, jsonb),
  STYPE = jsonb
);
Then you can aggregate the values directly:
select jsonb_combine(ob)
from the_table;
(Again you need to cast your column if it's json rather than jsonb)
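That is, sketched with the same column name as above:
select jsonb_combine(ob::jsonb)
from the_table;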

How to use ransack to search MySQL JSON array in Rails 5

Rails 5 now supports the native JSON data type in MySQL, so if I have a column data that contains an array ["a", "b", "c"] and I want to search whether this column contains certain values, I would basically like to have something like data_json_cont: ["b"]. Can this query be built using ransack?
Well, I found a way to do this with arrays (not sure about JSON contains for hashes in MySQL). First, include this code in your ActiveRecord model:
# define a "<column>_json_contains" ransacker for every json column
self.columns.select { |column| column.type == :json }.each do |column|
  ransacker "#{column.name}_json_contains".to_sym,
            args: [:parent, :ransacker_args] do |parent, args|
    # one JSON_CONTAINS check per requested value (each returns 1 or 0)
    query_parts = args.map do |val|
      "JSON_CONTAINS(#{column.name}, '#{val.to_json}')"
    end
    # multiplying the results ANDs the individual checks together
    query = query_parts.join(" * ")
    Arel.sql(query)
  end
end
Then, assuming you have a class Shirt with a JSON column size, you can do the following:
search = Shirt.ransack(
  c: [{
    a: {
      '0' => {
        name: 'size_json_contains',
        ransacker_args: ["L", "XL"]
      }
    },
    p: 'eq',
    v: [1]
  }]
)
search.result
It works as follows: it checks that the array stored in the JSON column contains all elements of the requested array by evaluating each JSON_CONTAINS on its own, multiplying the results together, and comparing the product to 1 with the Arel predicate eq :) You can get OR semantics instead by joining the checks with bitwise OR instead of multiplication.
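For the Shirt example above, the generated condition boils down to something like this (a sketch; the exact quoting ransack produces may differ):
JSON_CONTAINS(size, '"L"') * JSON_CONTAINS(size, '"XL"') = 1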

Elixir - Creating JSON object from 2 collections

I'm using Postgrex in Elixir, and when it returns query results, it returns them in the following struct format:
%{columns: ["id", "email", "name"], command: :select, num_rows: 2, rows: [{1, "me#me.com", "Bobbly Long"}, {6, "email#tts.me", "Woll Smoth"}]}
It should be noted I am using Postgrex directly WITHOUT Ecto.
The columns (table headers) are returned as a collection, but the results (rows) are returned as a list of tuples (which seems odd, as they could get very large).
I'm trying to find the best way to programmatically create JSON objects for each result in which the JSON key is the column title and the JSON value the corresponding value from the tuple.
I've tried creating maps from both, merging and then serialising to JSON objects but it seems there should be an easier/better way of doing this.
Has anyone dealt with this before? What is the best way of creating a JSON object from a separate collection and tuple?
Something like this should work:
result = Postgrex.query!(...)

Enum.map(result.rows, fn row ->
  # pair each column name with the matching element of the row tuple
  Enum.zip(result.columns, Tuple.to_list(row))
  # turn the pairs into a map: %{"column" => value}
  |> Enum.into(%{})
  # serialise with your JSON library of choice
  |> JSON.encode
end)
This results in a list of JSON objects, one for each row in the result set.
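For instance, zipping the first sample row from the question produces the following map before encoding (JSON above stands for whichever encoder you use, e.g. Jason or Poison):
columns = ["id", "email", "name"]
row = {1, "me#me.com", "Bobbly Long"}

Enum.zip(columns, Tuple.to_list(row)) |> Enum.into(%{})
#=> %{"email" => "me#me.com", "id" => 1, "name" => "Bobbly Long"}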