Examining jsonb object values in a trigger WHEN clause

I have a trigger in PostgreSQL 12 that fires like so:
WHEN (OLD.some_jsonb_object_column IS DISTINCT FROM NEW.some_jsonb_object_column)
I would like to run this trigger only when values have changed, and not when only keys have changed. In this use case it can be guaranteed that we are not adding and removing keys at the same time. I do not know what the object keys will be ahead of time, so I cannot get the values via ->>.
I have tried something akin to:
WHEN (jsonb_each(OLD.some_jsonb_object_column) IS DISTINCT FROM jsonb_each(NEW.some_jsonb_object_column))
which results in the error:
set-returning functions are not allowed in trigger WHEN conditions
Is there a way to get the values of a jsonb object without using a set-returning function?

To test whether a new key has been added (note that for a jsonb column the function is jsonb_object_keys, not json_object_keys):
key_added := EXISTS (SELECT *
                     FROM jsonb_object_keys(NEW.some_jsonb_object_column) AS n
                     EXCEPT
                     SELECT *
                     FROM jsonb_object_keys(OLD.some_jsonb_object_column) AS o
                    );
Similarly, you can check if a key was removed.
That should solve your problem.
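A minimal sketch of how the pieces could fit together; the function, trigger, and table names are placeholders, and the guarantee from the question (keys are never added and removed in the same update) is assumed:
CREATE OR REPLACE FUNCTION run_on_value_change() RETURNS trigger AS $$
DECLARE
    key_added   boolean;
    key_removed boolean;
BEGIN
    key_added := EXISTS (SELECT jsonb_object_keys(NEW.some_jsonb_object_column)
                         EXCEPT
                         SELECT jsonb_object_keys(OLD.some_jsonb_object_column));
    key_removed := EXISTS (SELECT jsonb_object_keys(OLD.some_jsonb_object_column)
                           EXCEPT
                           SELECT jsonb_object_keys(NEW.some_jsonb_object_column));
    IF NOT (key_added OR key_removed) THEN
        -- keys are unchanged, so the WHEN clause firing implies a value changed;
        -- do the actual work here
        RAISE NOTICE 'a value changed';
    END IF;
    RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER on_value_change
    BEFORE UPDATE ON my_table  -- placeholder table name
    FOR EACH ROW
    WHEN (OLD.some_jsonb_object_column IS DISTINCT FROM NEW.some_jsonb_object_column)
    EXECUTE FUNCTION run_on_value_change();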

Related

PostgreSQL cannot call json_object_keys on a scalar

I have a PostgreSQL table with a JSON column, and I'm trying to get a list of all the distinct keys in that column. I created a query to do this:
SELECT DISTINCT json_object_keys(j) FROM t;
Where t is the table and j is the JSON column. This worked correctly on a small set of data: it listed all the keys that exist in j, without repeating them. However, after adding a lot more data, it no longer works, giving the error:
ERROR: cannot call json_object_keys on a scalar
I'm not sure exactly why this is happening. By limiting the query to certain rows one at a time, I found one that causes the error. In that row, j is null. However, calling SELECT json_object_keys(null); does not cause this error, while calling SELECT json_object_keys(j) FROM t WHERE id=12; does, and j in that row is just null. I'm not really sure where to go from here.
So I guess my question is what could be causing this, and how can I either work around it or prevent it from happening?
Running PostgreSQL 9.3.9
Edit: Ok, I may have posted this a little preemptively. I figured out that j in the problematic row isn't null, it's 'null'::json, which wasn't clear from just selecting the row. This does cause the scalar error, so now I just have to figure out a way to select the rows where j isn't 'null'::json.
I tried to filter out the 'null'::json values with this query:
SELECT DISTINCT json_object_keys(j) from t WHERE j <> 'null'::json;
However, apparently there is no json <> json operator, so I had to cast it to text and compare.
SELECT DISTINCT json_object_keys(j) from t WHERE j::TEXT <> 'null';
This works! I'm not a Postgres expert though, so this may not be the most efficient way of doing this check.
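As an aside, on PostgreSQL 9.4 and later the same filter can be written without the text cast by using json_typeof, which also screens out arrays and other non-object values:
SELECT DISTINCT json_object_keys(j)
FROM t
WHERE json_typeof(j) = 'object';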

MySQL explode/in_array functionality

In my table I have a field with data such as 1,61,34, and I need to see if a variable is in that list.
So far I have this
SELECT id, name FROM siv_forms WHERE LOCATE(TheVariable, siteIds) > 0
Which works, with the exception that if the siteIds were 2,61,53 and TheVariable was 1, it would return the row because there is a 1 in 61. Is there any way around this using native MySQL, or would I need to loop over the results in PHP and filter the siteIds that way?
I've looked through the list of string functions in MySQL and can't see anything that would do what I'm after.
Try the FIND_IN_SET function.
SELECT id, name FROM siv_forms WHERE find_in_set(TheVariable, siteIds);
Check the manual for the FIND_IN_SET function.
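For example, with the sample data from the question: FIND_IN_SET matches whole comma-separated list elements (returning the 1-based position, or 0 for no match), so the false positive from LOCATE goes away:
SELECT FIND_IN_SET('1', '2,61,53');   -- 0: no match, '1' is not a list element
SELECT FIND_IN_SET('61', '2,61,53');  -- 2: '61' is the second list element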

How to wrap existing functions (including aggregates) into a new one in Postgres?

I'm using Postgres 9.2 to generate some JSON data. For each nested table I'm doing this nested set of functions:
SELECT array_to_json(
         coalesce(
           array_agg(
             row_to_json(foo)),
           ARRAY[]::json[])
       )
FROM foo
The effect is to create a JSON array with each element being the JSON representation of a row. The coalesce ensures that I get an empty array rather than null if the table is empty. In most cases foo is actually a subquery, but I don't think that is relevant to the question.
I want to create a function table_to_json_array(expression) such that this has the same effect as above:
SELECT table_to_json_array(foo) FROM foo
I need to use this a lot, so I was planning to create a Postgres function combining these calls to clean up my queries. Looking at the documentation, it seems as if I need to create an aggregate rather than a function to take a table argument, but those look like I would need to reimplement array_agg myself.
Have I missed something (possibly just the type a function would need to take)? Any suggestions?
"In most cases foo is actually a subquery but I don't think that is relevant to the question."
Unfortunately, it is. You can create a function with regclass argument:
create or replace function table_to_json(source regclass)
  returns json language plpgsql
as $$
declare
  t json;
begin
  execute format('
    SELECT
      array_to_json(
        coalesce(array_agg(row_to_json(%s)),
                 ARRAY[]::json[]))
    FROM %s', source, source)
  into t;
  return t;
end $$;
select table_to_json('my_table');
select table_to_json('my_schema.my_view');
But in context:
select table_to_json_rec(arg)
from (select * from my_table) arg
the argument arg is of type record. PL/pgSQL functions cannot accept the type record. The only way to get this is a C function, which I guess is not an option. The same goes for aggregates (you must have a function to define an aggregate).
Postgres 9.3 adds a json_agg function, which simplifies the specific query I need, although this isn't a general solution to the aggregate-function issue. It still needs a coalesce to ensure an empty set is properly returned.
SELECT coalesce(json_agg(foo), json '[]')
FROM foo
And it works even when foo is a subquery.
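For instance, with a subquery (table and column names here are just placeholders):
SELECT coalesce(json_agg(t), json '[]')
FROM (SELECT id, name FROM my_table WHERE active) AS t;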

insert into table select variable from table2 equivalent in mongodb

I searched the Internet but found nothing.
I want to know if there is an equivalent in MongoDB for this MySQL command:
insert into table1 select variables from table2;
What I want to do is select a document and insert it into another document as its subdocument.
Thanks for any help!
There is no way to do this in a single database command.
Whether you do this in the shell or in your scripting language of choice using a MongoDB driver, you will have to query for the document from collection1 and then do an appropriate update on collection2 using the result of the query.
For example:
var doc = db.coll1.findOne( {_id:12345} );
db.coll2.update( {_id:98765}, {$push: {subs:doc} }, {upsert:1} )
What the update says is: for the document matching the first argument, perform the update in the second argument (push the value in doc onto the subs array); the last argument says to create the document if it does not already exist (i.e. if _id: 98765 isn't already in collection2, it will be created).
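After that update, the document in coll2 would look something like this (the embedded fields depend on what the query returned):
db.coll2.findOne( {_id: 98765} )
// { "_id" : 98765, "subs" : [ { "_id" : 12345, ... } ] }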

MySQL SET Type in PostgreSQL? [duplicate]

This question already has an answer here: convert MySQL SET data type to Postgres (1 answer). Closed 9 years ago.
I'm trying to use the MySQL SET type in PostgreSQL, but I found only arrays, which have quite similar functionality but don't meet my requirements.
Does PostgreSQL have a similar datatype?
You can use following workarounds:
1. BIT strings
You can define your set of maximum N elements as simply BIT(N).
It is a little bit awkward to populate and retrieve - you will have to use bit masks as set members. But bit strings really shine for set operations: intersection is simply &, union is |.
This type is stored very efficiently - bit per bit with small overhead for length.
Also, it is nice that the length is not really limited (though you have to decide on it upfront).
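A minimal sketch of the bit-mask approach, assuming a three-member set whose positions stand for, say, red, green, and blue (the table and member names are placeholders):
CREATE TABLE items (flags BIT(3) NOT NULL DEFAULT B'000');
-- members by position: red = B'100', green = B'010', blue = B'001'
INSERT INTO items VALUES (B'110');                     -- the set {red, green}
SELECT * FROM items WHERE (flags & B'010') = B'010';   -- rows whose set contains green
SELECT B'110' | B'011';                                -- union        -> 111
SELECT B'110' & B'011';                                -- intersection -> 010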
2. HSTORE
The HSTORE type is an extension, but very easy to install. Simply executing
CREATE EXTENSION hstore
on most installations (9.1+) will make it available. Rumor has it that PostgreSQL 9.3 will have HSTORE as a standard type.
It is not really a set type, but more like a Perl hash or a Python dictionary: it keeps an arbitrary set of key => value pairs.
With that, it is not very efficient (certainly not BIT-string efficient), but it does provide the functions essential for sets: || for union; intersection is a little bit awkward: use
slice(a, akeys(b)) || slice(b, akeys(a))
You can read more about HSTORE in the PostgreSQL documentation.
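For illustration, treating the keys as the set members (with dummy values), union and the intersection idiom above look like this:
CREATE EXTENSION IF NOT EXISTS hstore;
SELECT 'a=>1, b=>1'::hstore || 'b=>1, c=>1'::hstore;   -- union: keys a, b, c
SELECT slice('a=>1, b=>1'::hstore, akeys('b=>1, c=>1'::hstore))
    || slice('b=>1, c=>1'::hstore, akeys('a=>1, b=>1'::hstore));  -- intersection: key b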
What about an array with a check constraint:
create table foobar
(
  myset text[] not null,
  constraint check_set
    check ( array_length(myset, 1) <= 2
            and (myset = array[''] or 'one' = ANY(myset) or 'two' = ANY(myset))
          )
);
This would match the definition of SET('one', 'two') as explained in the MySQL manual.
The only thing this would not do is "normalize" the array. So
insert into foobar values (array['one', 'two']);
and
insert into foobar values (array['two', 'one']);
would be displayed differently than in MySQL (where both would wind up as 'one','two')
The check constraint will however get messy with more than 3 or 4 elements.
Building on a_horse_with_no_name's answer above, I would suggest something just a little more complex:
CREATE FUNCTION set_check(in_value anyarray, in_check anyarray)
RETURNS bool LANGUAGE sql IMMUTABLE AS
$$
WITH basic_check AS (
    SELECT bool_and(v = ANY($2)) AS condition, count(*) AS ct
    FROM unnest($1) v
    GROUP BY v
), length_check AS (
    SELECT count(*) = 0 AS test FROM unnest($1)
)
SELECT bool_and(condition AND ct = 1)
FROM basic_check
UNION
SELECT test FROM length_check WHERE test;
$$;
Then you should be able to do something like:
CREATE TABLE set_test (
    my_set text[] CHECK (set_check(my_set, array['one'::text, 'two']))
);
This works:
postgres=# insert into set_test values ('{}');
INSERT 0 1
postgres=# insert into set_test values ('{one}');
INSERT 0 1
postgres=# insert into set_test values ('{one,two}');
INSERT 0 1
postgres=# insert into set_test values ('{one,three}');
ERROR: new row for relation "set_test" violates check constraint "set_test_my_set_check"
postgres=# insert into set_test values ('{one,one}');
ERROR: new row for relation "set_test" violates check constraint "set_test_my_set_check"
Note this assumes that every value in your set must be unique (we are talking about sets here). The function should perform very well and should meet your needs, and it has the advantage of handling sets of any size.
Storage-wise it is completely different from MySQL's implementation. It will take up more space on disk, but it should handle sets with as many members as you like, provided you aren't running up against storage limits. So this should offer a superset of the functionality of MySQL's implementation. One significant difference is that this does not collapse the array into distinct values; it just prohibits duplicates. If you need that too, look at a trigger.
This solution also leaves the ordinality of the input data intact, so '{one,two}' is distinct from '{two,one}'; if you need that behavior changed, you may want to look into exclusion constraints on PostgreSQL 9.2.
Are you looking for enumerated data types?
PostgreSQL 9.1 Enumerated Types
From reading the page referenced in the question, it seems like a SET is a way of storing up to 64 named boolean values in one column. PostgreSQL does not provide a way to do this. You could use independent boolean columns, or some size of integer and twiddle the bits directly. Adding two new tables (one for the valid names, and the other to join names to detail rows) could make sense, especially if there is a possibility of needing to associate any other data with individual values; a sketch follows below.
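A sketch of that two-table layout (all names are placeholders; detail stands for the existing table whose rows carry the set):
CREATE TABLE set_name (
    name text PRIMARY KEY              -- the valid member names
);

CREATE TABLE detail_set (
    detail_id integer REFERENCES detail (id),
    name      text    REFERENCES set_name (name),
    PRIMARY KEY (detail_id, name)      -- each member at most once per detail row
);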
Some time ago I wrote a similar extension:
https://github.com/okbob/Enumset
but it is not complete.
More complete and closer to MySQL is the functionality from pltoolbox:
http://okbob.blogspot.cz/2010/12/bitmapset-for-plpgsql.html
http://pgfoundry.org/frs/download.php/3203/pltoolbox-1.0.2.tar.gz
http://postgres.cz/wiki/PL_toolbox_%28en%29
The find_in_set function can be emulated via arrays (see the sketch below):
http://okbob.blogspot.cz/2009/08/mysql-functions-for-postgresql.html
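For instance, a rough Postgres counterpart of MySQL's find_in_set, using string_to_array (array_position requires PostgreSQL 9.5 or later):
SELECT '61' = ANY (string_to_array('2,61,53', ','));            -- true: membership test
SELECT array_position(string_to_array('2,61,53', ','), '61');   -- 2: 1-based position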