Strange JSON interpretation (polymorphic type), how to workaround? - json

This legal(!) CASE construct returns a JSON datatype:
SELECT CASE WHEN true THEN to_json(1) ELSE to_json('hello') END;
but:
ERROR: could not determine polymorphic type because input has type "unknown"
It is not "polymorphic", it is JSON.
... So, as a bad workaround (losing the distinct number/string JSON representations):
SELECT to_json(CASE WHEN true THEN 1::text ELSE 'hello' END);
Is there a better way to do this SQL-to-JSON cast?

Do it the other way round:
SELECT CASE WHEN true THEN to_json(1) ELSE to_json(text 'hello') END;
Declare 'hello' as type text.
This way you retain 1 as number and 'hello' as string.
The explicit cast 'hello'::text is equivalent.
The reason is the Postgres type system. An unquoted 1 is a legal numeric constant and defaults to the Postgres data type integer. But 'hello' is just a string literal that starts out as type unknown. The function to_json() is polymorphic, meaning its input parameter is defined as anyelement. What it actually does depends on the input data type, and it does not know what to do with data type unknown. Hence the error message.
The result data type is json in either case (which is a regular Postgres data type), but that is orthogonal to the problem.
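A quick pg_typeof() check illustrates the difference (a sketch; output as in a typical Postgres session):

```sql
-- The untyped literal starts out as "unknown"; an explicit type fixes that.
SELECT pg_typeof('hello');                -- unknown
SELECT pg_typeof(text 'hello');           -- text
SELECT pg_typeof(to_json(text 'hello'));  -- json

-- Both branches keep their native JSON representations:
SELECT CASE WHEN true  THEN to_json(1) ELSE to_json(text 'hello') END;  -- 1
SELECT CASE WHEN false THEN to_json(1) ELSE to_json(text 'hello') END;  -- "hello"
```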
Related:
No function matches the given name and argument types
Is there a way to disable function overloading in Postgres

SSIS Check GUID Equals "00000000-0000-0000-0000-000000000000"

I am doing a check for new GUID '00000000-0000-0000-0000-000000000000' in a Conditional Split step, but the expression is not recognized.
[WorkOrder.msdyn_workorderid] == "00000000-0000-0000-0000-000000000000"
The data types "DT_GUID" and "DT_WSTR" are incompatible for binary
operator "==". The operand types could not be implicitly cast into
compatible types for the operation. To perform this operation, one or
both operands need to be explicitly cast with a cast operator.
How can a GUID be converted to a string, or what is the right way to perform this check? Thank you.

Storing json, jsonb, hstore, xml, enum, ipaddr, etc fails with "column "x" is of type json but expression is of type character varying"

When using PostgreSQL to store data in a field of a string-like validated type, like xml, json, jsonb, ltree, etc, the INSERT or UPDATE fails with an error like:
column "the_col" is of type json but expression is of type character varying
... or
column "the_col" is of type json but expression is of type text
Why? What can I do about it?
I'm using JDBC (PgJDBC).
This happens via Hibernate, JPA, and all sorts of other abstraction layers.
The "standard" advice from the PostgreSQL team is to use a CAST in the SQL. This is not useful for people using query generators or ORMs, especially if those systems don't have explicit support for database types like json, so they're mapped via String in the application.
Some ORMs permit the implementation of custom type handlers, but I don't really want to write a custom handler for each data type for each ORM, e.g. json on Hibernate, json on EclipseLink, json on OpenJPA, xml on Hibernate, ... etc. There's no JPA2 SPI for writing a generic custom type handler. I'm looking for a general solution.
Why it happens
The problem is that PostgreSQL is overly strict about casts between text and non-text data types. It will not allow an implicit cast (one without a CAST or :: in the SQL) from a text type like text or varchar (character varying) to a text-like non-text type like json, xml, etc.
The PgJDBC driver specifies the data type of varchar when you call setString to assign a parameter. If the database type of the column, function argument, etc, is not actually varchar or text, but instead another type, you get a type error. This is also true of quite a lot of other drivers and ORMs.
PgJDBC: stringtype=unspecified
The best option when using PgJDBC is generally to pass the parameter stringtype=unspecified. This overrides the default behaviour of passing setString values as varchar and instead leaves it up to the database to "guess" their data type. In almost all cases this does exactly what you want, passing the string to the input validator for the type you want to store.
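For example, the parameter can go directly in the JDBC connection URL (a sketch; the host and database name are placeholders):

```
jdbc:postgresql://localhost/mydb?stringtype=unspecified
```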
All: CREATE CAST ... WITH FUNCTION ...
You can instead use CREATE CAST to define a data-type-specific cast permitting this on a type-by-type basis, but this can have side effects elsewhere. If you do this, do not use WITHOUT FUNCTION casts: they bypass type validation and will result in errors. You must use the input/validation function for the data type. Using CREATE CAST is suitable for users of other database drivers that have no way to stop the driver from specifying the type for string/text parameters.
e.g.
CREATE OR REPLACE FUNCTION json_intext(text) RETURNS json AS $$
SELECT json_in($1::cstring);
$$ LANGUAGE SQL IMMUTABLE;
CREATE CAST (text AS json)
WITH FUNCTION json_intext(text) AS IMPLICIT;
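With the cast in place, an otherwise-failing typed insert is accepted and still validated (a sketch; the table and column names are hypothetical):

```sql
CREATE TABLE example (the_col json);

-- The driver sends the parameter as text; the implicit cast now routes
-- it through json's input/validation function.
INSERT INTO example (the_col) VALUES ('{"a": 1}'::text);   -- succeeds

-- Invalid JSON is still rejected by json_in():
INSERT INTO example (the_col) VALUES ('not json'::text);   -- ERROR
```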
All: Custom type handler
If your ORM permits, you can implement a custom type handler for the data type and that specific ORM. This is mostly useful when you're using a native Java type that maps well to the PostgreSQL type, rather than using String, though it can also work if your ORM lets you specify type handlers using annotations etc.
Methods for implementing custom type handlers are driver-, language- and ORM-specific. Here's an example for Java and Hibernate for json.
PgJDBC: type handler using PGObject
If you're using a native Java type in Java, you can extend PGObject to provide a PgJDBC type mapping for your type. You will probably also need to implement an ORM-specific type handler to use your PGObject, since most ORMs will just call toString on types they don't recognise. This is the preferred way to map complex types between Java and PostgreSQL, but also the most complex.
PgJDBC: Type handler using setObject(int, Object)
If you're using String to hold the value in Java, rather than a more specific type, you can invoke the JDBC method setObject(integer, Object) to store the string with no particular data type specified. The JDBC driver will send the string representation, and the database will infer the type from the destination column type or function argument type.
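A minimal sketch (assuming an open Connection conn and the hypothetical table from above; per the behaviour described here, the server infers the parameter's type from the target column):

```java
// setObject with no SQL type sends the string representation untyped,
// so the string is validated as json on insert.
PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO example (the_col) VALUES (?)");
ps.setObject(1, "{\"a\": 1}");
ps.executeUpdate();
```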
See also
Questions:
Mapping postgreSQL JSON column to Hibernate value type
Are JPA (EclipseLink) custom types possible?
External:
http://www.postgresql.org/message-id/54096082.1090009@2ndquadrant.com
https://github.com/pgjdbc/pgjdbc/issues/265
http://www.pateldenish.com/2013/05/inserting-json-data-into-postgres-using-jdbc-driver.html

Extract an int, string, boolean, etc. as its corresponding PostgreSQL type from JSON [duplicate]

This question already has answers here:
Postgres: How to convert a json string to text?
(5 answers)
How to convert postgres json to integer
(3 answers)
How to convert PostgreSQL 9.4's jsonb type to float
(7 answers)
In postgresql, how can I return a boolean value instead of string on a jsonb key?
(1 answer)
Closed 8 months ago.
I feel like I must just be missing something simple here, but I've looked through PostgreSQL's documentation on JSON and the JSON operators and functions and don't see anything explaining it.
It's easy to turn things into JSON in PostgreSQL:
SELECT *, pg_typeof(j) FROM (VALUES
  (to_json(5)),
  (to_json(true)),
  (to_json('foo'::TEXT))
) x (j);
will give me back a nice result set full of jsons:
   j   | pg_typeof
-------+-----------
 5     | json
 true  | json
 "foo" | json
But how do I convert these json values back into their original types? I don't expect to be able to do that all in one result set, of course, since the types aren't consistent. I just mean individually.
Lots of stuff I tried
Casting sure doesn't work:
SELECT to_json(5)::NUMERIC;
gives
ERROR: cannot cast type json to numeric
If I try to abuse the json_populate_record function like so:
SELECT json_populate_record(null::INTEGER, to_json(5));
I get
ERROR: first argument of json_populate_record must be a row type
In PG 9.4, I can pretty easily tell the type: SELECT json_typeof(to_json(5)); gives number, but that doesn't help me actually extract it.
Neither does json_to_record (also 9.4):
SELECT * FROM json_to_record(to_json(5)) x (i INT);
gets me another error:
ERROR: cannot call json_to_record on a scalar
So how do you convert json "scalars" (as PG calls them, apparently) into the corresponding PG type?
I'm interested in 9.3 and 9.4; 9.2 would just be a bonus.
The simplest way for booleans and numbers seems to be to first cast to TEXT and then cast to the appropriate type:
SELECT j::TEXT::NUMERIC
FROM (VALUES ('5.4575e6'::json)) x (j)
-- Result is 5457500, with column type NUMERIC
SELECT j::TEXT::BOOLEAN
FROM (VALUES ('true'::json)) x (j)
-- Result is t, with column type BOOLEAN
This leaves strings, where you instead get back a quoted value when trying this:
SELECT j::TEXT
FROM (VALUES (to_json('foo'::TEXT))) x (j)
-- Result is "foo"
Apparently, that particular part of my question has already been addressed. You can get around it by wrapping the text value in an array and then extracting it:
SELECT array_to_json(array[j])->>0
FROM (VALUES (to_json('foo'::TEXT))) x (j)
-- Result is foo, with column type TEXT.
First step: if your values are contained within structures (which is usually the case), you need to use the correct operators / functions to extract your data's string representation: ->> (9.3+), #>> (9.3+), json_each_text() (9.3+), json_array_elements_text() (9.4+).
To select json array elements' text representation in 9.3, you need something like this:
select json_array ->> indx
from generate_series(0, json_array_length(json_array) - 1) indx
For scalar values, you can use this little trick:
select ('[' || json_scalar || ']')::json ->> 0 -- ...
At this point, strings and nulls are handled (JSON nulls are converted to SQL NULLs by these methods). To select numbers, cast to numeric (which is fully1 compatible with JSON's number type). To select booleans, cast to boolean (both true and false are supported as input representations). But note that casts can make your query fail if the input representation is not accepted. For example, if one of your columns holds a JSON object with some key whose value is usually (but not always) a number, this query can fail:
select (json_object ->> 'key')::numeric
from your_table
If you have such data, you need to filter your selects with json_typeof() (9.4+):
select (json_object ->> 'key')::numeric
from your_table
where json_typeof(json_object -> 'key') = 'number'
1 I haven't checked their full syntaxes, but numeric also accepts scientific notation, so in theory, all json numbers should be handled correctly.
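Putting the wrap-in-array trick together, one query per scalar type (a sketch mirroring the examples above):

```sql
-- String: ->> 0 unquotes the JSON string
SELECT ('[' || to_json('foo'::text) || ']')::json ->> 0;           -- foo

-- Number: extract the text representation, then cast
SELECT (('[' || '5.4575e6'::json || ']')::json ->> 0)::numeric;    -- 5457500

-- Boolean
SELECT (('[' || 'true'::json || ']')::json ->> 0)::boolean;        -- t
```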
For 9.2+, this function can test a json value's type:
create or replace function json_typeof(json)
returns text
language sql
immutable
strict
as $func$
select case left(trim(leading E'\x20\x09\x0A\x0D' from $1::text), 1)
when 'n' then 'null'
when 't' then 'boolean'
when 'f' then 'boolean'
when '"' then 'string'
when '[' then 'array'
when '{' then 'object'
else 'number'
end
$func$;
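Some quick sanity checks for this emulation (a sketch; on 9.4+ the built-in json_typeof() returns the same labels):

```sql
SELECT json_typeof('null'::json);      -- null
SELECT json_typeof('true'::json);      -- boolean
SELECT json_typeof('"foo"'::json);     -- string
SELECT json_typeof(' 42 '::json);      -- number
SELECT json_typeof('[1, 2]'::json);    -- array
SELECT json_typeof('{"a": 1}'::json);  -- object
```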
This is a question similar to yours. Essentially, the underlying bit-level representations of the data types are incompatible, and transforming a scalar into the native type is not something that has been implemented, because of the ambiguities involved. JSON has a very strict spec that corresponds tightly to JavaScript objects and natives.
It is possible, but I do not think it has been implemented yet.

json string formatting integer values

I'm trying to understand a simple basic concept regarding JSON strings. I'm running a simple test that looks like this:
$(document).ready(function() {
    var last = 9;
    var json1 = $.parseJSON('{"id":"10"}');
    var json2 = $.parseJSON('{"id":10}');
    if (json1.id > last)
        alert("json1.id is greater than last");
    if (json2.id > last)
        alert("json2.id is greater than last");
});
Since the variable "last" is type int, I'm trying to compare it with the "id" from two different JSON strings. json1 denotes the value ten as a string, whereas json2 denotes it as an integer. When this runs, both alerts are executed. I did not expect that. I expected the second alert to execute, but not the first, since ten is represented there as a string.
I believe that the correct way to format an integer value in JSON is in json2, right?
Why is the first test executing the alert?
I'm trying to troubleshoot a larger project and thought the problem might be in the way the JSON string is formatted.
The documentation of JavaScript's operators holds all the answers:
Strings are compared based on standard lexicographical ordering, using
Unicode values. In most cases, if the two operands are not of the same
type, JavaScript attempts to convert them to an appropriate type for
the comparison. This behavior generally results in comparing the
operands numerically. The sole exceptions to type conversion within
comparisons involve the === and !== operators, which perform strict
equality and inequality comparisons. These operators do not attempt to
convert the operands to compatible types before checking equality.
Source: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Expressions_and_Operators#Comparison_operators

Specs2 JsonMatchers match 'empty array'

I have the following json string:
{"guid": "4bad1d9a-180f-4751-a698-4aac07b1eac7","partition":1,"roles": []}
I've been able to use specs2's org.specs2.matcher.JsonMatchers to enforce guid and partition, e.g.:
json must /("guid" -> "4bad1d9a-180f-4751-a698-4aac07b1eac7")
json must /("partition" -> 1)
But I cannot figure out the correct syntax to enforce that 'roles' is present and 'is an empty array'. Is this do-able?
Edit:
Per a commenter's question, I have tried the following:
json must /("roles" -> "[]")
which results in the following test failure:
[error] {guid : 5ad4c4c5-4fdb-461b-9883-b84ff3b84610,partition : 1.0,roles : []} doesn't contain 'roles' : '[]'
The value to be tested for roles is of the scala.util.parsing.json.JSONArray type, so you can write:
json must /("roles" -> JSONArray(Nil))
And if that comes up a lot maybe define a value:
val empty = JSONArray(Nil)
json must /("roles" -> empty)
For those coming to this question looking for an answer, the answer supplied by Eric is no longer valid.
It seems the current way (specs2 3.8) to match an empty array would be:
json must /("roles").andHave(exactly[String]())
This is not the best way, but it's the only one I found that works and makes some sense when read back.
The andHave matches against the contents of roles, and exactly with no parameters matches an empty array. The String type parameter is there because otherwise the compiler complains that it can't infer the type.