How to use comparison operators for json in Postgres - json

I am using Postgres 9.4. How can I use regular comparison operators such as <, >, <= etc. with json in Postgres, where the key is numeric and the value is text, until a limit on the numeric key value is reached?
This is my table:
create table foo (
  id numeric,
  x json
);
The values for the json are as follows:
id | x
----+--------------------
1 | '{"1":"A","2":"B"}'
2 | '{"3":"C","4":"A"}'
3 | '{"5":"B","6":"C"}'
and so on, with random keys up to 100.
I am trying to get all the ids, keys and values of the json where the key is <= 20.
I have tried:
select *
from foo
where x->>'key' <='5';
The above query ran, but instead of the 20 rows of output I expected it gave me 0. The query below ran and gave me 20 rows, but it took over 30 minutes!
select
  id,
  key::bigint as key,
  value::text as value
from foo, jsonb_each(x::jsonb)
where key::numeric <= 100;
Is there a way to use a for loop or a do-while loop over the json until the key reaches 20? And is there a way to reduce the run time?
Any help appreciated!

The only operator that can query JSON keys and use indexes on jsonb (but not on json) is the ? operator. Unfortunately, you cannot use it in conjunction with <=.
However, you can use generate_series() if your queried range is relatively small:
-- use `jsonb` instead of `json`
create table foo (
  id numeric,
  x jsonb
);

-- sample data
insert into foo
values (1, '{"1":"A","2":"B"}'),
       (2, '{"3":"C","4":"A"}'),
       (3, '{"5":"B","6":"C"}'),
       (4, '{"7":"A","8":"B"}'),
       (5, '{"9":"C","10":"A"}'),
       (6, '{"11":"B","12":"C"}');

-- optionally, an index to speed up `?` queries
create index foo_x_idx on foo using gin (x);
select distinct foo.*
from generate_series(1, 5) s
join foo on x ? s::text;
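With the sample data above, this should return the rows with id 1, 2 and 3 - the only objects that contain a key between 1 and 5.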
To work with larger ranges, you may need to extract all numeric keys of x into an integer array (int[]) and index that.
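A minimal sketch of that idea, assuming every top-level key is numeric (the helper name jsonb_numeric_keys is made up for illustration):
-- made-up helper: collect the top-level keys of a jsonb value as int[]
create function jsonb_numeric_keys(j jsonb)
returns int[] language sql immutable as $$
  select array_agg(k::int) from jsonb_object_keys(j) k
$$;
-- expression index, so lookups don't recompute the array per row
create index foo_numeric_keys_idx on foo using gin (jsonb_numeric_keys(x));
-- && ("overlaps") is one of the array operators a GIN index supports
select *
from foo
where jsonb_numeric_keys(x) && (select array_agg(s) from generate_series(1, 20) s);
Unlike the generate_series() join, this keeps working as the queried range grows, at the cost of maintaining the extra index.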

Related

Subtract number from enum

My number column in the data table has these enum values:
1, 2, 3, 4, 5, 101, 102, 103, 104, 105
I want to set number = number - 100 for all rows, using this query:
UPDATE data SET data.number = (data.number - 100) WHERE data.number > 100
But it does not work on enum data.
This is a little complicated.
MySQL doesn't allow a number to be cast directly from an ENUM, because the docs say that numbers should not be used as ENUM values.
But since you need the value of the ENUM field, you can convert it to characters first.
The final step is to convert that text into a number.
Schema (MySQL v5.7)
CREATE TABLE IF NOT EXISTS `data` (
  number ENUM('1', '2', '3', '4', '5', '101', '102', '103', '104', '105')
);
INSERT INTO `data` VALUES ('103');
UPDATE `data` SET number = CAST(CAST(`number` AS CHAR) AS UNSIGNED) - 100 WHERE CAST(CAST(`number` AS CHAR) AS UNSIGNED) >= 100;
Note that assigning the numeric result back to the ENUM column treats it as a 1-based index; this happens to work here because the labels '1' through '5' sit at indexes 1 through 5.
Query #1
SELECT * FROM `data`;
| number |
| ------ |
| 3 |
You can only do arithmetic operations on cardinal numbers (integer, float, double, decimal, date). An enum datatype is nominal.
You could try casting it within SQL, but if this operation needs to be possible, the real issue is that you have chosen the wrong data type.
Using nothing but digits to represent states should have been a red flag.
But it does not work on enum data.
Of course. The string values of an ENUM datatype are not stored in the table's data body - it contains only a 2-byte index of the value, while the corresponding list of string values is stored in the table definition area, which no DML operation can change. Only DDL (ALTER TABLE) may change an ENUM's string value definitions.
The posted query treats data.number as an index value - so you must get an error (the source index values are 1-10, and the final values would be negative, whereas ENUM index values are UNSIGNED).
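A small demonstration of that distinction, assuming the data table from above (the aliases are purely illustrative):
-- numeric context yields the 1-based index; string context yields the stored label
SELECT number + 0           AS enum_index,
       CAST(number AS CHAR) AS enum_label
FROM `data`;
For a row holding '103' this returns 8 (its position in the label list) and '103'.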

PostgreSQL: row_to_json with selective columns [duplicate]

Task concept and my question
Using Postgres 9.4: how can I use row_to_json(row) with selected columns only (not the entire row)? I need to discard one column from the row constructor while building the JSON, but I also need to preserve the column names.
Restrictions
Do not use a self join to the same table/CTE with a selective choice of columns
Do not use an external function to remove the key from the json afterwards
I'm well aware that I can write and use my own function to remove a key from JSON, or that Postgres 9.5 has the - operator for jsonb. However, I would like to handle this up front, without an additional function call, and I'm pretty sure it's possible.
MVCE and explanation
Generating sample data
CREATE TABLE test_table ( id int, col1 int, col2 int, col3 text );
INSERT INTO test_table VALUES
(1, 23, 15, 'Jessica'), (2, 43, 84, 'Thomas');
1) First try, simple row_to_json(row), which is obviously not working:
SELECT id, row_to_json(t) FROM test_table t
I need to discard the id column from the row constructor so it is not added while converting the row to json. The above returns:
id | row_to_json
----+-----------------------------------------------
1 | {"id":1,"col1":23,"col2":15,"col3":"Jessica"}
2 | {"id":2,"col1":43,"col2":84,"col3":"Thomas"}
2) Second try, with explicit passing of columns row_to_json(row(col1, ...)):
SELECT id, row_to_json(row(col1, col2, col3)) FROM test_table t
But I'm losing the column names (as mentioned in the docs, they all convert to fX, where X is a number):
id | row_to_json
----+----------------------------------
1 | {"f1":23,"f2":15,"f3":"Jessica"}
2 | {"f1":43,"f2":84,"f3":"Thomas"}
Expected output
Expected output is obviously from the (1) point in MVCE but without id key-value pair:
id | row_to_json
----+-----------------------------------------------
1 | {"col1":23,"col2":15,"col3":"Jessica"}
2 | {"col1":43,"col2":84,"col3":"Thomas"}
It seems that creating a type with desired column names and matching data types and then casting the row to it will do the trick:
CREATE TYPE my_type AS (
  col1 int,
  col2 int,
  col3 text
);
Then I alter my statement by casting the row to the defined type:
SELECT id, row_to_json(cast(row(col1, col2, col3) as my_type)) FROM test_table t;
This produces the expected output:
id | row_to_json
----+-----------------------------------------------
1 | {"col1":23,"col2":15,"col3":"Jessica"}
2 | {"col1":43,"col2":84,"col3":"Thomas"}
However, is there any method for this to be built without additional type?
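Two possible answers to that closing question, sketched on the assumption of Postgres 9.4: feed row_to_json() a LATERAL subselect, which preserves the column names without any named type, or build the object explicitly with json_build_object() (new in 9.4):
-- the subselect's column names are carried into the JSON
SELECT t.id, row_to_json(sub)
FROM test_table t,
     LATERAL (SELECT t.col1, t.col2, t.col3) sub;
-- or spell the keys out explicitly
SELECT id, json_build_object('col1', col1, 'col2', col2, 'col3', col3)
FROM test_table;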

postgres force json datatype

When working with the JSON datatype, is there a way to ensure that the input JSON has certain elements? I don't mean a primary key; I want the JSON that gets inserted to have at least the id and name elements. It can have more, but at a minimum id and name must be there.
thanks
This function checks what you want:
create or replace function json_has_id_and_name(val json)
returns boolean language sql as $$
  select coalesce(
    (
      select array['id', 'name'] <@ array_agg(key)
      from json_object_keys(val) key
    ),
    false)
$$;
select json_has_id_and_name('{"id":1, "name":"abc"}'), json_has_id_and_name('{"id":1}');
json_has_id_and_name | json_has_id_and_name
----------------------+----------------------
t | f
(1 row)
You can use it in a check constraint, e.g.:
create table my_table (
  id int primary key,
  jdata json check (json_has_id_and_name(jdata))
);
insert into my_table values (1, '{"id":1}');
ERROR: new row for relation "my_table" violates check constraint "my_table_jdata_check"
DETAIL: Failing row contains (1, {"id":1}).
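For comparison, a row that carries both keys passes the constraint:
insert into my_table values (2, '{"id":2, "name":"abc"}');
-- INSERT 0 1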

Query Postgres for number of items in JSON

I am running Postgres 9.3 and have a problem with a query involving a JSON column that I cannot seem to crack.
Let's assume this is the table:
# CREATE TABLE aa (a int, b json);
# INSERT INTO aa VALUES (1, '{"f1":1,"f2":true}');
# INSERT INTO aa VALUES (2, '{"f1":2,"f2":false,"f3":"Hi I''m \"Dave\""}');
# INSERT INTO aa VALUES (3, '{"f1":3,"f2":true,"f3":"Hi I''m \"Popo\""}');
I now want to create a query that returns all rows that have exactly three items/keys in the root node of the JSON column (i.e., row 2 and 3). Whether the JSON is nested doesn't matter.
I tried to use json_object_keys and json_each but couldn't get it to work.
json_each(json) should do the job. Counting only root elements (note that a must actually be declared the primary key for GROUP BY aa.a to cover the whole select list):
SELECT aa.*
FROM aa, json_each(aa.b) elem
GROUP BY aa.a -- possible, because it's the PK!
HAVING count(*) = 3;
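If a cannot be declared the primary key, a scalar subquery avoids the GROUP BY altogether - a sketch that should also work on 9.3:
SELECT *
FROM aa
WHERE (SELECT count(*) FROM json_each(aa.b)) = 3;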

ISSUE: Mysql converting Enum to Int

I have a very simple rating system in my database where each rating is stored as an enum('1','-1'). To calculate the total I tried using this statement:
SELECT SUM(CONVERT(rating, SIGNED)) as value from table WHERE _id = 1
This works fine for the positive 1s, but for some reason the -1s come out as 2s.
Can anyone help or offer insight?
Or should I give up and just change the column to a SIGNED INT(1)?
This is what you want:
select enum+0 as enum
In MySQL, this conversion from enum to int is only possible like this:
SELECT CAST(CAST(`rating` AS CHAR) AS SIGNED) AS value FROM `table` WHERE _id = 1;
Yes, I'd suggest changing the type of the column. The issue becomes clear when you read the docs on the enum type (which strongly recommend not using numbers as enumeration values!) - the index of the enum item is returned, not the enum value itself.
OK guys,
I just had a bit of a nightmare of a time with this one. I learned that I shouldn't use ENUMs where integers are the values. However, we had years' worth of data and I couldn't alter the database.
This bad boy worked (turning it into a character, then into a signed int):
SELECT CAST(CAST(`rating` AS CHAR) AS SIGNED) AS value FROM `table` WHERE _id = 1;
Use:
SELECT SUM(IF(columnname > 0, CAST(columnname AS CHAR), NULL)) AS vals
FROM `tableName`;
(The comparison columnname > 0 uses the enum's 1-based index, which is positive for both labels, and SUM() implicitly converts the CHAR values '1' and '-1' back to numbers.)
I wouldn't use enum here either, but it is still possible in this case to get what is needed.
Creating table:
CREATE TABLE test (
  _id INT PRIMARY KEY,
  rating ENUM('1', '-1')
);
Filling table:
INSERT INTO test VALUES (1, '1'), (2, '1'), (3, '-1'), (4, '-1'), (5, '-1');
Performing math operations on enums converts them to their 1-based indexes ('1' is index 1, '-1' is index 2), so it is possible to just rescale the result (3 - 1*2 = 1 and 3 - 2*2 = -1):
SELECT
SUM(3 - rating * 2)
FROM
test;
Result: -1, which is correct for the test data (1 + 1 - 1 - 1 - 1 = -1).