There is a single-page application which sends HTTP POST requests with a JSON payload:
{"product": { "name": "product A", "quantity": 100 }}
There is a Postgres database which has tables and stored procedures:
create table product (
product_id serial primary key,
name text,
quantity numeric,
description text
);
create function insert_product (p product) returns product as $$
-- this function accepts a product row type as its argument
insert into product (name, quantity, description) values (p.name, p.quantity, p.description) returning *;
$$ language sql;
Is there a solution in any language that would sit on a server, handle HTTP requests, call stored procedures and automatically convert JSON objects to proper Postgres row types?
In pseudo-Express.js:
app.post('/product', (req, res) => {
db.query('select insert_product($1)', [convertToPostgresPlease(req.body.product)]);
});
What I don't consider solutions:
destructuring the JSON object and feeding Postgres every key by the teaspoon
handling JSON inside the stored procedure (the SP should accept a row type)
duplicating information from the Postgres schema (the solution must use Postgres introspection capabilities)
manually concatenating '(' + product.name + ',' + ...
I know stored procedures are often frowned upon, but for small projects I honestly think they're great. SQL is an amazing DSL for working with data and Postgres is advanced enough to handle any data-related task.
In any case, what is the simplest way to connect a JSON HTTP request to a proper SQL RDBMS?
Found solutions (almost):
postgraphql (works, but too much magic)
Chicken Scheme (list-unparser, haven't tried it yet)
As Abelisto mentioned in the comments, you can convert a JSON/JSONB parameter into a specific table row inside a database function using json_populate_record/jsonb_populate_record. Another alternative is to use the json argument directly, retrieving its contents with the -> and ->> operators. The disadvantage of this is that there is a fair amount of coding to maintain for each table.
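As a rough sketch of that first approach (assuming the product table and the insert_product function from the question exist; the wrapper name insert_product_json is just illustrative), a thin wrapper can accept jsonb and hand a proper row type to the stored procedure:
create function insert_product_json(_data jsonb) returns product as $$
    -- hypothetical wrapper: turn the jsonb payload into a product row and pass it on
    select insert_product(jsonb_populate_record(null::product, _data));
$$ language sql;
-- the application then only needs something like:
-- select insert_product_json('{"name": "product A", "quantity": 100}'::jsonb);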
You may also be able to benefit from RESTful interfaces (e.g. https://github.com/QBisConsult/psql-api).
Another option is a heavily JSON-based solution that simplifies operations for the bulk of small tables that wouldn't grow beyond a few hundred records each. There is a performance toll, but for a few rows it would likely be negligible.
The following exemplifies the power of the JSON datatype in PostgreSQL and the GIN indexes which support JSON operators. You can still use normal tables and specialised functions for data that requires maximum performance.
The example:
CREATE TABLE public.jtables (
table_id serial NOT NULL PRIMARY KEY,
table_name text NOT NULL UNIQUE,
fields jsonb
);
INSERT INTO public.jtables VALUES (default, 'product', '{"id": "number", "name": "string", "quantity": "number", "description": "string"}'::jsonb);
CREATE TABLE public.jdata (
table_id int NOT NULL REFERENCES jtables,
data jsonb NOT NULL
);
CREATE UNIQUE INDEX ON public.jdata USING BTREE (table_id, (data->>'id'));
CREATE INDEX ON public.jdata USING GIN (data);
You could create functions to manipulate data in a generic JSON way, e.g.:
CREATE FUNCTION public.jdata_insert(_table text, _data jsonb) RETURNS text AS
$BODY$
INSERT INTO public.jdata
SELECT table_id, $2
FROM public.jtables
WHERE table_name = $1
RETURNING (data)->>'id';
$BODY$ LANGUAGE sql;
CREATE FUNCTION public.jdata_update(_table text, _id text, _data jsonb) RETURNS text AS
$BODY$
UPDATE public.jdata d SET data = jsonb_strip_nulls(d.data || $3)
FROM public.jtables t
WHERE d.table_id = t.table_id AND t.table_name = $1 AND (d.data->>'id') = $2
RETURNING (d.data)->>'id';
$BODY$ LANGUAGE sql;
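A delete helper is not part of the original example, but a hypothetical jdata_delete sketched along the same pattern could look like this:
CREATE FUNCTION public.jdata_delete(_table text, _id text) RETURNS text AS
$BODY$
    -- hypothetical companion to jdata_insert/jdata_update: remove one row by its JSON id
    DELETE FROM public.jdata d
    USING public.jtables t
    WHERE d.table_id = t.table_id AND t.table_name = $1 AND (d.data->>'id') = $2
    RETURNING (d.data)->>'id';
$BODY$ LANGUAGE sql;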
Rows can then be inserted using these generic functions:
SELECT public.jdata_insert('product', '{"id": 1, "name": "Product 1", "quantity": 10, "description": "no description"}'::jsonb);
SELECT public.jdata_insert('product', '{"id": 2, "name": "Product 2", "quantity": 5}'::jsonb);
SELECT public.jdata_update('product', '1', '{"description": "test product"}'::jsonb);
And their data can be queried in a variety of ways which make use of the existing indexes:
SELECT * FROM public.jdata WHERE table_id = 1 AND (data->>'id') = '1';
SELECT * FROM public.jdata WHERE table_id = 1 AND data @> '{"quantity": 5}'::jsonb;
SELECT * FROM public.jdata WHERE table_id = 1 AND data ? 'description';
Views can make queries easier:
CREATE VIEW public.vjdata AS
SELECT d.table_id, t.table_name, (d.data->>'id') AS id, d.data
FROM jtables t
JOIN jdata d USING (table_id);
CREATE OR REPLACE FUNCTION public.vjdata_upsert() RETURNS trigger AS $$
BEGIN
IF TG_OP = 'INSERT' THEN
PERFORM public.jdata_insert(NEW.table_name, NEW.data);
ELSE
PERFORM public.jdata_update(NEW.table_name, NEW.id, NEW.data);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER vjdata_upsert_trigger INSTEAD OF INSERT OR UPDATE
ON public.vjdata FOR EACH ROW EXECUTE PROCEDURE public.vjdata_upsert();
UPDATE public.vjdata SET
data = data || jsonb_build_object('quantity', (data->>'quantity')::int + 2)
WHERE table_name = 'product' AND id = '2';
SELECT * FROM public.vjdata WHERE table_name = 'product' AND id = '2';
Check out PostgREST. BTW, I don't know why anyone would frown on stored procs. The correct way to interact with the DB is through views and functions/procs. Having SQL in application code is something that just happened over the last 15 years, really due simply to convenience and a loss of SQL skills. It's harder for most people to do set operations than procedural processing.
Related
I have a Postgres table called sales with a JSON column, data, containing around 100 outer keys; let's name them k1, k2, k3, ..., k100.
I want to write a query
select * from sales some_function(data)
which simply returns something like
k1 | k2 | .. | k100
--------------------
"foo" | "bar" | .. | 2
"fizz"| "buzz"| .. | 10
i.e. it just unpacks the keys as columns and their values as rows.
Note that k1, k2, ..., k100 are not the real key names, so I can't just write a data ->> key expression for each of them.
That's not possible. One restriction of the SQL language is that all columns (and their data types) must be known to the database when the statement is parsed, i.e. before it is actually run.
You will have to write each one separately:
select data ->> 'k1' as k1, data ->> 'k2' as k2, ...
from sales
One way to make this easier is to generate a view dynamically by extracting all JSON keys from the column, then using dynamic SQL to create the view. You will, however, need to re-create that view each time the set of keys changes.
Something along these lines (not tested!):
do
$$
declare
l_columns text;
l_sql text;
begin
select string_agg(distinct format('data ->> %L as %I', t.key, t.key), ', ')
into l_columns
from sales s
cross join jsonb_each(s.data) as t(key, value);
-- l_columns now contains something like:
-- data ->> 'k1' as k1, data ->> 'k2' as k2
-- now create a view from that
l_sql := 'create view sales_keys as select '||l_columns||' from sales';
execute l_sql;
end;
$$
;
You probably want to add e.g. the primary key column(s) to the view, so that you can match the JSON values back to the original row(s).
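For example, assuming the sales table has a primary key column named id (the column name is an assumption here), the generated view might end up looking roughly like this:
-- what the generated DDL might look like with the key column added
create view sales_keys as
select id,
       data ->> 'k1' as k1,
       data ->> 'k2' as k2
       -- ... one expression per extracted key
from sales;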
Recent releases of PostgreSQL have capabilities to work like document-oriented databases (e.g. MongoDB). There are promising benchmarks saying Postgres is x times faster than Mongo. Can someone give me advice on how to work with Postgres as with MongoDB? I'm looking for a simple step-by-step example covering:
1) How to create the simplest table that contains JSON/JSONB objects, like documents in MongoDB
2) How to search it, at least by id, like I can do in MongoDB with collection.find({id: 'objectId'}) for example
3) How to create a new object or overwrite an existing one, at least by id, like I can do in MongoDB with
collection.update(
{id: objectId},
{$set: someSetObject, $unset: someUnsetObject},
{upsert: true, w: 1}
)
4) How to delete an object if it exists, at least by id, like I can do in MongoDB with collection.remove({id: 'objectId'})
It's too large a topic to cover in one answer, so here are just some examples as requested. For more information, see the documentation:
8.14. JSON Types
9.15. JSON Functions and Operators
Create table:
create table test(
id serial primary key,
data jsonb);
Search by id:
select * from test where id = 1;
Search by json value:
select * from test where data->>'a' = '1';
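Not part of the question's list, but worth noting: for larger tables a GIN index on the jsonb column can support containment searches. A rough sketch on the same test table:
-- optional index so containment queries on data can use it
create index on test using gin (data);
-- containment search, e.g. all rows whose document contains {"a": 1}
select * from test where data @> '{"a": 1}';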
Insert and update data:
insert into test(id, data) values (1, '{"a": 1, "b": 2, "c": 3}');
update test set data = data - 'a' || '{"c": 5}' where id = 1;
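The upsert asked about in point 3 isn't shown above; on PostgreSQL 9.5 and later it can be approximated with INSERT ... ON CONFLICT. A sketch using the same test table:
-- insert the document, or merge the new keys into the existing one if the id already exists
insert into test (id, data) values (1, '{"a": 1, "b": 2}')
on conflict (id) do update set data = test.data || excluded.data;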
Delete data by id:
delete from test where id = 1;
Delete data by json value:
delete from test where data->>'b' = '2';
I have created a struct to store spatial types and a Scan function to help query rows in my database. I am having issues inserting this type.
I can insert data using the following SQL:
INSERT INTO `table` (`spot`) VALUES (GeomFromText('POINT(10 10)'));
If I use the Value interface in database/sql/driver:
type Value interface{}
Value is a value that drivers must be able to handle. It is either nil or an instance of one of these types:
int64
float64
bool
[]byte
string [*] everywhere except from Rows.Next.
time.Time
And use this code:
func (p Point) Value() (driver.Value, error) {
return "GeomFromText('" + p.ToWKT() + "')", nil
}
I end up with the following SQL statement going to the database:
INSERT INTO `table` (`spot`) VALUES ('GeomFromText('POINT(10 10)')');
The issue is that the function GeomFromText ends up inside quotes. Is there a way to avoid this? I am using gorm and trying to keep raw SQL queries to a minimum.
The MySQL type being used on the database end is a POINT.
Please see the two URLs below, where the concept was poached from.
Schema
-- http://howto-use-mysql-spatial-ext.blogspot.com/
create table Points
( id int auto_increment primary key,
name VARCHAR(20) not null,
location Point NOT NULL,
description VARCHAR(200) not null,
SPATIAL INDEX(location),
key(name)
)engine=MyISAM; -- for use of spatial indexes and avoiding error 1464
-- insert a row, so we can prove Update later will work
INSERT INTO Points (name, location, description) VALUES
( 'point1' , GeomFromText( ' POINT(31.5 42.2) ' ) , 'some place');
Update statement
-- concept borrowed from http://stackoverflow.com/a/7135890
UPDATE Points
set location = PointFromText(CONCAT('POINT(',13.33,' ',26.48,')'))
where id=1;
Verify
select * from points;
(when you open the Value Editor to see the blob, the point is updated)
So, the takeaway is to play with the concat() inside of the update statement.
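The same idea should also cover the original INSERT case: building the WKT with CONCAT() inside GeomFromText() keeps the function call out of any quoted string literal. A sketch against the Points table above (the coordinate values are arbitrary):
-- mirror of the UPDATE above, for inserting a new row
INSERT INTO Points (name, location, description)
VALUES ('point2', GeomFromText(CONCAT('POINT(', 10.5, ' ', 20.5, ')')), 'another place');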
I have a settings table with two columns - name and value. Names are unique. I can easily read it into memory and then create a dictionary using the entry names as the keys.
I was wondering whether this can be done entirely in SQL, using some PostgreSQL functions and applying the row_to_json function at the end.
I have version 9.2
Is it possible? It should be.
I think what you'd have to do is create a function that takes the record in as an argument, transforms it into a record of arbitrary type, and turns that into JSON.
This was done on 9.1 with the json extension.
create or replace function to_json(test) returns json language plpgsql
as $$
declare t_row record;
retval json;
begin
EXECUTE $E$ SELECT $1 AS $E$ || quote_ident($1.name) INTO t_row
USING $1.value;
RETURN row_to_json(t_row);
end;
$$;
Then I can:
select * from test;
name | value
-------+--------
test1 | foo
test2 | foobar
(2 rows)
SELECT to_json(test) from test;
to_json
--------------------
{"test1":"foo"}
{"test2":"foobar"}
Now if you want to merge these all into one object you have a little more work to do but it could be done using the same basic tools.
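On 9.4 and later (not the 9.1/9.2 versions discussed here) that merging can be done directly with the json_object_agg aggregate; a rough sketch:
-- collapse all name/value pairs into a single JSON object (PostgreSQL 9.4+)
SELECT json_object_agg(name, value) FROM test;
-- {"test1" : "foo", "test2" : "foobar"}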
This should work in postgres-9.3. (untested, since I don't have 9.3 available here yet)
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
CREATE table pipo (name varchar NOT NULL PRIMARY KEY
, value varchar);
INSERT INTO pipo (name, value ) VALUES
('memory' , '10Mb'), ('disk' , '1Gb'), ('video' , '10Mpix/sec'), ('sound' , '100dB');
SELECT row_to_json( ROW(p.name,p.value) )
FROM pipo p ;
Right now I manually parse the JSON into an insert string, like so:
insert into Table (field1, field2) values (val1, val2)
but that's not a comfortable way to insert data from JSON!
I've found the function json_populate_record and tried to use it:
create table test (id serial, name varchar(50));
insert into test select * from json_populate_record(NULL::test, '{"name": "John"}');
but it fails with the message: null value in column "id" violates not-null constraint
PG knows that id is serial but plays dumb. It does the same for all fields with defaults.
Is there a more elegant way to insert data from JSON into a table?
There's no easy way for json_populate_record to return a marker that means "generate this value".
PostgreSQL does not allow you to insert NULL to specify that a value should be generated. If you ask for NULL, Pg expects you to mean NULL and doesn't want to second-guess you. Additionally, it's perfectly OK to have a generated column with no NOT NULL constraint, in which case it's perfectly fine to insert NULL into it.
If you want to have PostgreSQL use the table default for a value there are two ways to do this:
Omit that row from the INSERT column-list; or
Explicitly write DEFAULT, which is only valid in a VALUES expression
Since you can't use VALUES(DEFAULT, ...) here, your only option is to omit the column from the INSERT column-list:
regress=# create table test (id serial primary key, name varchar(50));
CREATE TABLE
regress=# insert into test(name) select name from json_populate_record(NULL::test, '{"name": "John"}');
INSERT 0 1
Yes, this means you must list the columns. Twice, in fact, once in the SELECT list and once in the INSERT column-list.
To avoid the need for that, PostgreSQL would need a way of specifying DEFAULT as a value for a record, so that json_populate_record could return DEFAULT instead of NULL for columns that aren't defined. That might not be what you intended for all columns, and it would raise the question of how DEFAULT should be treated when json_populate_record was not being used in an INSERT expression.
So I guess json_populate_record might be less useful than you hoped for rows with generated keys.
Continuing from Craig's answer, you probably need to write some sort of stored procedure to perform the necessary dynamic SQL, as follows:
CREATE OR REPLACE FUNCTION jsoninsert(relname text, reljson text)
RETURNS record AS
$BODY$DECLARE
ret RECORD;
inputstring text;
BEGIN
SELECT string_agg(quote_ident(key),',') INTO inputstring
FROM json_object_keys(reljson::json) AS X (key);
EXECUTE 'INSERT INTO '|| quote_ident(relname)
|| '(' || inputstring || ') SELECT ' || inputstring
|| ' FROM json_populate_record( NULL::' || quote_ident(relname) || ' , json_in($1)) RETURNING *'
INTO ret USING reljson::cstring;
RETURN ret;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
Which you'd then call with
SELECT jsoninsert('test', '{"name": "John"}');