PostgreSQL JSON Dynamic Expand & Increment

Hello guys, I've been trying to build a piece of software using PostgreSQL and Python.
Basically, I want to increment and/or dynamically expand a JSON field.
For example: at first the field will be empty, then:
#insert (toyota,honda,nissan)
{"toyota":1,
"honda":1,
"nissan":1}
#insert (toyota)
{"toyota":2,
"honda":1,
"nissan":1}
#insert (honda,mitsubishi)
{"toyota":2,
"honda":2,
"nissan":1,
"mitsubitshi":1}
Yes, I know it can be done by first retrieving the JSON and updating it in Python, but I don't want to do it that way.
I don't have much experience with PostgreSQL's procedure or trigger features.
Any help will be appreciated. :-)

Normalized tables would be more performant; however, a JSON solution can be quite convenient using a function like this:
create or replace function add_cars(cars jsonb, variadic car text[])
returns jsonb language plpgsql as $$
declare
    new_car text;
begin
    -- Treat a NULL field as an empty object, so the very first insert works too.
    cars = coalesce(cars, '{}'::jsonb);
    foreach new_car in array car loop
        -- || overwrites the key, so this both adds new makes and increments existing ones.
        cars = cars || jsonb_build_object(new_car, coalesce(cars->>new_car, '0')::int + 1);
    end loop;
    return cars;
end $$;
Find the full example in DbFiddle.
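For example, assuming a table cars_tbl with a jsonb column cars (hypothetical names, just for illustration), each insert from the question becomes an UPDATE:

-- Hypothetical table and column names.
UPDATE cars_tbl SET cars = add_cars(cars, 'toyota', 'honda', 'nissan');
UPDATE cars_tbl SET cars = add_cars(cars, 'toyota');
UPDATE cars_tbl SET cars = add_cars(cars, 'honda', 'mitsubishi');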

Please check the function below. Hopefully it meets your requirement!
CREATE FUNCTION sp_test(json)
RETURNS VOID AS
$BODY$
DECLARE
    var_sql varchar;
BEGIN
    -- Checking structure: every JSON key must match a column of test_table.
    IF (EXISTS (
        SELECT json_object_keys($1)
        EXCEPT
        SELECT column_name FROM information_schema.columns
        WHERE table_schema = 'your schema' AND table_name = 'test_table'
    )) THEN
        RAISE EXCEPTION 'One or more columns do not exist on the table';
    END IF;
    var_sql := 'UPDATE test_table t SET '
        || (SELECT string_agg(CONCAT(t.key, ' = (t.', t.key, ' + ', t.value, ')'), ', ')
            FROM json_each($1) t);
    EXECUTE var_sql;
END;
$BODY$
LANGUAGE plpgsql;
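A hypothetical call, assuming the 'your schema' placeholder has been replaced with the real schema name and test_table has integer columns named after the JSON keys (note the generated UPDATE has no WHERE clause, so it increments every row):

SELECT sp_test('{"toyota": 1, "honda": 2}'::json);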

Related

Want to write a stored procedure in MySQL [duplicate]

I have made a stored procedure. I want it to filter the data by different parameters: if I pass one parameter, it should filter by one; if I pass two, it should filter by two; and so on. But it is not working.
Can anyone help me, please?
DROP PROCEDURE IF EXISTS medatabase.SP_rptProvince2;
CREATE PROCEDURE medatabase.`SP_rptProvince2`(
    IN e_Region VARCHAR(45)
)
BEGIN
    DECLARE strQuery VARCHAR(1024);
    DECLARE stmtp VARCHAR(1024);
    SET @strQuery = CONCAT('SELECT * FROM alldata where 1=1');
    IF e_Region IS NOT NULL THEN
        SET @strQuery = CONCAT(@strQuery, ' AND (regionName)'=e_Region);
    END IF;
    PREPARE stmtp FROM @strQuery;
    EXECUTE stmtp;
END;
AFAIK, you can't have a variable-length argument list like that. You can do one of a couple of things:
Take a fixed maximum number of parameters, and check them for null-ness before concatenating:
CREATE PROCEDURE SP_rptProvince2(a1 VARCHAR(45), a2 VARCHAR(45), ...)
...
IF a1 IS NOT NULL THEN
    SET @strQuery = CONCAT(@strQuery, ' AND ', a1);
END IF;
If you need predetermined fields to which the criteria in the argument apply (like the e_Region parameter in your existing code), then you modify the CONCAT operation appropriately.
Possible invocation:
CALL SP_rptProvince2('''North''', 'column3 = ''South''')
Take a single parameter that is much bigger than just 45 characters, and simply append it to the query (assuming it is not null).
Clearly, this places the onus on the user to provide the correct SQL code.
Possible invocation:
CALL SP_rptProvince2('RegionName = ''North'' AND column3 = ''South''')
There's not a lot to choose between the two. Either can be made to work; neither is entirely satisfactory.
You might note that there was a need to protect the strings in the arguments with extra quotes; that is the sort of thing that makes this problematic.
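As a rough sketch of the first option (hedged: the procedure and table names are taken from the question, each argument is assumed to be a complete predicate, and the caller is trusted, so all the quoting caveats above still apply):

DELIMITER $$
DROP PROCEDURE IF EXISTS SP_rptProvince2 $$
CREATE PROCEDURE SP_rptProvince2(IN a1 VARCHAR(45), IN a2 VARCHAR(45))
BEGIN
    SET @strQuery = 'SELECT * FROM alldata WHERE 1=1';
    -- Each non-null argument is appended as a complete predicate.
    IF a1 IS NOT NULL THEN
        SET @strQuery = CONCAT(@strQuery, ' AND ', a1);
    END IF;
    IF a2 IS NOT NULL THEN
        SET @strQuery = CONCAT(@strQuery, ' AND ', a2);
    END IF;
    PREPARE stmtp FROM @strQuery;
    EXECUTE stmtp;
    DEALLOCATE PREPARE stmtp;
END $$
DELIMITER ;

CALL SP_rptProvince2('regionName = ''North''', 'column3 = ''South''');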
I found a JSON-based approach that works with recent MySQL/MariaDB versions. Check the link below (the original author is Federico Razzoli): https://federico-razzoli.com/variable-number-of-parameters-and-optional-parameters-in-mysql-mariadb-procedures
Basically, you take a BLOB parameter that is actually a JSON object, and then call JSON_UNQUOTE(JSON_EXTRACT(json_object, key)) as appropriate.
Lifted an extract here:
CREATE FUNCTION table_exists(params BLOB)
    RETURNS BOOL
    NOT DETERMINISTIC
    READS SQL DATA
    COMMENT '
        Return whether a table exists.
        Parameters must be passed in a JSON document:
        * schema (optional): Schema that could contain the table.
          By default, the schema containing this procedure.
        * table: Name of the table to check.
    '
BEGIN
    DECLARE v_table VARCHAR(64)
        DEFAULT JSON_UNQUOTE(JSON_EXTRACT(params, '$.table'));
    DECLARE v_schema VARCHAR(64)
        DEFAULT JSON_UNQUOTE(JSON_EXTRACT(params, '$.schema'));
    IF v_schema IS NULL THEN
        RETURN EXISTS (
            SELECT TABLE_NAME
            FROM information_schema.TABLES
            WHERE
                TABLE_SCHEMA = SCHEMA()
                AND TABLE_NAME = v_table
        );
    ELSE
        RETURN EXISTS (
            SELECT TABLE_NAME
            FROM information_schema.TABLES
            WHERE
                TABLE_SCHEMA = v_schema
                AND TABLE_NAME = v_table
        );
    END IF;
END;
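Assuming the function is installed, invocation would look something like this (the table and schema names are hypothetical):

SELECT table_exists('{"table": "customers"}');
SELECT table_exists('{"schema": "sales", "table": "orders"}');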

plpgsql: extract array before looping over its elements

I am trying to create a plpgsql function taking as parameter:
[{'id_product': 100000158, 'd_price': '7,75'}, {'id_product': 100000339, 'd_price': '9,76'}]
Or maybe:
{'products': [{'id_product': 100000158, 'd_price': '7,75'}, {'id_product': 100000339, 'd_price': '9,76'}]}
I can't tell the best approach yet.
I want to transform this jsonb object or string into an array so I can loop over it.
The idea is to loop over every {'id_product': xxxxxxxxx, 'd_price': 'xxxxx'} and check whether the values are the same as in a table.
What's the best way to do this?
I am still playing with the jsonb functions.
You can create a function that uses the JSONB_POPULATE_RECORDSET() function:
CREATE OR REPLACE FUNCTION fn_extract_elements( i_jsonb JSONB )
RETURNS TABLE (o_product VARCHAR(500), o_price VARCHAR(500))
AS $BODY$
BEGIN
    RETURN QUERY
    WITH tab AS
    (
        SELECT *
        FROM JSONB_POPULATE_RECORDSET(NULL::record, i_jsonb)
            AS tab(id_product VARCHAR(500), d_price VARCHAR(500))
    )
    SELECT *
    FROM tab;
END
$BODY$
LANGUAGE plpgsql;
and invoke it like this:
SELECT *
FROM fn_extract_elements(
    '[{"id_product": "100000158", "d_price": "7,75"},
      {"id_product": "100000339", "d_price": "9,76"}]'
);

 o_product | o_price
-----------+---------
 100000158 | 7,75
 100000339 | 9,76
Demo
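As a side note, a minimal sketch of the same extraction without a wrapper function, using jsonb_to_recordset() (available since PostgreSQL 9.4):

SELECT id_product, d_price
FROM jsonb_to_recordset(
    '[{"id_product": "100000158", "d_price": "7,75"},
      {"id_product": "100000339", "d_price": "9,76"}]'::jsonb
) AS t(id_product VARCHAR(500), d_price VARCHAR(500));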
Here is a solution for those who might want to do something similar:
I changed the input to something like {0: [100000158, 7.76], 1: [100000339, 9.76]}, and the function is:
CREATE OR REPLACE FUNCTION public.check_d_price(
    p_products jsonb)
    RETURNS jsonb
    LANGUAGE 'plpgsql'
    COST 100
    VOLATILE PARALLEL UNSAFE
AS $BODY$
DECLARE
    _key varchar;
    _real_price NUMERIC;
    _bad_price jsonb;
BEGIN
    FOR _key IN (
        SELECT jsonb_object_keys(p_products)
    )
    LOOP
        SELECT ei.price INTO _real_price
        FROM product pr
        JOIN ecom_input ei ON (pr.ecom_id, pr.sku) = (ei.ecom_id, ei.sku_supplier)
        WHERE pr.id = (p_products->_key->>0)::INT;
        IF _real_price <> (p_products->_key->>1)::NUMERIC THEN
            _bad_price = COALESCE(_bad_price, '{}'::jsonb)
                || jsonb_build_object((p_products->_key->>0)::TEXT,
                       jsonb_build_object('d_price', p_products->_key->>1,
                                          'new_price', _real_price));
        END IF;
    END LOOP;
    RETURN _bad_price;
END;
$BODY$;
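A hypothetical invocation (note that jsonb requires the keys to be quoted strings, and the product and ecom_input tables referenced inside the function must exist); it returns NULL when every price matches:

SELECT check_d_price('{"0": [100000158, 7.76], "1": [100000339, 9.76]}'::jsonb);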

How to separate a string and rebuild it

Separating a string list and replacing the same list with new values in MySQL.
I have the following data in my table_1 table:
table_1 (currently saved structure)
code    value
12_A    ["A","B","C","D"]
12_B    ["E","F","G","H"]
12_3    ["I","J","K","L"]
But each code has different values with different descriptions, like:
code    value   description
12_A    A       Apple
12_A    B       Ball
12_A    C       Cat
12_A    D       Dog
12_B    E       Eagle
12_B    F       Flag
.       .       .
.       .       .
.       .       .
I have to separate the value list from table_1 and save it back into the same table, i.e. table_1, in this structure:
code    value
12_A    ["Apple","Ball","Cat","Dog"]
12_B    ["Eagle","Flag",.......]
12_3    [......................]
You can use GROUP_CONCAT():
UPDATE Table1 s
SET s.value = (SELECT CONCAT('["',
                  GROUP_CONCAT(t.description ORDER BY t.description SEPARATOR '","'),
                  '"]')
               FROM Table_With_val t
               WHERE t.code = s.code
                 AND s.value LIKE CONCAT('%"', t.value, '"%'))
You didn't provide any conclusive information; I assumed the second data sample you provided is an existing table (Table_With_val), and Table1 is the table you want to update.
NOTE: This is a bad DB structure! It would most definitely cause problems in the future, especially when you need to make joins. I strongly advise you to normalize your data and store each description and value in its own record.
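For reference, a normalized layout along those lines might look like this (table and column names are illustrative):

CREATE TABLE code_value (
    code        VARCHAR(10) NOT NULL,
    value       CHAR(1)     NOT NULL,
    description VARCHAR(50) NOT NULL,
    PRIMARY KEY (code, value)
);
-- One row per (code, value) pair, e.g. ('12_A', 'A', 'Apple');
-- the JSON-style arrays can then be rebuilt on demand with GROUP_CONCAT().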
Alternatively, you can create a function to which you pass your string list as a parameter; in the case of your example, ["A","B","C","D"] will be the parameter. The function will break down the string and concatenate the descriptions accordingly. An example of such a function is given below:
DELIMITER $$
DROP FUNCTION IF EXISTS codeToDesc$$
CREATE FUNCTION codeToDesc(commaSeperatedCodeList TEXT) RETURNS TEXT CHARSET utf8
BEGIN
    DECLARE finalString TEXT;
    DECLARE inputCodeList TEXT;
    DECLARE codeName VARCHAR(255);
    DECLARE codecount BIGINT(5);
    SET finalString = '';
    -- Strip the JSON-style brackets and quotes: ["A","B","C"] -> A,B,C
    SET inputCodeList = REPLACE(REPLACE(REPLACE(commaSeperatedCodeList,'[',''),']',''),'"','');
    DROP TEMPORARY TABLE IF EXISTS test.code_table;
    DROP TEMPORARY TABLE IF EXISTS test.code_count;
    CREATE TEMPORARY TABLE test.code_table (CODE VARCHAR(255));
    CREATE TEMPORARY TABLE test.code_count (countNo BIGINT(11));
    -- Number of comma-separated items in the list.
    INSERT INTO test.code_count(countNo)
        SELECT (LENGTH(inputCodeList) - LENGTH(REPLACE(inputCodeList,',','')) + 1);
    BEGIN
        DECLARE table_cursor CURSOR FOR SELECT countNo FROM test.code_count;
        DECLARE CONTINUE HANDLER FOR NOT FOUND
            SET codecount = (SELECT countNo FROM test.code_count ORDER BY countNo ASC LIMIT 1);
        OPEN table_cursor;
        readLoop1: LOOP
            FETCH table_cursor INTO codecount;
            IF codecount = 0 THEN
                LEAVE readLoop1;
            END IF;
            -- Peel off the first code and queue the shortened list for the next pass.
            SET codeName = (SELECT SUBSTRING_INDEX(inputCodeList,',',1));
            INSERT INTO test.code_table(CODE) SELECT codeName;
            SET inputCodeList = (SELECT TRIM(BOTH ',' FROM REPLACE(inputCodeList,codeName,'')));
            INSERT INTO test.code_count(countNo) SELECT codecount - 1;
            SET codeName = '';
        END LOOP;
        CLOSE table_cursor;
    END;
    -- Use your own code and description mapping here; I guess those should be fixed.
    SELECT CONCAT('["', REPLACE(GROUP_CONCAT(CASE WHEN CODE = 'A' THEN 'Apple'
                                                  WHEN CODE = 'B' THEN 'Ball'
                                                  WHEN CODE = 'C' THEN 'Cat'
                                                  WHEN CODE = 'D' THEN 'Dog'
                                                  ELSE '' END), ',', '","'), '"]')
        INTO finalString
    FROM test.code_table;
    RETURN finalString;
END$$
DELIMITER ;
Try this, and let me know if any issues occur.
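A hypothetical call with the sample list from the question (the function assumes a test schema exists for its temporary tables):

SELECT codeToDesc('["A","B","C","D"]');
-- should return: ["Apple","Ball","Cat","Dog"]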

Postgres function error

I'm creating a function in Postgres and getting a strange error. What am I doing wrong? I would also like to see your variants of how to do it.
CREATE OR REPLACE FUNCTION export_csv(request TEXT, filename VARCHAR(255))
RETURNS VOID AS
$$
BEGIN
EXECUTE 'COPY (' || request || ') TO "/home/r90t/work/study/etl/postgres_etl/export/' || filename || '" WITH CSV;';
END
$$
LANGUAGE plpgsql;
REQUEST:
SELECT export_csv('SELECT * FROM orders', 'orders.csv')
ERROR:
psql:/tmp/vUp267V/dbext.sql:2: ERROR: syntax error at or near ""/home/r90t/work/study/etl/postgres_etl/export/orders.csv""
LINE 1: COPY (SELECT * FROM orders) TO "/home/r90t/work/study/etl/po...
^
QUERY: COPY (SELECT * FROM orders) TO "/home/r90t/work/study/etl/postgres_etl/export/orders.csv" WITH CSV;
CONTEXT: PL/pgSQL function export_csv(text,character varying) line 3 at EXECUTE statement
Oh boy.....
First, because you COPY TO FILE, your function must run as superuser, and you are inlining a user-supplied SQL query into that command. At least the query must be run by a superuser, since you haven't set SECURITY DEFINER on it. But the whole point of your function is SQL injection, and for very little gain. I get that this is a bit of personal study, but there is nothing to be gained by doing it in a way that would put a business's data at risk in the future.
In particular, I wonder what would happen if I do something like:
SELECT export_csv('SELECT * FROM ORDERS TO STDOUT; DROP DATABASE critical_db; --', 'foo');
or
SELECT export_csv('SELECT * FROM ORDERS', '../../../../../../../var/lib/pgsql/data/log/postgresql-Tue.log');
Really, really, really bad stuff is possible with your function. Don't do it. These risks are contained now, but as soon as someone does the following, you are totally exposed:
ALTER FUNCTION export_csv(text, varchar) SECURITY DEFINER;
A better approach would be to take a single argument that can be quoted and processed in place. Something like:
CREATE OR REPLACE FUNCTION export_csv(relation name, columns name[])
RETURNS VOID AS
$$
DECLARE column_list text;
BEGIN
    -- quote_ident() protects the column and table names against SQL injection.
    SELECT array_to_string(cols, ', ') INTO column_list
    FROM (SELECT array_agg(quote_ident(col)) AS cols
          FROM unnest(columns) col
         ) a;
    EXECUTE 'COPY (SELECT ' || column_list || ' FROM ' || quote_ident(relation) || ')
        TO ' || quote_literal('/home/r90t/work/study/etl/postgres_etl/export/' || relation) || ' WITH CSV;';
END
$$
LANGUAGE plpgsql;
This would give you protection against SQL injection, and if you need the date appended to the end of the filename, you can do that with something inside the quote_literal call.
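Assuming an orders table, a call might look like this (the column names are illustrative):

SELECT export_csv('orders', ARRAY['id', 'customer_id', 'total']::name[]);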

Get max id of all sequences in PostgreSQL

We have a monitor on our databases to check for ids approaching max-int or max-bigint. We just moved from MySQL, and I'm struggling to get a similar check working in PostgreSQL. I'm hoping someone can help.
Here's the query in MySQL
SELECT table_name, auto_increment FROM information_schema.tables WHERE table_schema = DATABASE();
I'm trying to get the same results from PostgreSQL. We found a way to do this with a bunch of calls to the database, checking each table individually.
I'd like to make just 1 call to the database. Here's what I have so far:
CREATE OR REPLACE FUNCTION getAllSeqId() RETURNS SETOF record AS
$body$
DECLARE
    sequence_name varchar(255);
BEGIN
    FOR sequence_name IN SELECT relname FROM pg_class WHERE (relkind = 'S')
    LOOP
        RETURN QUERY EXECUTE 'SELECT last_value FROM ' || sequence_name;
    END LOOP;
    RETURN;
END
$body$
LANGUAGE 'plpgsql';

SELECT last_value FROM getAllSeqId() AS (last_value bigint);
However, I need to somehow add the sequence_name to each record so that I get output in records of [table_name, last_value] or [sequence_name, last_value].
So I'd like to call my function something like this:
SELECT sequence_name, last_value from getAllSeqId() as(sequence_name varchar(255), last_value bigint);
How can I do this?
EDIT: In Ruby, this creates the output we're looking for. As you can see, we're doing one call to get all the sequences, then one call per sequence to get its last value. There's gotta be a better way.
def perform
  find_auto_inc_tables.each do |auto_inc_table|
    check_limit(auto_inc_table, find_curr_auto_inc_id(auto_inc_table))
  end
end

def find_curr_auto_inc_id(table_name)
  ActiveRecord::Base.connection.execute("SELECT last_value FROM #{table_name}").first["last_value"].to_i
end

def find_auto_inc_tables
  ActiveRecord::Base.connection.execute(
    "SELECT c.relname " +
    "FROM pg_class c " +
    "WHERE c.relkind = 'S'").map { |i| i["relname"] }
end
Your function seems quite close already. You'd want to modify it a bit to:
include the sequence names as literals
return a TABLE(...) with typed columns instead of SETOF record, because it's easier for the caller
Here's a revised version:
CREATE OR REPLACE FUNCTION getAllSeqId() RETURNS TABLE(seqname text, val bigint) AS
$body$
DECLARE
    sequence_name varchar(255);
BEGIN
    FOR sequence_name IN SELECT relname FROM pg_class WHERE (relkind = 'S')
    LOOP
        RETURN QUERY EXECUTE 'SELECT ' || quote_literal(sequence_name)
            || '::text, last_value FROM ' || quote_ident(sequence_name);
    END LOOP;
    RETURN;
END
$body$
LANGUAGE 'plpgsql';
Note that currval() is not an option, since it errors out when the sequence has not been set in the same session (by calling nextval(); not sure if there's any other way).
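With the TABLE return type, the call no longer needs a column definition list:

SELECT seqname, val FROM getAllSeqId();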
Would something as simple as this work?
SELECT currval(sequence_name) from information_schema.sequences;
If you have sequences that aren't keys, I guess you could use PG's sequence name generation pattern to try to restrict it.
SELECT currval(sequence_name) from information_schema.sequences
WHERE sequence_name LIKE '%_seq';
If that still yields too many false positives, you can get table names from the information_schema (or the pg_* schemata, which I don't know very well) and refine the LIKE parameter.
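If name matching proves too fragile, here is a hedged sketch using the pg_* catalogs instead: sequences owned by a table column (serial columns, or sequences attached with OWNED BY) have an automatic dependency ('a') recorded in pg_depend, so the owning table can be listed alongside each sequence:

SELECT d.refobjid::regclass AS table_name,
       c.relname            AS sequence_name
FROM pg_class c
JOIN pg_depend d
  ON d.objid = c.oid
 AND d.classid = 'pg_class'::regclass
 AND d.refclassid = 'pg_class'::regclass
 AND d.deptype = 'a'
WHERE c.relkind = 'S';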