Postgresql test JSON and delete

I have a PGSQL database with a table that contains a column containing JSON data along the lines of
{"kind":2,"msgid":102}
{"kind":99,"pid":"39s-8KeH306vhjzNta3Yrg,,","msgid":101}
...
Is it possible to write and execute a DELETE statement along the lines of
DELETE FROM table WHERE data.kind = '99' AND data.pid = '39s-8KeH306vhjzNta3Yrg,,'?
where data happens to be the name of that particular column. I tried the above and got the error
missing FROM-clause entry for table "data"
i.e. PGSQL is interpreting data as a table name. Clearly, the required syntax is different. I'd be grateful to anyone who might be able to tell me what to do here.

assuming you have:
t=# with c(j) as (values('{"kind":99,"pid":"39s-8KeH306vhjzNta3Yrg,,","msgid":101}'::json))
select * from c where j->>'kind' = '99' and j->>'pid' = '39s-8KeH306vhjzNta3Yrg,,';
j
----------------------------------------------------------
{"kind":99,"pid":"39s-8KeH306vhjzNta3Yrg,,","msgid":101}
(1 row)
then your statement will be:
delete from table where data->>'kind' = '99' and data->>'pid' = '39s-8KeH306vhjzNta3Yrg,,';
Check the JSON operators here: https://www.postgresql.org/docs/current/static/functions-json.html
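As a side note, on Postgres 9.4+ the jsonb containment operator @> can express both conditions in a single predicate; a minimal sketch, assuming data is of type json and therefore needs a cast:
delete from table where data::jsonb @> '{"kind":99,"pid":"39s-8KeH306vhjzNta3Yrg,,"}';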

Related

SELECT statement inside a CASE statement in SNOWFLAKE

I have a query where "TEST"."TABLE" is LEFT JOINed to PUBLIC."SchemaKey". Now, in my final select statement, I have a case statement where I check if c."Type" = 'FOREIGN'; if so, I want to grab a value from another table, but the table name I am using in that inner select statement comes from a column value of the left-joined table. I've tried multiple ways to get this to work but I keep getting an error, although if I hard-code the table name it seems to work. I need the table name to come from c."FullParentTableName". Is what I am trying to achieve possible in Snowflake, and is there a way to make this work? Any help would be appreciated!
SELECT
c."ParentColumn",
c."FullParentTableName",
a."new_value",
a."column_name",
CASE WHEN c."Type" = 'FOREIGN' THEN (SELECT "Name" FROM TABLE(c."FullParentTableName") WHERE "Id" = 'SOME_ID') ELSE null END "TestColumn" -- Need assistance on this line...
FROM "TEST"."TABLE" a
LEFT JOIN (
select s."Type", s."ParentSchema", s."ParentTable", s."ParentColumn", concat(s."ParentSchema",'.','"',s."ParentTable",'"') "FullParentTableName",s."ChildSchema", s."ChildTable", trim(s."ChildColumn",'"') "ChildColumn"
from PUBLIC."SchemaKey" as s
where s."Type" = 'FOREIGN'
and s."ChildTable" = 'SOMETABLENAME'
and "ChildSchema" = 'SOMESCHEMANAME'
) c
on a."column_name" = c."ChildColumn"
Thanks !
In Snowflake you cannot dynamically use partial results as table names.
You can bind a single value to a table name via IDENTIFIER().
Alternatively, you could write a Snowflake Scripting block, but it would still need to explicitly join the N tables; thus, if your N is fixed, you should just join those tables directly.
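A minimal sketch of the IDENTIFIER() approach, assuming the parent table name is first fetched into a session variable (the variable name and the literal table name here are hypothetical):
-- e.g. looked up from PUBLIC."SchemaKey" beforehand
SET parent_table = 'PUBLIC."SomeParentTable"';
SELECT "Name"
FROM IDENTIFIER($parent_table)
WHERE "Id" = 'SOME_ID';
Because the variable can hold only one value, this works when the lookup resolves to a single parent table, which is exactly the limitation described above.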

Getting the input List of a WHERE IN Filter for SQL QUERY From A File or a Local Table

I have a simple SQL Query. However, the query has a where filter which takes a list.
The list contains at least 2000 items and it is becoming extremely inconvenient to put the long list into the query itself.
I was trying to find out whether I can create a table/file and reference it in the query instead.
EXAMPLE CODE:
Select * from Table_XXXX where aa = 'yy' and date > zzz and mylist = [..............]
So instead of the list above, I would like to reference the file (stored locally) in which the elements of the list reside, or a table (local, not in the database) in which the elements are in a column...
Any help will be appreciated.
First you would store the contents of the file/list in a table. After that you can use an IN condition:
create table mylist(x int);
insert into mylist values(<all values in your file>);
select *
from Table_XXXX tt
where tt.aa = 'yy'
and tt.date > zzz
and tt.mylist IN (select x from mylist)
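As for getting the 2000+ values into that table, the exact loading command depends on your database. A minimal sketch for PostgreSQL, assuming a one-value-per-line file at the hypothetical path /tmp/mylist.txt:
COPY mylist (x) FROM '/tmp/mylist.txt';
(COPY reads a server-side file; from psql you would use \copy for a client-side file. Other databases have equivalents such as MySQL's LOAD DATA INFILE.)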

Postgresql update json data property

I created a field named result of type text. I just want to update the 'lat' value in the column. When I use this query I get a syntax error. What can I do?
The column data is
"{"lat":"48.00855","lng":"58.97342","referer":"https:\/\/abc.com\/index.php"}"
Query is
update public.log set (result::json)->>'lat'=123 where id=6848202
Syntax error is
ERROR: syntax error at or near "::"
Use the jsonb concatenation operator (Postgres 9.5+):
update log
set result = result::jsonb || '{"lat":"123"}'
where id = 6848202
In Postgres 9.4 use json_each() and json_object_agg() (because jsonb_object_agg() does not exist in 9.4).
update log
set result = (
  select json_object_agg(key, case key when 'lat' then '123' else value end)
  from json_each(result::json)
)
where id = 6848202
Both solutions assume that the json column is not null. If it does not contain the lat key, the first query will create it but the second will not.
In PostgreSQL 13, you can:
update public.log set result = jsonb_set(result::jsonb, '{lat}', '"123"') where id = 6848202;
In case the column is still null, you can use coalesce. The answer is provided here: PostgreSQL 9.5 - update doesn't work when merging NULL with JSON
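A minimal sketch of that combination, assuming (per the question) the column is of type text and may be null, hence the cast and the '{}' fallback:
update public.log
set result = jsonb_set(coalesce(result, '{}')::jsonb, '{lat}', '"123"')
where id = 6848202;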
I also tried to update a JSON value in a json-typed field, but couldn't find an appropriate example. So I connected to the Postgres DB using pgAdmin 4, opened the desired table and changed the desired field's value, then looked at Query History to see what command it used to make the change.
So, finally, I ended up with the following simple Python code for my own purposes to update a JSON field in a Postgres DB:
import psycopg2

# Connect to the local database (credentials as in the original example)
conn = psycopg2.connect(host='localhost', dbname='mydbname', user='myusername', password='mypass', port='5432')
cur = conn.cursor()
# Overwrite the whole json value for the chosen row
cur.execute("UPDATE public.mytable SET options = '{\"credentials\": \"required\", \"users\": [{\"name\": \"user1\", \"type\": \"string\"}]}'::json WHERE id = 8;")
conn.commit()  # equivalent to the recorded cur.execute("COMMIT"), but the idiomatic way

MySQL: How do I use LOAD DATA INFILE and replace existing rows' fields only if the field in the file is not empty

I have a file with some empty fields like this:
(the first column being the primary key: a1, b1, b2)
a1,b,,d,e
b1,c,,,,e
b2,c,c,,
I already have a row present in the table like
a1,c,f,d,e
Now for this key a1, using the REPLACE option with LOAD DATA INFILE, I want the final output to be:
a1,b,f,d,e
Here c in the second column has been replaced by b,
but f has not been replaced by an empty string.
To make it clear: replace a field if an actual value is present in the file;
if an empty field is present, retain the old value.
Let's consider 2 tables, each having 5 columns:
in table t1 the columns are c1, c2, c3, c4, c5
in table t2 the columns are d1, d2, d3, d4, d5
So the query will become like this:
select c1 as e1,
ifnull(c2,d2) as e2,
ifnull(c3,d3) as e3,
ifnull(c4,d4) as e4,
ifnull(c5,d5) as e5
from t1
inner join t2 on c1 = d1;
Hope it will be helpful to you.
Please try the following...
CREATE TABLE tempTblDataIn LIKE tblTable;
/* Read your data into tempTblDataIn here */
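/* A hedged sketch of that loading step, assuming a comma-separated file at the
   hypothetical path '/path/to/data.csv' with the key followed by four fields.
   NULLIF() turns the file's empty fields into NULL, so the COALESCE() calls
   below keep the existing values for those fields. */
LOAD DATA INFILE '/path/to/data.csv'
INTO TABLE tempTblDataIn
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
( fldID, @field1, @field2, @field3, @field4 )
SET fldField1 = NULLIF( @field1, '' ),
    fldField2 = NULLIF( @field2, '' ),
    fldField3 = NULLIF( @field3, '' ),
    fldField4 = NULLIF( @field4, '' );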
UPDATE tblTableName
JOIN tempTblDataIn ON tblTableName.fldID = tempTblDataIn.fldID
SET tblTableName.fldField1 = COALESCE( tempTblDataIn.fldField1, tblTableName.fldField1 ),
tblTableName.fldField2 = COALESCE( tempTblDataIn.fldField2, tblTableName.fldField2 ),
tblTableName.fldField3 = COALESCE( tempTblDataIn.fldField3, tblTableName.fldField3 ),
tblTableName.fldField4 = COALESCE( tempTblDataIn.fldField4, tblTableName.fldField4 );
DROP TABLE tempTblDataIn;
This Answer is based on Eric's Answer at MySQL - UPDATE query based on SELECT Query.
It is also based on the assumption that the data file will contain update data only rather than update data and new records.
Yes, you will need a COALESCE() line for each field, and you will probably have to code each line yourself. You could use a PROCEDURE to programmatically produce the above statements if there are many fields with a repeated structure, but you may find the above simpler.
If you have any questions or comments, then please feel free to post a Comment accordingly.
Further Reading
https://dev.mysql.com/doc/refman/5.7/en/create-table-like.html (on MySQL's CREATE TABLE ... LIKE)
https://dev.mysql.com/doc/refman/5.7/en/update.html (on MySQL's UPDATE statement)

MySQL: Updating a column based on a select statement with joins

I've got a select statement that returns a list of all items in a document store that have comments (stored in a separate comments table).
What I'm trying to do is to update the value of another column in public_document_store (skin_id) for all the documents that have released comments, based on the statement below.
This returns the records I want to update:
SELECT public_document_store_talkback.document_id,
public_document_store.section_id
FROM public_document_store
INNER JOIN public_document_store_talkback ON public_document_store_talkback.document_id = public_document_store.document_id
WHERE public_document_store_talkback.is_released = 1
AND public_document_store_talkback.is_rejected = 0
AND public_document_store.section_id = 10;
I've tried to update the skin_id field like this:
Update public_document_store SET skin_id = 6
WHERE document_id IN (Select... [as per the statement above] )
But this returns an error:
[Err] 1241 - Operand should contain 1 column(s)
I've tried various other permutations based on other answers here, but without any luck. (My knowledge of SQL is pretty basic, so apologies if I am missing something obvious here.)
Any ideas how I can make this work would be much appreciated.
The error arises because your subquery returns two columns while IN expects only one. Your SELECT query needs only a little modification to convert it into an UPDATE statement:
UPDATE public_document_store a
INNER JOIN public_document_store_talkback b
ON b.document_id = a.document_id
SET a.skin_id = 6
WHERE b.is_released = 1 AND
b.is_rejected = 0 AND
a.section_id = 10