Difference between NEW and OLD trigger variables as JSON in pgsql

I am using Postgres 9.3 and implementing audit triggers to log changes in my tables. To know which columns were updated, I need to take a diff between the OLD and NEW trigger variables. I have achieved this using hstore, but hstore converts array-type columns to strings, which needs extra handling. Is there any way to do this using JSON?
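A minimal sketch of the JSON-based approach, assuming PostgreSQL 9.3 (so no jsonb or json_object_agg) and a hypothetical audit_log table; row_to_json() plus json_each() keeps array columns as JSON arrays instead of flattened text:

CREATE OR REPLACE FUNCTION audit_row_changes() RETURNS trigger AS $$
BEGIN
    -- Collect {key, old_value, new_value} for every column whose value changed.
    INSERT INTO audit_log (table_name, changed_at, changes)
    SELECT TG_TABLE_NAME, now(), json_agg(row_to_json(d))
    FROM (
        SELECT n.key,
               o.value AS old_value,   -- value is json, so arrays stay arrays
               n.value AS new_value
        FROM json_each(row_to_json(NEW)) AS n
        JOIN json_each(row_to_json(OLD)) AS o ON o.key = n.key
        WHERE n.value::text IS DISTINCT FROM o.value::text  -- json has no = operator in 9.3
    ) AS d;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Hypothetical wiring:
-- CREATE TRIGGER my_table_audit AFTER UPDATE ON my_table
--     FOR EACH ROW EXECUTE PROCEDURE audit_row_changes();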

Related

Is there a way for SQLDelight to allow an unrecognized expression?

I use SQLDelight's MySQL dialect on my server. Recently I have been planning to migrate a table to combine many fields into a JSON field so the server code no longer needs to know the complex data structure. As part of the migration, I need to do something like this at runtime: when the server sees a client with the new version, it knows the client won't access the old table anymore, so it's safe to migrate the record to the new table.
INSERT OR IGNORE INTO new_table SELECT id, a, b, JSON_OBJECT('c', c, 'd', JSON_OBJECT(…)) FROM old_table WHERE id = ?;
The only problem is that, unlike the SQLite dialect, the MySQL dialect doesn't recognize JSON_OBJECT or other JSON expressions, even though in this case it doesn't have to: no matter how complex the query is, the result is not passed back to Kotlin.
I wish I could add the feature myself, but I'm pretty new to Kotlin. So my question is: is there a way to evade the rigid syntax check? I could also retrieve from the old table, convert the format in Kotlin, and then write to the new table, but that would take hundreds of lines of complex code instead of just one INSERT.
I assume from your links you're on the alpha releases already. In alpha03 you can add currently unsupported behaviour by creating a local SQLDelight module (see this example) and adding JSON_OBJECT to the functionType override. New function types are also one of the easiest things to contribute to SQLDelight, so if you want it in the next release, a pull request would be welcome.
For the record, I ended up using CONCAT with COALESCE as a quick and dirty hack to scrape the fields together as JSON.
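A rough illustration of that workaround, keeping the same shape as the INSERT above (the column names c and d are just placeholders, and the values are not escaped, so it only holds up for data without embedded quotes):

INSERT OR IGNORE INTO new_table
SELECT id, a, b,
       CONCAT('{"c":"', COALESCE(c, ''), '","d":"', COALESCE(d, ''), '"}')
FROM old_table
WHERE id = ?;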

Update and modify data in MySQL, adding new items and removing items that are not in an update patch

From time to time, my system generates JSONs with the current state of the data to be stored in a MySQL DB. As long as these JSONs are small, there is no issue with applying updates by DELETEing the entire current data from the table and INSERTing the data from the JSON.
However, this approach is not suitable if the size of JSONs becomes significant.
Obviously, for the «data to be added» case I can use INSERT; the problems begin when I have to identify removed data. The key idea: if an item is in the JSON, insert it, but if an item already in the DB is no longer in the JSON, it should be removed.
Can the DELETE/INSERT approach be replaced with MySQL built-in merge-like functionality to apply the data update on the DB, or is the only way to implement the merge logic manually?
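MySQL has no single MERGE statement, but the usual manual pattern looks roughly like the sketch below, assuming the incoming JSON has first been loaded into a staging table items_new with the same key as the target table items (both names are made up for illustration):

-- Upsert everything present in the patch:
INSERT INTO items (id, payload)
SELECT id, payload FROM items_new
ON DUPLICATE KEY UPDATE payload = VALUES(payload);

-- Remove rows that are no longer in the patch:
DELETE FROM items
WHERE id NOT IN (SELECT id FROM items_new);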

Can I use Laravel JSON Where Clauses with MariaDB 10.2.16 LongText column?

I tried to add a JSON column to my database using phpMyAdmin, but unfortunately phpMyAdmin converts the JSON column to the LONGTEXT type.
So I'm asking about the ability to use the JSON where clauses with this type:
https://laravel.com/docs/5.7/queries#json-where-clauses
You cannot use those queries on non-JSON data types in MariaDB, and as of 10.2 it doesn't officially support a JSON data type.
You can use the JSON helper functions to query against the data (e.g. WHERE JSON_CONTAINS(...) and others).
You can also create columns that are extracted values from the JSON data using Virtual Columns.
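A rough illustration of both ideas, with a hypothetical users table whose meta column holds JSON text (JSON_CONTAINS and JSON_VALUE are available from MariaDB 10.2.3):

-- Query the LONGTEXT-stored JSON directly:
SELECT * FROM users
WHERE JSON_CONTAINS(meta, '"admin"', '$.roles');

-- Or expose a field as a virtual column and query that:
ALTER TABLE users
    ADD COLUMN city VARCHAR(100) AS (JSON_VALUE(meta, '$.city')) VIRTUAL;
SELECT * FROM users WHERE city = 'Berlin';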
Here's a good post with much more detail.

Insert JSON into Hadoop

I have a lot of data (JSON string) per day (around 150-200B).
I want to insert the JSON into Hadoop; what is the best way to do it (I need fast inserts and fast queries on JSON fields)?
Do I need to use Hive and create an Avro schema for my JSON, or do I need to insert the JSON as a string into a specific column?
If you want to make the data available in Hive to perform mostly aggregations on top of it, I would suggest one of the following methods using Spark.
If you have multi-line JSON files:
val df = spark.read.json(sc.wholeTextFiles("hdfs://your/hdfs/path/*.json").values)
df.write.format("parquet").mode("overwrite").saveAsTable("yourhivedb.tablename")
If you have single-line JSON files:
val df = spark.read.json("hdfs://your/hdfs/path/*.json")
df.write.format("parquet").mode("overwrite").saveAsTable("yourhivedb.tablename")
Spark will automatically infer the table schema for you. If you are using the Cloudera distribution you will be able to read the data using Impala (depending on your Cloudera version it may not support complex structures).
I want to insert the JSON to Hadoop
You just put it in HDFS... Since you have data over a time period, you'll want to create partitions for Hive to read
jsondata/dt=20180619/foo.json
jsondata/dt=20180620/bar.json
Do I need to use hive and create Avro scheme to my JSON?
Nope. Not sure where you got mixed up between Avro and JSON. Now, if you could convert the JSON into defined Avro with a schema, then that would help improve Hive queries since querying structured binary is better than parsing JSON text.
do I need to insert the JSON as a string to a specific column?
Not recommended. You could, but then you cannot query it via Hive's JSON SerDe support.
Don't forget that with the above structure you'll need PARTITIONED BY (dt STRING). And in order for partitions to be created on the table for existing files, you'll need to manually (and daily) run an MSCK REPAIR TABLE command.
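Roughly, the table this implies looks like the sketch below; the table name, the field list, and the choice of the Hive-bundled org.apache.hive.hcatalog.data.JsonSerDe are assumptions for illustration:

CREATE EXTERNAL TABLE jsondata (
    id STRING,
    payload STRING
)
PARTITIONED BY (dt STRING)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '/user/you/jsondata';

-- Run after new dt=YYYYMMDD directories are added:
MSCK REPAIR TABLE jsondata;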
I have JSON as a string (from Kafka)
Don't use Spark for that (at least, don't reinvent the wheel). My suggestion would be to use Confluent's Kafka Connect HDFS connector, which comes with Hive table creation support.

How to save Array in database - Rails

I want to store the array [3,9,21] in the database (MySQL). Rather than saving the values of the array individually, I want to save the whole array in the database. Is it possible?
If you are using MySQL 5.7+ you can; it introduced a JSON data type https://dev.mysql.com/doc/refman/5.7/en/json.html
A quick read about the changes: http://lornajane.net/posts/2016/mysql-5-7-json-features
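For reference, a minimal sketch of what that looks like in plain SQL (the table and column names are made up):

CREATE TABLE readings (
    id INT AUTO_INCREMENT PRIMARY KEY,
    sizes JSON
);
INSERT INTO readings (sizes) VALUES ('[3, 9, 21]');
SELECT JSON_EXTRACT(sizes, '$[1]') FROM readings;  -- returns 9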
PS -- I'm a fan of the comment above -- storing values as separate rows instead of as arrays is a better option