POSTGRESQL VACUUM ANALYZE fails for table with an hstore column - postgresql-8.4

When running VACUUM ANALYZE on a table with an hstore column, I get the following error:
ERROR: could not identify a comparison function for type hstore
Can I force it to complete without a comparison function and if not, how do I define one?
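On 8.4 the hstore type ships without a comparison function, and ANALYZE needs one to compute ordering statistics for the column. One workaround, assuming you don't need planner statistics on that column: set its statistics target to 0 so ANALYZE skips it. A sketch (table and column names are assumptions):

```sql
-- Tell ANALYZE to collect no statistics for the hstore column,
-- so it never needs a comparison function for that type.
ALTER TABLE my_table ALTER COLUMN my_hstore SET STATISTICS 0;

-- This should now complete without the error.
VACUUM ANALYZE my_table;
```

The alternative of defining your own comparison function and btree operator class for hstore is possible but fiddly; newer PostgreSQL releases ship an hstore with comparison support built in, so upgrading may be the simpler fix.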

Related

MySQL - Invalid GIS data provided to function st_polygonfromtext

I have a table in MySQL with geometry data in one of the columns. The datatype is text and I need to save it as Polygon geometry.
I have tried a few solutions, but keep running into the error Invalid GIS data provided to function st_polygonfromtext.
Here's some data to work with and an example:
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=78ac63e16ccb5b1e4012c21809cba5ff
Table has 25k rows, there are likely some bad geometries in there. When I attempt to update on a subset of rows, it seems to successfully work, like it did in the fiddle example. It fails when I attempt to update all 25k rows.
Someone suggested wrapping the statements in a TRY/CATCH construct: Detecting faulty geometry WKT and returning the faulty record
I am not too familiar with using them in MySQL, or with stored procedures either.
I need a spatial index on the table to be able to use spatial functions and filter queries by location.
Plan A: Create a new table and try to convert as you INSERT IGNORE INTO that table from your existing table. I don't know if this will apply the "IGNORE" to conversion failures. Also, you would end up with the "good" values. What do you want to do about the "bad" values?
Plan B: Write a loop in application code -- read one row, convert the varchar value, check for errors.
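A sketch of Plan A, assuming MySQL 8.0 and illustrative table/column names (`geoms`, `wkt_text`); whether INSERT IGNORE downgrades the conversion error to a warning is exactly what you'd need to verify on your version:

```sql
-- New table with a proper geometry column; a SPATIAL index requires
-- the column to be NOT NULL and to carry an SRID attribute.
CREATE TABLE geoms_clean (
  id   INT PRIMARY KEY,
  geom POLYGON NOT NULL SRID 0,
  SPATIAL INDEX (geom)
);

-- Attempt the conversion; IGNORE is meant to skip rows that fail.
INSERT IGNORE INTO geoms_clean
SELECT id, ST_PolygonFromText(wkt_text)
FROM geoms;

-- The "bad" rows are the ones left behind in the source table:
SELECT g.id
FROM geoms g
LEFT JOIN geoms_clean c ON c.id = g.id
WHERE c.id IS NULL;
```

The anti-join at the end answers the "what about the bad values?" question: it gives you the ids to inspect or repair by hand.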

Athena federated queries against Postgres JSONB field

It seems to me that Athena doesn't understand Postgres JSONB fields and treats them as VARCHARs. This means that any query involving JSON path expressions will be executed Athena-side, which in turn means that every row in the database must be sent to Athena for evaluation.
How would it be possible to query Postgres JSONB fields with native Postgres JSON functions instead?
Meaning: how can I make an Athena query that uses native Postgres functions, executed on Postgres, to filter the rows returned?
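If your connector version supports it, Athena's federated query passthrough hands the inner SQL string to Postgres verbatim, so jsonb operators run there and only matching rows come back. A sketch, where the catalog, table, and column names are all assumptions:

```sql
-- Run against the Postgres data source catalog. The inner query string is
-- executed natively by Postgres, including the jsonb containment operator,
-- so filtering happens before rows reach Athena.
SELECT * FROM TABLE(
  system.query(
    query => 'SELECT id, payload FROM events WHERE payload @> ''{"type": "login"}'''
  )
);
```

Without passthrough, the fallback is to filter Athena-side on the VARCHAR representation (e.g. with Athena's own `json_extract`), which carries exactly the full-scan cost described above.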

Is there certain functionality the JDBC driver from MySQL does not have?

I am trying to find the type of data that a column is through the JDBC driver from MySQL.
The query I am trying to execute is SHOW FIELDS FROM PARAMETERS WHERE FIELD='COLUMN_NAME' through Java, and the exception thrown is a java.sql.SQLException with the following error: Before Start of Result set. I am positive I am executing the same query as the one I am doing through SQL. Is there any other way of retrieving the column type with just the column name?
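That error from Connector/J typically means the code reads from the ResultSet before calling rs.next() to advance to the first row, rather than anything missing from the driver. As for another way to get the type: INFORMATION_SCHEMA works fine over JDBC as an ordinary SELECT. A sketch (the table and column names are the ones from the question):

```sql
-- Standard-SQL alternative to SHOW FIELDS; returns the declared type
-- of a single column in the current database.
SELECT DATA_TYPE, COLUMN_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'PARAMETERS'
  AND COLUMN_NAME = 'COLUMN_NAME';
```

The JDBC metadata API (`DatabaseMetaData.getColumns`) is another driver-supported route to the same information without writing SQL at all.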

Difference between NEW and OLD trigger variables as JSON in pgsql

I am using Postgres 9.3 and implementing audit triggers to log changes in my tables. To know which columns were updated I need to take a diff between the OLD and NEW trigger variables. I have achieved it using hstore, but hstore converts array-type columns to strings, which needs extra handling. So any idea how I can do this using JSON?
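A sketch of the JSON-based diff on 9.3, to be run inside the audit trigger function: row_to_json turns each row variable into JSON, and json_each keeps every value as json, so array columns stay JSON arrays instead of being flattened to strings as with hstore. Since the json type has no equality operator in 9.3, the values are compared via their text form:

```sql
-- Inside a trigger function: list columns whose value changed,
-- with old and new values preserved as JSON.
SELECT n.key,
       o.value AS old_value,
       n.value AS new_value
FROM json_each(row_to_json(NEW)) AS n
JOIN json_each(row_to_json(OLD)) AS o USING (key)
WHERE n.value::text IS DISTINCT FROM o.value::text;
```

The result set can then be aggregated (e.g. with json_agg) into a single JSON document for the audit log row.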

How do I convert RDBMS DDL to Hive DDL script

We have large and disparate data sources, including Oracle, DB2, and MySQL. We also need to append a few audit columns at the end.
I came across the following Java class org.apache.sqoop.hive.HiveTypes. I am planning to create a simple interpreter that accepts RDBMS DDL and spits out Hive DDL script. Any pointers on how I can achieve this?
HiveQL is broadly similar to normal RDBMS DDL, but it lacks certain features and therefore does not fully follow ANSI SQL, so there is no automated process to convert it.
You have to try running the SQL queries on Hive and, wherever a query violates Hive's rules, change it accordingly.
For instance, Hive accepts only equality conditions as join conditions, which is not the case in an RDBMS.
To create an interpreter yourself, first list the common differences between RDBMS query constructs and HiveQL constructs. Whenever you encounter an RDBMS construct that, according to your list, will fail in Hive, the query gets rebuilt per Hive's rules. This replacement logic has to be coded.
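As a concrete illustration of the mapping such an interpreter would perform, here is an Oracle-flavoured source DDL next to a Hive DDL sketch with audit columns appended; the type choices follow the style of Sqoop's HiveTypes mapping, and the table, column, and storage-format names are assumptions:

```sql
-- RDBMS source DDL (illustrative)
CREATE TABLE params (
  id      NUMBER(10),
  name    VARCHAR2(100),
  updated DATE
);

-- Generated Hive DDL sketch
CREATE TABLE params (
  id        INT,        -- NUMBER(10) narrowed to a Hive integer type
  name      STRING,     -- Hive has no length-bounded VARCHAR2 equivalent here
  updated   STRING,     -- Sqoop's HiveTypes historically maps dates to STRING
  audit_src STRING,     -- assumed audit column
  audit_ts  STRING      -- assumed audit column
)
STORED AS ORC;
```

The interpreter's core is essentially this lookup table of type substitutions, plus the construct-level rewrites (such as the equi-join restriction) described above.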