Building a geo_point object during migration from MySQL to Elasticsearch using Logstash

I'm struggling to provide geo_point as an object in one of my indexes.
The migration works smoothly for other formats (geohash, string, etc.), but the developer specifically requested an object there.
I'm using Logstash/Elasticsearch 7.6 with the MySQL 8.0 JDBC driver on CentOS 7. The index has a template that maps this specific column as geo_point.
The MySQL query uses JSON_OBJECT to build it (I have tried different approaches with CONCAT() as well):
JSON_OBJECT('lat', latitude, 'lon', longitude) as geo_point
The data is there (no issue with nulls), but I receive an error like this:
[2020-07-27T17:29:14,978][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"264a4b7994a2867fe1bfba7a3fbaa0ba", :_index=>"elastic_index", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x22952f4a>], :response=>{"index"=>{"_index"=>"live_listings-pub_1_test_200725", "_type"=>"_doc", "_id"=>"264a4b7994a2867fe1bfba7a3fbaa0ba", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [geo_point] of type [geo_point]", "caused_by"=>{"type"=>"parse_exception", "reason"=>"latitude must be a number"}}}}}
Is there a way to do this in the MySQL query, or is there another way, perhaps with Logstash's filter functionality?
I'd appreciate any help or a pointer in the right direction.
Thanks.
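For what it's worth, one thing to try on the filter side: if the JDBC input hands the JSON_OBJECT() result over as a plain string rather than a parsed object, a json filter should be able to turn it back into an object before indexing. A minimal sketch (assuming the field arrives under the name geo_point, matching the column alias above):

filter {
  # Parse the JSON string produced by JSON_OBJECT() into a real object
  # so the geo_point mapping receives {"lat": ..., "lon": ...}
  json {
    source => "geo_point"
    target => "geo_point"
  }
}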

Related

Prisma Unsupported("point") MySql Approach

My location column uses the MySQL Point data type, and I'm using Apollo Server and Prisma. When I run "npx prisma db pull", the generated schema marks the column as Unsupported("point"), because the type is not currently supported by Prisma.
So I thought, "OK, I'll use String and manage how to insert this data type myself", and changed the schema accordingly, but surprise, that didn't work either. I've tried to find any approach for handling the MySQL Point data type in Prisma, but found no information at all. I'd really appreciate any ideas.
You cannot convert it to String and use it that way, as the type isn't supported yet. You need to leave it as Unsupported, and you can only add data via raw queries.
For now, only adding data is supported; you cannot query for it using the generated Prisma Client API.
That said, you can query the data with Prisma Client via raw queries, e.g. SELECT id, ST_AsText(geom) AS geom FROM training_data, where the geom column has data type geometry and is mapped as Unsupported("geometry").
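A minimal sketch of such raw queries with Prisma Client (TypeScript; the table and column names are taken from the example above, so treat the exact schema as an assumption):

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // Insert a geometry value via a raw statement (ST_GeomFromText is a MySQL spatial function)
  await prisma.$executeRaw`INSERT INTO training_data (geom) VALUES (ST_GeomFromText('POINT(1 1)'))`

  // Read it back as WKT text, since an Unsupported("geometry") field
  // cannot be selected through the generated client API
  const rows = await prisma.$queryRaw`SELECT id, ST_AsText(geom) AS geom FROM training_data`
  console.log(rows)
}

main().finally(() => prisma.$disconnect())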

Can store and retrieve an object with a Django BinaryField in SQLite but not MySQL

I have an implementation that is best served by pickling a pandas DataFrame and storing it in a database.
This works fine if the database is SQLite, but fails with a load error when it is MySQL.
I have found other people with similar issues on Stack Overflow and Google, but it seems everybody's solution is to use SQL to store the DataFrame.
As a last resort I would go down that route, but it would be a shame to do that for this use case.
Does anybody have a solution to get the same behaviour from MySQL as from SQLite here?
I simply dump the dataframe with
pickledframe = pickle.dumps(frame)
and store pickledframe as a BinaryField
pickledframe = models.BinaryField(null=True)
I load it in with
unpickled = pickle.loads(pickledframe)
With SQLite it works fine; with MySQL I get
Exception Type: UnpicklingError
Exception Value: invalid load key, ','.
upon trying to load it.
Thanks
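For reference, a self-contained sketch of the setup described above (meant to live in an app's models.py; the model name is illustrative, and the bytes() normalisation is only an assumption worth checking, since some MySQL backends may hand the column back as a memoryview rather than bytes):

import pickle

import pandas as pd
from django.db import models


class FrameStore(models.Model):
    # Raw pickle bytes of the DataFrame
    pickledframe = models.BinaryField(null=True)


# Storing:
frame = pd.DataFrame({"a": [1, 2, 3]})
obj = FrameStore.objects.create(pickledframe=pickle.dumps(frame))

# Loading:
obj.refresh_from_db()
unpickled = pickle.loads(bytes(obj.pickledframe))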

Read/Write json objects from Postgres database through cache

I'm working with read/write-through caching via Apache Ignite and have encountered the following problem:
Recent versions of Postgres have a dedicated json data type and the jsonb variant for working with JSON in the database.
As far as I know, these functions are not implemented in Apache Ignite. Moreover, while working on this part I found that it is possible to read json from the database as a PGobject, but there is no way to add jsonb processing to the built-in SQL query parser.
For example, I try to send the following query:
SELECT jdata->>'tag1' FROM jsontest;
And I get this exception:
Syntax error in SQL statement "SELECT JDATA-[*]>>'tag1' FROM JSONTEST; "; SQL statement:
SELECT jdata->>'tag1' FROM jsontest; [42000-195]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.message.DbException.getSyntaxError(DbException.java:191)
at org.h2.command.Parser.getSyntaxError(Parser.java:533)
at org.h2.command.Parser.getSpecialType(Parser.java:3842)
at org.h2.command.Parser.read(Parser.java:3352)
at org.h2.command.Parser.readIf(Parser.java:3259)
at org.h2.command.Parser.readSum(Parser.java:2375)
at org.h2.command.Parser.readConcat(Parser.java:2341)
at org.h2.command.Parser.readCondition(Parser.java:2172)
at org.h2.command.Parser.readAnd(Parser.java:2144)
at org.h2.command.Parser.readExpression(Parser.java:2136)
at org.h2.command.Parser.parseSelectSimpleSelectPart(Parser.java:2047)
at org.h2.command.Parser.parseSelectSimple(Parser.java:2079)
at org.h2.command.Parser.parseSelectSub(Parser.java:1934)
at org.h2.command.Parser.parseSelectUnion(Parser.java:1749)
at org.h2.command.Parser.parseSelect(Parser.java:1737)
at org.h2.command.Parser.parsePrepared(Parser.java:448)
at org.h2.command.Parser.parse(Parser.java:320)
at org.h2.command.Parser.parse(Parser.java:296)
at org.h2.command.Parser.prepareCommand(Parser.java:257)
at org.h2.engine.Session.prepareLocal(Session.java:573)
at org.h2.engine.Session.prepareCommand(Session.java:514)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1204)
at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:73)
at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:288)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatement(IgniteH2Indexing.java:402)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1365)
... 9 more
When I look into the structure of the Ignite or H2 database engine, I see no way to add processing for such queries.
So, has anyone met this or a similar problem and can advise how to solve it?
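For context, reading the jsonb column directly over JDBC (outside Ignite's H2-based SQL layer) is possible, as mentioned above; a minimal sketch against the jsontest table (connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.postgresql.util.PGobject;

public class JsonbReadSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT jdata->>'tag1' AS tag1, jdata FROM jsontest");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                String tag1 = rs.getString("tag1");              // extracted JSON field as text
                PGobject raw = (PGobject) rs.getObject("jdata"); // whole jsonb value as PGobject
                System.out.println(tag1 + " / " + raw.getValue());
            }
        }
    }
}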

Kafka Connect with MongoDB Connector

I have tried to use Apache Kafka Connect with the MongoDB sink connector.
When I used the Avro format it worked, except for one issue: I had to create the topic with a single partition, because the connector uses record.kafkaOffset() as the _id of the new Mongo record (so with multiple partitions I would get the same id for different records).
How can I fix that?
I would also like to test it with JSON, so I created a new topic for that and changed the converter configuration to JsonConverter. When I run it I get the following error:
java.lang.ClassCastException: java.util.HashMap cannot be cast to org.apache.kafka.connect.data.Struct
at org.apache.kafka.connect.mongodb.MongodbSinkTask.put(MongodbSinkTask.java:106)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:280)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:176)
at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(WorkerSinkTaskThread.java:90)
at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.execute(WorkerSinkTaskThread.java:58)
at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
Does this connector work with JSON?
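The ClassCastException suggests the sink expects a Connect Struct, and JsonConverter only produces Structs when schemas are enabled and each message carries a schema/payload envelope. A hedged worker-configuration sketch (whether this particular MongoDB connector then accepts the records is untested):

key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true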

Alternative solution to save as the JSON data type in Postgres with spring-boot + eclipselink

I am using EclipseLink 2.6 with Spring Boot JPA for persistence in Postgres.
I am persisting a List of objects as a JSON column in the database. Following this solution: eclipselink + @convert(json) + postgres + list property,
I am able to save the data in Postgres.
When the column is null, I get this exception:
Caused by: org.postgresql.util.PSQLException: ERROR: column "sample_column" is of type json but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
I can solve this issue with this answer:
Writing to JSON column of Postgres database using Spring / JPA
Q1: Is there an alternative solution other than setting the property stringtype=unspecified in the URL: spring.datasource.url=jdbc:postgresql://localhost:5432/dbname?stringtype=unspecified
Q2: If not, how can I set stringtype=unspecified in Spring Boot's application.properties rather than embedding it in spring.datasource.url?
The answer is yes, but it is implementation-specific.
For example, with the Tomcat connection pool this attribute is called connectionProperties; you would therefore write:
spring.datasource.tomcat.connection-properties: stringtype=unspecified
From the Spring Boot documentation:
It is also possible to fine-tune implementation-specific settings using their respective prefix (spring.datasource.tomcat.*, spring.datasource.hikari.*, spring.datasource.dbcp.* and spring.datasource.dbcp2.*). Refer to the documentation of the connection pool implementation you are using for more details.
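For instance, with HikariCP (the default pool in newer Spring Boot versions) the same driver property can be passed through Hikari's data-source properties, assuming HikariCP is the pool in use:

spring.datasource.hikari.data-source-properties.stringtype=unspecified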
If you are using Spring Boot 1.4.1 or above, add a data.sql file to the resources folder:
-- In the H2 database, create the column with the 'OTHER' data type
-- if H2 fails to create a column with the 'JSON' data type.
CREATE DOMAIN IF NOT EXISTS JSON AS OTHER;
This .sql file will be executed during startup of your application, and the 'json' columns will then be created with H2's OTHER data type.