Prisma Unsupported("point") MySQL Approach

So I have my location column using the Point data type. I'm using Apollo Server and Prisma, and when I run "npx prisma db pull" it generates this column as Unsupported("point"), because Point is not currently supported by Prisma.
So I thought, "OK, I'll use String and manage inserting this data type myself," and changed the schema accordingly. Surprise: it didn't work. I've tried to find any approach to handling the MySQL Point data type in Prisma, but there's no info whatsoever. I'd really appreciate any ideas.

You cannot convert it to String and use it that way, as the Point type isn't supported yet. You need to leave it as Unsupported, and you can only add data via raw queries.
For now, only adding data is supported. You cannot query it using Prisma Client's normal API.

We can in fact query the data with Prisma Client via raw queries, e.g. SELECT id, ST_AsText(geom) AS geom FROM training_data, where geom has data type geometry and is mapped as Unsupported("geometry").
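
For concreteness, here is a minimal sketch of both raw queries, assuming a training_data table whose geom column was pulled in as Unsupported("geometry") (in Prisma Client these would run through prisma.$executeRaw and prisma.$queryRaw respectively; the table, column, and coordinates are illustrative):

-- Writing: build the geometry value from WKT on the MySQL side.
INSERT INTO training_data (id, geom)
VALUES (1, ST_GeomFromText('POINT(40.4168 -3.7038)'));

-- Reading: convert back to text, since Prisma cannot deserialize raw geometry.
SELECT id, ST_AsText(geom) AS geom FROM training_data;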

Related

Building geo_point object during migration from MySQL to Elastic using Logstash

I'm struggling to provide geo_point as an object in one of my indexes.
Migration works smoothly for other types (geohash, string, etc.), but the developer specifically requested an object there.
I'm using Logstash/Elastic 7.6 with the MySQL 8.0 JDBC driver on CentOS 7. The index has a template mapping this specific column as geo_point.
The query in MySQL uses JSON_OBJECT to build it (but I tried different approaches with CONCAT() as well):
JSON_OBJECT('lat', latitude, 'lon', longitude) as geo_point
The data is there (no issue with nulls), but I receive an error like this:
[2020-07-27T17:29:14,978][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"264a4b7994a2867fe1bfba7a3fbaa0ba", :_index=>"elastic_index", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x22952f4a>], :response=>{"index"=>{"_index"=>"live_listings-pub_1_test_200725", "_type"=>"_doc", "_id"=>"264a4b7994a2867fe1bfba7a3fbaa0ba", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [geo_point] of type [geo_point]", "caused_by"=>{"type"=>"parse_exception", "reason"=>"latitude must be a number"}}}}}
Is there a way to do it in the MySQL query, or is there another way, maybe with Logstash filter functionality?
I would appreciate any help, or a pointer in the right direction.
Thanks.
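
Not a confirmed fix, but one hedged possibility: "latitude must be a number" is the error Elasticsearch gives when lat/lon arrive as strings rather than numbers, so forcing numeric types on the MySQL side may help (column names taken from the question):

-- Cast explicitly so the values serialize as JSON numbers, not strings.
JSON_OBJECT('lat', CAST(latitude AS DECIMAL(10, 6)),
            'lon', CAST(longitude AS DECIMAL(10, 6))) as geo_point

If the JDBC input still delivers the whole object as one JSON string, a Logstash json filter on that field is the usual way to turn it back into an object.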

Can store and retrieve Object Django BinaryField in SQLite but not MySQL

I have some implementation that is best served by pickling a pandas dataframe and storing it in a DB.
This works fine if the database is SQLite, but it fails with a load error when it is MySQL.
I have found other people with similar issues on Stack Overflow and Google, but it seems everybody's solution is to store the dataframe via SQL instead.
As a last resort I would go down that route, but it would be a shame for this use case.
Anybody got a solution to get the same behaviour from MySQL as from SQLite here?
I simply dump the dataframe with
pickledframe = pickle.dumps(frame)
and store pickledframe as a BinaryField
pickledframe = models.BinaryField(null=True)
I load it in with
unpickled = pickle.loads(pickledframe)
With SQLite it works fine; with MySQL I get
Exception Type: UnpicklingError
Exception Value: invalid load key, ','.
upon trying to load it.
Thanks
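
Not an answer, but a hedged first check, assuming the bytes are being truncated or mangled in transit. The table name below follows Django's default <app>_<model> naming and is an assumption; compare the stored length against len(pickle.dumps(frame)) computed in Python:

-- A mismatch here means the bytes were truncated or altered in transit.
SELECT id, LENGTH(pickledframe) FROM myapp_mymodel;

-- Pickles larger than this server limit can be rejected or truncated.
SHOW VARIABLES LIKE 'max_allowed_packet';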

Read/Write json objects from Postgres database through cache

I'm working with read/write-through caching via Apache Ignite and have encountered the following problem:
In the latest versions of Postgres there is a special json data type, plus jsonb, for working with JSON in the database.
As far as I know, these functions are not implemented in Apache Ignite. Moreover, while trying to do this part, I found it is possible to read JSON from the database as a PGobject, but there is no way to add jsonb processing to the built-in SQL query parser.
For example, I try to send the following query:
SELECT jdata->>'tag1' FROM jsontest;
And get exception:
Syntax error in SQL statement "SELECT JDATA-[*]>>'tag1' FROM JSONTEST; "; SQL statement:
SELECT jdata->>'tag1' FROM jsontest; [42000-195]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.message.DbException.getSyntaxError(DbException.java:191)
at org.h2.command.Parser.getSyntaxError(Parser.java:533)
at org.h2.command.Parser.getSpecialType(Parser.java:3842)
at org.h2.command.Parser.read(Parser.java:3352)
at org.h2.command.Parser.readIf(Parser.java:3259)
at org.h2.command.Parser.readSum(Parser.java:2375)
at org.h2.command.Parser.readConcat(Parser.java:2341)
at org.h2.command.Parser.readCondition(Parser.java:2172)
at org.h2.command.Parser.readAnd(Parser.java:2144)
at org.h2.command.Parser.readExpression(Parser.java:2136)
at org.h2.command.Parser.parseSelectSimpleSelectPart(Parser.java:2047)
at org.h2.command.Parser.parseSelectSimple(Parser.java:2079)
at org.h2.command.Parser.parseSelectSub(Parser.java:1934)
at org.h2.command.Parser.parseSelectUnion(Parser.java:1749)
at org.h2.command.Parser.parseSelect(Parser.java:1737)
at org.h2.command.Parser.parsePrepared(Parser.java:448)
at org.h2.command.Parser.parse(Parser.java:320)
at org.h2.command.Parser.parse(Parser.java:296)
at org.h2.command.Parser.prepareCommand(Parser.java:257)
at org.h2.engine.Session.prepareLocal(Session.java:573)
at org.h2.engine.Session.prepareCommand(Session.java:514)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1204)
at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:73)
at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:288)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatement(IgniteH2Indexing.java:402)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1365)
... 9 more
And when I try to examine the structure of the Ignite or H2 database engine, I see no way to add processing for such queries.
So, maybe someone has met this or a similar problem and can advise how to solve it?
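
For what it's worth, the stack trace shows the failure is in the parser, not in Postgres: Ignite hands the statement to H2 (IgniteH2Indexing), and H2 has no ->> operator. Outside Ignite, both the operator and its function form work against Postgres directly (a sketch assuming jdata is jsonb, as in the question):

-- Both forms work in plain Postgres, but not through Ignite's H2-based SQL layer.
SELECT jdata->>'tag1' FROM jsontest;
SELECT jsonb_extract_path_text(jdata, 'tag1') FROM jsontest;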

Using "Point"-Datatype in Phinx-Migration (in CakePhp)

I'm creating an API for POIs and use the POINT type to store the coordinates.
As my company uses CakePHP, I have to write a migration script with Phinx.
But I don't have any idea how to correctly create a column with the POINT type.
Sure, I could just write a handwritten "ALTER TABLE ..." query, but maybe there is a better way?
Versions:
Cake: 3.4.7
Phinx: 0.6.5
MySQL: 5.7.18
Phinx does not provide an adapter for POINT yet.
You should create your query manually.
See also Unable to seed data with POINT datatype #999
Just use "point" as you would use any other datatype as the second parameter of addColumn().
It's just not documented yet.
Credits for this solution are going to #ndm;
I just think it's worth putting this as answer instead as a comment.
Looks like Phinx supports point types for quite some time now (the docs are not up to date)... try to use \Phinx\Db\Adapter\AdapterInterface::PHINX_TYPE_POINT as the type
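
For reference, the handwritten fallback mentioned in the question would be a raw statement along these lines (table and column names are assumptions):

-- MySQL 5.7: add a spatial column by hand if the adapter route fails.
ALTER TABLE pois ADD COLUMN coordinates POINT NOT NULL;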

SAS pass through - Extract from MySQL does not work

I'm trying to build a Data Integration job that uses pass-through to extract data from a view in a MySQL database.
We've been using pass-through a lot in the project, mostly extracting data from Redshift;
however, with MySQL I was not able to make it work properly.
It keeps complaining that a table is missing, even though when pass-through is off, the view is found and the data is extracted...
I've tried every trick I know, from enabling case-sensitive DBMS object names to manually removing single/double quotes from the statement, just in case MySQL confuses it with something else...
No luck.
The ODBC driver is [MySQL][ODBC 5.3(a) Driver][mysqld-5.5.53].
Running in a Windows environment.
Any idea how to solve this?
Thank you in advance.
EDIT
So, first of all, one correction (even though not that important): I extract from a view, not a table.
This is the code generated by the SAS Create Table transformation with pass-through enabled. I've only put an asterisk in place of the full list of columns:
proc sql;
connect to ODBC
(
READBUFF=10000 DATASRC="cmp.web_api" AUTHDOMAIN="MYSQL_CMP_Auth"
);
create table work."W7ZZZKOC"n as
select
*
from connection to ODBC
(
select
V_BI_ACCOUNT.ACCOUNT_NAME,
V_BI_ACCOUNT.ACQUISITION_SOURCE__C,
V_BI_ACCOUNT.ZUORA__ACTIVE__C,
V_BI_ACCOUNT.ADDRESS_LINE_1__C,
V_BI_ACCOUNT.ADDRESS_LINE_2__C,
V_BI_ACCOUNT.ADDRESS_LINE_3__C,
V_BI_ACCOUNT.AGREEMENT_DATE,
V_BI_ACCOUNT.AGREEMENT_LEGAL_CLAUSE_1__C,
V_BI_ACCOUNT.AGREEMENT_LEGAL_CLAUSE_2__C,
V_BI_ACCOUNT.PERSONBIRTHDATE,
V_BI_ACCOUNT.BLOCKED_REASON__C,
V_BI_ACCOUNT.BRAND__C,
V_BI_ACCOUNT.CPN__C,
V_BI_ACCOUNT.ACCCREATEDBYID,
V_BI_ACCOUNT.ACCCREATEDDATE,
V_BI_ACCOUNT.CURRENCY_PREFERENCE__C,
V_BI_ACCOUNT.CUSTOMER_FULL_NAME__PC,
V_BI_ACCOUNT.ACCOUNTID,
V_BI_ACCOUNT.ZUORA__CUSTOMERPRIORITY__C,
V_BI_ACCOUNT.DELIVERY_SALUTATION__C,
V_BI_ACCOUNT.DISPLAY_NAME,
V_BI_ACCOUNT.PERSONEMAIL,
V_BI_ACCOUNT.EMAILKEY__C,
V_BI_ACCOUNT.FACEBOOKKEY,
V_BI_ACCOUNT.FIRSTNAME,
V_BI_ACCOUNT.GENDER__C,
V_BI_ACCOUNT.PHONE,
V_BI_ACCOUNT.ACCLASTACTIVITYDATE,
V_BI_ACCOUNT.ACCLASTMODIFIEDDATE,
V_BI_ACCOUNT.LASTNAME,
V_BI_ACCOUNT.OTHER_EMAIL__C,
V_BI_ACCOUNT.PI_TYPE__C,
V_BI_ACCOUNT.ACCPARENTID,
V_BI_ACCOUNT.POSTCODE__C,
V_BI_ACCOUNT.PRIMARY_ACCOUNT_OF_THIS_CUSTOMER,
V_BI_ACCOUNT.ACCPRIMARY__C,
V_BI_ACCOUNT.ACCREASON_FOR_STATUS__C,
V_BI_ACCOUNT.ZUORA__SLA__C,
V_BI_ACCOUNT.ZUORA__SLASERIALNUMBER__C,
V_BI_ACCOUNT.SALUTATION,
V_BI_ACCOUNT.ACCSYSTEMMODSTAMP,
V_BI_ACCOUNT.PERSONTITLE,
V_BI_ACCOUNT.ZUORA__UPSELLOPPORTUNITY__C,
V_BI_ACCOUNT.X_CODE__C,
V_BI_ACCOUNT.ZUORA__ACCOUNT_ID__C,
V_BI_ACCOUNT.ZUORA__PAYMENTMETHODID__C,
V_BI_ACCOUNT.CITY,
V_BI_ACCOUNT.ORIGINAL_CREATED_DATE,
V_BI_ACCOUNT.SOURCE_SYSTEM_ID,
V_BI_ACCOUNT.STATUS,
V_BI_ACCOUNT.ZUORA__CONTACT_ID,
V_BI_ACCOUNT.ACCISDELETED,
V_BI_ACCOUNT.BILLING_ACCOUNT_NAME,
V_BI_ACCOUNT.ACZCREATEDDATE,
V_BI_ACCOUNT.ACZSYSTEMMODSTAMP,
V_BI_ACCOUNT.ACZLASTACTIVITYDATE,
V_BI_ACCOUNT.ZUORA__ACCOUNT__C,
V_BI_ACCOUNT.ZUORA__ACCOUNTNUMBER__C,
V_BI_ACCOUNT.ZUORA__AUTOPAY__C,
V_BI_ACCOUNT.ZUORA__BALANCE__C,
V_BI_ACCOUNT.ZUORA__CREDITCARDEXPIRATION__C,
V_BI_ACCOUNT.ZUORA__CURRENCY__C,
V_BI_ACCOUNT.ZUORA__MRR__C,
V_BI_ACCOUNT.ZUORA__PAYMENTTERM__C,
V_BI_ACCOUNT.ZUORA__PURCHASEORDERNUMBER__C,
V_BI_ACCOUNT.ZUORA__LASTINVOICEDATE__C,
V_BI_ACCOUNT.COUNTRY_NAME,
V_BI_ACCOUNT.COUNTRY_CODE,
V_BI_ACCOUNT.FAVOURITE_FOOTBALL_CLUB,
V_BI_ACCOUNT.COUNTY
from
web_api.V_BI_ACCOUNT as V_BI_ACCOUNT
);
%rcSet(&sqlrc);
disconnect from ODBC;
quit;
And again, when I extract the data without pass-through, it works successfully.
I found out the problem was a column name exceeding 32 characters.
As SAS supports column names only up to 32 characters,
the query fails to find PRIMARY_ACCOUNT_OF_THIS_CUSTOMER, as the original column name is PRIMARY_ACCOUNT_OF_THIS_CUSTOMER__C.
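
For illustration, a hedged sketch of the fix inside the MySQL-side query (the short alias is invented, kept under 32 characters):

-- SAS cannot read column names longer than 32 characters,
-- so alias them on the MySQL side.
PRIMARY_ACCOUNT_OF_THIS_CUSTOMER__C as PRIMARY_ACCT_OF_THIS_CUSTOMER,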
EDIT
One more thing I found out: this MySQL setup doesn't like schema names or table aliases.
Therefore,
in the FROM clause, specify only the table name, i.e. 'from v_bi_account' rather than 'from web_api.v_bi_account',
and do not use aliases, i.e. 'from v_bi_account' rather than 'from v_bi_account as v_bi_account', as in the sketch below.
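
A compact sketch of the corrected inner query shape, combining all three findings (column list abbreviated):

-- No schema prefix, no table alias, long names aliased under 32 characters.
select
ACCOUNT_NAME,
PRIMARY_ACCOUNT_OF_THIS_CUSTOMER__C as PRIMARY_ACCT_OF_THIS_CUSTOMER
from v_bi_account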
Thank you guys so much for your help.