IN operator with one or more negative values not working with API version 1.4 for Oracle Service Cloud

We have upgraded our application to use API version 1.4 and observed the following error when a ROQL query contains an IN operator with one or more negative values in the IN clause. If all the values in the IN clause are positive, the error does not occur.
With API version 1.2, an IN clause with negative values works fine without any exception.
Is this an intentional change in API version 1.4, or a regression introduced in that version? Or do we need to change the SOAP request in some way?
The following is an example ROQL query:
USE operational;
SELECT ovs2.ITABLE.ID, ovs2.ITABLE.INTEGERCOL FROM ovs2.ITABLE WHERE (ovs2$ITABLE.INTEGERCOL IN (-2,4))
Exception received:
WHERE clause contains mismatched data types in comparison


MySQL ERROR 3037: Invalid GIS data provided to function st_within

The following query:
SELECT st_contains(ST_GeomFromText('POLYGON((
9.2949170148074 3.5550157117451,
12.230624391667 3.5550157117451,
12.24455565975 4.9035765788215,
9.300807190941 4.8942904468525,
9.3019824958588 3.5550157117451,
9.2949170148074 3.5550157117451
))'),ST_GeomFromText("POINT( 6.31 8.92)"))
raises the error Invalid GIS data provided to function st_within.
I don't see anything wrong with the query (the last point is the same as the first, the points form a ring, and the syntax is correct). However, when I remove the second-to-last point (9.3019824958588 3.5550157117451), the query succeeds:
SELECT st_contains(ST_GeomFromText('POLYGON((
9.2949170148074 3.5550157117451,
12.230624391667 3.5550157117451,
12.24455565975 4.9035765788215,
9.300807190941 4.8942904468525,
9.2949170148074 3.5550157117451))'),ST_GeomFromText("POINT( 6.31 8.92)"))
Did I miss something?
Can I somehow debug further so that the message about invalid GIS data becomes more useful?
I'm using MySQL 5.7.31-0ubuntu0.18.04.1
I believe I found the answer: I tried ST_IsValid on both polygons; the first one returned false (probably because it intersects itself), the second one true. However, I had fed several other self-intersecting polygons to the st_contains function without raising an error.
Then, according to https://dev.mysql.com/doc/refman/5.7/en/geometry-well-formedness-validity.html:
Spatial computations may detect some cases of invalid geometries and raise an error, but they may also return an undefined result without detecting the invalidity.
So whenever I got a result using self-intersecting polygons in the past, it was undefined; only the particular polygon in this question raised an error.
The conclusion is that if one uses spatial functions, one should check the validity of the geometry beforehand using ST_IsValid, because MySQL does not check it on inserts or updates. According to the documentation:
It is permitted to insert, select, and update geometrically invalid geometries, but they must be syntactically well-formed. Due to the computational expense, MySQL does not check explicitly for geometric validity.
One point to add: this behaviour has apparently changed between MySQL 5.6 and 5.7; in 5.6 the "faulty" polygon does not raise an error, whereas in 5.7 it does.
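As an illustration of that conclusion, here is a minimal sketch of running the ST_IsValid pre-check from application code over JDBC before calling st_contains. The connection URL, credentials, and class name are placeholders, not anything from the original question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class GeometryValidityCheck {
    // WKT of the polygon from the question; substitute your own geometry.
    private static final String POLYGON_WKT =
            "POLYGON((9.2949170148074 3.5550157117451, 12.230624391667 3.5550157117451, "
          + "12.24455565975 4.9035765788215, 9.300807190941 4.8942904468525, "
          + "9.3019824958588 3.5550157117451, 9.2949170148074 3.5550157117451))";

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust URL, user, and password.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password")) {

            // Ask MySQL whether the geometry is valid before using it in a
            // spatial computation such as st_contains.
            try (PreparedStatement check = conn.prepareStatement(
                    "SELECT ST_IsValid(ST_GeomFromText(?))")) {
                check.setString(1, POLYGON_WKT);
                try (ResultSet rs = check.executeQuery()) {
                    rs.next();
                    if (rs.getInt(1) == 0) {
                        System.out.println("Geometry is invalid; fix it before running spatial functions.");
                        return;
                    }
                }
            }

            // The geometry passed the check, so the spatial computation is safe to run.
            try (PreparedStatement contains = conn.prepareStatement(
                    "SELECT ST_Contains(ST_GeomFromText(?), ST_GeomFromText('POINT(6.31 8.92)'))")) {
                contains.setString(1, POLYGON_WKT);
                try (ResultSet rs = contains.executeQuery()) {
                    rs.next();
                    System.out.println("st_contains result: " + rs.getInt(1));
                }
            }
        }
    }
}

Running the check once per geometry before the spatial query avoids relying on the undefined behaviour the documentation warns about.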

Working on migration of SPL 3.0 to 4.2 (TEDA)

I am working on migrating 3.0 code into the new 4.2 framework and am facing a few difficulties:
1. How do I do CDR-level deduplication in the new 4.2 framework? (Note: table-level deduplication is already done.)
2. Where should I implement PostDedupProcessor: in the context or the chainsink custom? In either case, do I need to remove duplicate hash codes from the list or just reject the tuples? I am also updating columns for a few tuples here.
3. My file is not moving into the archive. A temporary output file is generated, but it is empty and outside the load directory. What could be the possible reasons? I have thoroughly checked the config parameters, and after adding logs it seems the correct output is being sent from the transformer custom, so I don't know where it gets stuck. I printed the TableRowGenerator stream for logging (at the end of the DataProcessor).
1. and 2.:
You need to select the type of deduplication; there is not a big difference whether you choose table- or CDR-level deduplication.
The ite.businessLogic.transformation.outputType parameter affects this. There is only one dedup; you cannot have both.
Select recordStream for CDR-level deduplication and do the transformation to table-row format (e.g. if you want to use the TableFileWriter) in xxx.chainsink.custom::PostContextDataProcessor.
In xxx.chainsink.custom::PostContextDataProcessor you need to add custom code for duplicate handling: reject (discard) tuples, set special column values, or write them to different target tables.
3.:
Possible reasons could be:
Missing forwarding of window punctuations or of the statistic tuple
An error in the BloomFilter configuration; you would see this easily because the PE is down and the error log gives hints about wrong sha2 functions being used
To troubleshoot your ITE application, I recommend enabling the following debug sinks if checking the StreamsStudio live graph is not sufficient:
ite.businessLogic.transformation.debug=on
ite.businessLogic.group.debug=on
ite.businessLogic.sink.debug=on
Run a test with a single input file only and check the flow of your record and statistic tuples. The debug sinks also write punctuation markers to the debug files.

gcloud sql create instance: POSTGRES_9_6 is not a valid value

I see that the POSTGRES database option for GCE SQL is still in beta; I'm just looking for confirmation that the issue mentioned below is an issue with the API and not something stupid I've overlooked.
gcloud sql instances create example-db --activation-policy=ALWAYS --tier="db-n1-standard-1" --pricing-plan="PER_USE" --region="asia-east1" --gce-zone="asia-east1-a" --database-version=POSTGRES_9_6
HTTPError 400: Invalid value for: POSTGRES_9_6 is not a valid value
The documentation says that this is a valid option:
- https://cloud.google.com/sdk/gcloud/reference/sql/instances/create
I found more documentation that explains I needed to use the gcloud beta command syntax:
https://cloud.google.com/sql/docs/postgres/create-instance
Actual Working Example
gcloud beta sql instances create example-db --activation-policy=ALWAYS --pricing-plan="PER_USE" --region="asia-east1" --gce-zone="asia-east1-a" --cpu=2 --memory=3840MiB --database-version="POSTGRES_9_6"

Zabbix load/cpu roll-your-own formula

I know that newer versions of Zabbix (2.0 onward) have a simple way of determining average load per CPU via the "percpu" parameter. Unfortunately, I'm using 1.8.
With 2.0 I would be able to create an item with this key: system.cpu.load[percpu,avg15]
How do I roll my own calculated item using 1.8? I have tried the following formulas (many are desperate and improbable, I know):
system.cpu.load[,avg15].last/system.cpu.num.last
Template_Linux:system.cpu.load[,avg15]/Template_Linux:system.cpu.num
{Template_Linux:system.cpu.load[,avg15]}/{Template_Linux:system.cpu.num}
{Template_Linux:system.cpu.load[,avg15].last}/{Template_Linux:system.cpu.num.last}
{Template_Linux:system.cpu.load[,avg15].last()}/{Template_Linux:system.cpu.num.last()}
{"Template_Linux:system.cpu.load[,avg15]".last()}/{"Template_Linux:system.cpu.num".last()}
"Template_Linux:system.cpu.load[,avg15]".last()/"Template_Linux:system.cpu.num".last()
"Template_Linux:system.cpu.load[,avg15].last()"/"Template_Linux:system.cpu.num.last()"
Thanks!
The Zabbix documentation page on item configuration describes the correct calculated item syntax.
In this case, the formula would be something like this:
last("system.cpu.load[,avg15]") / last("system.cpu.num")

java.sql.Clob reading: weird results between MySQL and Oracle

I have unified JDBC code for reading/writing large texts. The column is CLOB on Oracle and TEXT on MySQL. The following code
java.sql.Clob aClob = resultSet.getClob(COLUMN_NAME);
java.io.InputStream aStream = aClob.getAsciiStream();
int av = aStream.available();
gives a relevant value on MySQL (Connector/J 5.0.4) but zero on Oracle (Oracle JDBC driver 11.2.0.2). Clob.length() fortunately gives the correct value on both, and reading the InputStream until read() returns -1 works too, so there are other ways of obtaining the data in a unified way.
The Javadoc gives this weird note:
The available method for class InputStream always returns 0.
So which driver is right? And no, I don't want to drag vendor-specific packages into the code :-) This question is JDBC-neutral.
I would be tempted to say that both drivers were right.
The Javadoc for the available() method appears to suggest that the value returned is an estimate of how many bytes the InputStream currently has cached and can return to you without an I/O operation. How many bytes it has cached, and how it does any caching, would seem to me to be an implementation detail. The fact that these values are different merely suggests that the two drivers are implemented differently. Nothing in the Javadoc for the available() method suggests to me that either driver is doing anything wrong.
I'd guess that the Oracle driver doesn't cache any data from the CLOB immediately after executing the query, which might be why the available() method returns 0. However, once data has been read from the stream, the available() method for the Oracle driver no longer returns 0, as it seems the Oracle JDBC driver has by then been to the database and fetched some data out of the CLOB column. On the other hand, MySQL seems to be a bit more proactive about fetching data out of the TEXT column as soon as the query has finished executing.
Having read the Javadoc for the available() method, I'm not sure why I'd use it. What are you using it for?
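For what it's worth, since you note that Clob.length() is correct on both drivers and that reading the stream until read() returns -1 works, a driver-neutral way to pull out the data without touching available() could look like the following sketch (the class and method names are just illustrative):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.sql.Clob;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ClobReader {

    // Reads the whole CLOB/TEXT column without relying on
    // InputStream.available(), whose value is driver-specific.
    static byte[] readClobBytes(ResultSet resultSet, String columnName)
            throws SQLException, IOException {
        Clob clob = resultSet.getClob(columnName);
        if (clob == null) {
            return new byte[0];
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InputStream in = clob.getAsciiStream()) {
            byte[] buffer = new byte[8192];
            int read;
            // read() returns -1 at end of stream on both drivers,
            // so this loop behaves the same on MySQL and Oracle.
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        return out.toByteArray();
    }
}

The loop terminates only on the end-of-stream marker, so whatever each driver chooses to buffer internally no longer matters.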