Insert data into JSON column in postgres using JOOQ - json

I have a postgres database to which I read/write using JOOQ. One of my DB tables has a column of type JSON. When I try to insert data into this column using the query below, I get the error
Exception in thread "main" org.jooq.exception.DataAccessException: SQL [update "public"."asset_state" set "sites_as_json" = ?]; ERROR: column "sites_as_json" is of type json but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
Below is the code that inserts data into the column:
SiteObj s1 = new SiteObj();
s1.setId("1");
s1.setName("Site1");
s1.setGeofenceType("Customer Site");
SiteObj s2 = new SiteObj();
s2.setId("2");
s2.setName("Site2");
s2.setGeofenceType("Customer Site");
List<SiteObj> sitesList = Arrays.asList(s1, s2);
int result = this.dsl.update(as).set(as.SITES_AS_JSON, LambdaUtil.convertJsonToStr(sitesList)).execute();
The call LambdaUtil.convertJsonToStr(sitesList) outputs a string that looks like this...
[{"id":"1","name":"Site1","geofenceType":"Customer Site"},{"id":"2","name":"Site2","geofenceType":"Customer Site"}]
What do I need to do to be able to insert into the JSON column?

Current jOOQ versions
jOOQ natively supports the JSON and JSONB data types, so you shouldn't have to do anything custom.
Historic answer
For jOOQ to correctly bind your JSON string to the JDBC driver, you will need to implement a data type binding as documented here:
https://www.jooq.org/doc/latest/manual/code-generation/custom-data-type-bindings
The important bit is the fact that your generated SQL needs to produce an explicit type cast, for example:
@Override
public void sql(BindingSQLContext<JsonElement> ctx) throws SQLException {
    // Depending on how you generate your SQL, you may need to explicitly distinguish
    // between jOOQ generating bind variables or inlined literals.
    if (ctx.render().paramType() == ParamType.INLINED)
        ctx.render().visit(DSL.inline(ctx.convert(converter()).value())).sql("::json");
    else
        ctx.render().sql("?::json");
}
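The effect of those two branches can be illustrated outside jOOQ as well. Below is a minimal Python sketch (the helper name and structure are my own, not part of any library) of rendering a JSON value either as an inlined literal or as a bind placeholder, both with the explicit ::json cast PostgreSQL needs:

```python
import json

def render_json_bind(value, inlined=False):
    """Render a SQL fragment for a JSON value with an explicit ::json cast.

    Mirrors the two branches of the binding above: inline the JSON
    literal (escaping single quotes), or emit a bind placeholder.
    """
    if inlined:
        literal = json.dumps(value).replace("'", "''")
        return "'" + literal + "'::json"
    return "?::json"

sites = [{"id": "1", "name": "Site1", "geofenceType": "Customer Site"}]
print(render_json_bind(sites, inlined=True))
print(render_json_bind(sites))  # ?::json
```

Either way, the rendered SQL carries the cast, which is exactly what the "character varying" error above asks for.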

Related

Can I declare type of parameters for sql command? I cannot insert a boolean value, it is being considered a string

I am trying to insert into a MySQL DB using PowerShell; the input data comes from a REST API call. I am using the prepared-statement approach to optimize the inserts, and I am having issues inserting values into a column (call it my_col_bool) which is of type Boolean (i.e. tinyint(1)).
The input data received from the REST API assigns values to $myVar1, $myVar2, $myVar3. The values assigned to $myVar3 are "true" / "false". As I add these values to the command parameters and execute the query, it seems to treat them as strings instead of booleans, because I am getting an error.
Approach 1:
$oMYSQLCommand.CommandText = "INSERT INTO myTable VALUES(@my_col_string,@my_col_int,@my_col_bool)"
$oMYSQLCommand.Prepare()
$oMYSQLCommand.Parameters.AddWithValue("@my_col_string", "")
$oMYSQLCommand.Parameters.AddWithValue("@my_col_int", "")
$oMYSQLCommand.Parameters.AddWithValue("@my_col_bool", "")
$oMYSQLCommand.Parameters["@my_col_string"].Value = $myVar1
$oMYSQLCommand.Parameters["@my_col_int"].Value = $myVar2
$oMYSQLCommand.Parameters["@my_col_bool"].Value = $myVar3
$oMYSQLCommand.ExecuteNonQuery() # Error: Exception calling "ExecuteNonQuery" with "0" argument(s): "Incorrect integer value: 'false' for column 'my_col_bool' at row 1"
Approach 2:
$oMYSQLCommand.Parameters.Add("@my_col_bool", [System.Data]::$SqlDbType.TinyInt) # Error: Unable to find type [System.Data]
Approach 3:
$oMYSQLCommand.Parameters.Add("@my_col_bool", $SqlDbType.TinyInt) # Error: Cannot find an overload for "Add" and the argument count: "2".
Approach 4:
$param_var = New-Object MySql.Data.MySqlClient.MySqlParameter("@my_col_bool", $SqlDbType.TinyInt)
$oMYSQLCommand.Parameters.Add($param_var) | Out-Null
$oMYSQLCommand.ExecuteNonQuery() # Error: Exception calling "ExecuteNonQuery" with "0" argument(s): "Incorrect integer value: 'false' for column 'my_col_bool' at row 1"
Every .NET driver tries to have the same interface as the MS SQL connector.
MS SQL Example:
$sqlCmd = [System.Data.SqlClient.SqlCommand]::new()
[void]$sqlCmd.Parameters.AddWithValue('@param1', [System.Data.SqlTypes.SqlInt16]::new(22))
[void]$sqlCmd.Parameters.Add('@param2', [System.Data.SqlTypes.SqlInt16])
$sqlCmd.Parameters['@param2'].Value = [System.Data.SqlTypes.SqlInt16]::new(22)
Reference: System.Data.SqlTypes
So usually the methods are the same; you just have to use a different namespace inside [].
Note that some .NET providers use the SQL type system, and some use their own type system, which usually lives in a [VendorName.Something] namespace.
For example, MySQL seems to use [MySql.Data.MySqlClient.MySqlDbType]::%typeName%.
Here is an update on my approach 3; to make it work, the value must be a boolean variable:
$oMYSQLCommand.Parameters.Add("@my_col_bool", $SqlDbType.TinyInt)
[bool]$myVar3_bool = [boolean]::Parse($myVar3)
$oMYSQLCommand.Parameters["@my_col_bool"].Value = $myVar3_bool
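In any language, the underlying fix is the same: convert the REST API's "true"/"false" strings into the 0/1 values that a tinyint(1) column accepts before binding. A small Python sketch of that coercion (the helper name is hypothetical):

```python
def rest_bool_to_tinyint(value: str) -> int:
    """Convert a REST-style 'true'/'false' string to the 0/1 integer
    that a MySQL tinyint(1) column expects."""
    normalized = value.strip().lower()
    if normalized == "true":
        return 1
    if normalized == "false":
        return 0
    raise ValueError(f"not a boolean string: {value!r}")

print(rest_bool_to_tinyint("false"))   # 0
print(rest_bool_to_tinyint(" True "))  # 1
```

Binding the resulting integer (or a real boolean) instead of the raw string is what makes the driver stop sending 'false' as text.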

Catch SAPSQL_DATA_LOSS

I want to catch and handle SAPSQL_DATA_LOSS in my ABAP code.
I tried this:
try.
    SELECT *
      FROM (rtab_name) AS rtab
      WHERE (sub_condition)
      into table @<sub_result>.
  catch SAPSQL_DATA_LOSS into error.
    ...
endtry.
But the above code is not valid. I get this message:
Type "SAPSQL_DATA_LOSS" is not valid
And I tried this:
catch SYSTEM-EXCEPTIONS SAPSQL_DATA_LOSS = 123.
SELECT *
...
.
endcatch.
if sy-subrc = 123.
...
endif.
But the above code gives me:
Instead of "SAPSQL_DATA_LOSS" expected "system-exception" (translated from German to English by me)
How to catch SAPSQL_DATA_LOSS?
This question is not about "why does this exception happen?". This is already solved. My code should handle the exception.
SAPSQL_DATA_LOSS is a runtime error.
As SAPSQL_DATA_LOSS is not a class-based exception, it is not possible to catch it using TRY ... CATCH.
As SAPSQL_DATA_LOSS is not a catchable runtime error either, it is not possible to catch it using CATCH SYSTEM-EXCEPTIONS.
See the list of catchable runtime errors here:
https://help.sap.com/doc/abapdocu_751_index_htm/7.51/en-US/abenueb-abfb-sysexc.htm
After some experimenting, I can propose a possible solution.
This is a workaround:
I don't know if it can be applied to your case, since it requires the SELECT statement to be wrapped in an RFC function module!
The main point is that a short dump (message type X) CAN be handled in RFC calls.
So by using an RFC call (CALL FUNCTION 'xxxxx' DESTINATION 'NONE', for example) with the special exception SYSTEM_FAILURE, the system does not terminate the caller program; instead it returns SY-SUBRC > 0, with the short dump information in the system message fields (SY-MSGxx).
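The idea behind this workaround, running the risky statement in an isolated unit so that a fatal error reaches the caller only as a return code plus message fields, exists in most environments. As a rough stdlib-Python analogy (purely illustrative, not ABAP-specific):

```python
import subprocess
import sys

# Run "risky" code in a child process, the way the RFC call with
# DESTINATION 'NONE' isolates the short dump from the caller.
risky = "import sys; sys.exit('simulated fatal error')"
proc = subprocess.run([sys.executable, "-c", risky],
                      capture_output=True, text=True)

# The caller survives and can inspect the failure, analogous to
# checking SY-SUBRC and the SY-MSG* fields after the RFC call.
if proc.returncode != 0:
    print("child failed:", proc.stderr.strip())
```

Only the child terminates; the parent reads the exit status and the error text, exactly the contract the RFC workaround relies on.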
STEPS
Create a function module (RFC-enabled) with your SELECT statement inputs plus the row type of the result table (all parameters passed by value).
You need this last parameter since generic tables can't be passed over RFC (no "TYPE ANY TABLE" allowed).
FUNCTION Z_DYN_SEL .
*"----------------------------------------------------------------------
*"*"Local interface:
*" IMPORTING
*" VALUE(RTAB_NAME) TYPE TABNAME16
*" VALUE(SUB_CONDITION) TYPE STRING
*" VALUE(RESULT_TYPE) TYPE STRING
*"----------------------------------------------------------------------
* RTAB_NAME: DB Table
* SUB_CONDITION: WHERE Condition
* RESULT_TYPE: The ROW type of the internal table
field-symbols <sub_result> type any table.
* DEFINE LOCAL DYNAMIC TABLE TO STORE THE RESULT
data: lr_res type ref to data.
create data lr_res type standard table of (result_type).
assign lr_res->* to <sub_result>.
* DYNAMIC SELECT
select *
  from (rtab_name) as rtab
  where (sub_condition)
  into table @<sub_result>.
* EXPORT RESULT TO A MEMORY ID, SO IT CAN BE RETRIEVED BY CALLER
export res = <sub_result> to memory id 'RES'.
endfunction.
Main program:
In this caller example, some parameters are passed to the RFC.
The KTOKD field (which should be 4 characters long) is passed a char10 value, producing your short dump.
If ANY dump is triggered inside the function, we can now handle it.
If everything went fine, IMPORT the result that the EXPORT statement inside the RFC saved.
field-symbols <sub_result> type any table.
data: lr_res type ref to data.
create data lr_res type standard table of KNA1.
assign lr_res->* to <sub_result>.
data lv_msg type char255.
call function 'Z_DYN_SEL' destination 'NONE'
  exporting
    rtab_name     = 'KNA1'
    sub_condition = `KTOKD = 'D001xxxxxx'`
    result_type   = 'KNA1'
  exceptions
    system_failure = 1 message lv_msg.

if sy-subrc = 0.
  import res = <sub_result> from memory id 'RES'.
else.
  write: / lv_msg.
  write: / sy-msgid, sy-msgno, sy-msgty, sy-msgv1, sy-msgv2, sy-msgv3, sy-msgv4.
endif.
RESULTS
After the RFC call, in case of a short dump in the SELECT statement, the program is not terminated, and the following pieces of information are available:
SY-SUBRC = 1
lv_msg is the error text (Data was lost while copying a value.)
Sy-msgid = 00
Sy-msgno = '341'
Sy-msgty = 'X'
Sy-msgv1 = 'SAPSQL_DATA_LOSS'

sqlAlchemy converts geometry to byte using ST_AsBinary

I have a sqlAlchemy model that has one column of type geometry which is defined like this:
point_geom = Column(Geometry('POINT'), index=True)
I'm using geoalchemy2 module:
from geoalchemy2 import Geometry
Then I make my queries using sqlAlchemy ORM, and everything works fine. For example:
data = session.query(myModel).filter_by(...)
My problem is that when I need to get the sql statement of the query object, I use the following code:
sql = data.statement.compile(dialect=postgresql.dialect())
But the column of type geometry is converted to Byte[], so the resulting sql statement is this:
SELECT column_a, column_b, ST_AsBinary(point_geom) AS point_geom
FROM tablename WHERE ...
What should be done to avoid the conversion of the geometry type to byte type?
I had the same problem when I was working with Flask-SQLAlchemy and GeoAlchemy2, and solved it as follows.
You just need to create a subclass of the Geometry type.
If you look at the documentation, these arguments of the Geometry type are relevant:
ElementType - the type of the returned element; by default it is WKBElement (well-known binary element)
as_binary - the function to use; by default it is ST_AsEWKB, which is what causes the problem in your case
from_text - the geometry constructor used to create, insert and update elements; by default it is ST_GeomFromEWKT
So what did I do? I created a new subclass with the required function, element type and constructor, and used the Geometry type in my models as I always do.
from geoalchemy2 import Geometry as BaseGeometry
from geoalchemy2.elements import WKTElement

class Geometry(BaseGeometry):
    from_text = 'ST_GeomFromText'
    as_binary = 'ST_AsText'
    ElementType = WKTElement
As you can see, I changed only these three attributes of the base class.
This will return a string with the required column values.
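What makes this work is ordinary attribute lookup: geoalchemy2 reads these names from the class, so a subclass can override them without redefining any methods. Here is a stripped-down Python sketch of the same pattern, with hypothetical class and method names and no geoalchemy2 dependency:

```python
class BaseColumnType:
    # Class attributes control how values are rendered,
    # like Geometry's as_binary / from_text / ElementType.
    as_binary = "ST_AsEWKB"
    from_text = "ST_GeomFromEWKT"

    def select_expression(self, column: str) -> str:
        # The base class always reads the attribute via the instance,
        # so subclass overrides take effect automatically.
        return f"{self.as_binary}({column})"

class TextColumnType(BaseColumnType):
    # Overriding the class attributes changes the rendered SQL.
    as_binary = "ST_AsText"
    from_text = "ST_GeomFromText"

print(BaseColumnType().select_expression("point_geom"))  # ST_AsEWKB(point_geom)
print(TextColumnType().select_expression("point_geom"))  # ST_AsText(point_geom)
```

This is why changing three attributes is enough: the compiled SELECT picks up ST_AsText instead of the binary function.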
I think you can specify that in your query. Something like this:
from geoalchemy2.functions import ST_AsGeoJSON
query = session.query(ST_AsGeoJSON(YourModel.geom_column))
That should change your conversion. There are many conversion functions in the
GeoAlchemy2 documentation.

Setting lua table in redis

I have a lua script, which simplified is like this:
local item = {};
local id = redis.call("INCR", "counter");
item["id"] = id;
item["data"] = KEYS[1]
redis.call("SET", "item:" .. id, cjson.encode(item));
return cjson.encode(item);
KEYS[1] is a stringified json object:
JSON.stringify({name : 'some name'});
What happens is that because I'm using cjson.encode to add the item to the set, it seems to be getting stringified twice, so the result is:
{"id":20,"data":"{\"name\":\"some name\"}"}
Is there a better way to be handling this?
First, regardless of your question, you're using KEYS the wrong way, and your script isn't written according to the guidelines. You should not generate key names in your script (i.e. call SET with "item:" .. id as a key name) but rather use the KEYS input array to declare any keys involved a priori.
Secondly, instead of passing the stringified JSON with KEYS, use the ARGV input array.
Thirdly, you can do item["data"] = cjson.decode(ARGV[1]) to avoid the double encoding.
Lastly, perhaps you should learn about Redis' Hash data type - it may be more suitable to your needs.
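The double encoding itself is easy to reproduce with any JSON library. A minimal Python sketch of both the problem and the decode-before-encode fix:

```python
import json

# What the client sends: an already-stringified JSON object.
payload = json.dumps({"name": "some name"})

# Problem: embedding the string as-is encodes it a second time.
double = json.dumps({"id": 20, "data": payload})
# → {"id": 20, "data": "{\"name\": \"some name\"}"}

# Fix: decode first, as cjson.decode(ARGV[1]) does in the Lua script.
fixed = json.dumps({"id": 20, "data": json.loads(payload)})
# → {"id": 20, "data": {"name": "some name"}}
print(double)
print(fixed)
```

The same principle applies verbatim in the Lua script: decode the incoming argument into a table before encoding the whole item once.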

LINQ: select an object, but change some properties without creating a new object

I'm trying to select an object using values from another object in LINQ to SQL.
I currently have this:
var result1 = (from s in pdc.ScanLogs
               from ec in pdc.ExhibitsContacts
               where s.ExhibitID == ec.ExhibitID
               select ec.Contact);
I want to assign the value ec.Contact.Note = ec.Comment;
Is there a way to do this in LINQ to SQL without writing multiple queries?
I read this blog article: http://blog.robvolk.com/2009/05/linq-select-object-but-change-some.html but it doesn't seem to work with LINQ to SQL.
Basically you can't do this. LINQ is meant to be a query language, and what you want to do is mutate existing entities with your query. This means your query would have side effects, and that is not something LINQ to SQL supports.
While this won't work in a single query when returning LINQ to SQL entities, it will work when you return simple DTO structures. For instance:
var result1 =
    from s in pdc.ScanLogs
    from ec in s.ExhibitsContacts
    select new ContactDto
    {
        Id = ec.Contact.Id,
        Note = ec.Comment,
        SomeOtherFields = ec.Contact.SomeOtherFields
    };
As a side note: also look at how I removed the where s.ExhibitID == ec.ExhibitID join from the query, by just using the ExhibitsContacts property of the ScanLog entity (which will be generated by LINQ to SQL for you when your database schema has the proper foreign keys defined).
Update:
When you need to return those DTO from several methods, you might consider centralizing the transformation from a collection of entities to a collection of DTO objects. What I tend to do is place this method on the DTO (which makes it easy to find). The code might look like this:
public class ContactDto
{
    // Many public properties here

    public static IQueryable<ContactDto> ToDto(
        IQueryable<Contact> contacts)
    {
        return
            from contact in contacts
            select new ContactDto
            {
                Id = contact.Id,
                Note = contact.ExhibitsContact.Comment,
                ManyOtherFields = contact.ManyOtherFields
            };
    }
}
The trick with this static transformation method is that it takes an IQueryable and returns an IQueryable. This allows you to simply specify the transformation and let LINQ to SQL (or any other LINQ-enabled O/RM) efficiently execute that LINQ expression later on. The original code would now look like this:
IQueryable<Contact> contacts =
    from s in pdc.ScanLogs
    from ec in s.ExhibitsContacts
    select ec.Contact;

IQueryable<ContactDto> result1 = ContactDto.ToDto(contacts);
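The same idea, centralizing the entity-to-DTO projection in one reusable function, carries over to any language. A minimal Python sketch with plain dictionaries standing in for the LINQ to SQL entities (all names hypothetical):

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class ContactDto:
    id: int
    note: str

def to_dto(exhibits_contacts: Iterable[dict]) -> Iterator[ContactDto]:
    """Project joined rows into DTOs, as ContactDto.ToDto does above:
    the note comes from the join row, the id from the contact itself."""
    for ec in exhibits_contacts:
        yield ContactDto(id=ec["contact"]["id"], note=ec["comment"])

rows = [{"contact": {"id": 1}, "comment": "met at expo"}]
dtos = list(to_dto(rows))
print(dtos[0])  # ContactDto(id=1, note='met at expo')
```

Because the projection builds new DTO objects instead of mutating the entities, there are no side effects, which is exactly why the DTO approach works where in-place assignment does not.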
The problem is that LINQ to SQL does not know how to interpret your extension method. The only way, other than using stored procedures from LINQ to SQL (which kind of defeats the point), is to get the object, update it, and then commit the changes.