Using custom objects in JDBC insert statement - esb

I'll copy-paste a part from the Mule website guide:
<jdbc:query key="outboundInsertStatement"
value="INSERT INTO TEST (ID, TYPE, DATA, ACK) VALUES (#[map-payload:ID],
#[map-payload:TYPE],#[map-payload:DATA], #[map-payload:ACK])"/>
I am trying to do something very close to this, only I want to use a custom object and not the java.util.Map which, as I understand it, is what is expected.
Could I get an explanation of what #[map-payload:ACK] exactly means? I don't understand the syntax.
Is map-payload some sort of default type?
Could I use that syntax with a custom object I created? (Some MessageObj class with some fields)

The syntax:
#[evaluator:expression]
is used by the Mule Expression Evaluation framework.
If you look in the table that lists all evaluators, you'll find map-payload among many other evaluators.
So the example you have above means that:
the in-flight message is expected to have a payload of type java.util.Map,
the values for the ID, TYPE, DATA and ACK columns in the insert query will be extracted from the map payload under eponymous keys.
Of course, feel free to use any other evaluator that better matches your in-flight message payload.
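For a custom object like your MessageObj, one option is the bean evaluator, which reads JavaBean properties off the payload via its getters. A minimal sketch, assuming MessageObj exposes getId(), getType(), getData() and getAck() (the bean evaluator and the property names are assumptions here, so check the evaluators table for your Mule version):
<jdbc:query key="outboundInsertStatement"
value="INSERT INTO TEST (ID, TYPE, DATA, ACK) VALUES (#[bean:id],
#[bean:type], #[bean:data], #[bean:ack])"/>
Alternatively, a scripting evaluator such as #[groovy:payload.ack] can reach into arbitrary payload objects.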

Related

Spring Data Couchbase #n1ql.fields query

I'm trying to make a N1QL based query on Spring Data Couchbase. The documentation says
#n1ql.fields will be replaced by the list of fields (eg. for a SELECT clause) necessary to reconstruct the entity.
My repository implementation is this one:
#Query("#{#n1ql.fields} WHERE #{#n1ql.filter}")
List<User> findAllByFields(String fields);
And I'm calling this query as follows:
this.userRepository.findAllByFields("SELECT firstName FROM default");
I'm getting this error:
Caused by: org.springframework.data.couchbase.core.CouchbaseQueryExecutionException: Unable to execute query due to the following n1ql errors:
{"msg":"syntax error - at AS","code":3000}
After a little bit of research, I also tried:
#Query("SELECT #{#n1ql.fields} FROM #{#n1ql.bucket} WHERE #{#n1ql.filter}")
With this query I don't get an error, but I get all the stored documents with only the ID populated; the other fields are set to null, even when my query tries to get the firstName field:
this.userRepository.findAllByFields("firstName");
Anyone knows how to do such a query?
Thank you in advance.
You're misunderstanding the concept; I encourage you to give the documentation more time and look at more examples. I'm not sure what exactly you're trying to achieve, but I'll give some examples.
Find all users (with all of their stored data)
#Query("#{#n1ql.selectEntity} WHERE #{#n1ql.filter}")
List<User> findAllUsers();
This will basically generate SELECT meta().id,_cas,* FROM bucket WHERE type='com.example.User'
Notice findAllUsers() does not take any parameters because there are no param placeholders defined in the @Query above.
Find all users where firstName like
#Query("#{#n1ql.selectEntity} WHERE #{#n1ql.filter} AND firstName like $1")
List<User> findByFirstNameLike(String keyword);
This will generate something like the query above but with an extra WHERE condition: firstName like $1.
Notice this method takes a keyword parameter because there is a param placeholder, $1, defined in the query.
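For illustration, the generated statement would then look roughly like this (bucket name and type value depend on your entity and configuration):
SELECT meta().id,_cas,* FROM bucket WHERE type='com.example.User' AND firstName like $1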
Notice in the documentation it says
#{#n1ql.selectEntity} WHERE #{#n1ql.filter} AND test = $1
is equivalent to
SELECT #{#n1ql.fields} FROM #{#n1ql.bucket} WHERE #{#n1ql.filter} AND test = $1
Now if you don't want to fetch all the data for user(s), you'll need to specify the fields being selected; read the following links for more info:
How to fetch a field from document using n1ql with spring-data-couchbase
https://docs.spring.io/spring-data/couchbase/docs/2.2.4.RELEASE/reference/html/#_dto_projections
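As a rough sketch of the DTO-projection approach from the second link (UserFirstName is a hypothetical DTO; exactly how #{#n1ql.fields} expands depends on your Spring Data Couchbase version, so treat this as an outline rather than a guaranteed recipe):
public class UserFirstName {
    private String firstName;
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
}

public interface UserRepository extends CouchbaseRepository<User, String> {
    // per the DTO-projection docs, #{#n1ql.fields} should expand to the
    // projection's fields (here just firstName) instead of the full entity
    @Query("SELECT #{#n1ql.fields} FROM #{#n1ql.bucket} WHERE #{#n1ql.filter}")
    List<UserFirstName> findAllFirstNames();
}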
I think you should try the query below; it should resolve the issue of selecting fields based on the parameter you pass as an argument.
#Query("SELECT $1 FROM #{#n1q1.bucket} WHERE #{#n1ql.filter}")
List findByFirstName(String fieldName);
Here, the bucket name resolves to the User entity's bucket, and n1ql.filter applies the default type filter.

Camel Blueprint specify parameter for prepared sql statement

I have a pollEnrich which enriches a POJO with the result of an SQL query (from a MySQL database). It currently gets the brand from the POJO and then gets the name from the order matching that brand. I had to add quotes around ${body.getBrand}, or else the query would look for a column with the brand name instead of using the value. Currently it looks like this:
<pollEnrich id="_enrich1" strategyRef="merge" timeout="5000">
<simple>sql:SELECT name FROM orders WHERE brand= '${body.getBrand}'</simple>
</pollEnrich>
I want to change this because I'll probably need to create more SQL queries, and the current version does not work if the value contains quotes and is thus vulnerable to SQL injection.
I thought prepared statements would do the trick and wanted to use a named parameter, but I do not seem to be able to set the value of the parameter.
I have tried many different things, for example setting a header and changing the query to use a named parameter:
<setHeader headerName="brand" id="brand">
<simple>${body.getBrand}</simple>
</setHeader>
<pollEnrich id="_enrich1" strategyRef="merge" timeout="5000">
<simple>sql:SELECT name FROM orders WHERE brand= :#brand</simple>
</pollEnrich>
but I keep getting
PreparedStatementCallback; bad SQL grammar [SELECT name FROM orders WHERE brand= ?]; nested exception is java.sql.SQLException: No value specified for parameter 1
I have also tried setting the useMessageBodyForSql option to true (since this seemed like something that might help?), but nothing I have tried has worked.
I have seen a lot of examples/solutions where people set up the routes in Java, but I assume there must also be a solution for the Blueprint XML?
If anyone got any suggestion or example that would be great.
In Camel versions < 2.16, pollEnrich doesn't have access to the original exchange and therefore cannot read your header, hence the exception. This is documented here: http://camel.apache.org/content-enricher.html
Guessing from your example, a normal enrich should work too and it has access to the original exchange. Try changing 'pollEnrich' to 'enrich'.
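A minimal Blueprint sketch of that suggestion (assuming Camel 2.16+, where enrich accepts an expression for the endpoint URI; the SQL producer then resolves :#brand from the message header at runtime):
<setHeader headerName="brand" id="brand">
    <simple>${body.getBrand}</simple>
</setHeader>
<enrich id="_enrich1" strategyRef="merge">
    <simple>sql:SELECT name FROM orders WHERE brand = :#brand</simple>
</enrich>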

Neo4j JSON APOC load - skip nulls

I'm trying to load some JSON from a REST API (using Neo4j 3.0.4 & APOC apoc-3.0.4.1-all) that has null values in it. This is throwing up this error:
"Cannot merge node using null property value"
The nulls can be spread across multiple keys and it varies which keys have null values. Hence I'd prefer to avoid specifying which individual keys to handle nulls for if possible.
I found the apoc.map.clean(map,[keys],[values]) procedure, but not much info on how to use it. Is this the best procedure to use for every key, or is there a simpler way?
Thanks!
Thanks stdob - I managed to find another post you had written which helped me understand the solution. I needed to substitute the first property for one that was never null.
MERGE (label:Label {key2: json.key2})
ON CREATE SET label.key3 = json.key3, label.key1 = json.key1
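For completeness, a hedged sketch of the apoc.map.clean route (assuming a newer APOC where apoc.map.clean is available as a function; on 3.0.x it may only exist as a procedure, so check the docs for your version):
WITH {key1: null, key2: 'id-1', key3: 'x'} AS json
// MERGE only on the never-null key, then copy the remaining entries
// after stripping the merge key and any null-valued entries
MERGE (n:Label {key2: json.key2})
ON CREATE SET n += apoc.map.clean(json, ['key2'], [null])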

Insert JSON into multiple tables on Database in Mule

I am trying to insert the contents of a JSON payload into a MySQL database using Mule ESB. The JSON looks like:
{
  "id": 106636,
  "client_id": 9999,
  "comments": "Credit",
  "salesman_name": "Salvador Dali",
  "cart_items": [
    { "citem_id": 1066819, "quantity": 3 },
    { "citem_id": 1066820, "quantity": 10 }
  ]
}
In Mule I want to insert all the data in one step, like:
Insert INTO order_header(id,client_id,comments,salesman_name)
Insert INTO order_detail(id,citem_id,quantity)
Insert INTO order_detail(id,citem_id,quantity)
Currently I have come this far in Mule:
[screenshot of the MuleSoft flow]
Use the Bulk Execute operation of the Database connector; it lets you insert into multiple tables in one step.
For example, in Query text:
INSERT INTO order_header (id, client_id, comments, salesman_name)
VALUES (#[payload.id], #[payload.client_id], #[payload.comments], #[payload.salesman_name]);
INSERT INTO order_detail (id, citem_id, quantity)
VALUES (#[payload.id], #[payload.cart_items[0].citem_id], #[payload.cart_items[0].quantity]); etc.
There is an excellent article here http://www.dotnetfunda.com/articles/show/2078/parse-json-keys-to-insert-records-into-postgresql-database-using-mule
that should be of help. You may need to modify it so that the order_header data is written first, then use a collection splitter for the order_detail rows and wrap the whole thing in a transaction.
OK. Since you have already converted the JSON into an object in the flow, you can refer to individual values through their object reference, like obj.id, obj.client_id etc.
Get a database connector next.
Configure your MySQL database in "Connector Configuration".
Operation: Choose "Bulk execute"
In "Query text" : Write multiple INSERT queries and pass appropriate values from Object (converted from JSON). Remember to separate multiple queries with semicolon (;) in Query text.
That's it! Let me know if you face any issues. Hope it works for you.
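A rough XML sketch of that configuration (Mule 3 Database connector; MySQL_Configuration is a placeholder name, and the MEL paths assume the JSON has already been converted to a map earlier in the flow):
<db:bulk-execute config-ref="MySQL_Configuration" doc:name="Bulk execute">
    INSERT INTO order_header (id, client_id, comments, salesman_name)
    VALUES (#[payload.id], #[payload.client_id], #[payload.comments], #[payload.salesman_name]);
    INSERT INTO order_detail (id, citem_id, quantity)
    VALUES (#[payload.id], #[payload.cart_items[0].citem_id], #[payload.cart_items[0].quantity]);
</db:bulk-execute>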

JSON Queries - Failed to execute

So, I am trying to execute a query using the ArcGIS API, but this should apply to any JSON query. I am kind of new to this query format, so I am pretty sure I must be missing something, but I can't figure out what it is.
This page allows for testing queries on the database before I actually implement them in my code. Features in this database have several fields, including OBJECTID and Identificatie. I would like to, for example, select the feature where Identificatie = 1. If I enter this in the Where field, though (Identificatie = 1), a "Failed to execute" error appears. This happens for every field except OBJECTID: querying where OBJECTID = 1 returns the correct results. I am obviously doing something wrong, but I don't get why OBJECTID does work here. A brief explanation (or a link to a page documenting queries for JSON, which I haven't found) would be appreciated!
Identificatie, along with most other fields in the service you're using, is a string field. Therefore, you need to use single quotes in your WHERE clause:
Identificatie = '1'
Or to get one that actually exists:
Identificatie = '1714100000729432'
OBJECTID = 1 works without quotes because it's a numeric field.
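For reference, a typical ArcGIS REST query URL with such a WHERE clause looks roughly like this (host and service path are placeholders):
https://<host>/arcgis/rest/services/<service>/FeatureServer/0/query?where=Identificatie='1714100000729432'&outFields=*&f=json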
Here's a link to the correct query. And here's a link to the query with all output fields included.