Neo4j: creating multiple nodes using multiple JSON params

I am new to Neo4j and am still figuring out why this is failing.
Here is the query that I am passing (as JSON via the REST API):
FOREACH(p in {props} |
  MERGE (n:Router {NodeId:p.NodeId})-[r:has_interface]->(I:Interface {IfIPAddress:p.IfIPAddress})
  ON CREATE SET I = p
  ON MATCH SET I = p)
props is an array of maps that I am passing in params.
Each entry in props has a NodeId property.
This is what I want to achieve
1) I have already created thousands of nodes labelled Router, each with a NodeId property.
2) I want to create Interfaces for these nodes.
3) Now, if the NodeId in the props collection matches a Router's NodeId, I want to create an Interface node related to that Router node via a has_interface relationship.
When I run this query using curl and JSON, it gives me an exception saying:
"message" : "Query not prepared correctly!",
"exception" : "InternalException"
What can the issue be? I have checked the query many times and it seems to be correct.

You are probably using Neo4j 2.0.0, which had a bug here. Use 2.0.1 (or above) and you're fine, as your syntax is correct.
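For reference, a minimal sketch of such a request against the legacy Cypher REST endpoint (the endpoint URL and the sample values in props are assumptions for illustration, not taken from the question):

curl -X POST http://localhost:7474/db/data/cypher \
  -H "Content-Type: application/json" \
  -d '{
    "query": "FOREACH(p in {props} | MERGE (n:Router {NodeId:p.NodeId})-[r:has_interface]->(I:Interface {IfIPAddress:p.IfIPAddress}) ON CREATE SET I = p ON MATCH SET I = p)",
    "params": {"props": [{"NodeId": 1, "IfIPAddress": "10.0.0.1"}]}
  }'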

Related

How to capture incorrect (corrupt) JSON records in (Py)Spark Structured Streaming?

I have an Azure Event Hub which is streaming data (in JSON format).
I read it as a Spark dataframe and parse the incoming "body" with from_json(col("body"), schema), where schema is pre-defined. In code, it looks like:
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import *
schema = StructType().add(...) # define the incoming JSON schema
df_stream_input = (spark
    .readStream
    .format("eventhubs")
    .options(**ehConfInput)
    .load()
    .select(from_json(col("body").cast("string"), schema))
)
And now, if there is some inconsistency between the incoming JSON's schema and the defined schema (e.g. the source Event Hub starts sending data in a new format without notice), the from_json() function will not throw an error; instead, it will put NULL into the fields which are present in my schema definition but not in the JSON the Event Hub sends.
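For illustration, a minimal (batch, not streaming) sketch of this behaviour; the two-field schema and the sample row are made up for this example:

df = spark.createDataFrame([('{"a": 1}',)], ["body"])
schema2 = StructType().add("a", IntegerType()).add("b", StringType())
df.select(from_json(col("body"), schema2).alias("json")).show()
# the field "b", missing from the JSON, silently becomes NULL instead of raising an error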
I want to capture this information and log it somewhere (Spark's log4j, Azure Monitor, warning email, ...).
My question is: what is the best way to achieve this?
Some of my thoughts:
The first thing I can think of is to have a UDF which checks for the NULLs and raises an exception if there is any problem. I believe it is not possible to send logs to log4j via PySpark from within the UDF, as the "spark" context cannot be initiated within the UDF (on the workers), and one would normally use the default:
log4jLogger = sc._jvm.org.apache.log4j
logger = log4jLogger.LogManager.getLogger('PySpark Logger')
Second thing I can think of is to use "foreach/foreachBatch" function and put this check logic there.
But I feel both of these approaches are too custom; I was hoping that Spark has something built in for this purpose.
tl;dr You have to do this check logic yourself using foreach or foreachBatch operators.
It turns out I was mistaken in thinking that the columnNameOfCorruptRecord option could be an answer. It will not work.
Firstly, it won't work due to this:
case _: BadRecordException => null
And secondly, due to this line, which simply disables any other parsing mode (including PERMISSIVE, which seems to be used alongside the columnNameOfCorruptRecord option):
new JSONOptions(options + ("mode" -> FailFastMode.name), timeZoneId.get))
In other words, your only option is to use the 2nd item in your list, i.e. foreach or foreachBatch and handle corrupted records yourself.
A solution could be to use from_json while keeping the initial body column. Any record with incorrect JSON would end up with a null result column, and foreach* would catch it, e.g.
def handleCorruptRecords(row):
    # if row.json is None, the body was corrupt
    # handle it (log it, persist it for inspection, ...)
    ...

df_stream_input = (spark
    .readStream
    .format("eventhubs")
    .options(**ehConfInput)
    .load()
    .select("body", from_json(col("body").cast("string"), schema).alias("json"))
)
query = df_stream_input.writeStream.foreach(handleCorruptRecords).start()
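For the foreachBatch variant mentioned above, a minimal sketch could look like this (the dead-letter path is a made-up example):

def handleCorruptBatch(batch_df, batch_id):
    # rows where parsing produced NULL, i.e. the body did not match the schema
    corrupt = batch_df.filter(col("json").isNull())
    if corrupt.count() > 0:
        # log, alert, or persist the raw bodies for inspection
        corrupt.select(col("body").cast("string")).write.mode("append").text("/tmp/corrupt-json")

query = (df_stream_input
    .writeStream
    .foreachBatch(handleCorruptBatch)
    .start())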

Insert into JSON using JSONiq

We are writing a JSONiq query to insert new properties into a JSON object and return the updated JSON from the query.
Query:
jsoniq version "1.0";
let $users := {
  "name" : "Deadbeat Jim",
  "address" : "1 E 161st St, Bronx, NY 10451",
  "risk tolerance" : "high"
}
insert json {"status" : "credit card declined"} into $users
return $users
$users holds the input JSON; we are trying to add one more property using the JSONiq insert command, as mentioned in the JSONiq documentation.
We are getting below exception:
java.lang.RuntimeException: (no URI):13,1: static error [err:XPST0003]: invalid expression: syntax error, unexpected expression (missing comma "," between expressions?)
Questions :
Is the query correct? If not, how can it be made correct syntactically/logically?
Are there any good resources available online for JSONiq with examples ?
Here are some more explanations:
The way JSONiq updates work is identical to the way XQuery updates work. JSONiq updates are declarative: a JSONiq update program returns, in addition to an empty sequence in the data model, what is called a pending update list (PUL), which is a list of updates (deletions, replacements, renamings, insertions, etc) to be applied to some documents.
JSONiq update has snapshot semantics, meaning that no side effects occur during the evaluation of the main expression. Instead, after the PUL has been computed, the engine may propagate the changes specified by the PUL to underlying storage (such as a file on disk, or a document store).
A syntactically correct version of the question's example would be:
jsoniq version "1.0";
let $users := {
  "name" : "Deadbeat Jim",
  "address" : "1 E 161st St, Bronx, NY 10451",
  "risk tolerance" : "high"
}
return insert json {"status" : "credit card declined"} into $users
However, in this case, the PUL returned contains changes against a JSON object created on the fly, in memory. The lifetime of this object is only that of the evaluation of the query, so that this program simply has no visible effect.
If the collection function is in some way mapped to a database in a document store like Couchbase or MongoDB (that is, if the engine is documented and configured to do this), the following query will semantically apply an update to this document store.
jsoniq version "1.0";
let $users := collection("users")[$$.name eq "Jim"]
return insert json {"status" : "credit card declined"} into $users
A copy-modify-return expression (also called a transform expression, as in XQuery; see the other answer on this page) provides a way to apply changes in memory without losing them, and without any persistent storage. It:
creates a JSON object (as a copy of another), or an XML node, etc
modifies that object by applying the PUL obtained from the modify expression (important: this has no visible side effects, as only a copy is being modified)
returns the modified copy.
For advanced users: in this case, the copy clause contains a constructor that builds a fresh object, so the optimizer can actually skip the copying.
This is the way to make JSON updates work using JSONiq: we need to use the copy-modify-return clauses:
jsoniq version "1.0";
copy $users := {
  "name" : "Deadbeat Jim",
  "address" : "1 E 161st St, Bronx, NY 10451",
  "risk tolerance" : "high"
}
modify insert json {"status" : "credit card declined"} into $users
return $users
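Assuming the engine applies the pending update list as described in the other answer, this query should return the copied object with the new property, e.g.:

{
  "name" : "Deadbeat Jim",
  "address" : "1 E 161st St, Bronx, NY 10451",
  "risk tolerance" : "high",
  "status" : "credit card declined"
}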
Hope this is helpful to someone.

Check if value exists in Lua table

I am running Lua on an ESP8266 Wi-Fi module with the NodeMCU firmware. My application is listening on a TCP port for JSON requests. When I get a request, I parse it using:
jsonRequest = json.decode(request)
So then I can access the desired value with:
jsonRequest.object.state
Everything works perfectly until I send an invalid JSON (without "object"). When that happens I get this error: Lua API (attempt to index a nil value), and my program stops execution.
MY PROBLEM: I would like to check if my table contains that key before accessing it, but I can't find a way to do it.
I could loop through all the keys with the pairs function and check if the right one is there, but that would require lots of code because I have multiple nested objects in my JSON.
Any ideas?
To check if the table jsonRequest contains the key "object", use:
if jsonRequest.object ~= nil then
If the values stored in the table will never be the boolean value false, you can also use:
if jsonRequest.object then
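Since the question mentions multiple nested objects, a small helper can guard a whole chain of lookups. Here is a minimal sketch (getPath is a name made up for this example):

-- returns the value at the given key path, or nil if any level is missing
local function getPath(t, ...)
  local node = t
  for _, key in ipairs({...}) do
    if type(node) ~= "table" then return nil end
    node = node[key]
  end
  return node
end

local state = getPath(jsonRequest, "object", "state")
if state ~= nil then
  -- safe to use state here
end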

Handling JSON posts in Yesod

An AngularJS client is sending a JSON post to a Yesod server to update a person record. The post can contain the following fields, each of which is optional (the client can send any subset of these):
firstName
lastName
...
active
To limit the discussion a bit, let's assume the client, at the moment, only wants to toggle activity, so it will only send the active value (it specifically wants to keep the rest intact) and the message will be:
{
  "active": 0
}
On the server now, we know the id of the person from the URL (eg. /api/v1.0/person/1) but the client does not send a complete Person entity, so the usual:
person <- requireJsonBody :: Handler Person
_ <- runDB $ update personId ...
will not work here. It would seem a more flexible approach is needed. Maybe something along the lines of:
mapToUpdate :: PersonInfo -> [Update PersonInfo]
where PersonInfo is an instance of FromJSON and is defined to match Person but has all the fields of type Maybe a. However, that seems totally contrary to DRY.
So to wrap this up: going back to assuming the client can send any subset of a Person's fields, how would one handle such a use case nicely in Yesod?
You could imagine even more horrifying scenarios, for example one JSON post needing to be mapped to updates of multiple database entities (API entities do not have to map 1:1 to database entities).
I've never tried this, but here's a theoretical approach:
Grab the current value from the database
Serialize that value to an aeson Value by calling toJSON
Write some kind of "update" algorithm that merges two Values together, e.g. mergeValues :: Value -> Value -> Value (see the sketch after this list)
Merge the original entity with the value uploaded by the user
Try to parse the resulting value with parseJSON
If it succeeds, use replace to put it back into the database
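A minimal sketch of the merge step, assuming aeson's Value type from a version before 2.0, where Object wraps a HashMap (for aeson >= 2.0, substitute Data.Aeson.KeyMap for the HashMap functions):

import Data.Aeson (Value(..))
import qualified Data.HashMap.Strict as HM

-- shallow merge, preferring fields from the uploaded update
mergeValues :: Value -> Value -> Value
mergeValues (Object orig) (Object upd) = Object (HM.union upd orig)
mergeValues _ upd = upd

Applied to the example above, merging the stored person with {"active": 0} would override only active and leave every other field intact.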

Grails, create domain object from JSON string with hasMany relation

I'm trying to parse a Grails parameter map to a JSON string, and then back to a parameter map. (For saving HTML form entries with constraint violations.)
Everything is fine as long as there is no hasMany relationship in the parameter-map.
I'm using
fc.parameter = params as JSON
to save the params as a JSON string.
Later I'm trying to rebuild the parameter map and create a new Domain-Object with it:
new Foo(JSON.parse(fc.parameter))
Everything is fine using only 1:1 relationships (states).
[states:2, listSize:50, name:TestFilter]
But when I try to rebuild a params-map with multi-select values (states)
[states:[1,2], listSize:50, name:TestFilter]
I'm getting this IllegalStateException:
Failed to convert property value of type org.codehaus.groovy.grails.web.json.JSONArray to required type java.util.Set for property states; nested exception is java.lang.IllegalStateException: Cannot convert value of type [java.lang.String] to required type [de.gotosec.approve.State] for property states[0]: no matching editors or conversion strategy found
I tried to use this, but without success:
JSON.use("deep") {
new Foo(JSON.parse(fc.parameter))
}
You can use JsonSlurper instead of Grails' converters.JSON; it maps JSON objects to Groovy Maps. I think this link might also help you.
Edit: Now, if the problem is binding the params map to your domain, you should try using the bindData() method, like:
bindData(foo, params)
Note that this straightforward use only works if you're calling bindData inside a controller.
What seems to be happening in your case is that Grails is trying to bind a concrete List type (ArrayList in the case of JsonSlurper, JSONArray in the case of converters.JSON) into a Set of properties (which is the default data structure for one-to-many associations). I would have to take a look at your code to confirm that. But, as you did substitute states: [1,2] for a method of your app, try another test to confirm this hypothesis. Change:
states:[1,2]
for
states:[1,2] as Set
If this is really the problem and not even bindData() works, take a look at this for a harder way to make it work using object marshalling and converters.JSON. I don't know if it's practical for you to use it in your project, but it sure works nicely ;)
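For instance, a rough sketch of that test inside a controller, reusing the Foo and fc names from the question (the as Set coercion is the hypothesis being tested, not a confirmed fix):

def parsed = JSON.parse(fc.parameter)
// coerce the multi-select JSONArray into a Set before binding
parsed.states = (parsed.states ?: []) as Set
def foo = new Foo()
bindData(foo, parsed)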