PrototypeJS breaking SpahQL results - json

I'm having a hard time trying to use SpahQL to query JSON alongside PrototypeJS.
The weird behavior is that when I create a new SpahQL instance, the object comes with several extra functions attached to it. They are not there by default; they are attached by PrototypeJS.
Whenever I call select or any other SpahQL method, those functions are interpreted as result objects and added to the result set.
A simple example to explain my point:
`<script src="path/to/spahql-min.js" type="text/javascript"></script>`
`<script src="path/to/data.json" type="text/javascript"></script>`
`<script type="text/javascript">`
`var db = SpahQL.db(data);`
`var sample = db.select("/*/*");`
`console.log(sample);`
`</script>`
Suppose that data.json contains 50 entries; the console.log will then show an array of 50 objects.
But if I include PrototypeJS in the snippet, the console.log will output an array of 1558 objects:
`<script src="path/to/prototype.js" type="text/javascript"></script>`
`<script src="path/to/spahql-min.js" type="text/javascript"></script>`
`<script src="path/to/data.json" type="text/javascript"></script>`
`<script type="text/javascript">`
`var db = SpahQL.db(data);`
`var sample = db.select("/*/*");`
`console.log(sample);`
`</script>`
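For context: PrototypeJS adds its methods to Array.prototype by plain assignment, so they are enumerable, and any library that walks arrays with an unguarded for...in loop will see those methods as extra entries. A minimal sketch of the effect, assuming (I have not verified this against the SpahQL source) that SpahQL enumerates its result sets this way:

```javascript
// With prototype.js loaded, Array.prototype gains enumerable methods
// (each, collect, detect, ...), so for...in over any array picks them up:
var results = ["a", "b"];

for (var key in results) {
  console.log(key); // "0", "1", plus "each", "collect", ... when Prototype is loaded
}

// The usual guard is to skip inherited properties:
for (var key in results) {
  if (results.hasOwnProperty(key)) {
    console.log(key); // "0", "1" only
  }
}
```

That would explain the jump from 50 results to 1558: the extra entries are Prototype's added methods, not data. The usual fixes are to add a hasOwnProperty guard to the offending loops or to keep PrototypeJS off pages where SpahQL runs.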

Related

In Python, how to concisely get nested values in json data?

I have data loaded from JSON and am trying to extract arbitrary nested values using a list as input, where the list corresponds to the names of successive children. I want a function get_value(data,lookup) that returns the value from data by treating each entry in lookup as a nested child.
In the example below, when lookup=['alldata','TimeSeries','rates'], the return value should be [1.3241,1.3233].
json_data = {'alldata': {'name': 'CAD/USD', 'TimeSeries': {'dates': ['2018-01-01', '2018-01-02'], 'rates': [1.3241, 1.3233]}}}

def get_value(data, lookup):
    res = data
    for item in lookup:
        res = res[item]
    return res

lookup = ['alldata', 'TimeSeries', 'rates']
get_value(json_data, lookup)
My example works, but there are two problems:
It's inefficient - in my for loop, I copy the whole TimeSeries object to res, only to then replace it with the rates list. (As @Andrej Kesely explained, res is just a reference at each iteration, so no data is actually copied.)
It's not concise - I was hoping to find a concise (e.g. one- or two-line) way of extracting the data, using something like list comprehension syntax.
If you want a one-liner and you are using Python 3.8+, you can use an assignment expression (the "walrus operator"):
json_data = {'alldata': {'name': 'CAD/USD', 'TimeSeries': {'dates': ['2018-01-01', '2018-01-02'], 'rates': [1.3241, 1.3233]}}}

def get_value(data, lookup):
    return [data := data[item] for item in lookup][-1]

lookup = ['alldata', 'TimeSeries', 'rates']
print(get_value(json_data, lookup))
Prints:
[1.3241, 1.3233]
I don't think you can do it without a loop, but you could use a reducer here to increase readability.
import functools
functools.reduce(dict.get, lookup, json_data)

Azure ADF - Get field list of .csv file from Lookup activity

Context: Azure ADF. Brief process description:
Get a list of the fields defined in the first row of a (blobbed) .csv file. This is the first step: detect the fields.
The 2nd step would be a comparison against the actual columns of an SQL table.
The 3rd is a stored procedure execution to perform the ALTER TABLE task, finishing with a (customized) table containing all the fields needed to successfully load the .csv file into the SQL table.
To begin my ADF pipeline, I set up a Lookup activity that queries the first line of my blobbed file, with the "First row only" flag ON. As a second pipeline activity, an Append Variable task, I would like to get all the .csv fields (first row) retrieved from the Lookup activity, as a list.
Here is where it becomes a nightmare.
As far as I know, with dynamic content I can get an object with all the values (in a format like {"field1_name":"field1_value_1st_row", "field2_name":"field2_value_1st_row", etc.})
with something like @activity('Lookup1').output.firstRow,
or any single element with @activity('Lookup1').output.firstRow.<element_name>,
but I can't figure out how to get a list of all the field names (keys?) of that object.
I will appreciate any advice, many thanks!
I will skip the Lookup activity part, since it seems you are already familiar with it.
You could use an Azure Function (HTTP trigger) to get the key list of the firstRow JSON object. For example, say your JSON object looks like this, as mentioned in your question:
{"field1_name":"field1_value_1st_row", "field2_name":"field2_value_1st_row"}
Azure Function code:
module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    var array = [];
    for (var key in req.body) {
        array.push(key);
    }
    context.res = {
        body: {"keyValue": array}
    };
};
Then use an Azure Function activity to call it and get the output:
@activity('<AzureFunctionActivityName>').output.keyValue
Use a ForEach activity to loop over the keyValue array:
@item()
Still based on the above sample input data, please refer to my sample code:

dct = {"field1_name": "field1_value_1st_row", "field2_name": "field2_value_1st_row"}
keys = []
for key in dct.keys():
    keys.append(key)
print(keys)
dicOutput = {"keys": keys}
print(dicOutput)
Have you considered doing this in ADF Data Flow? You would map the incoming fields to a SQL dataset without a target schema: define a new table name in the dataset definition and then map the incoming fields from your CSV to a new target table schema definition. ADF will write the rows to a new table using that file's schema.

Node-red payload conversion for mysql node

How to convert a msg.payload like 12345,67890 into "12345","67890"?
Basically I receive a payload with many values and need to pass them to a mysql node.
Thanks in advance.
Depending on the exact type of input, there are a couple of options.
First, the core CSV node will split a payload into either an array or an object (if the first line of multiple lines contains the column names).
Second, if you just have a single input, you can break it up with a function node using the normal NodeJS/JavaScript functions for working with strings, e.g.:
var chunks = msg.payload.split(',');
msg.payload = {};
msg.payload.first = chunks[0];
msg.payload.second = chunks[1];
return msg;
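If the goal is literally the quoted string "12345","67890" (for example, to splice into an SQL IN clause), here is a short sketch for a function node, assuming msg.payload arrives as a comma-separated string:

```javascript
// Turn 12345,67890 into "12345","67890"
msg.payload = msg.payload
    .split(',')
    .map(function (v) { return '"' + v.trim() + '"'; })
    .join(',');
return msg;
```

Note that the mysql node takes its query from msg.topic, and parameterised queries are generally safer than concatenating values into SQL strings.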

How do I fix this Neo4j APOC query that creates nodes from a CSV file?

I want the query to read the CSV file and create a node for each row in the file.
Here's the query:
CALL apoc.load.csv('FILE:///C:/Temp/Test/Test/Neo4jTest/import/Neo4j_AttributeProvenance.csv',{sep:","}) YIELD map
CALL apoc.create.node(map.NodeType, {key:map.NodeID}) yield node return count(*)
Here's the error:
Can't coerce `RootNode` to List<String>
Here's the data file:
NodeType,NodeID,SchemaName,TableName,AttributeName,DataType,PreviousNodeID
RootNode,queryprocessingtest.q01.testfieldatablec ,queryprocessingtest,q01,testfieldatablec ,varchar,
Node,queryprocessingtest.qc.testfieldatablec ,queryprocessingtest,qc,testfieldatablec ,varchar,queryprocessingtest.q01.testfieldatablec
Node,queryprocessingtest.ttablec.testfieldatablec ,queryprocessingtest,ttablec,testfieldatablec ,varchar,queryprocessingtest.q01.testfieldatablec
The first argument of the apoc.create.node procedure must be a list of label strings.
So you need to wrap the value of map.NodeType in a list:
CALL apoc.load.csv('FILE:///C:/Temp/Test/Test/Neo4jTest/import/Neo4j_AttributeProvenance.csv',{sep:","}) YIELD map
CALL apoc.create.node([map.NodeType], {key:map.NodeID}) yield node return count(*)

How to make a query in MongoDB using a JSON?

Is there any way to make a query in MongoDB using a JSON object and return a document if one field of the JSON matches a field in the database?
For example, I have this object called keysArray:
{ house: 'true', garden: 'false' }
and I would like to make a query in Mongo passing this object as the query filter, returning any object in my database that matches at least one of those fields:
keysArray.forEach(function(key) {
    collection.find({keysArray}, function(err, propertyMatch) {
        console.log(propertyMatch);
    });
});
I got no objects back, even though I have one object in my database that matches these fields.
Thanks in advance
...and I would like to make a query in Mongo passing this object as a query field and return if some object in my database matches with at least one of those fields.
It sounds like OR logic, if I understood it well.
In this specific case it's not possible to pass the JSON-like object to the query directly, as it would be treated as an implicit AND condition.
So you should first build an $or expression and use it in collection.find(), something like this:
var myjson = {'status': 32, 'profile': {$exists: false}};

function build_logic_or(json) {
    var orExpr = [];
    for (var field in json) {
        var expr = {};
        expr[field] = json[field];
        orExpr.push(expr);
    }
    return {'$or': orExpr};
}
It would build an expression like this:
{"$or":[{"status":32},{"profile":{"$exists":false}}]}
So:
db.collection.find(build_logic_or(myjson))
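Applied to the keysArray from the question, here is a quick usage sketch (assuming collection is an open collection handle from the Node.js driver):

```javascript
var keysArray = { house: 'true', garden: 'false' };

// Builds {"$or":[{"house":"true"},{"garden":"false"}]} and runs it
collection.find(build_logic_or(keysArray)).toArray(function (err, docs) {
    if (err) throw err;
    console.log(docs); // documents matching at least one of the fields
});
```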