Evaluate a string term in Prolog - JSON

I'm trying to create a Prolog program which receives the queries to run as strings (via JSON) and then prints the result (succeeds or fails).
:- use_module(library(http/json)).

happy(alice).
happy(albert).

with_albert(alice).
does_alice_dance :-
    happy(alice),
    with_albert(alice),
    format('When alice is happy and with albert, she dances ~n').

with_alice(albert).
does_albert_dance :-
    happy(albert),
    with_alice(albert),
    format('When albert is happy and with alice, he dances ~n').

fever(martin).
low_appetite(martin).
sick(X) :- fever(X), low_appetite(X).
main(json(Request)) :-
    nl,
    write(Request),
    nl,
    member(facts=Facts, Request),
    format('Facts : ~w ~n', [Facts]),
    atomic_list_concat(Facts, ', ', Atom),
    format('Atom : ~w ~n', [Atom]),
    atom_to_term(Atom, Term, Bindings),
    format('Term : ~w ~n', [Term]),
    write(Bindings).
After executing this query:
main(json([facts=['sick(martin)', 'does_alice_dance',
'does_albert_dance']])).
I got:
[facts=[sick(martin), does_alice_dance, does_albert_dance]]
Facts : [sick(martin),does_alice_dance,does_albert_dance]
Atom : sick(martin), does_alice_dance, does_albert_dance
Term : sick(martin),does_alice_dance,does_albert_dance
[]
true
What I would like to do is evaluate Term. I tried to make it work using is/2 and the call predicate, but neither seems to work.
Using
call(Term)
(which I added at the end of main/1), I got this error:
Sandbox restriction!
Could not derive which predicate may be called from
call(C)
main(json([facts=['sick(martin)',does_alice_dance,does_albert_dance]]))
Using
Result is Term
(Result is a variable I added to store the result), I got this error:
Arithmetic: `does_albert_dance/0' is not a function
Is there any solution to evaluate string expressions in Prolog, please?

As @David Tonhofer said in the first comment, the issue was that I was testing my code on an online editor (which restricts some Prolog features, such as invoking the call predicate). So after adding call/1 at the end of my program:
main(json(Request)) :-
    nl,
    write(Request),
    nl,
    member(facts=Facts, Request),
    format('Facts : ~w ~n', [Facts]),
    atomic_list_concat(Facts, ', ', Atom),
    format('Atom : ~w ~n', [Atom]),
    atom_to_term(Atom, Term, Bindings),
    format('Term : ~w ~n', [Term]),
    write(Bindings),
    call(Term).
and testing it on my local machine, it works fine.
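For completeness, here is an alternative sketch (not from the original answer; run_facts/1 is a hypothetical helper, and it assumes SWI-Prolog, where the yall lambda library is autoloaded): instead of concatenating the fact atoms into one big conjunction, each atom can be converted and called on its own.
% Convert each fact atom (e.g. 'sick(martin)') to a term and evaluate it.
run_facts(Facts) :-
    maplist([FactAtom]>>( atom_to_term(FactAtom, Goal, _Bindings),
                          call(Goal)
                        ), Facts).
Called as run_facts(['sick(martin)', 'does_alice_dance', 'does_albert_dance']), it succeeds only if every individual goal succeeds, and it avoids building a single conjunction term.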

Related

Script-Fu Problem with a simple function that renames the selected layer

I am writing a simple script to rename the selected layer.
Here is the code:
(script-fu-register
  "script-fu-renaming"                        ;code name
  "Renaming Function"                         ;name
  "This is for a question for Stack Overflow" ;description
  "Me"                                        ;author
  "copyright 2020, Me"                        ;copyright
  "Wednesday 8/Jul/2020"                      ;date
  ""                                          ;?
)

(define (script-fu-renaming)
  (gimp-item-set-name (gimp-image-get-active-layer 1) "屈")
)
But when I execute it in the Script-Fu console with "(script-fu-renaming 0)", I get the following error: "Error: ( : 32595) Invalid type for argument 1 to gimp-item-set-name".
So my question is: what is the code to do what I explained above without getting errors?
Like most GIMP functions, gimp-image-get-active-layer returns a list, so you need to extract the first element using car:
(gimp-item-set-name (car (gimp-image-get-active-layer 1)) "屈")

Recommendation for storing and querying DataFactory run log?

I'd like to store and query the OUTPUT and ERROR data generated during a DataFactory run. The data is returned when calling Get-AzDataFactoryV2ActivityRun.
The intention is to use it to monitor possible pipeline execution errors, duration, etc. in an easy and fast way.
The data resembles JSON format. What would be nice is to visualize a summary of each execution through some HTML. Should I store this log in MongoDB?
Is there an easier and better way to centralize the log info of the multiple executions of different pipelines?
ResourceGroupName : Test
DataFactoryName : DFTest
ActivityRunId : 00000000-0000-0000-0000-000000000000
ActivityName : If Condition1
PipelineRunId : 00000000-0000-0000-0000-000000000000
PipelineName : Test
Input : {}
Output : {}
LinkedServiceName :
ActivityRunStart : 03/07/2019 11:27:21
ActivityRunEnd : 03/07/2019 11:27:21
DurationInMs : 000
Status : Succeeded
Error : {errorCode, message, failureType, target}
Activity 'Output' section:
"firstRow": {
"col1": 1
}
"effectiveIntegrationRuntime": "DefaultIntegrationRuntime (West Europe)"
This is probably not the best way to monitor your ADF pipelines.
Have you considered using Azure Monitor?
Find out more:
- https://learn.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor
- https://learn.microsoft.com/en-us/azure/azure-monitor/visualizations

Replace template smart tags <<tag>> with [tag] in MySQL

I have a table named templateType; it has a column named Template_Text.
The template text contains many smart tags, and I need to replace << with [ and >> with ] using MySQL.
Edit from OP's comments:
It is a template with a large text and multiple smart tags. For example: " I <<Fname>> <<Lname>>, <<UserId>> <<Designation>> of xyz organization, Proud to announce...."
Here I need to replace these << with [ and >> with ], so it will look like
" [Fname] [Lname], [UserId] ...."
Based on your comments, your MySQL version does not support the REGEXP_REPLACE() function, so a generic solution is not feasible.
Assuming that your string does not contain additional << and >> other than those following the <<%>> format, we can use the REPLACE() function.
I have also added a WHERE condition, so that we only pick those rows which match the given substring criteria.
UPDATE templateType
SET Template_Text = REPLACE(REPLACE(Template_Text, '<<', '['), '>>', ']')
WHERE Template_Text LIKE '%<<%>>%';
In case the problem is further complex, you may get some ideas from this answer: https://stackoverflow.com/a/53286571/2469308
A couple of replace calls should work:
SELECT REPLACE(REPLACE(template_text, '<<', '['), '>>', ']')
FROM template_type;

json get prolog predicate

I'm trying to create this predicate in Prolog:
The predicate json_get/3 can be defined as json_get(JSON_obj, Fields, Result), which is true when Result is recoverable by following the chain of fields in Fields (a list) starting from JSON_obj. A field represented by N (with N a number greater than or equal to 0) corresponds to an index of a JSON array.
Please help me understand how to follow the chain of fields.
Thanks
edit1:
Of course. A JSON object looks like this: '{"name" : "Aretha", "surname" : "Franklin"}'.
If I call the json_parse predicate on this object, Prolog shows me json_obj([("name", "Aretha"), ("surname", "Franklin")]); let's call this object O.
With json_get I need to extract the name from O, in this way: json_get(O, ["name"], R).
edit2:
With someone's help, this is the predicate now:
json_get(json_obj(JSON_obj), Field, Result) :-
    memberchk((Field, Result), JSON_obj).
json_get(JSON_obj, Fields, Result) :-
    maplist(json_get(JSON_obj), Fields, Result).
So now the problem is nested lists.
For example, with this input:
json_parse('{"nome" : "Zaphod",
"heads" : ["Head1", "Head2"]}', Z),
json_get(Z, ["heads", 1], R).
the output should be R = "Head2", but the predicate doesn't extract the field and fails.
edit3:
This is the output of json_parse:
json_obj([("nome", "Zaphod"), ("heads", json_array(["Head1", "Head2"]))]).
How about this
json_get(json_obj(Obj), [F|Fs], Res) :-
    member((F,R), Obj),
    json_get(R, Fs, Res).
json_get(json_array(Is), [N|Fs], Res) :-
    nth1(N, Is, R),
    json_get(R, Fs, Res).
json_get(Res, [], Res).
This produces "Head1", not "Head2", in your second example. Please explain how that is supposed to work, if you did not just make a typo. (If it is zero-based, you can just change nth1/3 to nth0/3.)
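For reference, a quick check of this version against the parsed term shown above (a sketch, assuming the definition with nth1/3, i.e. one-based indexing):
?- Obj = json_obj([("nome", "Zaphod"), ("heads", json_array(["Head1", "Head2"]))]),
   json_get(Obj, ["heads", 1], R).
% yields R = "Head1"; with nth0/3 instead of nth1/3, the same query yields R = "Head2".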

Difference Between Two Mongo Queries

What is the difference between these two Mongo queries?
db.test.find({"field" : "Value"})
db.test.find({field : "Value"})
The mongo shell accepts both.
There is no difference in your example.
The problem happens when your field names contain characters which cannot be part of an identifier in JavaScript (because the query engine runs in a JavaScript REPL/shell).
For example, user-name, because there is a hyphen in it.
Then you would have to query like db.test.find({"user-name" : "Value"})
For the mongo shell there is no actual difference, but in some other languages it does matter.
The actual point here is presenting valid JSON; I try to do this myself in responses on this forum and others, as JSON is a data format that can easily be "parsed" into native data structures, whereas the alternate "JavaScript" notation may not translate so easily.
There are certain cases where the quoting is required, as in:
db.test.find({ "field-value": 1 })
or:
db.test.find({ "field.value": 1 })
As the values would otherwise be "invalid JavaScript".
But the real point here is adhering to the JSON form.
You can understand with an example: suppose that you have a test collection with two records:
{
    '_id': ObjectId("5370a826fc55bb23128b4568"),
    'name': 'nanhe'
}
{
    '_id': ObjectId("5370a75bfc55bb23128b4567"),
    'your name': 'nanhe'
}
db.test.find({'your name':'nanhe'});
{ "_id" : ObjectId("5370a75bfc55bb23128b4567"), "your name" : "nanhe" }
db.test.find({your name:'nanhe'});
SyntaxError: Unexpected identifier