While using Octave in C++ code, if the octave_value_list contained a double matrix, I would retrieve it as out(0).matrix_value() (out is the octave_value_list). Is there something similar for a symbolic variable/expression? I do not want to retrieve it as a string value, as I cannot use the string for further calculations.
In Python it is common to marshal objects from JSON. I am seeking similar functionality in Prolog, either SWI-Prolog or Scryer Prolog.
For instance, if we have JSON stating
{'predicate':
{'mortal(X)', ':-', 'human(X)'}
}
I'm hoping to find something like load_predicates(j) and have that data immediately consulted. A version of json.dumps() and loads() would also be extremely useful.
EDIT: For clarity, this will allow interoperability with client applications which will be collecting rules from users. That application is probably not in Prolog, but something like React.js.
I agree with the commenters that it would be easier to convert the JSON data to a .pl file in the proper format first and then load that.
However, you can load the predicates from JSON directly, convert them to a representation that Prolog understands, and use assertz to add them to the knowledge base.
If indeed the data contains all the syntax needed for a predicate (as is the case in the example data in the question) then converting the representation is fairly simple as you just need to concatenate the elements of the list into a string and then create a term out of the string. Note that this assumption skips step 2 in the first comment by Guy Coder.
Note that the Prolog JSON library is rather strict about the format it accepts: only double quotes are valid as string delimiters, and lists of plain values (i.e., not key-value pairs) need to use the notation [a,b,c] instead of {a,b,c}. So first the example data needs to be rewritten:
{"predicate":
["mortal(X)", ":-", "human(X)"]
}
Then you can load it in SWI-Prolog. Minimal working example:
:- use_module(library(http/json)).
% example fact for testing
human(aristotle).
load_predicate(J) :-
    % open the file
    open(J, read, JSONstream, []),
    % parse the JSON data
    json_read(JSONstream, json(L)),
    % close the stream once the data has been parsed
    close(JSONstream),
    % check for an occurrence of the predicate key with value L2
    member(predicate=L2, L),
    % concatenate the list into a string
    atomics_to_string(L2, S),
    % create a term from the string
    term_string(T, S),
    % add the term to the knowledge base
    assertz(T).
Example run:
?- consult('mwe.pl').
true.
?- load_predicate('example_predicate.json').
true.
?- mortal(X).
X = aristotle.
Detailed explanation:
The predicate json_read stores the data in the following form:
json([predicate=['mortal(X)', :-, 'human(X)']])
This is a list inside a json term with one element for each key-value pair. The element has the syntax key=value. In the call to json_read you can already strip the json() term and store the list directly in the variable L.
Then member/2 is used to search for the compound term predicate=L2. If you have more than one predicate in the JSON file, turn this into a forall/2 call or a recursive predicate so that every predicate in the list is processed (see the sketch after this explanation).
Since the list L2 already contains a syntactically well-formed Prolog predicate it can just be concatenated, turned into a term using term_string/2 and asserted. Note that in case the predicate is not yet in the required format, you can construct a predicate out of the various pieces using built-in predicate manipulation functionality, see https://www.swi-prolog.org/pldoc/doc_for?object=copy_predicate_clauses/2 for some pointers.
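For instance, a minimal sketch (not part of the original answer) that reuses the same calls but asserts every predicate entry found in the file could use forall/2 like this:

load_predicates(J) :-
    open(J, read, JSONstream, []),
    json_read(JSONstream, json(L)),
    close(JSONstream),
    % assert every key-value pair whose key is predicate
    forall(member(predicate=L2, L),
           ( atomics_to_string(L2, S),
             term_string(T, S),
             assertz(T)
           )).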
I'm trying to set a scalar value that I get from a script into a string through the Patch Editor, in order to feed it into a 2D Text object. I can't do it via script because I need it to look like it is counting up. Does anyone know how to convert a scalar value to an int using only the Patch Editor?
I am trying to simplify a script I wrote by creating some functions; however, I can't seem to get them to work the way I want.
An example function accepts two or more parameters but should only return one value. When I run it, it returns every value, which in this case means both of the parameters that were passed to the function.
I understand from some research that PowerShell returns more than what is explicitly returned, which is a little confusing, so I have tried some suggestions about assigning those extra values to $null, but to no avail.
My function and its result when run looks like the following:
function postReq($path, $payload) {
    Write-Host $path
}
postReq($url, $params)
> [path value (is correct)] [$payload (shouldn't be included here)]
The way you are calling the function does not do what you intend. You write:
postReq($url, $params)
That line is valid PowerShell syntax, but it does not do what you expect. In PowerShell, functions are called without ( and ) and without commas, as in:
postReq $url $params
When you use ( ) around a comma-separated list, you are passing a single array argument to your function.
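For the function in the question, either of these forms binds both parameters separately:

postReq $url $params                  # positional arguments
postReq -path $url -payload $params   # named parameters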
Solution
You're not invoking the function how you think you are. PowerShell functions are called without parentheses and the argument delimiter is whitespace, not a comma. Your function call should look like this:
postReq $url $params
What is wrong with using traditional C#-like syntax?
Calling it like you are above as postReq($url, $params) has two undesired consequences here:
The parentheses indicate a sub-expression: the code inside the parentheses runs first, before the outer code, and its result is treated as a single argument. If you are familiar with solving algebraic equations, the order of operations is the same as in PEMDAS: parentheses first.
Whitespace, not commas (,), is the argument delimiter for PowerShell functions. The commas do mean something in PowerShell syntax, however: they build an array. @(1, 2, 3, 4) is functionally the same as 1, 2, 3, 4 in PowerShell. In your case, you are rolling both parameters into a single array argument of @($url, $params), which the stream-writing cmdlets then render as one joined string.
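A quick way to see the difference for yourself (Show-Args and the sample values are made-up names for illustration):

function Show-Args($first, $second) {
    Write-Host "first : $first"
    Write-Host "second: $second"
}

Show-Args 'https://example.com' 'payload'    # two arguments: $first and $second are both set
Show-Args('https://example.com', 'payload')  # one array argument: $first holds both values, $second stays empty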
But what about object instance and static methods?
This can be confusing because object instance and class (i.e., static) methods ARE called with the traditional C#-like syntax: you DO need the parentheses to indicate parameter values, and commas are the delimiter. For example:
([DateTime]::Now).ToString()
returns the current local time as a string, running the ToString() method on the DateTime object returned by [DateTime]::Now. If you used one of its overloads, shown below, you would separate each argument with a comma, and regardless of whether you need to pass in arguments or not, you must still include the parentheses:
OverloadDefinitions
-------------------
string ToString()
string ToString(string format)
string ToString(System.IFormatProvider provider)
string ToString(string format, System.IFormatProvider provider)
string IFormattable.ToString(string format, System.IFormatProvider formatProvider)
string IConvertible.ToString(System.IFormatProvider provider)
If you were to omit the parentheses on the parameterless overload above, you would get the preceding output showing the overload definitions, unlike the behavior when calling a function with no arguments.
It's a little odd, but remember that in any programming language functions and methods are similar but distinct, and PowerShell is no different, except that functions and methods are less alike than they are in other languages. A good rule of thumb is:
Invoke methods with C# syntax and functions with shell syntax.
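For example (the format string and path below are arbitrary):

# Method: C#-style, parentheses and comma-separated arguments
[DateTime]::Now.ToString('yyyy-MM-dd', [System.Globalization.CultureInfo]::InvariantCulture)

# Function/cmdlet: shell-style, no parentheses, whitespace-separated arguments
Get-ChildItem -Path $env:TEMP -Filter '*.log'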
According to http://www.cocos2d-x.org/wiki/Value,
Value can handle strings as well as int, float, bool, etc.
I'm confused when I have to make a choice between using
std::string
or
Value
In what circumstances should I use Value over std::string, and vice versa?
I think you have misunderstood the Value object. As written in the documentation you linked to:
cocos2d::Value is a wrapper class for many primitives ([...] and std::string) plus [...]
So really Value is an object that wraps a bunch of other types of variables, which allows cocos2d-x to have loosely-typed structures like the ValueMap (a hash of strings to Values - where each Value can be a different type of object) and ValueVector (a list of Values).
For example, say you wanted a configuration hash whose keys are all strings but whose values are of different types. In vanilla C++ you would have to create a separate data structure for each type of value you want to save, but with Value you can just do:
std::unordered_map<std::string, cocos2d::Value> configuration;
configuration["numEnemies"] = Value(10);
configuration["gameTitle"] = Value("Super Mega Raiders");
It's just a mechanism to create some loose typing in C++ which is a strongly-typed language.
You can save a string in a Value with something like this:
std::string name = "Vidur";
Value nameVal = Value(name);
And then later retrieve it with:
std::string retrievedName = nameVal.asString();
If you attempt to read a Value as the wrong type, it will fail at runtime, since this isn't something that the compiler can figure out.
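If you want to guard against that, here is a minimal sketch, assuming the cocos2d-x 3.x API (Value::getType() and CCLOG):

#include "cocos2d.h"
USING_NS_CC;

void logIfString(const Value& v)
{
    // only convert when the wrapped type really is a string
    if (v.getType() == Value::Type::STRING)
        CCLOG("value: %s", v.asString().c_str());
    else
        CCLOG("value is not a string");
}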
Do let me know if you have any questions.
Machine learning algorithms in OpenCV appear to use data read in CSV format. See for example this cpp file. The data is read into an OpenCV machine learning class CvMLData using the following code:
CvMLData data;
data.read_csv( filename );
However, there does not appear to be any readily available documentation on the required format for the csv file. Does anyone know how the csv file should be arranged?
Other (non-OpenCV) programs tend to have a line per training example, and begin with an integer or string indicating the class label.
Reading the source for that class, particularly the str_to_flt_elem function, and the class documentation, I conclude that the valid formats for individual items in the file are:
Anything that can be parsed to a double by strtod
A question mark (?) or the empty string to represent missing values
Any string that doesn't parse to a double.
Items 1 and 2 are only valid for features. Anything matched by item 3 is assumed to be a class label, and as far as I can deduce the order of the columns doesn't matter. The read_csv function automatically assigns each column in the CSV file the correct type, and (if you want) you can override which column is the response with set_response_idx. As for the delimiter, you can use the default (,) or set it to whatever you like before calling read_csv with set_delimiter (as long as you don't use the decimal point).
So this should work for example, for 6 datapoints in 3 classes with 3 features per point:
A,1.2,3.2e-2,+4.1
A,3.2,?,3.1
B,4.2,,+0.2
B,4.3,2.0e3,.1
C,2.3,-2.1e+3,-.1
C,9.3,-9e2,10.4
You can move your text label to any column you want, or even have multiple text labels.
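Putting the pieces together, here is a minimal sketch, assuming OpenCV 2.x and a file named train.csv in the layout above (the file name and the response column index are placeholders):

#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>
#include <cstdio>

int main()
{
    CvMLData data;
    data.set_delimiter(',');                 // ',' is already the default
    if (data.read_csv("train.csv") != 0) {   // returns 0 on success
        std::printf("could not read train.csv\n");
        return 1;
    }
    data.set_response_idx(0);                // treat the first column as the class label
    const CvMat* values = data.get_values(); // parsed data matrix
    std::printf("%d rows, %d columns\n", values->rows, values->cols);
    return 0;
}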