Is it possible to have an input argument act as an output argument in JNI?
Suppose I have:
void average(jint n1, jint n2, jint av);
where av is meant to be an output rather than an input.
Not in that fashion, no.
In order for the callee to modify an argument, it must have a reference (i.e. a pointer) to that argument. jint and other primitives are passed by value, which means the value of the argument is copied from one location (memory or register) to another. Any changes the callee makes to that argument are made to its own local copy, which has no effect on the places it was copied from.
In order to have an argument "modified", you have to pass a reference (pointer) to it instead. The most straightforward way of doing this in Java is to pass a single-element primitive array, and have the callee replace the first element with the "returned" value.
It's usually just easier to return the value, or an aggregate object which contains multiple return values.
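As a minimal sketch of the array approach (the class and method names here are hypothetical), the Java side would declare something like:

public static native void average(int n1, int n2, int[] av);

and the native side would write the result into the first element:

#include <jni.h>

/* Native implementation for a hypothetical class Example in the default package. */
JNIEXPORT void JNICALL
Java_Example_average(JNIEnv *env, jclass cls, jint n1, jint n2, jintArray av)
{
    jint result = (n1 + n2) / 2;                        /* compute the "returned" value */
    (*env)->SetIntArrayRegion(env, av, 0, 1, &result);  /* store it in av[0] for the caller to read */
}

The caller then passes a one-element int[] and reads av[0] after the call.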
I have an array of structs that has been decoded from a JSON file. Each struct has a stored property that is a variable-dimension array stored in an enum associated value.
Some of the structs have property Enum.array2D([[Float]]) and others have Enum.array3D([[[Float]]])
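For reference, a minimal sketch of that setup (the type names are placeholders; the property name matches the snippet below):

enum NestedArray {
    case array2D([[Float]])
    case array3D([[[Float]]])
}

struct Item {
    let enumProperty: NestedArray
}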
Is there a simple or elegant way to extract a variable-type associated value from the struct's enum property, maybe with a getter function? Currently, the only way I know is an external switch every time I want to access the underlying value. For example, somewhere in external code I have to use this whenever I want to get these values and manipulate them:
switch structArray[index].enumProperty {
case .array2D(let array2Val):
// Do stuff with the 2D array
case .array3D(let array3Val):
// Do stuff with the 3D array
}
I have considered adding each of the two possible types as optionals and setting the correct one in the init function with a switch, but that seems inefficient as I’ll have the arrays stored in two places.
I am using Azure Data Factory. I'm trying to use a String variable to lookup a Key in a JSON array and retrieve its Value. I can't seem to figure out how to do this in ADF.
Details:
I have defined a Pipeline Parameter named "obj", type "Object" and content:
{"values":{"key1":"value1","key2":"value2"}}
(screenshot: parameter definition)
I need this pipeline to look up "key1" and get back "value1", look up "key2" and get back "value2", and so on. I'm planning to use "obj" as a dictionary to accomplish this.
Technically speaking, if I want to find the value for key2, I can use the code below, and "value2" will be returned:
#pipeline().parameters.obj.values.key2
What I can't figure out is how to do it using a variable (instead of the hardcoded "key2").
To clear things up: I have a for-loop and, inside it, just a Copy activity (screenshot: for-each contents).
The purpose of the Copy activity is to copy the file named item().name, but save it in ADLS as whatever item().name translates to according to "obj".
This is how the for-loop could be built in Python (screenshot: python-for-loop).
In ADF I tried a lot of things (using concat, replace...), but none worked. The simplest would be this:
#pipeline().parameters.obj.values.item().name
but it throws the following error:
{"code":"BadRequest","message":"ErrorCode=InvalidTemplate, ErrorMessage=Unable to parse expression 'pipeline().parameters.obj.values.item().name'","target":"pipeline/name_of_the_pipeline/runid/run_id","details":null,"error":null}
So, can you please give any ideas how to define my expression?
I feel this must be really obvious, but I'm not getting there.....
Thanks.
Hello fellow Pythonista!
The solution in ADF is actually to reference just as you would in Python by enclosing the 'variable' in square brackets.
I created a pipeline with a parameter obj like yours
and, as a demo, the pipeline has a single Set Variable activity that got the value for key2 into a variable.
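For example, as a sketch in the same notation as your examples (the hardcoded key is just for illustration), the Set Variable expression was:

#pipeline().parameters.obj.values['key2']

and with a dynamic key taken from the ForEach item, the bracket lookup becomes:

#pipeline().parameters.obj.values[item().name]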
This is documented but you need X-ray vision to spot it here.
Based on your comments, this is the output of a Filter activity. The Filter activity's output is an object that contains an array named value, so you need to iterate over the "output.value":
Inside the ForEach you reference the name of the item using "item().name":
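As a sketch of those two expressions (the Filter activity name here is hypothetical):

ForEach Items: #activity('FilterFiles').output.value
Inside the ForEach: #item().name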
EDIT BASED ON MORE INFORMATION:
The task is to now take the #item().name value and use it as a dynamic property name against a JSON array. This is a bit of a challenge given the limited nature of the Pipeline Expression Language (PEL). Array elements in PEL can only be referenced by their index value, so to do this kind of complex lookup you will need to loop over the array and do some string parsing. Since you are already inside a FOR loop, and nested FOR loops are not supported, you will need to execute another pipeline to handle this process AND the Copy activity. Warning: this gets ugly, but works.
Child Pipeline
Define a pipeline with two parameters, one for the values array and one for the item().name:
When you execute the child pipeline, pass #pipeline().parameters.obj.values as "valuesArray" and #item().name as "keyValue".
You will need several string parsing operations, so create some string variables in the Pipeline:
In the Child Pipeline, add a ForEach activity. Check the Sequential box and set the Items to the valuesArray parameter:
Inside the ForEach, start by cleaning up the current item and storing it as a variable to make it a little easier to consume.
Parse the object key out of the variable [this is where it starts to get a little ugly]:
Add an IF condition to test the value of the current key to the keyValue parameter:
Add an activity to the TRUE condition that parses the value into a variable [gets really ugly here]:
Meanwhile, back at the Pipeline
At this point, after the ForEach, you will have a variable (IterationValue) that contains the correct value from your original array:
Now that you have this value, you can use that variable as a DataSet parameter in the Copy activity.
I am trying to simplify a script I wrote by creating some functions, however, I can't seem to get them to work the way I want.
An example function will accept two or more parameters but should return only one value. When I do this, it returns every value, which in this case means both of the parameters passed to the function.
I understand from some research that PowerShell returns more than is explicitly asked for, which is a little confusing, so I have tried some suggestions about assigning those other values to $null, but to no avail.
My function and its result when run looks like the following:
function postReq($path, $payload) {
    Write-Host $path
}
postReq($url, $params)
> [path value (is correct)] [$payload (shouldn't be included here)]
The syntax you are using to call the function is not correct semantically. You write:
postReq($url, $params)
But that is not the correct syntax in PowerShell. (It's valid syntactically, but not semantically.) In PowerShell, functions are called without ( and ) and without ,, as in:
postReq $url $params
When you use ( ), you are passing a single argument (here, a two-element array) to your function.
Solution
You're not invoking the function how you think you are. PowerShell functions are called without parentheses and the argument delimiter is whitespace, not a comma. Your function call should look like this:
postReq $url $params
What is wrong with using traditional C#-like syntax?
Calling it like you are above as postReq($url, $params) has two undesired consequences here:
The parentheses indicate a sub-expression: the code inside them runs first, before the outer code, and the result is treated as a single argument. If you are familiar with solving algebraic equations, the order of operations is the same as in PEMDAS - parentheses first.
Whitespace ( ), and NOT commas (,), is the argument delimiter for PowerShell functions. However, the commas do mean something in PowerShell syntax: they build a collection, so @(1, 2, 3, 4) is functionally the same as 1, 2, 3, 4. In your case above, you are rolling both parameters into a single array argument of ($url, $params), which the stream-writing cmdlets render by joining the elements with a separator (a space by default).
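A quick sketch showing the two call styles side by side (the URL and payload values are placeholders):

function postReq($path, $payload) {
    Write-Host "path:    $path"
    Write-Host "payload: $payload"
}

$url    = 'https://example.com/api'
$params = @{ body = 'data' }

postReq($url, $params)   # wrong: $path receives the two-element array ($url, $params); $payload stays empty
postReq $url $params     # right: $path = $url, $payload = $params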
But what about object instance and static methods?
This can be confusing to some because object instance and class (i.e. static) methods ARE called with the traditional C#-like syntax, where you DO need the parentheses around the argument values and commas as the delimiter. For example:
([DateTime]::Now).ToString()
returns the current local time, and runs the ToString() method on the returned DateTime object. If you used one of its overloads as shown below, you would separate each argument with a , and regardless of whether you need to pass in arguments or not, you still must specify the outer parentheses:
OverloadDefinitions
-------------------
string ToString()
string ToString(string format)
string ToString(System.IFormatProvider provider)
string ToString(string format, System.IFormatProvider provider)
string IFormattable.ToString(string format, System.IFormatProvider formatProvider)
string IConvertible.ToString(System.IFormatProvider provider)
If you were to omit the parentheses on the empty parameter overload above, you would get the preceding output showing the overload definitions, unlike the behavior when calling functions with no argument.
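For example (a quick sketch run at the console):

# Method call: parentheses are required and arguments are comma-separated
[DateTime]::Now.ToString('yyyy-MM-dd HH:mm', [System.Globalization.CultureInfo]::InvariantCulture)

# Omitting the parentheses does not invoke the method; it just lists the overload definitions shown above
[DateTime]::Now.ToString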
It's a little odd, but in any programming language functions and methods are similar yet distinct, and PowerShell is no different, other than that functions and methods are less alike than they are in other languages. I find a good rule of thumb for this is:
Invoke methods with C# syntax and functions with shell syntax.
I am facing a problem with defining the types of arguments for a function in Julia.
On one hand, the code would run faster if the type is defined, for example Int64 for an integer number. On the other hand, passing a plain number to the function would need a type cast every time I call the function, e.g. by calling:
convert(a, Int64)
That seems to be overkill.
What is the advice for good style?
With Julia, it's not generally true that specifying the type of a function's argument(s) will make it faster. If the argument has no type annotation (i.e. Any), or has just an abstract type (for example, Integer instead of Int64), Julia can still generate specialized methods for whatever concrete types are actually used to call the function, instead of having to do any conversion.
BTW, the syntax is actually convert(Int64, a); the type you wish to convert to comes first.
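As a minimal sketch (the function names are made up), Julia compiles a specialized method for the concrete argument types seen at each call site, so leaving the annotation off, or using an abstract type, costs nothing at run time:

add_one(x) = x + 1                 # no annotation: accepts anything that supports +
add_one_int(x::Integer) = x + 1    # abstract annotation: only restricts which calls are allowed

add_one(3)         # compiled and specialized for Int64 on first use
add_one(3.0)       # a second specialization, for Float64
add_one_int(3)     # also specialized for Int64; no convert(Int64, 3) needed at the call site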
It's been shown (e.g. references in AS3: cast or "as") that it's 3-4x faster to use the "as" keyword versus parenthetic casting in AS3. This is because (cast) could better be described as interpolation, actually generating a new object rather than truly casting the old one. (cast) throws a type error if it fails where the "as" operator returns null. Alright.
Three questions here:
(1) What's happening when you pass a Number to a function that expects (int) -- or when you pass a Sprite to a function that expects DisplayObject? i.e. if the function expects a parent class. Is a new, locally scoped object being generated if the expected class is an ascendant of the object you pass as an argument? Is it significantly faster to cast your arguments before calling the function, using "as"?
(2) What's happening in a for each (var i:int in someNumberVector) or a for-each loop treating each Sprite as a DisplayObject, for example? Is every single one re-cast via the slow (not "as" but error-prone) method at each step in the loop?
(3) Does having a function or variable assignment expect an Interface-implementing class (or an Interface itself, e.g. IEventDispatcher) make any difference vs. having it expect a parent class, as far as which method (clone or cast) is used to "cast" the original variable?
When you pass an object of a subtype (or iterate a list of subtypes, etc.), no casting or conversion is required -- the object is that subtype, and can be passed directly. The reason for this is that the memory layout of objects with inheritance is usually implemented as the base object's data, then the next inheriting class's data, etc. plus a vtable pointer to allow method overriding/interface implementation:
// class Sub : Base
obj -> +----------+
|vtable ptr| -. vtable
|----------| `-> +---------+
|Base data | | Method1 |
|----------| +---------+
|Sub data | | Method2 |
+----------+ +---------+
When passing a Sub object, the underlying pointer points to the start of the object in memory, which is exactly the same as the Base object's start in memory, so there is nothing to do when converting from Sub to Base internally; the only difference is how the variable is used (type-checking semantics). The actual object value, and references to it, need absolutely no conversion (in a sane implementation, anyway).
Adding a cast would only slow things down (unless the compiler is smart enough to remove the cast, which it should be, but I don't have much confidence in the optimization capabilities of the AS3 compiler).
However, casting Numbers to ints and vice versa is completely different, as that requires converting the internal representation of the values -- which is slow.
Note the difference between references to objects (where the variable's actual value is just a handle to that object) and value types (where the variable's actual value is the value itself, with no indirection). For example, an int variable holds the actual integer value, whereas a variable of type Object merely holds a reference to some object. This means that if you use variables of type Object (or untyped variables, which are of Object type by default) and try to stick an integer in them, this causes what's called a "boxing" operation, which takes the value and sticks it into a temporary object so that the variable can keep a reference to it. Manipulating it like an integer (or explicitly casting it) causes that boxed value to be unboxed, which obviously is not instant.
So you can see, casting objects to their base is not slow, because there's no work to do. Casting a base to a derived type is almost as fast (because nothing has to be done to the internal pointer there either), except that a check has to be done to ensure that the object really is of the derived type (resulting in an exception or null depending on what type of cast was done). Casting value types to object types and vice versa is not very fast, because work has to be done to box/unbox the value in addition to a runtime type check. Finally, casting one basic type to another is particularly slow, because the internal representations are completely different, and it takes correspondingly more effort to do the conversion.
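A short sketch of the cases above (the function and variable names are just for illustration):

import flash.display.DisplayObject;
import flash.display.Sprite;

function fade(target:DisplayObject):void {
    target.alpha = 0.5;
}

var s:Sprite = new Sprite();
fade(s);                          // upcast: the reference is passed as-is, no conversion

var d:DisplayObject = s;
var back:Sprite = d as Sprite;    // downcast: only a runtime type check; null if it fails

var o:Object = 42;                // boxing: the int value is wrapped so a reference can be stored
var n:int = int(o);               // unboxing plus conversion of the representation: comparatively slow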