In the Data Studio authentication documentation (https://developers.google.com/datastudio/connector/auth), there are two functions that need to be defined, and I am confused by them.
In the setCredentials function, it says to call a checkForValidCreds function:
// Optional
// Check if the provided username and token are valid through a
// call to your service. You would have to have a `checkForValidCreds`
// function defined for this to work.
var validCreds = checkForValidCreds(username, token);
Meanwhile, in the isAuthValid function, you are asked to define a similar function called validateCredentials:
// This assumes you have a validateCredentials function that
// can validate if the userName and token are correct.
return validateCredentials(userName, token);
Are these user-defined functions different from each other? If so, what are the differences I need to know when defining them?
The only function names that must stay fixed are the ones listed on the auth page. To answer your question: as long as you define isAuthValid(), you are good to go. checkForValidCreds and validateCredentials are just two names for the same kind of user-defined helper, referenced from setCredentials() and isAuthValid() respectively. You can name these helpers anything, as long as they are referenced correctly within those functions.
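For example, a minimal sketch of isAuthValid() with one such helper might look like this (the property keys and the validation endpoint are assumptions; substitute whatever your setCredentials() stored and whatever auth-check call your service supports):

function isAuthValid() {
  var userProperties = PropertiesService.getUserProperties();
  var userName = userProperties.getProperty('dscc.username');
  var token = userProperties.getProperty('dscc.token');
  return validateCredentials(userName, token);
}

// User-defined helper; the endpoint below is hypothetical.
function validateCredentials(userName, token) {
  var response = UrlFetchApp.fetch('https://example.com/api/validate', {
    headers: {
      'Authorization': 'Basic ' + Utilities.base64Encode(userName + ':' + token)
    },
    muteHttpExceptions: true
  });
  return response.getResponseCode() === 200;
}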
Related
I just started to work with OData and had the impression that OData querying is quite flexible.
In some cases, though, I want to retrieve updated/newly calculated data on the fly; in my case these are SalaryData values. At some point I want them to be tweaked by an additionally applied calculation function, and the critical point is that this must happen on retrieval, as part of the general request query.
But I don't know whether a function is applicable in this case.
Ideally, I want a request similar to this:
/odata/Employee(1111)?$expand=SalaryData/CalculculationFunction(40)
Here I want to apply CalculculationFunction with parameters on SalaryData.
Is it possible to do this in OData? Or should I create an entity set of salary data and retrieve the calculated data directly with a query like
/odata/SalaryData(1111)/CalculculationFunction(40)
This way is the least preferable for me, because I don't want to use the id of SalaryData in the request.
Current example of the function I created:
[EnableQuery(MaxExpansionDepth = 10, MaxAnyAllExpressionDepth = 10)]
[HttpGet]
[ODataRoute("({key})/FloatingWindow(days={days})")]
public SingleResult<Models.SalaryData> MovingWindow([FromODataUri] Guid key, [FromODataUri] int days)
{
    if (days <= 0)
        return new SingleResult<Models.SalaryData>(Array.Empty<Models.SalaryData>().AsQueryable());

    var cachedSalaryData = GetAllowedSalaryData().FirstOrDefault(x => x.Id.Equals(key));
    var mappedSalaryData = mapper.Map<Models.SalaryData>(cachedSalaryData);
    mappedSalaryData = Models.SalaryData.FloatingWindowAggregation(days, mappedSalaryData);
    var salaryDataResult = new[] { mappedSalaryData };
    return new SingleResult<Models.SalaryData>(salaryDataResult.AsQueryable());
}
There is always an overlap between what is OData-compliant routing and what you can do with routes in Web API. It is not always necessary to conform to the OData (V4) specification, but a non-conforming route will need custom logic on the client as well.
The common workaround for this type of request is to create a Function endpoint bound to the Employee item that accepts the parameter input used to materialize the data. The URL might look like this instead:
/odata/Employee(1111)/WithCalculatedSalary(40)?$expand=SalaryData
This method could then internally call the existing MovingWindow function from the SalaryDataController to build the results. You could also engineer both functions to call a common set-based routine.
The reason you should bind this function to the EmployeeController is that the Employee is the primary identifying resource that correlates the resulting data together.
In this way, OData v4 compliant clients would still be able to execute this function and, importantly, would be able to discover it without any need for customisations.
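A rough sketch of such a bound function on the EmployeeController might look like the following (WithCalculatedSalary and GetAllowedEmployees are illustrative names, not an existing API, and the actual calculation is elided):

[EnableQuery(MaxExpansionDepth = 10, MaxAnyAllExpressionDepth = 10)]
[HttpGet]
[ODataRoute("({key})/WithCalculatedSalary(days={days})")]
public SingleResult<Models.Employee> WithCalculatedSalary([FromODataUri] int key, [FromODataUri] int days)
{
    // Locate the employee; apply the windowed salary calculation here,
    // e.g. by delegating to the same set-based routine MovingWindow uses.
    var employee = GetAllowedEmployees().Where(x => x.Id == key);
    return SingleResult.Create(employee);
}

With a corresponding model builder registration:

builder.EntitySet<Employee>("Employee")
    .EntityType
    .Function("WithCalculatedSalary")
    .ReturnsFromEntitySet<Employee>("Employee")
    .Parameter<int>("days");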
If you didn't need to return the Employee resource as part of the response then you could still serve a collection of SalaryData from the EmployeeController:
/odata/Employee(1111)/CalculatedSalary(days=40)
[EnableQuery(MaxExpansionDepth = 10, MaxAnyAllExpressionDepth = 10)]
[HttpGet]
[ODataRoute("({key})/CalculatedSalary(days={days})")]
public IQueryable<Models.SalaryData> CalculatedSalary([FromODataUri] int key, [FromODataUri] int days)
{
    ...
}
builder.EntitySet<Employee>("Employee")
    .EntityType
    .Function("CalculatedSalary")
    .ReturnsCollectionFromEntitySet<SalaryData>("SalaryData")
    .Parameter<int>("days");
$compute and $search in ASP.NET Core OData 8
The OData v4.01 specification does support the system query option $compute, which was designed to let clients append computed values to the response structure. You could hijack this pipeline and define your own function that can be executed from a $compute clause, but the expectation is that the system canonical functions are used with a combination of literal values and field references.
The ASP.NET implementation has only introduced support for this in the OData Lib v8 runtime; I have not yet found a good example of how to implement custom functions, but syntactically it is feasible.
The same concept could be used to augment the $apply execution: if this calculation operates over a collection and effectively performs an aggregate evaluation, then $apply may be the more appropriate pipeline to extend.
It might be that your current CalculculationFunction can be translated directly into a $compute statement; otherwise, if you promote some of the calculation steps (metadata) as columns in the schema (you might use SQL computed columns for this...), then $compute could be a viable option.
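For illustration, a $compute clause applied to the expanded SalaryData might look like this (BaseAmount and the 1.4 multiplier are invented for the example; any real field reference would do):

/odata/Employee(1111)?$expand=SalaryData($compute=BaseAmount mul 1.4 as AdjustedAmount)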
Suppose I want to define a "useful" function that takes a THREE.Vector2 as well as some scalar values as inputs. What's the best syntax for defining the function so that other people can easily understand the types of the parameters that need to be passed in? Sample (that doesn't work):
export function clipToBox(v: THREE.Vector2, boxWidth, boxHeight) {
const clippedVector = new THREE.Vector2
// Do some clever clipping math...
return clippedVector
}
Example of what we want users of our function to see when editing (screenshot of the editor's parameter hints omitted).
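For reference, a minimal typed version might look like the sketch below (the clamping math is a placeholder; the point is the explicit parameter and return types, which editors such as VS Code surface in their parameter hints):

import * as THREE from 'three'

// Clamp a vector into the box [0, boxWidth] x [0, boxHeight].
export function clipToBox(v: THREE.Vector2, boxWidth: number, boxHeight: number): THREE.Vector2 {
  const clippedVector = new THREE.Vector2(
    Math.min(Math.max(v.x, 0), boxWidth),
    Math.min(Math.max(v.y, 0), boxHeight)
  )
  return clippedVector
}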
While trying some code, I found that if we provide a parameter to a function or procedure without a type, it does not give us a compile-time error.
Why is this happening? Please give some explanation, as I have not been able to find such code anywhere.
procedure declaration:
Procedure TestProc(var objTest);
If we remove the keyword var, then a compile-time error is raised: "Type required".
Can anyone please explain this?
Untyped parameters are usually used when the actual type of the parameter is irrelevant. One example is the standard FillChar procedure, which fills a variable - ANY variable - with a specified byte value. Instead of needing several (actually an infinite number of) overloaded procedures to fill an arbitrary variable with a value, an untyped parameter is used.
An untyped parameter (like any other parameter) can be "input" (data going INTO the procedure/function) by using the CONST prefix, "output" (data coming OUT of the procedure/function) by using the OUT prefix, or both (data sent into the procedure, modified, and sent back out) by using the VAR prefix.
As you may notice, the FillChar procedure uses a VAR prefix, although an OUT would be more correct. However, FillChar was created at a time when the OUT prefix didn't exist in the language (only CONST and VAR existed, and of the two, VAR was the only one that allowed data to go back out of the procedure, so VAR was used).
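To make the idea concrete, here is a minimal sketch of a procedure with an untyped CONST parameter (HexDump is a hypothetical example, not a standard routine):

program UntypedDemo;

{$APPTYPE CONSOLE}

uses
  SysUtils;

// Dumps the raw bytes of ANY variable, which is only possible
// because Data is untyped.
procedure HexDump(const Data; Size: Integer);
var
  Bytes: PByte;
  I: Integer;
begin
  Bytes := PByte(@Data); // address of whatever was passed in
  for I := 0 to Size - 1 do
    Write(IntToHex(Bytes[I], 2), ' ');
  WriteLn;
end;

var
  D: Double;
begin
  D := 3.14;
  HexDump(D, SizeOf(D)); // works for a Double, a record, an array...
end.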
As Victoria said, it is an untyped parameter.
If you would like to create a procedure or a function that can handle different types of parameters, you should use overload. Every time you call an overloaded function or procedure, the compiler decides from your input which version is used.
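A minimal sketch of overloading (Show is an arbitrary name chosen for illustration):

procedure Show(Value: Integer); overload;
begin
  WriteLn('Integer: ', Value);
end;

procedure Show(Value: string); overload;
begin
  WriteLn('String: ', Value);
end;

// Show(42) resolves to the Integer version; Show('abc') to the string one.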
I've been reading Concepts of Programming Languages by Robert W. Sebesta, and in chapter 9 there is a brief section on passing a subprogram to a function as a parameter. The section is extremely brief, about 1.5 pages, and the only explanation of its application is:
When a subprogram must sample some mathematical function. Such as a subprogram that does numerical integration by estimating the area under the graph of a function by sampling the function at a number of different points. Such a subprogram should be usable everywhere.
This is completely off from anything I have ever learned. If I were to approach this problem in my own way, I would create a function object and write a function that accomplishes the above and accepts function objects.
I have no clue why this is a design issue for languages because I have no idea where I would ever use this. A quick search hasn't made this any clearer for me.
Apparently you can accomplish this in C and C++ by using pointers. Languages that allow nested subprograms, such as JavaScript, allow you to do this in three separate ways:
function sub1() {
    var x;

    function sub2() {
        alert(x); // Creates a dialog box with the value of x
    }

    function sub3() {
        var x;
        x = 3;
        sub4(sub2); // *shallow binding*: the environment of the
                    // call statement that enacts the passed
                    // subprogram
    }

    function sub4(subx) {
        var x;
        x = 4;
        subx();
    }

    x = 1;
    sub3();
}
I'd appreciate any insight offered.
Being able to pass "methods" is very useful for a variety of reasons. Among them:
Code which is performing a complicated operation might wish to provide a means of either notifying a user of its progress or allowing the user to cancel it. Having the code for the complicated operation do those actions itself would both add complexity to it and also cause ugliness if it's invoked from code which uses a different style of progress bar or "Cancel" button. By contrast, having the caller supply an UpdateStatusAndCheckCancel() method means that the caller can supply a method which will update whatever style of progress bar and cancellation mechanism the caller wants to use.
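A small sketch of that idea (all names here are invented for illustration):

// The operation reports progress and polls for cancellation through a
// caller-supplied delegate instead of owning any UI itself.
void ProcessItems(List<Item> items, Func<int, bool> updateStatusAndCheckCancel)
{
    for (int i = 0; i < items.Count; i++)
    {
        DoExpensiveWork(items[i]);
        bool cancelRequested = updateStatusAndCheckCancel(100 * (i + 1) / items.Count);
        if (cancelRequested)
            return;
    }
}

// A caller wires in whatever progress UI it likes:
// ProcessItems(items, percent => { progressBar.Value = percent; return cancelPressed; });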
Being able to store methods within a table can greatly simplify code that needs to export objects to a file and later import them again. Rather than needing to have code say
if (ObjectType == "Square")
    AddObject(new Square(ObjectParams));
else if (ObjectType == "Circle")
    AddObject(new Circle(ObjectParams));
etc. for every kind of object
code can say something like
if (ObjectCreators.TryGetValue(ObjectType, out factory))
    AddObject(factory(ObjectParams));
to handle all kinds of objects whose creation methods have been added to ObjectCreators.
Sometimes it's desirable to be able to handle events that may occur at some unknown time in the future; the author of the code which knows when those events occur might have no clue about what is supposed to happen then. Allowing the person who wants the action to happen to give a method to the code which will know when it happens allows that code to perform the action at the right time without having to know what it should do.
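For completeness, ObjectCreators might be populated like this (assuming a hypothetical Func<ObjectParams, GraphicsObject> delegate type; both type names are placeholders):

// Each entry maps a type tag in the file to a creation method.
var ObjectCreators = new Dictionary<string, Func<ObjectParams, GraphicsObject>>
{
    ["Square"] = p => new Square(p),
    ["Circle"] = p => new Circle(p)
};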
The first situation represents a special case of callback where the function which is given the method is expected to only use it before it returns. The second situation is an example of what's sometimes referred to as a "factory pattern" or "dependency injection" [though those terms are useful in some broader contexts as well]. The third case is commonly handled using constructs which frameworks refer to as events, or else with an "observer" pattern [the observer asks the observable object to notify it when something happens].
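The third case, sketched as a C# event (names invented for illustration; assumes using System):

class Downloader
{
    // Observers subscribe without Downloader knowing what they will do.
    public event Action<string> DownloadCompleted;

    protected void OnFinished(string file)
    {
        DownloadCompleted?.Invoke(file); // notify whoever subscribed
    }
}

// Subscriber: downloader.DownloadCompleted += file => Console.WriteLine("Got " + file);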
Consider the following function:
public function foo(bar1:int, bar2:uint, bar3:String, bar4:Boolean):void{}
What I want is to have the different types of data represented by custom named types which essentially represent the original data types. In other words, I would like to proxy the data types and have a valid function as follows:
public function foo(bar1:PAR_Bar1, bar2:PAR_Bar2, bar3:PAR_Bar3, bar4:PAR_Bar4):void{}
so PAR_Bar1 would proxy the int data type, PAR_Bar2 would proxy the uint data type, so on and so forth.
The reason I need this is because I'm using a debugger with a GUI that can run methods and allows changing function parameter values in real-time. The issue is that the debugger can't tell me which parameter I'm changing; it only displays the data type of a parameter. So if I need to change 10 different parameters, all of type int, the debugger displays all of them as int rather than by their names.
I think that if I use proxy types I can easily differentiate between parameters.
So, my question: Is it possible to proxy data types? I mean map specific data types to custom data types that would represent the base data types?
EDIT: I'm using the Monster Debugger; below is the window of a method called in real-time (screenshot omitted). As you can see, I don't get the parameters' names, only their type (int).
I would recommend you change your debugger, but since this is a proper question...
You can create a class just like any constant:
const PAR_Bar1:Class = uint;
Let's hope your debugger will identify this constant by its name and not by the underlying class.
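Note, though, that such a constant is just another reference to the same Class object, so this only helps if the debugger reports the constant's name rather than the class it points to. A quick check:

const PAR_Bar1:Class = int;

trace(PAR_Bar1 === int); // true - the alias and int are the same Class object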
Not exactly sure what you are going to use this for, but have you considered using an untyped variable definition?
public function foo(bar1:*, bar2:*, bar3:*, bar4:*):void{}
Then using this to get the class of the variables?
var PAR_Bar1:Class = Object(bar1).constructor;
EDIT: Ah ignore this one, re-read your question and realised this won't help you.
It seems that there is no way of aliasing types.