All the examples of Row Level Security in Power BI Embedded that I see on the internet use the Username() function in RLS.
But I want to use a hash value for RLS in my system. I want to create a role in my Power BI report that compares a column value of a dimension to a hash passed via the iframe:
DimensionA:
[Column7] = 'hashValuePassedByIframe'
Is this possible? Are there articles/examples on the internet of the approach I want? Or are all RLS techniques in Power BI bound to the Username() function?
Thank You
I just started working with OData and I had the impression that OData querying is quite flexible.
But in some cases I want to retrieve updated/newly calculated data on the fly. In my case this data is SalaryData values. At some point, I want them to be slightly tweaked by an additional calculation function, and the critical point is that this must happen on retrieval of the data, as part of the general request query.
But I don't know whether it is possible to use a function in this case.
Ideally, I want to have the similar request:
/odata/Employee(1111)?$expand=SalaryData/CalculculationFunction(40)
Here I want to apply CalculculationFunction with parameters on SalaryData.
Is it possible to do this in OData? Or should I create an entity set of salary data and retrieve the calculated data directly using a query something like
/odata/SalaryData(1111)/CalculculationFunction(40)
But this way is the least preferable for me, because I don't want to use the id of SalaryData in the request.
Current example of the function I created:
[EnableQuery(MaxExpansionDepth = 10, MaxAnyAllExpressionDepth = 10)]
[HttpGet]
[ODataRoute("({key})/FloatingWindow(days={days})")]
public SingleResult<Models.SalaryData> MovingWindow([FromODataUri] Guid key, [FromODataUri] int days)
{
    // Guard against non-positive window sizes by returning an empty result.
    if (days <= 0)
        return new SingleResult<Models.SalaryData>(Array.Empty<Models.SalaryData>().AsQueryable());

    // Look up the requested salary data, apply the windowed aggregation,
    // and wrap the single result for OData.
    var cachedSalaryData = GetAllowedSalaryData().FirstOrDefault(x => x.Id.Equals(key));
    var mappedSalaryData = mapper.Map<Models.SalaryData>(cachedSalaryData);
    mappedSalaryData = Models.SalaryData.FloatingWindowAggregation(days, mappedSalaryData);
    var salaryDataResult = new[] { mappedSalaryData };
    return new SingleResult<Models.SalaryData>(salaryDataResult.AsQueryable());
}
There is always an overlap between what is OData-compliant routing and what you can do with routes in Web API. It is not always necessary to conform to the OData (v4) specification, but a non-conforming route will need custom logic on the client as well.
The common workaround for this type of request is to create a function endpoint bound to the Employee item that accepts the parameter input that will be used to materialize the data. The URL might look like this instead:
/odata/Employee(1111)/WithCalculatedSalary(40)?$expand=SalaryData
This method could then internally call the existing MovingWindow function from the SalaryDataController to build the results. You could also engineer both functions to call a common set-based routine.
The reason that you should bind this function to the EmployeeController is that the primary identifying resource that correlates the resulting data is the Employee.
In this way, OData v4-compliant clients would still be able to execute this function and, importantly, would be able to discover it without any need for customisations.
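As a rough sketch of what that Employee-bound function might look like (the GetAllowedEmployees helper is hypothetical, and the reuse of FloatingWindowAggregation is an assumption based on your MovingWindow code):
[EnableQuery(MaxExpansionDepth = 10, MaxAnyAllExpressionDepth = 10)]
[HttpGet]
[ODataRoute("({key})/WithCalculatedSalary(days={days})")]
public SingleResult<Models.Employee> WithCalculatedSalary([FromODataUri] int key, [FromODataUri] int days)
{
    // Resolve the employee first; it is the primary identifying resource.
    var employee = GetAllowedEmployees().FirstOrDefault(x => x.Id == key);
    if (employee == null || days <= 0)
        return new SingleResult<Models.Employee>(Array.Empty<Models.Employee>().AsQueryable());

    // Reuse the same set-based routine that the SalaryDataController uses.
    employee.SalaryData = Models.SalaryData.FloatingWindowAggregation(days, employee.SalaryData);

    return new SingleResult<Models.Employee>(new[] { employee }.AsQueryable());
}

// EDM registration for the bound function:
builder.EntitySet<Employee>("Employee")
    .EntityType
    .Function("WithCalculatedSalary")
    .ReturnsFromEntitySet<Employee>("Employee")
    .Parameter<int>("days");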
If you didn't need to return the Employee resource as part of the response then you could still serve a collection of SalaryData from the EmployeeController:
/odata/Employee(1111)/CalculatedSalary(days=40)
[EnableQuery(MaxExpansionDepth = 10, MaxAnyAllExpressionDepth = 10)]
[HttpGet]
[ODataRoute("({key})/CalculatedSalary(days={days})")]
public IQueryable<Models.SalaryData> CalculatedSalary([FromODataUri] int key, [FromODataUri] int days)
{
    ...
}
The function is then registered against the Employee entity in the EDM model:
builder.EntitySet<Employee>("Employee")
    .EntityType
    .Function("CalculatedSalary")
    .ReturnsCollectionFromEntitySet<SalaryData>("SalaryData")
    .Parameter<int>("days");
$compute and $search in ASP.NET Core OData 8
The OData v4.01 specification does support the system query option $compute, which was designed to let clients append computed values to the response structure. You could hijack this pipeline and define your own function that can be executed from a $compute clause, but the expectation is that system canonical functions are used with a combination of literal values and field references.
The ASP.NET implementation has only introduced support for this in the OData Lib v8 runtime; as yet I have not found a good example of how to implement custom functions, but syntactically it is feasible.
The same concept could be used to augment the $apply execution: if this calculation operates over a collection and effectively performs an aggregate evaluation, then $apply may be the more appropriate option.
It might be that your current CalculculationFunction can be translated directly into a $compute statement. Otherwise, if you promote some of the calculation steps (metadata) as columns in the schema (you might use SQL computed columns for this...), then $compute could be a viable option.
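For illustration only (Amount and AdjustedAmount are assumed field names, not from your model), a $compute request has this shape:
/odata/SalaryData?$compute=Amount mul 1.1 as AdjustedAmount&$select=Amount,AdjustedAmount
Only the canonical operators and functions are guaranteed to work here; a custom function in the $compute clause would require the custom pipeline work described above.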
A question about Maximo 7.6.1.1 work orders:
In the service address tab, there are LONGITUDEX and LATITUDEY columns.
The columns can be populated a couple of different ways:
Automatically from the service address (linked to a GIS feature class).
Manually by a user (by right-clicking on the map and clicking Set record location).
Is there a way to determine what the source was for the LONGITUDEX and LATITUDEY columns?
For example, if the source was a user, then populate a custom field. Else, leave it null.
(Related keyword: Maximo Spatial)
Out of the box, I'm not aware of a way to know that.
If you have configured e-Audit for those attributes and that object, then you can query the audit table for the last record to find out which user made the change. Admittedly, it won't tell you how they made the change, but at least you can differentiate between service accounts and real users.
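For example, something along these lines (a sketch only: e-Audit shadow tables follow the A_<OBJECTNAME> naming convention and add EAUDITUSERNAME/EAUDITTIMESTAMP columns, but verify the exact table and key columns in your database):
select eauditusername, eaudittimestamp, longitudex, latitudey
from a_woserviceaddress
where woserviceaddressid = 12345  -- hypothetical key value
order by eaudittimestamp desc;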
Other than that, I think you would need an autoscript with attribute.action launch points on those attributes that records the current user and whether the session was interactive (i.e. via the Maximo UI or not) in new xychangeby and xychangedinmx attributes on the woserviceaddress object.
I added a custom field to WOSERVICEADDRESS called XY_SOURCE.
And I created an automation script with an object launch point (Save; Add/Update; Before Save).
sa = mbo.getString("SADDRESSCODE")
x = mbo.getDouble("LONGITUDEX")
if sa and x:  # improved, as per Preacher's suggestion
    mbo.setValue("XY_SOURCE", "Service Address")
elif x:
    mbo.setValue("XY_SOURCE", "Manual")
else:
    mbo.setValue("XY_SOURCE", None)
This seems to do the trick.
I'd be happy to hear if there are any flaws with this logic (or opportunity for improvement to the code).
While working on log queries in an ARM template, I am stuck on how to pass parameter or variable values into the log query.
"parameters": {
    "resProviderName": {
        "value": "Microsoft.Storage"
    }
}
For example:
AzureMetrics | where ResourceProvider == parameters('resProviderName') | where Resource == 'testacc'
Here I am facing an error: the query treats parameters('resProviderName') as a literal value rather than reading the value of the resProviderName parameter. My requirement is to take the values from parameters or variables instead of hardcoding them, the way Resource == 'testacc' is hardcoded in the query above.
Do we have any option to read the values from either parameters or variables in the log query?
If so, please help me with this issue.
The answer to this would depend on what this query segment is a part of, and how your template is structured.
It appears that you're trying to reference a resource in the query. If you need details of the resource, such as its resource ID, it is best to use one of the many available template resource functions (within the resources section). When referencing a resource that is deployed in the same template, provide the name of the resource through a parameter. When referencing a resource that isn't deployed in the same template, fetch the resource ID.
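For instance, a resource ID lookup with the resourceId() function looks like this (the storageAccountName parameter here is hypothetical):
[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]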
Also, I'd recommend referring to the ARM snippet given in this example to understand how queries can be constructed as custom variables, as opposed to the other way round. According to ARM template best practices, variables should be used for values that you need to use more than once in a template. If a value is used only once, a hard-coded value makes your template easier to read. For more info, please take a look at this doc.
Hope this helps.
This is an old question, but I bumped into it while searching for the same issue.
I'm much more familiar with Bicep templates, so to figure this out I created a Bicep template, constructed the query using variables, and compiled it. This generates an ARM template that you can analyze.
I figured out that you can use either the concat or the format function to construct your query using variables.
I preferred format since it looks more elegant and readable (also, Bicep build generates the string using the format function).
So based on this example, the query would be something like this:
query: "[format('AzureMetrics | where ResourceProvider == ''{0}'' | where Resource == ''testacc''', parameters('resProviderName'))]"
Don't forget to escape the ' in the ARM template which you do by doubling the single quote.
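For comparison, the concat() equivalent of the same query would be (same escaping rules apply):
query: "[concat('AzureMetrics | where ResourceProvider == ''', parameters('resProviderName'), ''' | where Resource == ''testacc''')]"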
I hope this helps some people.
André
I have a dataset with multiple observations per participant. Participants are denoted by id. To account for this in the cross-validation process, I add blocking = factor(id) to makeClassifTask() and blocking.cv = TRUE to makeResampleDesc(). However, if I leave id in the dataset, it will be used as a predictor. My question is: how do I use blocking correctly? My take would be to create a new variable, e.g. participant.id (outside of the dataset), then remove id from the original dataset, and then use blocking = factor(participant.id), but I am not sure whether this is the correct way to handle blocking.
Rather than supplying a variable for blocking, you can provide a custom factor vector that specifies which observations belong together. This is also shown in the tutorial.
This way you do not need to keep the variable participant.id in the dataset.
Also make sure that you really want to use blocking. Did you have a look at grouping already? The differences between the two are also described in the linked tutorial section.
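A minimal sketch in mlr (the data frame df, target column y, and the rpart learner are placeholders for your own objects):
library(mlr)

blocks <- factor(df$id)   # custom factor vector, kept outside the task
df$id <- NULL             # drop id so it cannot be picked up as a predictor

task  <- makeClassifTask(data = df, target = "y", blocking = blocks)
rdesc <- makeResampleDesc("CV", iters = 5, blocking.cv = TRUE)
res   <- resample("classif.rpart", task, rdesc)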
When working with IDT 4.1 and making a query in the business layer, is it possible to format the resulting numbers? What I mean is: the output looks something like "1.9982121921E7" (please notice the E7 part). I would like BO to display the whole number without any suffixes.
Additionally, it would be even better to add a delimiter after thousands, millions,...
Is 1.9982121921E7 the value that is returned from the database? Thus not a number but a string (alphanumeric)? In that case, you'll have to change the select statement and use a database function to trim the non-numeric characters off (e.g. SUBSTR, MID, LEFT, …).
Once you have numeric data, you can use the Display Format function to change the layout of your object.
If you're not happy with the predefined formats to choose from, you can always define a custom format. The formatting options are described in the Information Design Tool User Guide (links to the documentation for IDT in BI 4.1 SP3), section 12.10.23 Creating and editing display formats for business layer objects.
Right-click an object and select Create Display Format… from the context menu.
Or click the Create Display Format… button in the object's properties (located in the Advanced tab).
Set the type to numeric and enable the high precision check mark.
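As an illustration (the exact token syntax should be verified against the format editor in your version): a custom format such as #,##0.00 would display 1.9982121921E7 as 19,982,121.92, with thousands separators taken from your locale settings.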