How to fix: "Operation returned an invalid status code 'BadRequest'" when connecting Power BI Embedded to an SSAS Multidimensional cube

I have an on-premises SSAS (Multidimensional) cube with a live connection to Power BI. The report then has to be shown in a portal with Power BI Embedded. I used the 'App owns data' method with a 'master user' account. This part works.
But when I try to add Row-Level Security (RLS), it keeps giving errors. The report will be shown to customers outside the organization. Based on their login (authentication is handled by the portal itself), they need to see only their own data.
I tried to connect using a JSON body, adding username, roles, datasets and customData.
The username contains the actual Active Directory username, which has permissions within SSAS.
The customData contains the value I want to filter on.
The role 'Test' was created purely for testing purposes.
The role 'Test' is set up in SSAS with read permissions, and the specific 'company' dimension is set up with the following 'Allowed member set':
STRTOMEMBER('[Dim Company].[BK_Company].&[{'+CUSTOMDATA()+'}]')
This is based on another topic which used this as the solution.
I have tried using USERNAME() as the RLS filter, but it seems I can only use actual account names in this field, and our current Active Directory doesn't hold all customer names.
// Effective identity: AD user with SSAS permissions, role 'Test', company key passed as CustomData
var rls = new EffectiveIdentity(@"domain\powerbiportal", new List<string> { report.DatasetId }, new List<string> { "Test" }, "19164");
var tokenRequest = new GenerateTokenRequest("view", identities: new List<EffectiveIdentity> { rls });
var tokenResponse = client.Reports.GenerateTokenInGroupAsync("[ID]", report.Id, tokenRequest).Result;
Sending JSON
{
  "accessLevel": "view",
  "identities": [
    {
      "username": "domain\\powerbiportal",
      "roles": [
        "Test"
      ],
      "datasets": [
        "[dataset]"
      ],
      "customData": "19164"
    }
  ]
}
The error I get is the following:
Operation returned an invalid status code 'BadRequest'

After contacting Microsoft Support, the problem was fixed.
First, the solution was to uncheck the 'Enable read permissions' box on the 'Cell Data' tab of the role. The cell data expression was empty in my case, but for some reason it still caused the problem.
Secondly, the filter statement had to be:
{STRTOMEMBER('[Dim Company].[BK_Company].&[' + CUSTOMDATA() + ']')}
instead of
STRTOMEMBER('[Dim Company].[BK_Company].&[{'+CUSTOMDATA()+'}]')
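For illustration: with customData set to "19164", the corrected expression builds the member key '[Dim Company].[BK_Company].&[19164]' and wraps it in braces, so the allowed member set resolves to a single company member:

{ [Dim Company].[BK_Company].&[19164] }

The original version instead produced the key '[Dim Company].[BK_Company].&[{19164}]', with literal curly braces inside the member key, which presumably matches no member, hence the need for the corrected form.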

Related

How can I create an EMR cluster resource that uses spot instances without hardcoding the bid_price variable?

I'm using Terraform to create an AWS EMR cluster that uses spot instances as core instances.
I know I can use the bid_price variable within the core_instance_group block on an aws_emr_cluster resource, but I don't want to hardcode prices, as I'd have to change them manually every time the instance type changes.
Using the AWS Web UI, I'm able to choose the "Use on-demand as max price" option. That's exactly what I'm trying to reproduce, but in Terraform.
Right now I am trying to solve my problem using the aws_pricing_product data source. You can see what I have so far below:
data "aws_pricing_product" "m4_large_price" {
service_code = "AmazonEC2"
filters {
field = "instanceType"
value = "m4.large"
}
filters {
field = "operatingSystem"
value = "Linux"
}
filters {
field = "tenancy"
value = "Shared"
}
filters {
field = "usagetype"
value = "BoxUsage:m4.large"
}
filters {
field = "preInstalledSw"
value = "NA"
}
filters {
field = "location"
value = "US East (N. Virginia)"
}
}
data.aws_pricing_product.m4_large_price.result returns a JSON document containing the details of a single product (you can check the response of the example here). The actual on-demand price is buried somewhere inside this JSON, but I don't know how to get at it (I inspected the structure with http://jsonviewer.stack.hu/).
I know I might be able to solve this by using an external data source and piping the output of an AWS CLI call to something like jq, e.g.:
aws pricing get-products --filters "Type=TERM_MATCH,Field=sku,Value=8VCNEHQMSCQS4P39" --format-version aws_v1 --service-code AmazonEC2 | jq [........]
But I'd like to know if there is any way to accomplish what I'm trying to do with pure Terraform. Thanks in advance!
Unfortunately the aws_pricing_product data source docs don't expand on how it should be used effectively, but the discussion in the pull request that added it adds some insight.
In Terraform 0.12 you should be able to use the jsondecode function to get at what you want; the following is given as an example in the linked pull request:
data "aws_pricing_product" "example" {
service_code = "AmazonRedshift"
filters = [
{
field = "instanceType"
value = "ds1.xlarge"
},
{
field = "location"
value = "US East (N. Virginia)"
},
]
}
# Potential Terraform 0.12 syntax - may change during implementation
# Also, not sure about the exact attribute reference architecture myself :)
output "example" {
values = jsondecode(data.json_query.example.value).terms.OnDemand.*.priceDimensions.*.pricePerUnit.USD
}
If you are stuck on Terraform <0.12 you might struggle to do this natively in Terraform other than the external data source approach you've already suggested.
@cfelipe put that "${jsondecode(data.aws_pricing_product.m4_large_price.result).terms.OnDemand.*.priceDimensions.*.pricePerUnit.USD}" in a locals block.
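If it helps, here is a rough Terraform 0.12 sketch of that idea built on the m4_large_price data source from the question. It assumes the usual pricing API layout, where terms.OnDemand and priceDimensions are maps keyed by SKU/rate codes, so for expressions over values() are used instead of splats:

locals {
  # Parse the product JSON returned by the pricing data source
  m4_large_product = jsondecode(data.aws_pricing_product.m4_large_price.result)

  # Walk terms.OnDemand -> priceDimensions -> pricePerUnit.USD and take the first match
  m4_large_on_demand_usd = [
    for term in values(local.m4_large_product.terms.OnDemand) : [
      for dim in values(term.priceDimensions) : dim.pricePerUnit.USD
    ][0]
  ][0]
}

The resulting local could then be fed into bid_price on the aws_emr_cluster resource, which is roughly what the "Use on-demand as max price" option does in the console.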

Can you SQL populate a BigQuery table and set the table column modes in the same API call?

I'm using Google Apps Script to migrate data through BigQuery and I've run into an issue because the SQL I'm using to perform a WRITE_TRUNCATE load is causing the destination table to be recreated with column modes of NULLABLE rather than their previous mode of REQUIRED.
Attempting to change the modes to REQUIRED after the data is loaded using a metadata patch causes an error even though the columns don't contain any null values.
I considered working around the issue by dropping the table and recreating it again with the same REQUIRED modes, then loading the data using WRITE_APPEND instead of WRITE_TRUNCATE. But this isn't possible because a user wants to have the same source and destination table in their SQL.
Does anyone know if it's possible to define a BigQuery.Jobs.insert request that includes the output schema information/metadata?
If it's not possible, the only alternative I can see is to use my original workaround of a WRITE_APPEND but add a temporary table into the process, to allow for the destination table appearing in the source SQL. But it would be nice if this could be avoided.
Additional Information:
I did experiment with different ways of setting the schema information, but when they didn't return an error message the schema seemed to be ignored.
I.e. this is the JSON I'm passing into BigQuery.Jobs.insert:
jsnConfig = {
  "configuration": {
    "query": {
      "destinationTable": {
        "projectId": "my-project",
        "datasetId": "sandbox_dataset",
        "tableId": "hello_world"
      },
      "writeDisposition": "WRITE_TRUNCATE",
      "useLegacySql": false,
      "query": "SELECT COL_A, COL_B, '1' AS COL_C, COL_TIMESTAMP, COL_REQUIRED FROM `my-project.sandbox_dataset.hello_world_2`",
      "allowLargeResults": true,
      "schema": {
        "fields": [
          {
            "description": "Desc of Column A",
            "type": "STRING",
            "mode": "NULLABLE",
            "name": "COL_A"
          },
          {
            "description": "Desc of Column B",
            "type": "STRING",
            "mode": "REQUIRED",
            "name": "COL_B"
          },
          {
            "description": "Desc of Column C",
            "type": "STRING",
            "mode": "REPEATED",
            "name": "COL_C"
          },
          {
            "description": "Desc of Column Timestamp",
            "type": "INTEGER",
            "mode": "NULLABLE",
            "name": "COL_TIMESTAMP"
          },
          {
            "description": "Desc of Column Required",
            "type": "STRING",
            "mode": "REQUIRED",
            "name": "COL_REQUIRED"
          }
        ]
      }
    }
  }
}
var job = BigQuery.Jobs.insert(jsnConfig, "my-project");
The result is that the new or existing hello_world table is truncated and loaded with the data specified in the query (so part of the JSON payload is being read), but the column descriptions and modes aren't added as defined in the schema section. They're just blank and NULLABLE in the table.
More
When I tested the REST request above using Google's API page for BigQuery.Jobs.insert, it highlighted the "schema" property in the request as invalid. It appears the schema can be defined if you're loading the data from a file, i.e. BigQuery.Jobs.Load, but it doesn't seem to be supported if you're putting the data in using an SQL source.
See the documentation here: https://cloud.google.com/bigquery/docs/schemas#specify-schema-manual-python
You can pass a schema object with your load job, meaning you can set fields to mode=REQUIRED.
This is the command you would use:
bq --location=[LOCATION] load --source_format=[FORMAT] [PROJECT_ID]:[DATASET].[TABLE] [PATH_TO_DATA_FILE] [PATH_TO_SCHEMA_FILE]
As @Roy answered, this is done via a load job only. Can you output the logs of this command?
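To make the WRITE_APPEND workaround from the question concrete, here is a rough Apps Script sketch (the staging table name is hypothetical): pre-create a table with the REQUIRED modes via BigQuery.Tables.insert, then run the query with WRITE_APPEND so the schema defined up front survives:

// Sketch of the WRITE_APPEND workaround described in the question.
// 'hello_world_staging' is a hypothetical staging table name.
var table = {
  tableReference: {
    projectId: 'my-project',
    datasetId: 'sandbox_dataset',
    tableId: 'hello_world_staging'
  },
  schema: {
    fields: [
      { name: 'COL_B', type: 'STRING', mode: 'REQUIRED', description: 'Desc of Column B' }
      // ...remaining fields as defined in the question...
    ]
  }
};
BigQuery.Tables.insert(table, 'my-project', 'sandbox_dataset');

// Appending never rebuilds the table, so the REQUIRED modes set above are preserved.
var appendJob = {
  configuration: {
    query: {
      destinationTable: {
        projectId: 'my-project',
        datasetId: 'sandbox_dataset',
        tableId: 'hello_world_staging'
      },
      writeDisposition: 'WRITE_APPEND',
      useLegacySql: false,
      query: 'SELECT COL_A, COL_B, ... FROM `my-project.sandbox_dataset.hello_world`'
    }
  }
};
BigQuery.Jobs.insert(appendJob, 'my-project');

Using a separate staging table is what allows the destination table to also appear in the source SQL, as noted in the question.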

Retrieve custom attribute from user profile in Google Apps Script - Google Admin Directory

This is about G Suite users. The following works in Google Admin Directory using the Google Admin SDK. It retrieves the email address and full name of the user:
var myemail = Session.getActiveUser().getEmail();
var mycontact = AdminDirectory.Users.get(myemail);
var myname = mycontact.name.fullName;
There is a custom attribute in the user profile named "Department". The following does NOT retrieve anything; it returns null:
var mydept = mycontact.Department;
How can one retrieve a custom attribute from a user profile in G Suite?
According to Directory API Users: get, you need to set the projection to "custom".
projection - What subset of fields to fetch for this user.
Acceptable values are:
"basic": Do not include any custom fields for the user. (default)
"custom": Include custom fields from schemas requested in customFieldMask.
"full": Include all fields associated with this user.
Then you should define a Schema for the custom data
customFieldMask (string) A comma-separated list of schema names. All fields from these schemas are fetched. This should only be set when projection=custom.
So something like:
var mycontact = AdminDirectory.Users.get({
  "userKey": myemail,
  "projection": "full",
  "customFieldMask": "Define Schema Here"
});
You can then Logger.log(mycontact); to see how to access the returned custom fields
For a custom schema, you can just use the full projection to get all custom schema fields.
For the standard department field, see user.organizations[0].department
https://developers.google.com/admin-sdk/directory/v1/reference/users
If you get an error:
Resource Not Found: userKey
try this:
mycontact = AdminDirectory.Users.get(myemail, {
  projection: 'full'
});
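Putting the answers together, a small sketch of reading the custom field (here 'EmployeeData' is a placeholder for whatever custom schema actually contains the Department attribute; custom attributes come back under the user's customSchemas property):

var myemail = Session.getActiveUser().getEmail();
var mycontact = AdminDirectory.Users.get(myemail, {
  projection: 'custom',
  customFieldMask: 'EmployeeData' // hypothetical schema name holding "Department"
});
// Custom attributes are grouped by schema name under customSchemas
var mydept = mycontact.customSchemas.EmployeeData.Department;
Logger.log(mydept);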

Check if JSON string in Orbeon repeating grid contains a specific value

I'm working with repeating grids through Form Builder.
I have a custom control that has a string value represented in JSON:
{
  "data": {
    "type": "File",
    "itemID": "12345",
    "name": "Annual Summary",
    "parentFolderID": "fileID",
    "owner": "Owner",
    "lastModifiedDate": "2016-10-17 22:48:05Z"
  }
}
In the controls outside of the repeating grid, I need to check whether name = "Annual Summary".
Previously, I had a drop-down control, and using the Calculated Value $dropdownControl = "Annual Summary" it was able to return true if any of the repeated rows contained the value. My understanding is that the = operator validates against all rows.
Now, with the JSON output of the control, I am attempting to use
contains($jsonStringValue, 'Annual Summary')
However, this only works with one entry and will be null if there are multiple rows.
2 questions:
How would I validate whether "Annual Summary" (or any other text) is present within any of the repeated rows?
Is there any way to navigate the json or parse it to XML and navigate it?
Constraints:
within the Calculated Value or Visibility fields within Form Builder
manipulating the source that is generated by Form Builder
You probably want to parse the JSON string first. See also this other Stack Overflow question.
Until Orbeon Forms 2016.3 is released, you would write:
(
for $v in $jsonStringValue
return converter:jsonStringToXml($v)
)//name = 'Annual Summary'
With the above, you also need to scope the namespace:
xmlns:converter="org.orbeon.oxf.json.Converter"
Once Orbeon Forms 2016.3 is released you can switch to:
$jsonStringValue/xxf:json-to-xml()//name = 'Annual Summary'

Make dynamic name text field in Postman

I'm using Postman to make REST API calls to a server. I want to make the name field dynamic so I can run the request with a unique name every time.
{
  "location": {
    "name": "Testuser2", // this should be unique, eg. Testuser3, Testuser4, etc
    "branding_domain_id": "52f9f8e2-72b7-0029-2dfa-84729e59dfee",
    "parent_id": "52f9f8e2-731f-b2e1-2dfa-e901218d03d9"
  }
}
In Postman you want to use Dynamic Variables.
The JSON you post would look like this:
{
  "location": {
    "name": "{{$guid}}",
    "branding_domain_id": "52f9f8e2-72b7-0029-2dfa-84729e59dfee",
    "parent_id": "52f9f8e2-731f-b2e1-2dfa-e901218d03d9"
  }
}
Note that this will give you a GUID (you also have the option to use ints or timestamps) and I'm not currently aware of a way to inject strings (say, from a test file or a data generation utility).
In Postman you can use a random integer ranging from 0 to 1000; in your data you can use it like this:
{
  "location": {
    "name": "Testuser{{$randomInt}}",
    "branding_domain_id": "52f9f8e2-72b7-0029-2dfa-84729e59dfee",
    "parent_id": "52f9f8e2-731f-b2e1-2dfa-e901218d03d9"
  }
}
Just my five cents on this matter: when using randomInt there is always a chance that the number is already present in the DB, which can cause issues.
The solution (for me at least) is to use $timestamp instead.
Example:
{
  "username": "test{{$timestamp}}",
  "password": "test"
}
For anyone who's about to downvote me: this post was made before the discussion in the comments with the OP (see below). I'm leaving it in place so the comment from the OP which eventually described what he needs isn't removed from the question.
From what I understand you're looking for, here's a basic solution. It's assuming that:
you're developing some kind of script where you need test data
the name field should be unique each time it's run
If your question was more specific then I'd be able to give you a more specific answer, but this is the best I can do from what's there right now.
var counter = location.hash ? parseInt(location.hash.slice(1)) : 1; // get a unique counter from the URL
var unique_name = 'Testuser' + counter; // create a unique name
location.hash = ++counter; // increase the counter by 1
You can forcibly change the counter by looking in the address bar and changing the URL from ending in #1 to #5, etc.
You can then use the variable name when you build your object:
var location = {
  name: unique_name,
  branding_domain_id: 'however-you-currently-get-it',
  parent_id: 'however-you-currently-get-it'
};
Add the below code to the pre-request script:
var myUUID = require('uuid').v4();
pm.environment.set('myUUID', myUUID);
and use {{myUUID}} wherever you want, like:
name: "{{myUUID}}"
It will generate a random, unique GUID for every request.
var uuid = require('uuid');
pm.globals.set('unique_name', 'testuser' + uuid.v4());
Add the above code to the pre-request tab.
This way you can reuse the unique name for subsequent API calls.
Dynamic variables like randomInt or guid are dynamic, i.e. you do not know what was sent in the request. There is no way to refer to the value again unless it is sent back in the response; even if you store it in a variable, it will still be dynamic.
Another way is:
var allowed = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
var shuffled_unique_str = allowed.split('').sort(function(){return 0.5-Math.random()}).join('');
Courtesy: refer to this link for more options.
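To actually reference the shuffled string in a request body, it could be stored as a variable in the pre-request script, for instance (the variable name here is just illustrative):

// assuming shuffled_unique_str from the snippet above
pm.environment.set('unique_name', 'Testuser' + shuffled_unique_str.slice(0, 8));

The request body can then use "name": "{{unique_name}}" in the same way as the other answers.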