Fetching AWS Marketplace entitlements after SaaS subscription - aws-sdk

A customer subscribes to my SaaS product on AWS Marketplace and is redirected to my registration form.
My registration form receives the registration token and exchanges it for the product code and customer identifier via the ResolveCustomer method of the AWS Marketplace Metering Service.
If I understood the AWS documentation correctly, this means that the subscription is now active from the perspective of AWS Marketplace, right?
Does this also mean that if I fetch entitlements with the AWS Marketplace Entitlement Service for that same customer, they will already be returned without an intermediate step?
The intermediate step being some kind of additional verification or confirmation procedure/API call.

The subscription already starts at the redirect (order confirmation).
Yes, entitlements are available right away once the subscription starts.
There is no intermediate step.
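For illustration, the whole flow can be as small as this with the AWS SDK for JavaScript v3 (a minimal sketch: error handling and NextToken pagination of GetEntitlements are omitted, and the function name is mine):

```typescript
import {
  MarketplaceMeteringClient,
  ResolveCustomerCommand,
} from "@aws-sdk/client-marketplace-metering";
import {
  MarketplaceEntitlementServiceClient,
  GetEntitlementsCommand,
} from "@aws-sdk/client-marketplace-entitlement-service";

// Both Marketplace services are only served out of us-east-1.
const metering = new MarketplaceMeteringClient({ region: "us-east-1" });
const entitlementSvc = new MarketplaceEntitlementServiceClient({ region: "us-east-1" });

export async function handleRegistration(registrationToken: string) {
  // Exchange the registration token for the customer identifier and product code.
  const resolved = await metering.send(
    new ResolveCustomerCommand({ RegistrationToken: registrationToken })
  );

  // Fetch the customer's entitlements right away -- no confirmation step in between.
  const result = await entitlementSvc.send(
    new GetEntitlementsCommand({
      ProductCode: resolved.ProductCode!,
      Filter: { CUSTOMER_IDENTIFIER: [resolved.CustomerIdentifier!] },
    })
  );
  return result.Entitlements ?? [];
}
```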

Is there any way to define a "Service Contributor" role per API?

I'd like User-A to be able to contribute to API-A but not have access to API-B.
When I look at the Azure APIM built-in roles (link below), I notice that the API Management Service Contributor role is defined for all APIs.
Is it possible to define a "Service Contributor" role per API, as opposed to all APIs?
If not, is there any other technique that helps me achieve the same goal?
AFAIK, you can restrict a user to a specific set of APIs. This can be done at the product level: you add the APIs to a product and allow that product to a specific set of users, keeping the scope of their subscription at the product level.
1. Created two different APIs in the APIM instance.
2. After adding the APIs, opened the new APIM developer portal and published it (APIM instance > Developer portal > Portal overview).
3. In APIM instance > Products, added a new product "Dotnet6FunctionAPIs", added the .NET 6 Function App APIs to it, checked the options "Requires subscription" and "Requires approval", and published the product.
4. In APIM instance > Users, created a user, and under Products > Dotnet6FunctionAPIs > Access control allowed access to the Developers group.
5. Logged in to the developer portal with the new user's credentials (https://<apiminstance_name>.portal.azure-api.net/), opened Products > Dotnet6FunctionAPIs, and clicked Subscribe.
Here the admin can approve the user's access to that product's APIs and can cancel the subscription whenever the admin wants to.
After the subscription is approved, the user can test the APIs present in the product.
If you observe, I have allowed the users (under the Developers group) access only to the product "Dotnet6FunctionAPIs", which contains the specific APIs added to it.
This is one of the ways to restrict users from accessing the other APIs: add only specific APIs to a product and give the users access to that product. A scripted version of the same setup is sketched below.
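If you'd rather script the product setup than click through the portal, a rough Azure CLI sketch of the same steps could look like this (the resource names are placeholders; treat it as a starting point rather than a verified recipe):

```
# Create a product that requires a subscription and admin approval
az apim product create \
  --resource-group my-rg \
  --service-name my-apim \
  --product-id dotnet6-function-apis \
  --product-name "Dotnet6FunctionAPIs" \
  --subscription-required true \
  --approval-required true \
  --state published

# Add only the specific API(s) to the product
az apim product api add \
  --resource-group my-rg \
  --service-name my-apim \
  --product-id dotnet6-function-apis \
  --api-id my-function-api
```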
Updated Answer:
As Markus said, there are three built-in roles in APIM. API Management Service Contributor grants CRUD access to the complete APIM instance (all APIs and operations) and cannot be restricted to specific APIs.
I have looked at the permissions granted to the API Management Service Contributor built-in role. Among those permissions, I believe the one to scope down is the API policy write permission:
Write (Access) - Set API policy configuration (Permissions) - Creates or updates policy configuration for the API.
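A hedged sketch of what a custom role limited to that permission could look like (the action strings come from the Microsoft.ApiManagement resource provider; the role name, description, and scopes are placeholders I made up):

```json
{
  "Name": "APIM Single-API Policy Contributor (custom)",
  "Description": "Can read APIs and edit API policy, nothing else",
  "Actions": [
    "Microsoft.ApiManagement/service/apis/read",
    "Microsoft.ApiManagement/service/apis/policies/write"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
  ]
}
```

The per-API restriction would then come from the role assignment rather than the role definition: assign this role to User-A at the scope of API-A only (.../service/<apim-name>/apis/api-a), so it does not apply to API-B.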

If I have a transaction_id of an authorization request in Magento, can I use that same transaction id and token to capture funds in Salesforce?

So I am using Salesforce and Magento for order management in the backend. My customers place orders on my marketplace in Magento, and then Magento sends the orders to Salesforce for reporting. I would like to process the orders in Salesforce; however, I don't want to be flagged for an XSS attack or a man-in-the-middle attack by Authorize.net. I want to know if I can use the transaction token and ID that I received when authorizing the amount on the customer's card in Magento to capture the funds in Salesforce.
Magento makes an API call to Authorize.net to authorize the amount on the customer's credit card.
Then Salesforce uses that same authorization transaction ID to capture the funds.
Yes, you can. As long as you have the proper login ID, transaction ID, and CIM profile IDs, the payment can originate from any system you control.
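For illustration, a minimal sketch of what the capture call could look like, assuming the authorization's transaction ID has been handed over from Magento. The priorAuthCaptureTransaction request shape and endpoint are from Authorize.net's public API; the credentials and IDs are placeholders, and in Salesforce this would be an Apex HTTP callout rather than TypeScript, but the payload is the same:

```typescript
// Sketch: capture a previously authorized Authorize.net transaction.
// All credential values and the transaction ID are placeholders.
const endpoint = "https://apitest.authorize.net/xml/v1/request.api"; // sandbox URL

async function capturePriorAuth(refTransId: string, amount: string) {
  const body = {
    createTransactionRequest: {
      merchantAuthentication: {
        name: "YOUR_API_LOGIN_ID",
        transactionKey: "YOUR_TRANSACTION_KEY",
      },
      transactionRequest: {
        transactionType: "priorAuthCaptureTransaction",
        amount, // must not exceed the authorized amount
        refTransId, // the transaction ID returned by the authorization in Magento
      },
    },
  };

  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.json();
}
```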

How to assign multiple service account credentials to Google Cloud Functions?

I have three service accounts:
App engine default service account
Datastore service account
Alert Center API service account
My cloud function uses Firestore in Datastore mode for bookkeeping and invokes the Alert Center API.
One can assign only one service account while deploying a cloud function.
Is there a way, similar to AWS, to create multiple inline policies and assign them to the default service account?
P.S. I tried creating a custom service account, but Datastore roles are not supported. Also, I do not want to store credentials in environment variables or upload a credentials file with the source code.
You're looking at service accounts a bit backwards.
Granted, I see how the naming can lead you in this direction. "Service" in this case doesn't refer to the service being offered, but rather to the non-human entities (i.e. apps, machines, etc - called services in this case) trying to access that offered service. From Understanding service accounts:
A service account is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user. Your application assumes the identity of the service account to call Google APIs, so that the users aren't directly involved.
So you shouldn't be looking at service accounts from the offered service perspective - i.e. Datastore or Alert Center API, but rather from their "users" perspective - your CF in this case.
That single service account assigned to a particular CF is simply identifying that CF (as opposed to some other CF, app, machine, user, etc) when accessing a certain service.
If you want that CF to be able to access a certain Google service you need to give that CF's service account the proper role(s) and/or permissions to do that.
For accessing the Datastore you'd be looking at these Permissions and Roles. If the datastore that your CFs need to access is in the same GCP project, the default CF service account - which is the same as the GAE app's one from that project - already has access to the Datastore (assuming you're OK with using the default service account).
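As a concrete illustration, granting the default Cloud Functions service account a Datastore role could look like this with gcloud (the project id is a placeholder, and roles/datastore.user is just one plausible choice; pick the narrowest role that fits your case):

```
# The default Cloud Functions service account is the App Engine one:
#   <project-id>@appspot.gserviceaccount.com
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-project@appspot.gserviceaccount.com" \
  --role="roles/datastore.user"
```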
I didn't use the Alert Center API, but apparently it uses OAuth 2.0, so you probably should go through Service accounts.

How to identify the requests received in Azure API Management

We have a production issue where an order is submitted twice. Currently we have an API for orders and we expose it to the client using API Management, where we have policies for URL mapping from the customer-facing URL to the actual one.
Now, our actual API got two requests, so we thought the customer submitted twice, but they have confirmed that they did not submit twice; so there may be an issue with API Management firing the request twice.
How can I identify the requests received by API Management?
Is there any chance that API Management will fire a request twice?
Appreciate any pointers.
The only way to fire a request twice in APIM would be by means of the retry policy or manually using send-request. Otherwise it must be the client calling your API two times. Each request in APIM gets its own unique id, accessible in policies as context.RequestId; this is the main way to track and identify requests. But these ids are produced inside APIM itself, so they are useful only if you're tracing a call from APIM into the backend.
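To make that tracing possible, one option is to forward that id to the backend as a header, so the backend's two log entries can be matched to one (or two) APIM requests. A minimal policy sketch (the header name is an arbitrary choice of mine):

```xml
<inbound>
    <base />
    <!-- Forward APIM's per-request id so backend logs can be correlated -->
    <set-header name="X-APIM-Request-Id" exists-action="override">
        <value>@(context.RequestId.ToString())</value>
    </set-header>
</inbound>
```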
Your best option now is to try to identify requests by client IP, method, URI, and time frame. APIM allows you to grab logs for certain periods of time (better if kept short) in JSON or CSV with the data I mentioned above. To do that, look into the byRequest report (https://learn.microsoft.com/en-us/rest/api/apimanagement/reports#ReportByRequest), grab the JSON/CSV, and try to identify the calls of interest.
For the future, you could look into onboarding your service to Azure Monitor (https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-use-azure-monitor) or Log Analytics; those provide an easier way to traverse logs.

Fiware KeyRock API bug: Membership of organisations not returned

As part of the FINISH accelerator, we are using FIWARE KeyRock and Wirecloud. Currently we are using the FIWARE Lab global instance to investigate.
We want to restrict our system so that users can only view data that belongs to the organisations of which they are a member.
The following flow seems logical, but correct me if I am wrong:
1. A user logs into Wirecloud and is directed through a KeyRock login screen.
2. A Wirecloud widget gets an access token from the Wirecloud environment. The access token was created when the user logged in.
3. The Wirecloud widget looks up the organisations and roles that the user is a member of. Based on this it adds organisation names to its query.
4. The Wirecloud widget queries a webservice (Orion or otherwise) using the query it just created.
5. We put the Wilma PEP proxy between the Wirecloud widget and the webservice to validate that the user is a member of the organisations in the query.
PROBLEM:
We can query user information from KeyRock using the https://account.lab.fiware.org/user?access_token=XXXXXXXXXXX call. But that does not contain any information about the organisations that the user is a member of according to the KeyRock web interface: the organisations element is an empty array. We get a bunch of roles in the JSON response, but none of them is the "members" role that you assign to users from the "Manage your organization members" screen in KeyRock.
Some digging revealed that the Keystone instance running on FIWARE Lab contains the information (assuming that a Keystone project = a KeyRock organisation). However, the access token provided by KeyRock is somehow not valid on the Keystone API. The API we used was accessible here: http://cloud.lab.fiware.org:4730/v3/
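For reference, the standard Keystone v3 call that should list a user's projects (i.e. KeyRock organisations, under the assumption above) is the one below; the token and user id are placeholders, and this is the kind of call on which the KeyRock-issued token is rejected:

```
# Placeholder token and user id
curl -H "X-Auth-Token: <access_token>" \
  http://cloud.lab.fiware.org:4730/v3/users/<user_id>/projects
```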
Getting a new access token from the Keystone API is not what we want, because that would be a different access token than Wirecloud has obtained, which would require some kind of proxy to log in again and retrieve the organisation membership. That rather defeats the point of passing an access token.
This seems to be a bug in the KeyRock API on the FIWARE Lab instance.
Or am I missing something here?
Or will this problem magically go away if we install KeyRock on our own server?
Thanks for any help,
Robin
You have to follow the steps explained here, but using the specific organisation. Probably you have missed the "Authorize" step.