Azure API Management REST call to create subscription for user (missing)

I'm attempting to delegate product subscription from Azure API Management using the sample provided here. My prototype has a functioning user authentication delegation; however, the product subscription delegation is befuddling.
During user login delegation I receive a request from APIM to my delegation page and handle it according to the sample linked above without issue. During delegation of product subscription, a call is made to my login page first, not the delegation page. This leads me to my first series of questions:
Can someone explain why delegation of product subscription would fundamentally flow differently than delegation of user authentication?
If the login delegation page (as per the sample referenced above) handles user authentication by checking User.Identity.IsAuthenticated, why can't product delegation do the same and why would it be sent to the login page and not the delegation page?
I've handled the above issue by using the login page to evaluate whether or not the user is authenticated first, and then redirecting them to the returnUrl as follows:
if (User.Identity.IsAuthenticated)
{
return LocalRedirect(returnUrl);
}
The value of returnUrl, as provided by APIM, contains the following variables:
Path = /Identity/Account/Manage/Delegate
productId = [productId]
userId = [userId]
operation = Subscribe
salt = [salt]
sig = [sig]
Since these are ALL the variables provided in the returnUrl from APIM, I have the following questions:
Following the documentation about subscriptions using the APIM REST API, how do you determine the following required properties:
subscriptionId
resourceGroupName
serviceName
sid
Additionally, for the request body, how do you determine properties.scope, as per this reference?
As a test, I set a breakpoint just before the line of code below, which calls the PUT method on the endpoint. I used Postman to test creating a subscription by copying the Authorization header out of VS2017 along with all the relevant header/body data. I was able to get back a 201 response indicating a subscription was created; however, it doesn't show up anywhere in the APIM portal, and I certainly didn't supply many of the "required" properties as defined in the docs article:
response = await client.PutAsync("/subscriptions/" + subscriptionId + "?api-version=" + apiVersion, new StringContent(ApimSubscriptionJson, Encoding.UTF8, "text/json"));
Here is the body of my test call to the API:
{
    "userId": "/users/c22afea6-3e9c-4b85-87a6-2d5e97e259cf",
    "scope": "/products/ring-0-beta-access"
}
Based on this oddity, I have the following additional questions:
If the subscription to the product was indeed created, where would it be if not in the Azure APIM portal? It also doesn't show up in the user's profile.
How am I able to get a 201 response on the PUT method if I haven't given the APIM REST API all the 'required' parameters?

I found a solution and wanted to share.
It was fine to use the method explained in the Channel 9 video; I was simply using the wrong property. Instead of userId it should be ownerId. After running a GET on my subscriptions I noticed I could see them all; they have no association to a user, so they don't show up in the Azure APIM portal.
Another key miss was notifications. If you leave out the &notify=true query string parameter, you won't get notified when someone subscribes to your API. This is particularly troublesome when your API requires approval.
This seems like a potential product bug, as you shouldn't be able to create an 'owner-less' subscription. It makes the subscription nearly impossible to find if you don't know where to look.
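Putting that together, here is a minimal sketch of the corrected call in Python. It targets the direct APIM management endpoint used in the question; the host, api-version, SAS token format, and subscription id are placeholders/assumptions, while the ownerId/scope body and the notify=true query parameter are the fixes described above. Depending on the api-version, the body may instead need to be wrapped in a "properties" object.
import requests

# Placeholders/assumptions: host, api-version and the SAS token format depend
# on your APIM service; only ownerId/scope/notify reflect the fix described above.
APIM_MGMT_BASE = "https://{servicename}.management.azure-api.net"
API_VERSION = "2019-12-01"          # assumption; use the version your service accepts
SAS_TOKEN = "SharedAccessSignature uid=...&ex=...&sn=..."  # elided

subscription_id = "my-new-subscription"   # hypothetical sid you choose

body = {
    # ownerId (not userId) is what associates the subscription with a user
    "ownerId": "/users/c22afea6-3e9c-4b85-87a6-2d5e97e259cf",
    "scope": "/products/ring-0-beta-access",
}

resp = requests.put(
    f"{APIM_MGMT_BASE}/subscriptions/{subscription_id}",
    # notify=true makes APIM send the subscription/approval notification emails
    params={"api-version": API_VERSION, "notify": "true"},
    headers={"Authorization": SAS_TOKEN},
    json=body,
)
resp.raise_for_status()  # expect 201 Created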


[Adyen][POS][Local integration] Send metadata to the terminal

Currently I'm trying to send SaleToAcquirerData metadata to the terminal when the order syncs to the Adyen backend. I have checked in the Adyen backend but don't see the metadata, and my webhook doesn't receive it either.
I need to send metadata to the terminal and receive that metadata in my webhook.
In this answer, I assume that you:
Already receive webhook events and your webhook is correctly configured
Receive the core data from webhook events, but the POS-related additional data is missing
Have you activated the POS additional data for this specific webhook?
You can do it in the Customer Area, Developer -> Webhook, select your webhook and then "Additional settings".
Save, and exit. Your future webhook events should contain the POS metadata. Please note that you may still receive some events without the metadata you expect, because they were generated before the setting was changed.
EDIT: In case you want to use the API for this, you can also PATCH the existing webhook with additional settings using the new Management API.
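If you go the API route, a rough sketch of that PATCH might look like the following. The endpoint path and especially the property name inside additionalSettings are assumptions here, so check the Management API reference for the exact names before relying on it.
import requests

# Placeholders; the path and the additionalSettings property key are assumptions --
# consult the Adyen Management API reference for the exact contract.
MANAGEMENT_API = "https://management-test.adyen.com/v1"
MERCHANT_ID = "YOUR_MERCHANT_ACCOUNT"
WEBHOOK_ID = "YOUR_WEBHOOK_ID"
API_KEY = "YOUR_MANAGEMENT_API_KEY"

resp = requests.patch(
    f"{MANAGEMENT_API}/merchants/{MERCHANT_ID}/webhooks/{WEBHOOK_ID}",
    headers={"X-API-Key": API_KEY},
    json={
        "additionalSettings": {
            # hypothetical key: whichever toggle corresponds to the POS additional data
            "properties": {"includePosDetails": True},
        }
    },
)
resp.raise_for_status()
print(resp.json())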
While waiting for extra information, your question may also be understood another way: you want to access the metadata field of the additional data section of a webhook event.
In that case, these metadata fields should be submitted at the time of payment in the POST /payments request.
You can find more information about this in the Webhooks documentation.
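In that case, a minimal sketch of attaching metadata at payment time through the Checkout API might look like this; the endpoint version, field names and values are illustrative only, so check the /payments reference for the exact contract your integration uses.
import requests

# Illustrative only: endpoint version, keys and values are assumptions.
CHECKOUT_API = "https://checkout-test.adyen.com/v71"
API_KEY = "YOUR_CHECKOUT_API_KEY"

payment = {
    "merchantAccount": "YOUR_MERCHANT_ACCOUNT",
    "amount": {"currency": "EUR", "value": 1000},
    "reference": "ORDER-12345",
    "paymentMethod": {"type": "scheme"},  # card details elided
    # These key/value pairs are what later surfaces as metadata on the webhook
    "metadata": {"orderId": "12345", "channel": "pos"},
}

resp = requests.post(
    f"{CHECKOUT_API}/payments",
    headers={"X-API-Key": API_KEY},
    json=payment,
)
resp.raise_for_status()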

Get User's first and last name via Google API

Currently I am developing a Chrome-Gmail extension which requires me to get the logged-in user's first and last names. For experimentation, I have used the following Google API (userinfo) scope and have successfully obtained the names I wanted:
https://www.googleapis.com/auth/userinfo.profile
However, using the userinfo APIs will cause a change in the OAuth2 scopes in my manifest. This change will in turn cause a permission prompt for my existing users (if domain-wide delegation is not set up). The point is that showing more prompts to my users, or requesting an additional OAuth scope, is not something I want.
Currently our extensions use the following OAuth scopes and API :
Chrome's Identity API
Chrome's Storage API
GMAIL.modify
GMAIL.send
My question is, is it possible to get the first and last names using an API that is defined/allowed/provided for by any of the above scopes/permissions I listed? or is userinfo the only way to go?
Thank you very much.
Profile data like first name and last name is private data. You are correct that some Google APIs give you access to data that would normally require an extra scope. For email, you would normally need to request the email scope to get this back; however, the Gmail API does have a getProfile endpoint which will return the current user's email address without you requesting the email scope.
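For reference, a minimal sketch of that getProfile call, shown in Python for brevity (the same endpoint can be called from any HTTP client with the token from chrome.identity); creds is assumed to be credentials obtained with your existing Gmail scopes.
from googleapiclient.discovery import build

# users.getProfile works with the Gmail scopes you already have and
# returns the address, but no first/last name.
gmail = build("gmail", "v1", credentials=creds)  # creds: your existing OAuth credentials
profile = gmail.users().getProfile(userId="me").execute()
print(profile["emailAddress"])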
However, I am not aware of any APIs that will give you access to the user's first and last name without you requesting the profile or user.profile scope.
If you do decide to add the scope, I recommend going through the People API rather than the userinfo endpoint, as the data returned by the userinfo endpoint is not guaranteed to always include the name.
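If you do add the scope, a minimal sketch of the People API call (again Python for brevity; creds is assumed to include a profile-type scope such as userinfo.profile):
from googleapiclient.discovery import build

# people.get with personFields="names" returns the given and family names.
people = build("people", "v1", credentials=creds)  # creds: OAuth credentials with a profile scope
me = people.people().get(resourceName="people/me", personFields="names").execute()
for name in me.get("names", []):
    print(name.get("givenName"), name.get("familyName"))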

Cloud Scheduler + Cloud Functions -> Gmail API watch() - WORKING NOW

This is my first post here. I am sorry if it's a repost, but I've been searching for more than one month for the answer to solve my problem in all websites and forums and until now... no answers!
My goal is to make a Gmail pub/sub watch() to make an action whenever I receive a new email.
To do so, according to the developer's website, I need to subscribe to Gmail watch() on a daily basis with the code:
request = {
    'labelIds': ['INBOX'],
    'topicName': 'projects/myproject/topics/mytopic'
}
gmail.users().watch(userId='me', body=request).execute()
So far I have a working scheduled task with a service account with Invoker permissions. This part works just fine.
In my "initial authorization function" I have:
const {google} = require('googleapis');

// Retrieve OAuth2 config
const oauth2Client = new google.auth.OAuth2(
  process.env.CLIENT_ID,
  process.env.CLIENT_SECRET,
  process.env.CALLBACK_URL
);

exports.oauth2init = (req, res) => {
  // Define OAuth2 scopes
  const scopes = [
    'https://www.googleapis.com/auth/gmail.modify'
  ];

  // Generate + redirect to OAuth2 consent form URL
  const authUrl = oauth2Client.generateAuthUrl({
    access_type: 'offline',
    scope: scopes,
    // prompt: 'none' // Required in order to receive a refresh token every time
  });
  return res.redirect(authUrl);
};
My issue now is that the access token is generated via a prompt the first time and never gets refreshed (the token expires after 1 hour), which means this code stops working after that period and manual intervention is required. According to the documentation, I need to use the "offline" access type, and for "prompt" I can either omit it (only requests permissions the first time) or use 'none' (never asks), as described here.
I managed to make it work! Tomorrow I will continue with the process.
Should I post my working code here for reference?
Thanks!
I will rephrase the process you illustrated so that there is no ambiguity.
According to the documentation you linked:
You do not subscribe to watch(); you call watch()
watch() is an API call to the Gmail API that enables automatic publication of events to a Pub/Sub topic you define, given conditions you specify. Whose mailbox are you watching? For which events?
You subscribe to a Pub/Sub topic that is targeted by your previous watch() call
A process (e.g. a Google Cloud Function) subscribes to the topic and consumes the messages sent by the Gmail API
The call must be renewed at least every seven days
Because Google needs to be sure you still need to monitor the targeted inbox, it requires a renewal from you. Another watch() call acts as that renewal.
Cloud Scheduler enables this periodic renewal
This service triggers the renewal script from your question. To do so, it needs to authenticate to the platform that hosts the script. This is easier if your script is hosted in a Google service (Cloud Functions, Cloud Run, ...), and the authentication type depends on the form of the target URL. In all cases you DO need an authentication token in your request header. The token is generated from a service account you created with the right permission to call your script (e.g. Cloud Run Invoker). By default, the scheduler has the right to generate a token from it.
So far so good. Now comes the tricky part, which you don't mention in your question: how is your Gmail API client authenticated? You cannot monitor someone's inbox unless that person gave you permission, i.e. you call the API with the right OAuth2 token. Indeed, in the video you point to, they authenticate the user using this principle, implemented in their code with express-oauth2-handler.
So you will have one Cloud Function to initialize end-user authentication and watch his/her inbox. The renewal should do the same, but the problem is that the user will not be there to accept the end-user consent. This is where offline access comes in, but it is beyond the scope of your question. Finally, a second function will subscribe to the Pub/Sub topic and consume the messages as you need. See their implementation code, which populates a spreadsheet.
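As a rough illustration of what the scheduled renewal can look like once offline access is in place, here is a Python sketch; the stored refresh token, client id/secret and topic name are assumptions that would come from your own consent flow and project.
import os

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Credentials rebuilt from the refresh token captured during the one-time
# consent flow; the client library refreshes the access token automatically.
creds = Credentials(
    token=None,
    refresh_token=os.environ["STORED_REFRESH_TOKEN"],  # persisted after the consent flow
    token_uri="https://oauth2.googleapis.com/token",
    client_id=os.environ["CLIENT_ID"],
    client_secret=os.environ["CLIENT_SECRET"],
    scopes=["https://www.googleapis.com/auth/gmail.modify"],
)

gmail = build("gmail", "v1", credentials=creds)

# The scheduled function simply re-issues watch() to renew the subscription.
gmail.users().watch(
    userId="me",
    body={
        "labelIds": ["INBOX"],
        "topicName": "projects/myproject/topics/mytopic",
    },
).execute()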
The documentation you shared in the comments does not say that you can remove the token from the headers of the service account. Also, the Gmail API documentation you shared says that you only:
need to grant publish privileges to gmail-api-push@system.gserviceaccount.com. You can do this using the Cloud Pub/Sub Developer Console permissions interface following the resource-level access control instructions.
In order to achieve this, basically what you need is a setup of two Cloud Functions: the first, scheduled function is responsible for setting up the watch() (you can check this documentation for how to deploy a scheduled function), and the second function is triggered by the Pub/Sub topic that receives the Gmail notifications (you can check this documentation for how to build an event-triggered function). Both processes are similar.
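For the second function, a minimal sketch of a Pub/Sub-triggered Cloud Function that decodes the Gmail notification (Python background-function signature; what you do with the historyId afterwards is up to you):
import base64
import json

def handle_gmail_notification(event, context):
    """Triggered by the Pub/Sub topic that Gmail watch() publishes to."""
    # The message data is a base64-encoded JSON payload containing the
    # watched mailbox address and the latest historyId.
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    email_address = payload["emailAddress"]
    history_id = payload["historyId"]
    # From here, call users.history.list with history_id to find new messages.
    print(f"New activity for {email_address}, historyId={history_id}")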
NOTE: I have never used the Gmail API, so I am not sure if any extra steps are necessary, but then again, the documentation implies that setting up the permissions of that service account is enough to make it work.
EDIT:
As per the information you have shared, the issue is likely that you are not properly setting the Service Account used to authenticate with the Cloud Function. As described in the documentation, you have to grant the Service Account the Cloud Functions Invoker role in IAM.
Let me know if this fixed the issue.

ServiceNow API: how to comment as a specific user

I'm working on a project that consumes the ServiceNow API (REST). To do so, our client has registered us as a user so we can log in and make all the service calls we need. This project has an interface where users can log in once they have an account on ServiceNow as well; the username they type to log in has nothing to do with ServiceNow, by the way, but they later associate their ServiceNow users with it. They can perform some operations through this interface, all of which are done using the integration user/pass rather than their own ServiceNow users, partly because they should not need to share their passwords with us. But we need to track the correct user to record on ServiceNow, and I'm stuck specifically on commenting on an incident. The endpoint to comment is the following:
http://hostname/api/now/table/incident/{sys_id}
where the request body is a JSON object as simple as:
{
    "comments": "My comment is foo bar"
}
but when this comment is registered on ServiceNow, it is attributed to the integration user instead of the user who actually commented. Is there any way I could specify a particular user, considering I already have the user's ID on ServiceNow and am ready to include it in the request however it should be done?
I tried reading the ServiceNow documentation but had no clue how to solve it, although I've found something about impersonation.
This is happening because you're being proxied through the "Integration User" instead of your own account. As long as this is the case, your comments are going to be attributed to the Integration User.
I can think of two ways to fix this issue.
Ask the client to log you into their system directly as a user.
Implement a special API (Scripted REST API, available in Geneva or later) that allows you to identify the Incident and enter the comment, and then the script forges the comment on your behalf, attributing authorship correctly.
The first solution can be expensive due to possible additional licensing costs.
The second solution will require a willing client to devote 2-3 hours of development time, depending on the programmer.
Firstly, you need an integration user with sufficient rights. Our integration user has sufficient rights out of the box, but your situation could be different. A quick check is to try to impersonate another user using the menu.
Login as integration user to ServiceNow instance.
Go to https://{instance}.service-now.com/nav_to.do
Click on username at top right corner. This is a drop down.
There should be at least three menu items: "Profile", "Impersonate User", and "Logout". If you do not have "Impersonate User" in this menu, your integration user is missing some permissions. Contact a system administrator to configure the appropriate permissions if you are missing this menu item.
Then you need to find sys_id of user that you want to impersonate. For example:
https://{instance}.service-now.com/api/now/table/sys_user?sysparm_query=user_name={username}&sysparm_fields=sys_id
If you have sufficient privileges, you can invoke the following endpoint with the sys_id of the user you want to impersonate:
HTTP POST to https://{instance}.service-now.com/api/now/ui/impersonate/{user_sys_id} with body "{}" and content type "application/json". You need to provide HTTP basic authentication for this request as your integration user.
The response code on success is 200. The response body can be ignored. The interesting result of this response is a set of cookies for the impersonated user in the response headers. These cookies can be used for subsequent REST API calls until they expire. Use whatever mechanism your HTTP/REST client provides to capture them and supply them to the next calls.
For the Apache HTTP Client (Java), I'm creating an HTTP client context using:
HttpClientContext context = HttpClientContext.create();
context.setCookieStore(new BasicCookieStore());
Pass this context to the impersonation request and to subsequent API calls until you get a 401 reply; after that, reacquire the cookies. Setting a new cookie store is important, as otherwise some default cookie store is used.
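For completeness, here is the same flow sketched with Python's requests library; the instance URL, credentials, and sys_ids are placeholders, and the Session object plays the role of the cookie store above.
import requests

INSTANCE = "https://{instance}.service-now.com"   # placeholder
USER_SYS_ID = "target_user_sys_id"                # placeholder
INCIDENT_SYS_ID = "incident_sys_id"               # placeholder

session = requests.Session()  # fresh cookie jar, like the new BasicCookieStore

# Impersonate: basic auth as the integration user; on success (200) the
# response sets cookies for the impersonated user on the session.
resp = session.post(
    f"{INSTANCE}/api/now/ui/impersonate/{USER_SYS_ID}",
    json={},
    auth=("integration_user", "integration_password"),
)
resp.raise_for_status()

# Subsequent calls reuse the impersonation cookies, so the comment is
# attributed to the impersonated user rather than the integration user.
resp = session.patch(
    f"{INSTANCE}/api/now/table/incident/{INCIDENT_SYS_ID}",
    json={"comments": "My comment is foo bar"},
    headers={"Accept": "application/json"},
)
resp.raise_for_status()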
Two things to note:
This API looks like an internal one, so it could change at any time. If that happens, look at what the "Impersonate User" menu item does and replicate it yourself.
ServiceNow permissions are quite fine-grained, so the target user could lack permission to perform the operation. In some cases, if there is no permission to update a field, a PATCH on the object returns response 200 but the field is not updated. This introduces a surprising failure mode when you use impersonation.

How to authorize with OAuth 2.0 from Apps Script to Google APIs?

I'm playing around with Apps Script and trying to get an OAuth 2.0 access token.
Is there any sample out there showing how to get this working in Apps Script?
I am working on a cleaner tutorialized version of this, but here is a simple Gist that should give you some sample code on how things would work -
https://gist.github.com/4079885
It still lacks logout, error handling and the refresh_token capability, but at least you should be able to log in and call an OAuth 2 protected Google API (in this case it's a profile API).
You can see it in action here -
https://script.google.com/macros/s/AKfycby3gHf7vlIsfOOa9C27z9kVE79DybcuJHtEnNZqT5G8LumszQG3/exec
The key is to use the OAuth 2 web server flow. Take a look at the getAndStoreAccessToken function in the gist for the key details.
I hope to have this published in the next few weeks, but hopefully this will help in the meantime.
UPDATE - adding in info on redirect_uri
The client secret is tied to specific redirect URIs that the authorization code is returned to.
You need to set that at - https://code.google.com/apis/console/
The redirect URI you register there needs to match the published URI (which ends in /exec). You get the published URI from the script editor under Publish -> Deploy as web app. Make sure you are saving new versions and publishing the new versions when you make changes (the published URI stays the same).
I've modified the example above to use the newish state token API and the CacheService instead of UserProperties, which is now deprecated. Using the state token API seems to make things a little more secure, as the callback URL will stop accepting a state token after a timeout.
The same caveats apply. Your redirect URIs have to be added to your (script) project in the developer's console, and you have to copy the CLIENT_SECRET and CLIENT_ID out of the console and paste them in. If you're working within a domain, there don't seem to be any guarantees about what URL will be returned by ScriptApp.getService().getUrl(), so I wound up having it get the address dynamically, waiting for it to fail on the (second) redirect, and then hard-coding the resulting URI.
https://gist.github.com/mclaughta/2f4af6f14d6aeadb7611
Note that you can build an OAuth2 flow using this new API, but it's not a complete sample yet:
https://developers.google.com/apps-script/reference/script/script-app#newStateToken()
In particular, you should not pass 'state' directly to the /usercallback URL yourself, because the OAuth2 service provider is responsible for round-tripping the 'state' parameter. (Instead, you pass 'state' to the auth URL, and the service provider automatically attaches it to the callback URL.)