Long-term access token in API Management REST API

I use API Management REST API to create and manage users, and I have to regenerate the access token every 30 days. It's a bit of a hassle to update my web applications every month.
Is there a better way?
I know I can do it programmatically too, but I'm not sure regenerating the token on every run is a healthy practice.

Anything you can do with an APIM SAS token you can also do via ARM. With ARM you can create a service principal (a service identity) and use it to manage your resources: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal?view=azure-cli-latest
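For example, a service principal can be created with the Azure CLI; a minimal sketch (the name, subscription ID, and resource group below are placeholders to fill in):

    # Create a service principal with Contributor rights on a resource group
    az ad sp create-for-rbac \
      --name "my-apim-client" \
      --role Contributor \
      --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>

The command prints an appId, password, and tenant; your application can use those to request short-lived Azure AD tokens automatically, instead of you regenerating a SAS token by hand every 30 days.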

There are many approaches to handling the access token.
The simplest is to put the token in a cache and set an expiry time (for example with SETEX in Redis), which in your case is 30 days out. After 30 days the access token is removed automatically by the cache system.
When you need the access token and find it missing, it means the token has expired or was never generated. Either way, you can generate a new access token in your program and put it in the cache; for the next 30 days you can keep using that token.
In addition, because of clock differences between your machine and the servers, it's recommended to set the cache expiry a little before the real expiry time. So in your case, set the expiry to 29 days, which guarantees the cached token is still valid on the remote server.
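A minimal sketch of that pattern in Node.js, assuming the node-redis client; the cache key name and the generateAccessToken helper are placeholders for your own code:

    import { createClient } from "redis";

    // Placeholder for your APIM access-token generation call
    declare function generateAccessToken(): Promise<string>;

    const redis = createClient();
    await redis.connect(); // top-level await: run as an ES module

    // 29 days in seconds: expire a day early to absorb clock differences
    const TTL_SECONDS = 29 * 24 * 60 * 60;

    async function getAccessToken(): Promise<string> {
      const cached = await redis.get("apim:access-token");
      if (cached !== null) return cached; // still valid on the remote server

      // Cache miss: the token expired or was never generated, so make a new one
      const token = await generateAccessToken();
      await redis.setEx("apim:access-token", TTL_SECONDS, token);
      return token;
    }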

Related

Trying to create a local copy of our google drive with rclone bringing down all the files, constantly hitting rate limits

As the title states, I'm trying to create a local copy of our entire Google Drive. We currently use it as a file storage service, which is obviously not the best use case, but to migrate elsewhere I of course need to get all the files; the entire drive is around 800 GB.
I am using rclone, specifically the copy command, to copy the files FROM Google Drive TO the local server, but I constantly run into user rate limit errors.
I am also authenticating with a Google service account, which I believe should provide higher usage limits.
2021/11/22 07:39:50 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=, userRateLimitExceeded)
But I don't really understand this, since according to my usage I am not even coming close. What exactly can I do to increase my rate limit (even if that means paying), or is there some other solution to this issue? Thanks
Error 403: User
Rate Limit Exceeded. Rate of requests for user exceed configured project quota.
The user rate limit is the number of requests your user is making per second; it's basically flood protection. You are flooding the server. It is unclear how Google calculates this beyond the 100 requests per user per second. If you are getting the error, there is really nothing you can do besides slow down your code. It's also unclear from your question how you are running these requests.
If you could include the code, we could see how the requests are being performed. However, as you state, you are using something called rclone, so there is no way of knowing how that works.
Your only option would be to slow your code down, if you have any control over that through this third-party application. If not, you may want to contact the owner of the product for direction on how to fix it.
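For what it's worth, rclone does have built-in throttling options; a hedged example (the flag values are starting points to tune, and the remote name is a placeholder):

    rclone copy gdrive: /mnt/backup \
      --tpslimit 5 \
      --transfers 2 \
      --checkers 4 \
      --drive-pacer-min-sleep 200ms \
      -P

--tpslimit caps HTTP transactions per second overall, --drive-pacer-min-sleep spaces out calls to the Drive API specifically, and lowering --transfers and --checkers reduces concurrency.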

AWS Cognito to authenticate App users and retrieve settings from MySQL database

I am doing some research for a mobile app I want to develop, and was wondering whether I could get feedback on the following architecture. Within my future app users should be able to authenticate and register themselves via the mobile app and retrieve and use their settings after a successful authentication.
What I am looking for is an architecture in which user accounts are managed by AWS Cognito, but all application related information is stored in a MySQL database hosted somewhere else.
Why host the database outside of AWS? Because of high costs / vendor lock-in / for the sake of learning about architecture rather than going all-in on AWS or Azure
Why not build the identity management myself? Because in the end I want to focus on the app and don't want to spend a lot of energy on something that AWS can already provide me with (yeah I know, not quite in line with my last argument above, but otherwise all my time goes into the database AND IAM).
One of my assumptions in this design (please correct me if I am wrong) is that it is only possible to retrieve data from a MySQL database with 'fixed credentials'. Therefore, I don't want the app (the user's device) to make these queries (but do this on the server instead) as the credentials to the database would otherwise be stored on the device.
Also, to make it (nearly) impossible for users to run queries on the database with a fake identity, I want the server to retrieve the user ID from AWS Cognito (rather than using the ID token from the device) and use this in the SQL query. This should protect the service from fake user ID injection from the device/user.
Are there functionalities I have missed in any of these components that could make my design less complicated or which could improve the flow?
Is that API (the one in step 3) managed by AWS API Gateway? If so, your Cognito user pool can be set as an Authorizer in API Gateway, and the gateway will then take care of the token verification automatically (authorizers enable you to control access to your APIs using Amazon Cognito user pools or a Lambda function).
You can also do the token verification in a Lambda if you need to verify something else in the token.
Regarding the connection from Node.js (assuming that is an AWS Lambda) to the external database: that will work fine, but keep security in mind, as your customers' data will travel outside AWS. Try to use tools like AWS Secrets Manager to keep your database passwords safe and rotate them from time to time in your Lambda.
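A minimal sketch of that Lambda, assuming a REST API Gateway with a Cognito user pool authorizer (which puts the verified claims on the request context) and the mysql2 driver; the secret name, table, and field names are placeholders:

    import { SecretsManagerClient, GetSecretValueCommand } from "@aws-sdk/client-secrets-manager";
    import mysql from "mysql2/promise";

    const sm = new SecretsManagerClient({});

    export const handler = async (event: any) => {
      // The authorizer already verified the token: take the user ID ("sub")
      // from the request context, never from the request body.
      const userId = event.requestContext.authorizer.claims.sub;

      // Database credentials live in Secrets Manager, not in the app
      const secret = await sm.send(new GetSecretValueCommand({ SecretId: "app/mysql" }));
      const creds = JSON.parse(secret.SecretString!);

      const conn = await mysql.createConnection({
        host: creds.host,
        user: creds.username,
        password: creds.password,
        database: creds.dbname,
      });
      const [rows] = await conn.execute(
        "SELECT * FROM settings WHERE user_id = ?",
        [userId]
      );
      await conn.end();
      return { statusCode: 200, body: JSON.stringify(rows) };
    };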

Alternative access to application files when server is down

I have an application that generates reports every hour. These reports are very critical (and sensitive) for the users, and the only access is through the application (Excel/PDF generated in memory from the database) after user/password/role validation.
Last week the server that hosts the application shut down for several hours (hardware failure), so the users could not retrieve those reports (and I couldn't access the database immediately).
My client needs at least access to the last generated reports. For example, if the failure occurs at 5 pm, he needs the 4 pm report.
So I thought about storing the reports somewhere else. The server/network administration is not my responsibility. I don't have another server (and I can't prevent network or hardware failures forever), but I have a hard drive connected to the same server network (a NAS).
I am also thinking about storing the reports in Google Drive (the client has G Suite; with some encryption) or some other cloud service, but I am aware that this requires permanent internet access.
What do you recommend I do?
Have a nice day.
The best approach uses Nginx: create multiple instances of the executable and point Nginx at them, so that if one instance goes down, the other instance will serve the requests and the app will stay live.
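A sketch of that Nginx configuration (ports and names are placeholders):

    upstream report_app {
        server 127.0.0.1:8080;          # primary instance
        server 127.0.0.1:8081 backup;   # only used when the primary is down
    }

    server {
        listen 80;
        location / {
            proxy_pass http://report_app;
        }
    }

Note that this only survives a crashed process or instance on a machine that is still running; to survive the hardware failure described in the question, the backup instance (or the NAS/cloud copy of the reports) has to live on a different machine.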

Secure AND Stateless JWT Implementation

Background
I am attempting to implement token authentication with my web application using JSON Web tokens.
There are two things I am trying to maintain with whatever strategy I end up using: statelessness and security. However, from reading answers on this site and blog posts around the internet, there appears to be some folks who are convinced that these two properties are mutually exclusive.
There are some practical nuances that come into play when trying to maintain statelessness. I can think of the following list:
Invalidating compromised tokens on a per-user basis before their expiration date.
Allowing a user to log out of all of their "sessions" on all machines at once and having it take immediate effect.
Allowing a user to log out of the current "session" on their current machine and having it take immediate effect.
Making permission/role changes on a user record take immediate effect.
Current Strategy
If you utilize an "issued time" claim inside the JWT in conjunction with a "last modified" column in the database table representing user records, then I believe all of the points above can be handled gracefully.
When a web token comes in for authentication, you could query the database for the user record and:
if (token.issued_at < user.last_modified) then token_valid = false;
If you find out someone has compromised a user's account, then the user can change their password and the last_modified column can be updated, thus invalidating any previously issued tokens. This also takes care of the problem with permission/role changes not taking immediate effect.
Additionally, if the user requests an immediate log out of all devices then, you guessed it: update the last_modified column.
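A sketch of that check in Node.js, assuming the jsonwebtoken library; findUserById is a placeholder for your user-record lookup:

    import jwt from "jsonwebtoken";

    // Placeholder for your user-record lookup
    declare function findUserById(id: string): Promise<{ lastModified: Date }>;

    async function isTokenValid(token: string, secret: string): Promise<boolean> {
      let payload: jwt.JwtPayload;
      try {
        payload = jwt.verify(token, secret) as jwt.JwtPayload;
      } catch {
        return false; // bad signature, or past the exp claim
      }
      const user = await findUserById(payload.sub as string);
      // iat is in seconds: reject tokens issued before the record last changed
      return (payload.iat ?? 0) * 1000 >= user.lastModified.getTime();
    }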
The final problem that this leaves is per-device log out. However, I believe this doesn't even require a trip to the server, let alone a trip to the database. Couldn't the sign out action just trigger some client-side event listener to delete the secure cookie holding the JWT?
Problems
First of all, are there any security flaws that you see in the approach above? How about a usability issue that I am missing?
Once that question is resolved, I'm really not fond of having to query the database each time someone makes an API request to a secure end point, but this is the only strategy that I can think of. Does anyone have any better ideas?
You have made a very good analysis of how some common needs break the statelessness of JWT. I can only propose some improvements on your current strategy.
Current strategy
The drawback I see is that a query to the database is always required, and trivial modifications to user data could change last_modified and invalidate tokens.
An alternative is to maintain a token blacklist. Usually an ID is assigned to each token, but I think you can use last_modified. As token revocations are probably rare, you could keep a light blacklist (even cached in memory) with just userId and last_modified.
You only need to set an entry after updating critical user data (password, permissions, etc.) when currentTime - maxExpiryTime < last_login_date. The entry can be discarded when currentTime - maxExpiryTime > last_modified (no more non-expired tokens in circulation).
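A sketch of such an in-memory blacklist (the names and the 15-minute token lifetime are illustrative):

    // userId -> last_modified (ms) for users whose tokens were revoked recently
    const blacklist = new Map<string, number>();

    const MAX_EXPIRY_MS = 15 * 60 * 1000; // assumed maximum token lifetime

    function revoke(userId: string, lastModifiedMs: number): void {
      blacklist.set(userId, lastModifiedMs);
    }

    function isRevoked(userId: string, issuedAtMs: number): boolean {
      const lastModified = blacklist.get(userId);
      if (lastModified === undefined) return false;
      if (Date.now() - MAX_EXPIRY_MS > lastModified) {
        // Every token issued before the change has expired by now
        blacklist.delete(userId);
        return false;
      }
      return issuedAtMs < lastModified;
    }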
Couldn't the sign out action just trigger some client-side event listener to delete the secure cookie holding the JWT?
If you are in the same browser with several open tabs, you can use localStorage events to sync info between tabs and build a logout mechanism (or login / user changed). If you mean different browsers or devices, then you would need some way to send events from server to client, which means maintaining an active channel: for example a WebSocket, or a push message to a native mobile app.
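For the same-browser case, a sketch of the localStorage mechanism (this assumes the JWT cookie is not HttpOnly, so the client can delete it):

    // Any tab: signal logout to the other tabs by writing a key
    function logoutAllTabs(): void {
      localStorage.setItem("logout", String(Date.now()));
    }

    // Every tab: the storage event fires in the OTHER tabs when the key changes
    window.addEventListener("storage", (e: StorageEvent) => {
      if (e.key === "logout") {
        document.cookie = "jwt=; Max-Age=0; path=/"; // drop the token cookie
        window.location.assign("/login");
      }
    });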
Are there any security flaws that you see in the approach above?
If you are using a cookie, note that you need additional protection against CSRF attacks. Also, if you do not need to access the cookie from the client side, mark it as HttpOnly.
How about a usability issue that I am missing?
You also need to deal with rotating tokens when they are close to expiring.

Simple, secure API authentication system

I have a simple REST JSON API for other websites/apps to access some of my website's database (through a PHP gateway). Basically the service works like this: call example.com/fruit/orange, and the server returns JSON information about the orange. Here is the problem: I only want websites I permit to access this service. With a simple API key system, any website could quickly obtain a key by copying it from an authorized website's (potentially) client-side code. I have looked at OAuth, but it seems a little complicated for what I am doing. Solutions?
You should use OAuth.
There are actually two OAuth specifications, the 3-legged version and the 2-legged version. The 3-legged version is the one that gets most of the attention, and it's not the one you want to use.
The good news is that the 2-legged version does exactly what you want: it allows an application to grant access to another via either a shared secret key (very similar to Amazon's Web Service model; you would use the HMAC-SHA1 signing method) or via a public/private key system (signing method: RSA-SHA1). The bad news is that it's not nearly as well supported as the 3-legged version yet, so you may have to do a bit more work right now than you otherwise might.
Basically, 2-legged OAuth just specifies a way to "sign" (compute a hash over) several fields which include the current date, a random number called "nonce," and the parameters of your request. This makes it very hard to impersonate requests to your web service.
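A rough sketch of that signing step with Node's crypto module (simplified: real 2-legged OAuth percent-encodes and sorts the parameters into a canonical base string, which is skipped here):

    import { createHmac, randomBytes } from "crypto";

    function signRequest(params: Record<string, string>, secret: string): string {
      const withOauth: Record<string, string> = {
        ...params,
        oauth_timestamp: String(Math.floor(Date.now() / 1000)),
        oauth_nonce: randomBytes(8).toString("hex"), // the random "nonce"
      };
      // Sort keys so client and server hash the same string
      const base = Object.keys(withOauth)
        .sort()
        .map((k) => `${k}=${withOauth[k]}`)
        .join("&");
      return createHmac("sha1", secret).update(base).digest("base64");
    }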
OAuth is slowly but surely becoming an accepted standard for this kind of thing -- you'll be best off in the long run if you embrace it because people can then leverage the various libraries available for doing that.
It's more elaborate than you would initially want to get into, but the good news is that a lot of people have spent a lot of time on it, so you know you haven't forgotten anything. A great example is that very recently Twitter found a gap in OAuth security which the community is currently working on closing. If you'd invented your own system, you'd have to figure out all this stuff on your own.
Good luck!
Chris
OAuth is not the solution here.
OAuth is for when you have end users and want third-party apps not to handle end-user passwords. When to use OAuth:
http://blog.apigee.com/detail/when_to_use_oauth/
Go for a simple API key.
And take additional measures if there is a need for a more secure solution.
Here is some more info, http://blog.apigee.com/detail/do_you_need_api_keys_api_identity_vs._authorization/
If someone's client side code is compromised, they should get a new key. There's not much you can do if their code is exposed.
You can, however, be more strict by requiring the IP addresses of authorized servers to be registered in your system for a given key. This adds an extra step and may be overkill.
I'm not sure what you mean by a "simple API key", but you should be using some kind of authentication with private keys (known only to the client and server), and then perform some kind of checksum algorithm on the data to ensure that the client is indeed who you think it is and that the data has not been modified in transit. Amazon AWS is a great example of how to do this.
I think it may be a little strict to guarantee that code has not been compromised on your clients' side. I think it is reasonable to place responsibility on your clients for the security of their own data. Of course this assumes that an attacker can only mess up that client's account.
Perhaps you could keep a log of which IPs requests are coming from for a particular account, and if a new IP comes along, flag the account, send an email to the client, and ask them to authorize that IP. I don't know, maybe something like that could work.
Basically you have two options, either restrict access by IP or then have an API key, both options have their positive and negative sides.
Restriction by IP
This can be a handy way to restrict the access to you service. You can define exactly which 3rd party services will be allowed to access your service without enforcing them to implement any special authentication features. The problem with this method is however, that if the 3rd party service is written for example entirely in JavaScript, then the IP of the incoming request won't be the 3rd party service's server IP, but the user's IP, as the request is made by the user's browser and not the server. Using IP restriction will hence make it impossible to write client-driven applications and forces all the requests go through the server with proper access rights. Remember that IP addresses can also be spoofed.
API key
The advantage of API keys is that you do not have to maintain a list of known IPs; you do have to maintain a list of API keys, but it's easier to automate their maintenance. Basically this works as follows: you have two keys, for example a user id and a secret password. Each method request to your service should provide an authentication hash consisting of the request parameters, the user id, and a hash of these values (where the secret password is used as the hash salt). This way you can both authenticate and restrict access. The problem with this is, once again, that if the 3rd party service is written as client-driven (for example JavaScript or ActionScript), then anyone can parse out the user id and secret salt values from the code.
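On the server side, verifying such a hash is just recomputing it with the stored secret and comparing; a sketch using an HMAC, which is the safer variant of a salted hash (field names are illustrative):

    import { createHmac, timingSafeEqual } from "crypto";

    function verifyRequest(
      params: Record<string, string>, // request parameters, excluding the hash
      sentHash: string,               // hash the client sent, hex-encoded
      secret: string                  // this user's secret password
    ): boolean {
      const base = Object.keys(params)
        .sort()
        .map((k) => `${k}=${params[k]}`)
        .join("&");
      const expected = createHmac("sha256", secret).update(base).digest();
      const given = Buffer.from(sentHash, "hex");
      // timingSafeEqual throws on length mismatch, so check lengths first
      return given.length === expected.length && timingSafeEqual(given, expected);
    }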
Basically, if you want to be sure that only the few services you've specifically defined will be allowed to access your service, then your only option is to use IP restriction and hence force them to route all requests via their servers. If you use an API key, you have no way to enforce this.