I'm considering creating a simple API to serve my app. Basically it would be a very simple monitoring system (sending a true/false value) and uploading it to a database or BigQuery. According to the Firebase documentation, the Functions service stores the IP address temporarily (which is considered personal data) - https://firebase.google.com/support/privacy/.
Does this mean I would need to ask users for consent to use this API in order to comply with the GDPR?
I have a Google Cloud Function written in Java.
Clients will invoke the function using its HTTP trigger URL.
But that is not secure. I have gone through some docs saying that you should pass a token or client ID and then verify it on the server side.
Can anyone explain that in detail and provide a code example if possible?
My question is how to authenticate the client when they invoke the function via its HTTP trigger.
This page explains quite well all the options you have to authenticate a requester on Cloud Functions.
If you have users, the best way is to use Firebase Auth (or Google Cloud Identity Platform, which is simply a more advanced version of Firebase Auth with more features).
However, you need to grant all your users the cloudfunctions.invoker role to allow them to invoke the Cloud Function, which can be difficult to manage. You can also perform the check on your side, but in that case you remove Google's security (filtering) layer and have to handle all the traffic yourself (not ideal in terms of billing and in case of an attack).
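If you do perform the check on your side, here is a minimal sketch of what that could look like for a Java HTTP function, using the Functions Framework and the Firebase Admin SDK to verify a Firebase ID token. The Bearer-header convention, class name and status codes are assumptions, not a prescribed implementation:

```java
import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;
import com.google.firebase.FirebaseApp;
import com.google.firebase.auth.FirebaseAuth;
import com.google.firebase.auth.FirebaseAuthException;
import com.google.firebase.auth.FirebaseToken;
import java.util.List;

public class SecuredFunction implements HttpFunction {

  static {
    // Initialize once per instance; on Cloud Functions this picks up the
    // runtime's default credentials, so no key file is required.
    FirebaseApp.initializeApp();
  }

  @Override
  public void service(HttpRequest request, HttpResponse response) throws Exception {
    // Assumed convention: the client sends "Authorization: Bearer <Firebase ID token>".
    List<String> authHeaders = request.getHeaders().getOrDefault("Authorization", List.of());
    String authHeader = authHeaders.isEmpty() ? "" : authHeaders.get(0);
    if (!authHeader.startsWith("Bearer ")) {
      response.setStatusCode(401);
      response.getWriter().write("Missing token");
      return;
    }

    try {
      // Verifies the signature, expiry and audience of the Firebase ID token.
      FirebaseToken decoded =
          FirebaseAuth.getInstance().verifyIdToken(authHeader.substring("Bearer ".length()));
      response.getWriter().write("Hello " + decoded.getUid());
    } catch (FirebaseAuthException e) {
      response.setStatusCode(403);
      response.getWriter().write("Invalid token");
    }
  }
}
```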
The last solution, API keys, is not recommended, especially for end users. But for machine-to-machine calls it's sometimes the only option. However, there isn't an out-of-the-box solution, and for this I wrote an article that explains how to create a Cloud Endpoint (or now a Cloud API Gateway, the serverless flavor of Cloud Endpoints based on ESPv2) to accept API keys.
With this last solution, if you change your security definition you can also accept OAuth2 tokens coming from Firebase Auth (or Cloud Identity Platform), but this time you don't need to grant all the users the IAM role on your Cloud Function. The token only needs to be valid, and it's the Cloud Endpoint's service account that is used to perform the call (and thus the one that needs to be authorized on the Cloud Function).
In addition, because you can accept OAuth2 tokens, you can also accept non-Google tokens, and thus keep your users in any OAuth2-compliant IdP (Keycloak, Okta, ...).
You could use an external OAuth server like Keycloak (https://github.com/keycloak/keycloak), or use something like JSON Web Tokens -- https://jwt.io/ -- available for various languages and suitable for microservices.
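As a hedged sketch of the JWT option in Java, using the jjwt library (0.11.x API assumed): the token here is assumed to be signed with a shared HMAC secret; for an issuer such as Keycloak, which typically signs with RSA, you would pass the realm's public key to setSigningKey instead. The class name and secret are placeholders.

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.security.Keys;
import java.nio.charset.StandardCharsets;
import javax.crypto.SecretKey;

public class TokenVerifier {

  // Shared secret agreed with the token issuer (placeholder; needs at least 256 bits for HS256).
  private static final SecretKey KEY =
      Keys.hmacShaKeyFor(
          "change-me-to-a-long-random-secret-of-at-least-32-bytes".getBytes(StandardCharsets.UTF_8));

  /** Returns the subject claim if the token is valid, or null if it is rejected. */
  public static String verify(String token) {
    try {
      Claims claims = Jwts.parserBuilder()
          .setSigningKey(KEY)
          .build()
          .parseClaimsJws(token)   // checks signature and expiry
          .getBody();
      return claims.getSubject();
    } catch (JwtException e) {
      // Invalid signature, expired token, malformed JWT, etc.
      return null;
    }
  }
}
```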
I have three service accounts:
App engine default service account
Datastore service account
Alert Center API service account
My cloud function uses Firestore in Datastore mode for bookkeeping and invokes the Alert Center API.
One can assign only one service account while deploying a cloud function.
Is there a way, similar to AWS, to create multiple inline policies and assign them to the default service account?
P.S. I tried creating a custom service account, but Datastore roles are not supported. Also, I do not want to store credentials in environment variables or upload a credentials file with the source code.
You're looking at service accounts a bit backwards.
Granted, I see how the naming can lead you in this direction. "Service" in this case doesn't refer to the service being offered, but rather to the non-human entities (i.e. apps, machines, etc - called services in this case) trying to access that offered service. From Understanding service accounts:
A service account is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user. Your application assumes the identity of the service account to call Google APIs, so that the users aren't directly involved.
So you shouldn't be looking at service accounts from the offered service perspective - i.e. Datastore or Alert Center API, but rather from their "users" perspective - your CF in this case.
That single service account assigned to a particular CF is simply identifying that CF (as opposed to some other CF, app, machine, user, etc) when accessing a certain service.
If you want that CF to be able to access a certain Google service you need to give that CF's service account the proper role(s) and/or permissions to do that.
For accessing Datastore you'd be looking at these Permissions and Roles. If the datastore that your CFs need to access is in the same GCP project, the default CF service account - which is the same as the GAE app's one from that project - already has access to Datastore (assuming you're OK with using the default service account).
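To make that concrete, a minimal sketch assuming the Java client library for Firestore in Datastore mode: inside the CF the client picks up Application Default Credentials, i.e. the service account assigned to the function, so nothing has to be stored in environment variables or shipped with the source code. The kind and property names below are invented for illustration.

```java
import com.google.cloud.Timestamp;
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;

public class BookkeepingClient {

  // No credentials file: the client authenticates as the service account the function runs as.
  private static final Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

  /** Writes a simple bookkeeping entry (kind and property names are placeholders). */
  public static void recordRun(String jobName, boolean success) {
    Key key = datastore.newKeyFactory().setKind("JobRun").newKey(jobName);
    Entity entity = Entity.newBuilder(key)
        .set("success", success)
        .set("updatedAt", Timestamp.now())
        .build();
    datastore.put(entity);
  }
}
```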
I haven't used the Alert Center API, but apparently it uses OAuth 2.0, so you probably should go through Service accounts.
I am working with an API for automating tasks in a company I work for.
The software will run from a single server and there will only one instance of the sensitive data.
I have a tool that our team uses at the end of every day.
The token only needs to be requested once since it has a +-30 minute timeout.
Since I work with Salesforce API, the user has to enter his/her password either way since it relates the ticket to their account.
The API's OAuth2 tokens and all of their sensitive components need to be secured.
I use PowerShell and a module called FileCryptography to produce an AES-encrypted version of my config.json.
In my config file, I store all the component keys that need to be used to generate the token itself.
Steps
Base64-encode the strings.
Use the FileCryptography module to encrypt the JSON file with a secret key into an AES file.
When the API needs to produce a token, the process works in reverse to recover the data (a rough sketch of the same pattern is shown below).
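For illustration only, here is the same encrypt-at-rest / decrypt-at-runtime pattern sketched in Java with AES-GCM. This is not what the FileCryptography module does internally; the key handling, file layout and names are assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class ConfigVault {

  private static final int IV_LENGTH = 12;        // recommended IV size for GCM
  private static final int TAG_LENGTH_BITS = 128; // authentication tag length

  /** Encrypts a plaintext JSON config into an opaque file (IV prepended to ciphertext). */
  public static void encrypt(Path plainJson, Path encryptedFile, byte[] keyBytes) throws Exception {
    SecretKey key = new SecretKeySpec(keyBytes, "AES"); // keyBytes must be 16, 24 or 32 bytes
    byte[] iv = new byte[IV_LENGTH];
    new SecureRandom().nextBytes(iv);

    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_LENGTH_BITS, iv));
    byte[] cipherText = cipher.doFinal(Files.readAllBytes(plainJson));

    byte[] out = new byte[IV_LENGTH + cipherText.length];
    System.arraycopy(iv, 0, out, 0, IV_LENGTH);
    System.arraycopy(cipherText, 0, out, IV_LENGTH, cipherText.length);
    Files.write(encryptedFile, out);
  }

  /** Reverses the process at runtime so the token request can read its secrets. */
  public static String decrypt(Path encryptedFile, byte[] keyBytes) throws Exception {
    byte[] data = Files.readAllBytes(encryptedFile);
    GCMParameterSpec spec = new GCMParameterSpec(TAG_LENGTH_BITS, data, 0, IV_LENGTH);

    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(keyBytes, "AES"), spec);
    byte[] plain = cipher.doFinal(data, IV_LENGTH, data.length - IV_LENGTH);
    return new String(plain, StandardCharsets.UTF_8);
  }
}
```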
Is this a valid way of securing sensitive API data, or is there a more efficient way?
P.S.: I understand that nothing is completely secure and anything can be reverse-engineered; I just need something that will keep at least 90% of people away from this data.
I am using Azure API Management to provide an API gateway for some APIs. To set up a policy for a particular API, I have used a Property (Named Value) to store user metadata, which I then assign to a variable on the incoming request. When adding a new user I need to add metadata for the new user to that JSON. The property value has grown and exceeded the size limit, so I cannot add more info to it anymore. I am wondering what the best way is to store my large metadata so that it is accessible in an API Management policy?
Update1:
I have switched the authentication process from Azure to Auth0 so I can add the user metadata to Auth0 app_metadata, and then in the Azure policies I validate the JWT from Auth0 and obtain the token claim (app_metadata), as explained in this article. By doing so I can solve the large user metadata (JSON) issue; however, this doesn't solve the other, unrelated user metadata stored in other Properties (Named Values), and moreover the API gateway inbound policies are growing into a huge bunch of logic which is not easy to manage and maintain.
At this stage I am looking for a solution to handle all the API gateway inbound policies in a better, more manageable environment, i.e. C#. So my two cents is to implement the API gateway inbound policy logic in a new .NET API and call this new API from the existing API gateway inbound policies, so that it plays a bridging role between the Azure API gateway and the existing API. However, I'm still not sure whether this is achievable, and whether the existing API can be called via the new API directly or should be called via the Azure API gateway in some way.
At this point you have to either store it in multiple variables or hardcode it in policy directly.
After more research I ended up with this solution, which basically suggests storing the user metadata in Azure Cosmos DB and calling the Cosmos API from the API Management policy to access the metadata; the Cosmos API call can also be cached in the policy.
I have a web app and API for the app configured and completed, but work is now requesting more apps. The apps are web-heavy with a light API for mobile functionality. A monolithic app seems out of the question, so I decided to make each one individually. Each app will have its own layout, database, and API. However, the one thing I want to share among all apps is the users' password, API token, and Firebase messaging token. A separate app will be created just for authentication, with ID number, password, API token, and FCM token - 4 simple fields. This single app will be the only one doing any writing to its DB and this single table.
Creating requests to the auth app to verify every request to each API seems inefficient, so I was wondering if there was a way for the apps to tap into the auth database and verify tokens and passwords directly. There would be no joining of tables cross-apps and no cross-app creation/updating/deletion. Problems with keeping models and schemas synced make sense, but would read-only custom queries eliminate those issues?
Integration at the DB level is messy - any change would need to be made in every application using it, and security is a concern too.
The typical solution for your problem (having several applications share a single authentication source) is OAuth - a way for the multiple apps to delegate authentication to your "Auth App". This is well supported by frameworks such as Devise.