OpenShift default service accounts create a duplicated token secret in new namespaces

For every new namespace, OpenShift creates default service accounts, each referencing one registry secret and one API-token secret.
But for some reason the namespace contains three secrets per service account: two API-token secrets, the second of which is not referenced anywhere, plus the registry secret. I compared the two token secrets and, apart from the actual token value, there is no difference. Is this normal behavior, and if so, why?


Functions triggered by Event Hub

We have an existing solution where an Event Hub ingests real-time event data from a source. We are using a SAS key for authentication and authorization. For additional security we have whitelisted the IPs of the source on the Event Hub. We also have a Databricks instance within a VNet reading from this Event Hub, and the VNet has been whitelisted on the Event Hub as well.
We now have a new requirement to read off the Event Hub using Azure Functions. The problem is that, since we have enabled IP whitelisting on the Event Hub, we need to whitelist the IPs of the functions as well, and we can't figure out which inbound IPs to whitelist on the Event Hub.
The documentation says that the inbound IPs remain mostly the same but can change for the Consumption plan, which is what we intend to use.
Does that mean the only other solution is to whitelist the entire Azure region where our functions are hosted, using the list in the link Azure service IPs?
Any other suggestions on what we can try?
Yes, if you don't know the outbound IP addresses of your Azure Function app, you will need to add the region's IP ranges to the whitelist. You can get those here.
A more realistic option: you can put your function app in an Azure VNet and whitelist the VNet on the Event Hub. However, this requires an App Service plan or a Premium plan Function.
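If you want to see which outbound addresses a function app reports before deciding what to whitelist, a minimal sketch like the following can list them (assuming the azure-identity and azure-mgmt-web packages; the subscription ID, resource group, and app name are placeholders):

```python
# Sketch: list the outbound IPs an Azure Function app reports.
# Assumes the azure-identity and azure-mgmt-web packages; the
# subscription ID, resource group, and app name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

credential = DefaultAzureCredential()
client = WebSiteManagementClient(credential, "<subscription-id>")

# Function apps are exposed through the web_apps operations group.
site = client.web_apps.get("<resource-group>", "<function-app-name>")

# outbound_ip_addresses: the IPs currently in use;
# possible_outbound_ip_addresses: every IP the app could ever use.
# On a Consumption plan the larger "possible" set is the one to whitelist.
print("Current outbound IPs: ", site.outbound_ip_addresses)
print("Possible outbound IPs:", site.possible_outbound_ip_addresses)
```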

How to assign multiple service account credentials to Google Cloud Functions?

I have three service accounts:
App engine default service account
Datastore service account
Alert Center API service account
My cloud function uses Firestore in Datastore mode for bookkeeping and invokes the Alert Center API.
You can assign only one service account when deploying a cloud function.
Is there a way, similar to AWS, to create multiple inline policies and assign them to the default service account?
P.S. I tried creating a custom service account, but Datastore roles are not supported. Also, I do not want to store credentials in environment variables or upload a credentials file with the source code.
You're looking at service accounts a bit backwards.
Granted, I see how the naming can lead you in this direction. "Service" in this case doesn't refer to the service being offered, but rather to the non-human entities (i.e. apps, machines, etc - called services in this case) trying to access that offered service. From Understanding service accounts:
A service account is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user. Your application assumes the identity of the service account to call Google APIs, so that the users aren't directly involved.
So you shouldn't be looking at service accounts from the offered service perspective - i.e. Datastore or Alert Center API, but rather from their "users" perspective - your CF in this case.
That single service account assigned to a particular CF is simply identifying that CF (as opposed to some other CF, app, machine, user, etc) when accessing a certain service.
If you want that CF to be able to access a certain Google service you need to give that CF's service account the proper role(s) and/or permissions to do that.
For accessing the Datastore you'd be looking at these Permissions and Roles. If the datastore that your CFs need to access is in the same GCP project, the default CF service account - which is the same as the GAE app's one from that project - already has access to the Datastore (assuming, of course, that you're OK with using the default service account).
I didn't use the Alert Center API, but apparently it uses OAuth 2.0, so you probably should go through Service accounts.
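To make this concrete, here is a minimal sketch of a function body that relies only on the ambient credentials of whatever service account the CF was deployed with, with no key files or environment-variable credentials (assuming the google-cloud-datastore and google-api-python-client packages; the entry point, kind, and field names are illustrative):

```python
# Sketch: one Cloud Function identity used for both APIs, with no key
# file or env-var credentials. Assumes google-cloud-datastore and
# google-api-python-client as dependencies; the entry point, kind and
# field names are illustrative.
import google.auth
from google.cloud import datastore
from googleapiclient.discovery import build

def handler(request):
    # Ambient credentials resolve to the service account the CF runs as.
    credentials, project = google.auth.default()

    # Firestore in Datastore mode is accessed through the Datastore
    # client; this works once the CF's service account holds a role
    # such as roles/datastore.user in the project.
    client = datastore.Client(project=project, credentials=credentials)
    entity = datastore.Entity(key=client.key("Bookkeeping"))
    entity["status"] = "invoked"
    client.put(entity)

    # Alert Center API with the same identity (v1beta1 discovery name);
    # the account still needs to be authorized on the G Suite side.
    alertcenter = build("alertcenter", "v1beta1", credentials=credentials)
    alerts = alertcenter.alerts().list(pageSize=10).execute()
    return str(alerts)
```

The point is that both clients pick up the same single identity; what that identity can do is controlled entirely by the roles granted to it, not by which credentials you ship.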

Azure ARM JSON template deployment logic clarification

I have a simple question about ARM template deployment logic.
I have 2 storage accounts (A and B) in my template, and I can successfully deploy them to a single resource group.
Now I remove storage account B from the template and deploy the template again to the same resource group.
What actually happens? Nothing? Or should I expect ARM to delete storage account B, keeping only A?
Thanks,
F
There are 2 deployment modes in the ARM paradigm: complete and incremental.
Complete will delete all the resources in your resource group that are absent from the template, so if you only have 1 storage account in your template, every resource except that storage account will get removed.
Incremental will just create/update the resources you declare in the ARM template. It won't delete anything.
So you should expect the deployment to remove storage account B (as long as nothing depends on it that would prevent it from being deleted) if you are doing a complete deployment. With an incremental deployment, storage account B will not be removed.
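For illustration, here is a minimal sketch of choosing the mode when deploying from Python (assuming the azure-identity and azure-mgmt-resource packages; the subscription, resource group, deployment, and file names are placeholders):

```python
# Sketch: the deployment mode is an explicit property of each ARM
# deployment. Assumes the azure-identity and azure-mgmt-resource
# packages; subscription, resource group and file names are placeholders.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import (
    Deployment, DeploymentMode, DeploymentProperties,
)

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("template.json") as f:
    template = json.load(f)

deployment = Deployment(
    properties=DeploymentProperties(
        # INCREMENTAL leaves storage account B alone; COMPLETE would
        # delete it because it is absent from the template.
        mode=DeploymentMode.INCREMENTAL,
        template=template,
    )
)

poller = client.deployments.begin_create_or_update(
    "<resource-group>", "storage-deploy", deployment
)
poller.result()  # block until the deployment finishes
```

Switching DeploymentMode.INCREMENTAL to DeploymentMode.COMPLETE is exactly what changes whether storage account B gets deleted.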

How to restrict automatic account creation in OpenShift Origin 1.3?

An OpenShift Origin instance can be configured with Google OAuth login with or without a hosted domain restriction. On first login an account is created for the user and then permissions can be assigned.
Is it possible to restrict automatic new account creation, i.e. disable it completely to only allow certain people on the instance?
You can start by choosing which hosted domain you want to allow: https://docs.openshift.org/latest/install_config/configuring_authentication.html#Google. In addition, you can choose the lookup mapping method for mapping identities to users: https://docs.openshift.org/latest/install_config/configuring_authentication.html#mapping-identities-to-users, and tightly control who can and can't have a user on your cluster.
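For illustration, an identity provider stanza in master-config.yaml along these lines combines both controls (the client ID, secret, and domain are placeholders):

```yaml
# Sketch of the oauthConfig section in master-config.yaml; clientID,
# clientSecret and hostedDomain are placeholders.
oauthConfig:
  identityProviders:
  - name: google
    challenge: false
    login: true
    mappingMethod: lookup   # no automatic user creation on first login
    provider:
      apiVersion: v1
      kind: GoogleIdentityProvider
      clientID: "<client-id>"
      clientSecret: "<client-secret>"
      hostedDomain: "example.com"
```

With mappingMethod: lookup, logins succeed only for users and identities you have provisioned in advance (e.g. with oc create user and oc create identity), so no accounts are created automatically.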

Refreshing Box API token from multiple servers

We are planning on migrating our Box v1 integration to v2.
Our integration includes API calls that access a user's Box account and files from different servers at the same time.
With v2, and the introduction of the refresh token, we would like to know whether multiple refresh-token requests can be made concurrently from multiple servers against the same user account.
Moreover, as a consequence of multiple refresh calls, we would also like to know whether it is possible to have more than one access token per user at any given time.
Thanks for the help
Assaf
We recommend that you use some sort of coordination between your servers to manage auth tokens and refresh tokens. While a user can have multiple access tokens for the same service, they will have to authenticate multiple times in order to get them. You can't mint extra auth tokens off a single auth-token/refresh-token pair.
Here's what we recommend.
Create a set of encrypted columns in your database to store the auth token, the refresh token, a "token_expires" datetime, and a "token_refresh_in_progress" flag, keyed by user ID.
Write your code so that when you are about to make a call and you are close (say, within a minute) to the token-expires datetime, instead of making your call you first check whether the refresh flag has been set or whether there's already a new token pair. If the flag hasn't been set by some other server in the cluster, set the flag yourself, make the refresh-grant call, update the database with the new pair, and of course reset the flag.
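A minimal sketch of that flow follows; the db_* helpers are hypothetical stand-ins for your own storage layer, while the token endpoint and grant type are Box's standard OAuth2 ones:

```python
# Sketch of the coordinated refresh described above. The db_* helpers
# are hypothetical stand-ins for your own storage layer; the token
# endpoint and grant type are Box's standard OAuth2 ones.
import datetime

import requests

TOKEN_URL = "https://api.box.com/oauth2/token"

def get_valid_token(user_id, client_id, client_secret):
    row = db_load_tokens(user_id)  # hypothetical: read the encrypted columns
    now = datetime.datetime.utcnow()

    # Token still comfortably valid: just use it.
    if row["token_expires"] - now > datetime.timedelta(minutes=1):
        return row["access_token"]

    # Near expiry: refresh only if no other server is already doing it.
    # db_try_set_refresh_flag is hypothetical and must be atomic, e.g.
    # UPDATE tokens SET refresh_in_progress = 1
    #   WHERE user_id = ? AND refresh_in_progress = 0
    if not db_try_set_refresh_flag(user_id):
        return db_wait_for_new_token(user_id)  # hypothetical: poll until stored

    resp = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",
        "refresh_token": row["refresh_token"],
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    tokens = resp.json()

    # Hypothetical: store the new pair and clear the in-progress flag.
    db_store_tokens(
        user_id,
        access_token=tokens["access_token"],
        refresh_token=tokens["refresh_token"],
        expires=now + datetime.timedelta(seconds=tokens["expires_in"]),
    )
    return tokens["access_token"]
```

The atomic test-and-set on the flag is what keeps two servers from spending the same refresh token, since each refresh grant invalidates the pair it was made with.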