Consul 0.8 ACL migration - how to migrate

TL;DR
How do I migrate my pre-0.8 ACL setup (currently on Consul 0.7.3) to the new 0.8-style ACLs?
Current setup
I am currently running an ACL-enabled Consul 0.7.3 stack.
With Consul 0.8, ACLs will finally also cover services and nodes, so that nodes / services are no longer shown to anonymous users. This is exactly what I need. Today I tried to enable the new ACL behavior "pre 0.8" using https://www.consul.io/docs/agent/options.html#acl_enforce_version_8
After doing so, my nodes could no longer authenticate against the master (if authentication is the problem at all).
I run the Consul network with gossip enabled, and I have configured an acl_master_token:
"{acl_master_token":"<token>}"
and a token for the agents:
"{acl_token":"<token>}"
which all agents are configured with.
I have these ACL defaults:
{
  "acl_datacenter": "stable",
  "acl_default_policy": "deny",
  "acl_down_policy": "deny"
}
and my Consul config looks like this:
{
  "datacenter": "stable",
  "data_dir": "/consul/data",
  "ui": true,
  "dns_config": {
    "allow_stale": false
  },
  "log_level": "INFO",
  "node_name": "dwconsul",
  "client_addr": "0.0.0.0",
  "server": true,
  "bootstrap": true,
  "acl_enforce_version_8": true
}
What happens
When I boot, I cannot see my nodes/services using my token at all, and the nodes/agents cannot register with the master.
Question
What is exactly needed to get the following:
All agents can see all nodes and all services and all KVs
Anonymous sees nothing: no KV, services, or nodes (that's what becomes possible with 0.8)
I looked at https://www.consul.io/docs/internals/acl.html "ACL Changes Coming in Consul 0.8" but I could not wrap my head around it. Should I now use https://www.consul.io/docs/agent/options.html#acl_agent_master_token instead of acl_token?
Thank you for any help. I guess I will not be the only one on this migration path, and a lot of people are interested in this, so you'd be helping all of them :)

It looks like the new node policy is preventing the nodes from registering properly. This should fix things:
Configure your Consul servers with an acl_agent_token that has a policy that can write to any node, like this: node "" { policy = "write" }.
Configure your Consul agents with a similar token to keep things open, or give them a token with a more specific policy that only lets them write to some allowed node-name prefix.
Note this gets set as the acl_agent_token which is used for internal registration operations. The acl_agent_master_token is used as kind of an emergency token to use the /v1/agent APIs if there's something wrong with the Consul servers, but it only applies to the /v1/agent APIs.
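For illustration, a minimal sketch of what the agent-side config might look like once the extra token is added; the token values are placeholders, and the surrounding settings are assumed to match the ACL defaults shown in the question:
{
  "acl_datacenter": "stable",
  "acl_default_policy": "deny",
  "acl_down_policy": "deny",
  "acl_token": "<token with the read policy below>",
  "acl_agent_token": "<token with the node write policy>",
  "acl_enforce_version_8": true
}
The same acl_agent_token entry also goes on the servers; only the policy bound to it differs if you restrict agents to a node-name prefix.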
For "all agents can see all nodes and all services and all KVs" you'd add read privileges for nodes, services, and keys to whatever token you are giving your agents via acl_token, so you'd add a policy like:
node "" { policy = "read" }
service "" { policy = "read" }
key "" { policy = "read" }
Note that this allows anyone with access to the agent's client interface to read all these things, so be careful what you bind that interface to (usually only loopback). Or don't set acl_token at all and make callers pass in a token with each request.
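If you create these tokens through the legacy ACL HTTP API (a PUT to /v1/acl/create, authenticated with your acl_master_token), a sketch of the request body might look roughly like this; the Name is arbitrary, and this assumes your agents honor node/service rules once acl_enforce_version_8 is on:
{
  "Name": "agent-token",
  "Type": "client",
  "Rules": "node \"\" { policy = \"write\" } service \"\" { policy = \"read\" } key \"\" { policy = \"read\" }"
}
You could also split this into two tokens, one for acl_agent_token and one for acl_token, if you want a tighter separation.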

Related

Unable to add fine-grained access for Elasticsearch service in CloudFormation

I am creating a CloudFormation stack with the Elasticsearch service; however, it fails on AdvancedSecurityOptions, which works perfectly fine with aws es create-elasticsearch-domain.
My JSON template snippet is below:
...
"AdvancedOptions": {
"rest.action.multi.allow_explicit_index": true
},
"AdvancedSecurityOptions": {
"Enabled": true,
"InternalUserDatabaseEnabled": false,
"MasterUserOptions": {
"MasterUserARN": "arn:aws:iam::1234567890:role/role_name"
}
},
"DomainName": {
"Ref": "ESDomainName"
}
...
I am unable to get this code working; any help related to fine-grained access control would be really appreciated.
AdvancedSecurityOptions is the latest addition to the Amazon Elasticsearch service, added recently as part of fine-grained access control. For now this is available only via the console, CLI, and API.
I am not sure if the thread contains outdated info, but according to the official AWS documentation on this link it should be possible to use AdvancedSecurityOptions for fine-grained access control. It even states at the top of the page that it is meant to be used for FGAC.
Continuing from DNakevski's answer above, for FGAC we need to ensure the following three settings in the CFN template are set to true, since they serve as prerequisites:
EncryptionAtRestOptions
NodeToNodeEncryptionOptions and
HTTPS.
Further, the important parameter for FGAC in the CFN template is AdvancedSecurityOptions, which needs to be set to Enabled: true.
Amazon ES / Open Distro for ES provides two ways to secure a domain with FGAC: one is using an IAM user as the master user, the other is basic auth.
If you take the IAM route, set InternalUserDatabaseEnabled to false and only have the parameter MasterUserARN: "IAM User ARN" under the MasterUserOptions field.
If you take the basic auth (username and password) approach, set InternalUserDatabaseEnabled to true and have MasterUserName: "any-name" and MasterUserPassword: "xxx". Make sure the password has at least one lowercase letter, one uppercase letter, one digit, and one special character, or the CFN template will roll back. However, the failure message is easily seen in the CFN console under events.
I have a simple working CFN YAML here that does the same, just in case.
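To make the prerequisites concrete, here is a hedged JSON sketch of the relevant properties on an AWS::Elasticsearch::Domain resource using the IAM master-user approach; the version number and ARN are placeholders, and this is a fragment rather than a complete template:
...
"ElasticsearchVersion": "7.1",
"EncryptionAtRestOptions": {
  "Enabled": true
},
"NodeToNodeEncryptionOptions": {
  "Enabled": true
},
"DomainEndpointOptions": {
  "EnforceHTTPS": true
},
"AdvancedSecurityOptions": {
  "Enabled": true,
  "InternalUserDatabaseEnabled": false,
  "MasterUserOptions": {
    "MasterUserARN": "arn:aws:iam::1234567890:role/role_name"
  }
}
...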

CAS Multifactor Authentication Provider Selection

I am working with the cas-overlay-template project, version 6.1.4. I have implemented two MFA providers in my CAS, Google Authenticator and CAS Simple. Both are working; I have tested them separately and got the results I expected.
Until now, I have been activating MFA by modifying the cas.properties file and adding the property cas.authn.mfa.globalProviderId=mfa-gauth when I wanted to use Google, or cas.authn.mfa.globalProviderId=mfa-simple when I used CAS itself.
Well, the CAS documentation mentions that it is possible to enable a provider selection menu, if more than one provider is resolved, just by adding this property: cas.authn.mfa.provider-selection-enabled=true. So my configuration is the following:
cas.authn.mfa.provider-selection-enabled=true
cas.authn.mfa.globalProviderId=mfa-gauth
cas.authn.mfa.globalProviderId=mfa-simple
But when I try to log in with any user (I'm using the default one, casuser:Mellon), CAS doesn't show me a menu in which I can select the MFA provider; it goes directly to the mfa-simple provider.
What am I doing wrong?
Well, the CAS documentation mentions that it is possible to enable a provider selection menu, if more than one provider is resolved, just by adding this property:
So far so good.
So, my configuration is the following:
That's the problem. You are not resolving/triggering more than one provider. You start with mfa-gauth and then override it with mfa-simple. In CAS 6.1.x, globalProviderId only accepts a single identifier. It's not a list or a container of any kind that can accept more than one value. This has been addressed in the upcoming release.
At the moment, to resolve more than one provider you will need to assign the MFA providers to a registered service definition. Like so:
{
  "@class": "org.apereo.cas.services.RegexRegisteredService",
  "serviceId": "^(https|imaps)://.*",
  "name": "Example",
  "id": 1,
  "description": "This service definition defines a service.",
  "evaluationOrder": 1,
  "multifactorPolicy": {
    "@class": "org.apereo.cas.services.DefaultRegisteredServiceMultifactorPolicy",
    "multifactorAuthenticationProviders": [ "java.util.LinkedHashSet", [ "mfa-duo", "mfa-gauth" ] ]
  }
}
This means provider selection can be enabled on a per-application basis. Alternatively, you can write a small Groovy script to return more than one provider back to CAS, allowing the selection menu to display the choices.
Read this post for full details.

How to use OpenShift Exposing Object Fields?

The OpenShift documentation has a feature, Exposing Object Fields, that I am struggling to comprehend. When I load my secret I expose it as per the documentation. Yet it is unclear from the language of the documentation what the actual mechanism is for binding to the exposed variables. The docs state:
An example response to a bind operation given the above partial
template follows:
{
  "credentials": {
    "username": "foo",
    "password": "YmFy",
    "service_ip_port": "172.30.12.34:8080",
    "uri": "http://route-test.router.default.svc.cluster.local/mypath"
  }
}
Yet that example isn't helpful, as it's not clear what was bound and how it was bound to actually pick up the exposed variables. What I am hoping it is all about is that the exposed values become ambient, and that when I run some other templates into the same project (???) they will automatically resolve (bind) the variables. Then I can decouple secret creation (happening at product creation time) and secret usage (happening when developers populate their project). Am I correct that this feature creates ambient properties and that they are picked up by any template? Are there any examples of using this feature to decouple secret creation from secret usage (i.e. using this feature for segregation of duties)?
I am running Red Hat OCP:
OpenShift Master:
v3.5.5.31.24
Kubernetes Master:
v1.5.2+43a9be4

HasMany relation endpoint unauthorized in LoopBack

New to LoopBack, I tried to make a simple API with a user model and a todo model.
The user model, named Todoer, is based on the built-in User model. Creating a todoer, login, logout, etc. all work like a charm.
The Todo model is based on PersistedModel with no special ACLs on it for the moment.
I made a belongsTo relation from the Todo model to the Todoer model to have ownership.
I also made a hasMany relation from Todoer to Todo to be able to retrieve all the todos of a user through the endpoint GET /Todoer/{id}/todos.
With a todoer logged in, with the right token and id, I can easily get responses from Todoer endpoints reserved for logged-in users, like GET /Todoer/{id} for example, so I'm sure the authentication mechanism is working well.
But each time I hit GET /Todoer/{id}/todos, I only get an error message telling me I'm not authorized. I'm sure I passed the right token and Todoer id obtained at login.
Even if I add a broad ACL allowing everything to everyone on the Todoer model, the same thing happens.
What did I miss? I can't figure it out...
Thank you for your help...
You need to take into account the ACLs of the built-in User model. You are actually running into its general DENY ACL rule, which takes priority over your general ALLOW ACL rule (docs on ACL rule precedence).
You can write a more specific ACL rule to get past it (docs on Accessing related models).
{
  "accessType": "*",
  "principalType": "ROLE",
  "principalId": "$everyone",
  "permission": "ALLOW",
  "property": "__get__todos"
}
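For context, here is a sketch of where such a rule could sit in the Todoer model definition (e.g. common/models/todoer.json); the relation name and foreignKey are assumptions based on the question, not something from the original post:
{
  "name": "Todoer",
  "base": "User",
  "relations": {
    "todos": {
      "type": "hasMany",
      "model": "Todo",
      "foreignKey": "todoerId"
    }
  },
  "acls": [
    {
      "accessType": "*",
      "principalType": "ROLE",
      "principalId": "$everyone",
      "permission": "ALLOW",
      "property": "__get__todos"
    }
  ]
}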
Another option, which might be more convenient and safe in this case, is to use a dynamic $owner role on Todo itself (docs on Dynamic roles).
{
  "accessType": "*",
  "principalType": "ROLE",
  "principalId": "$owner",
  "permission": "ALLOW"
}
If you want to understand what is happening with ACLs in your application, it's very useful to set the DEBUG environment variable to loopback:security:* to enable quite extensive security logging.

Is there a way I can use more instances than the number of external IPs I have?

I have more than enough CPUs and memory to launch 100 instances, but only 30 external IP addresses. Is there a way I can launch more instances despite that?
Chances are you don't need that many IPs at all. Only in very specific scenarios would you need all your nodes to be publicly accessible.
If you need that many instances, simply create them without public IPs. Then create a NAT gateway so your instances can use it to reach outside your private network.
You will be able to accomplish 99% of usage scenarios this way. If you really need more IPs and you have used all of your ephemeral IPs, you can request more via the form.
I guess it depends what you want to do, but the gcloud compute instances create tool has a flag --no-address which will allow you to launch an instance with no external IP address. Have a look at gcloud compute instances create --help to see if you think that would be useful.
If you wanted to use the API or instance templates, I think just leaving out the accessConfigs part of the networking section of the request body will do what you need. Compare this:
"networkInterfaces": [
{
"network": "https://www.googleapis.com/compute/v1/projects/your-project-here/global/networks/default",
"accessConfigs": [
{
"name": "External NAT",
"type": "ONE_TO_ONE_NAT"
}
]
}
Where I used the default option of "Ephemeral" for the external IP in the Google Cloud Developers console, with this:
"networkInterfaces": [
{
"network": "https://www.googleapis.com/compute/v1/projects/your-project-here/global/networks/default"
}
]
Where I selected "None" as the External IP.
To see what the API body would look like, there is a "View Equivalent REST" link just below the Create button; it can be really useful for templates and things.