The OpenShift documentation has a feature, Exposing Object Fields, that I am struggling to comprehend. When I load my secret I am exposing it as per the documentation, yet it is unclear from the language of the documentation what the actual mechanism is for binding to the exposed variables. The docs state:
An example response to a bind operation given the above partial
template follows:
{
  "credentials": {
    "username": "foo",
    "password": "YmFy",
    "service_ip_port": "172.30.12.34:8080",
    "uri": "http://route-test.router.default.svc.cluster.local/mypath"
  }
}
Yet that example isn't helpful, as it's not clear what was bound, or how whatever was bound actually picks up the exposed variables. What I am hoping is that the exposed values become ambient, so that when I run some other templates in the same project (???) they will automatically resolve (bind) the variables. Then I can decouple secret creation (happening at product creation time) and secret usage (happening when developers populate their project). Am I correct that this feature creates ambient properties that are picked up by any template? Are there any examples of using this feature to decouple secret creation from secret usage (i.e. using it for segregation of duties)?
I am running Red Hat OCP:
OpenShift Master:
v3.5.5.31.24
Kubernetes Master:
v1.5.2+43a9be4
Related
I'm trying to implement a simple solution to send http request metrics to Stackdriver in GCP from my API hosted in a compute engine instance.
I'm using a recent version of Spring Boot (2.1.5). I've also pulled in the actuator and micrometer-registry-stackdriver packages; actuator works for the health endpoint at the moment, but I'm unclear on how to implement metrics for this.
In the past (separate project, different stack), I mostly used the auto-configured elements with influx. Using management.metrics.export.influx.enabled=true, and some other properties in properties file, it was a pretty simple setup (though it is quite possible the lead on my team did some of the heavy lifting while I wasn't aware).
Despite pulling in the Stackdriver dependency, I don't see any properties for Stackdriver. The documentation is all generalized, so I'm unclear on how to do this for my use case. I've searched for examples and found none.
From the docs: Having a dependency on micrometer-registry-{system} in your runtime classpath is enough for Spring Boot to configure the registry.
I'm a bit of a noob, so I'm not sure what I need to do to get this to work. I don't need any custom metrics really, just trying to get some metrics data to show up.
Does anyone have or know of any examples in setting this up to work with Stackdriver?
It seems the feature for enabling Stackdriver Monitoring for COS is currently in Alpha. If you are willing to try a GCE COS VM with the agent, you can request access via this form. Curiously, I was able to install the monitoring agent during instance creation as a test. I used the COS image Container-Optimized OS 75-12105.97.0 stable.
Inspecting COS, the collectd agent configuration appears to be installed at /etc/stackdriver/monitoring.config.d.
Inspecting my Monitoring Agent dashboard, I can see activity from the VM (CPU usage, etc.). I'm not sure if this is what you're trying to achieve, but hopefully it points you in the right direction.
From my understanding, you are trying to monitor third-party software that you built and get the results into GCP Stackdriver? If that's right, I would suggest installing the Stackdriver monitoring agent [1] on your VM instance, including the Stackdriver API output plugin. This agent gathers system and third-party application metrics and pushes them to a monitoring system like Stackdriver.
The Stackdriver Monitoring Agent is based on the open-source "collectd" daemon, so let me also share some further documentation from its website [2].
Prior to Spring Boot 2.3, Stackdriver is not supported out of the box, but it doesn't take much configuration to make it work:
@Bean
StackdriverConfig stackdriverConfig() {
    return new StackdriverConfig() {
        @Override
        public String projectId() {
            return MY_PROJECT_ID;
        }

        @Override
        public String get(String key) {
            // Returning null falls back to the defaults for every other config key.
            return null;
        }
    };
}

@Bean
StackdriverMeterRegistry meterRegistry(StackdriverConfig stackdriverConfig) {
    return StackdriverMeterRegistry.builder(stackdriverConfig).build();
}
https://micrometer.io/docs/registry/stackdriver
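For completeness, on Spring Boot 2.3+ the Stackdriver registry is auto-configured from the classpath and the same setup collapses to properties, mirroring the Influx style mentioned in the question (the project id value here is a placeholder):

```properties
# Auto-configured StackdriverMeterRegistry on Spring Boot 2.3+.
management.metrics.export.stackdriver.project-id=my-gcp-project
management.metrics.export.stackdriver.enabled=true
```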
I'm trying to create a GCE instance "with a container" (as supported by gcloud CLI) by using POST https://www.googleapis.com/compute/v1/projects/{project}/zones/{zone}/instances.
How can I pass the container image url in the request payload?
If you are trying to set the "Machine Type", then you can specify the URL following the syntax mentioned in this document.
In document:
Full or partial URL of the machine type resource to use for this instance, in the format: zones/zone/machineTypes/machine-type. This is provided by the client when the instance is created. For example, the following is a valid partial url to a predefined machine type:
zones/us-central1-f/machineTypes/n1-standard-1
To create a custom machine type, provide a URL to a machine type in the following format, where CPUS is 1 or an even number up to 32 (2, 4, 6, ... 24, etc), and MEMORY is the total memory for this instance. Memory must be a multiple of 256 MB and must be supplied in MB (e.g. 5 GB of memory is 5120 MB):
zones/zone/machineTypes/custom-CPUS-MEMORY
For example: zones/us-central1-f/machineTypes/custom-4-5120 For a full list of restrictions, read the Specifications for custom machine types.
If, instead, you want to create a container cluster, then the API method mentioned in this link can help you.
There doesn't seem to be an equivalent REST API for gcloud compute instances create-with-container ...; however, as suggested by @user10880591's comment, Terraform can help. Specifically, the container-vm module handles generating the metadata required for this kind of action.
Usage example can be found here.
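That said, if you want to stay on the raw instances.insert API, the CLI's behaviour can be approximated by hand: create-with-container essentially boots a Container-Optimized OS image and attaches a gce-container-declaration metadata item describing the container. The following is an unofficial sketch of the relevant part of the request body (the names and image are placeholders; the exact YAML schema is easiest to confirm by creating one instance with gcloud and reading its metadata back):

```json
{
  "name": "containervm-example",
  "machineType": "zones/us-central1-f/machineTypes/n1-standard-1",
  "metadata": {
    "items": [
      {
        "key": "gce-container-declaration",
        "value": "spec:\n  containers:\n  - name: app\n    image: gcr.io/my-project/my-image:latest\n  restartPolicy: Always"
      }
    ]
  }
}
```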
TL;DR
How do I migrate my ACL permissions to the new "pre 0.8" enforcement mode on Consul 0.7.3?
Current setup
I am currently running an ACL enabled Consul 0.7.3 stack.
With Consul 0.8, ACLs will finally also cover services and nodes, so that nodes/services are no longer shown to anonymous users. This is exactly what I need. Today I tried to enable the new "pre 0.8" ACLs using https://www.consul.io/docs/agent/options.html#acl_enforce_version_8
After doing so, my nodes could no longer authenticate against the master (if authentication is the problem at all).
I run the Consul network with gossip enabled. I have configured an acl_master_token:
{"acl_master_token": "<token>"}
and a token for the agents:
{"acl_token": "<token>"}
which all agents are configured with.
I have these ACL defaults:
{
"acl_datacenter": "stable",
"acl_default_policy": "deny",
"acl_down_policy": "deny"
}
and my Consul config looks like this:
{
"datacenter": "stable",
"data_dir": "/consul/data",
"ui": true,
"dns_config": {
"allow_stale": false
},
"log_level": "INFO",
"node_name": "dwconsul",
"client_addr" : "0.0.0.0",
"server": true,
"bootstrap": true,
"acl_enforce_version_8": true
}
What happens
When I boot, I cannot see my nodes/services using my token at all, nor can the nodes/agents register with the master.
Question
What exactly is needed to get the following:
All agents can see all nodes and all services and all KVs
Anonymous sees nothing: no KV, services, or nodes (that's what is possible with 0.8)
I looked at https://www.consul.io/docs/internals/acl.html "ACL Changes Coming in Consul 0.8", but I could not wrap my head around it. Should I now use https://www.consul.io/docs/agent/options.html#acl_agent_master_token instead of acl_token?
Thank you for any help. I guess I will not be the only one on this migration path with this particular interest; a lot of people will want to know, so you'll help all of them :)
It looks like the new node policy is preventing the nodes from registering properly. This should fix things:
On your Consul servers configure them with an acl_agent_token that has a policy that can write to any node, like this: node "" { policy = "write" }.
On your Consul agents, configure them with a similar one to the servers to keep things open, or you can give them a token with a more specific policy that only lets them write to some allowed prefix.
Note this gets set as the acl_agent_token which is used for internal registration operations. The acl_agent_master_token is used as kind of an emergency token to use the /v1/agent APIs if there's something wrong with the Consul servers, but it only applies to the /v1/agent APIs.
For "all agents can see all nodes and all services and all KVs" you'd add node read privileges to whatever token you are giving to your agents via the acl_token, so you'd add a policy like:
node "" { policy = "read" }
service "" { policy = "read" }
key "" { policy = "read" }
Note that this allows anyone with access to the agent's client interface to read all these things, so you want to be careful with what you bind to (usually only loopback). Or don't set acl_token at all and make callers pass in a token with each request.
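Putting the above together, here is a minimal sketch of the resulting agent-side configuration (the token values are placeholders; the node/service/key rules shown above are attached to these tokens via the ACL API, not in this file):

```json
{
  "datacenter": "stable",
  "acl_datacenter": "stable",
  "acl_enforce_version_8": true,
  "acl_agent_token": "<token whose policy allows node write>",
  "acl_token": "<token whose policy allows node/service/key read>"
}
```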
I need to get vulnerabilities by component in JSON format, but all I get from the CVE Details API is individual vulnerabilities with no component information, only a description.
Here is an example link:
http://www.cvedetails.com/json-feed.php?numrows=10&vendor_id=0&product_id=0&version_id=0&hasexp=0&opec=0&opov=0&opcsrf=0&opfileinc=0&opgpriv=0&opsqli=0&opxss=0&opdirt=0&opmemc=0&ophttprs=0&opbyp=0&opginf=0&opdos=0&orderby=3&cvssscoremin=0
Here is an example of JSON:
{
"cve_id": "CVE-2016-4951",
"cwe_id": "0",
"summary": "The tipc_nl_publ_dump function in net/tipc/socket.c in the Linux kernel through 4.6 does not verify socket existence, which allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact via a dumpit operation.",
"cvss_score": "7.2",
"exploit_count": "0",
"publish_date": "2016-05-23",
"update_date": "2016-05-24",
"url": "http://www.cvedetails.com/cve/CVE-2016-4951/"
}
Is there any way to get vulnerabilities by component name (new and old)?
Red Hat maintains a CVE API that can be searched by component, e.g.:
https://access.redhat.com/labs/securitydataapi/cve.json?package=kernel&after=2017-02-17
Documentation for the API can be found here.
Note that the data is probably limited to components in Red Hat products.
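As a hedged illustration of using that endpoint (the class and method names below are made up for the example; only the endpoint and its package/after query parameters come from the URL above), building such a query programmatically looks like:

```java
public class RedHatCveQuery {
    // Endpoint from the answer above.
    static final String BASE = "https://access.redhat.com/labs/securitydataapi/cve.json";

    // Builds a query URL for CVEs affecting a package, published after a date (YYYY-MM-DD).
    static String queryUrl(String pkg, String after) {
        return BASE + "?package=" + pkg + "&after=" + after;
    }

    public static void main(String[] args) {
        // Prints the same URL as the example in the answer; fetch it with any
        // HTTP client (e.g. java.net.http.HttpClient) to get the JSON list.
        System.out.println(queryUrl("kernel", "2017-02-17"));
    }
}
```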
An alternative to vendor-specific CVE APIs is CIRCL's Common Vulnerabilities and Exposures web interface and API.
Its web interface can be found at https://cve.circl.lu/ and the API documentation at https://cve.circl.lu/api/
Bit late for a proper reply, but maybe it'll still be useful. A while back I was a bit frustrated with the options available, so I built https://cveapi.com.
You can call GET https://v1.cveapi.com/.json and get the NIST json response for that CVE back.
It doesn't require any auth and is free to use.
I was wondering if WireCloud offers complete support for object storage on the FI-WARE Testbed instead of FI-LAB. I have successfully integrated WireCloud with the Testbed and have developed a set of widgets that can upload/download files to specific containers on the Testbed. However, the same widgets do not seem to work in FI-LAB, as I get an error 500 when trying to retrieve the auth tokens (also with the well-known object-storage-test widget), containing the following response:
SyntaxError: Unexpected token
at Object.parse (native)
at create (/home/fiware/fi-ware-keystone-proxy/controllers/Token.js:343:25)
at callbacks (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:164:37)
at param (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:138:11)
at pass (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:145:5)
at Router._dispatch (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:173:5)
at Object.router (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:33:10)
at next (/home/fiware/fi-ware-keystone-proxy/node_modules/express/node_modules/connect/lib/proto.js:195:15)
at Object.handle (/home/fiware/fi-ware-keystone-proxy/server.js:31:5)
at next (/home/fiware/fi-ware-keystone-proxy/node_modules/express/node_modules/connect/lib/proto.js:195:15)
I noticed that the token provided in the beginning (to start the transaction) is:
token: Object
id: "%fiware_token%"
Any idea regarding what might have gone wrong?
The WireCloud instance available at FI-WARE's Testbed is always the latest stable version, while the FI-LAB instance is currently outdated; we're working on updating it as soon as possible. One of the things that changed between those versions is the Object Storage API, so sorry for the inconvenience: you will not be able to use widgets/operators that use the Object Storage in both environments.
Anyway, the response you were obtaining seems to indicate that the object storage instance you are accessing is not working properly, so you will need to send an email to one of the available mailing lists for help (fiware-testbed-help or fiware-lab-help) describing what is happening to you (remember to include your account information, as there are several object storage nodes and some can be up while others are down).
Regarding the strange request body:
"token": {
id: "%fiware_token%"
}
This behaviour is normal, as the WireCloud client code has no direct access to the user's IdM token. It's WireCloud's proxy that replaces the %fiware_token% pattern with the correct value.