Can AWS Parameter Store programmatically push changes to the clients?

Once a configuration value is changed in AWS Parameter Store, how do we ensure all the clients are now using the latest value?

Have a look here: Setting Up Notifications and Events for Systems Manager Parameters. Maybe you can have your applications listen to an SNS notification?
I have services configured to use the latest credentials, so when they restart they pick up the new ones. Once all services have restarted, I invalidate the old credentials in whatever system they were issued for.
I do this manually today, but using SNS to trigger the restarts could work for me.
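Note that Parameter Store itself never pushes values to clients; clients either pull on a schedule or react to a notification. As a minimal sketch of the pull approach, assuming the AWS SDK for JavaScript v3 and a hypothetical parameter named /myapp/config/db-url, a client can poll the parameter's version and reload only when it changes:

    import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

    const ssm = new SSMClient({ region: "us-east-1" });
    let cachedVersion: number | undefined;
    let cachedValue: string | undefined;

    // Fetch the parameter and reload only when its version has changed.
    async function refreshConfig(): Promise<void> {
      const res = await ssm.send(
        new GetParameterCommand({ Name: "/myapp/config/db-url", WithDecryption: true })
      );
      if (res.Parameter?.Version !== cachedVersion) {
        cachedVersion = res.Parameter?.Version;
        cachedValue = res.Parameter?.Value;
        // ...reinitialise whatever depends on the new value here...
      }
    }

    // Poll once a minute; SSM throughput limits make tight loops a bad idea.
    setInterval(() => refreshConfig().catch(console.error), 60_000);

An SNS-triggered restart, as suggested above, replaces the polling loop with an event, but the retrieval code is the same.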

Related

Are custom metadata values for GCE instances stored securely?

I was wondering if custom metadata for Google Compute Engine VM instances is an appropriate place to store sensitive information for configuring apps that run on the instance.
So we use container-optimised OS images to run microservices. We configure the containers with environment variables for things like creds for db connections and other systems we integrate with.
The VMs are treated as ephemeral for each CD deployment. The best I have come up with so far is to create an instance template whose custom metadata is populated from a config file I keep on my local machine; the metadata is then made available to a systemd unit when the VM starts up (via cloud-config).
In essence, this means the environment variable values (some containing creds, and which don't change very much) are uploaded by me and are then pulled from the VM instance metadata server when a new VM is fired up. So I'm just wondering if there are any significant security concerns with this approach...
Many thanks for your help
According to the Compute Engine documentation:
Is metadata information secure? When you make a request to get information from the metadata server, your request and the subsequent metadata response never leave the physical host running the virtual machine instance.
Since the request and response never leave the physical host, you will not be able to access the metadata from another VM or from outside Google Cloud Platform. However, any user with access to the VM will be able to query the metadata server and retrieve the information.
Based on the information you provided, storing credentials for a test or staging environment in this manner would be acceptable. However, if this is a production system holding customer data or information important to the business, I would keep the credentials in a secure store that tracks access. The data in the metadata server is not encrypted, and accesses are not logged.
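To illustrate the answer's point, anything running on the VM can read those values with one plain HTTP request. A minimal sketch, assuming Node 18+ (for the global fetch) and a hypothetical attribute key db-password; metadata.google.internal and the Metadata-Flavor header are the documented interface:

    // Read a custom metadata attribute from inside a GCE VM.
    const url =
      "http://metadata.google.internal/computeMetadata/v1/instance/attributes/db-password";

    async function readMetadata(): Promise<string> {
      const res = await fetch(url, { headers: { "Metadata-Flavor": "Google" } });
      if (!res.ok) throw new Error(`metadata request failed: ${res.status}`);
      return res.text(); // the value comes back as plain, unencrypted text
    }

There is no per-request authentication beyond that header, which is why access-tracking secret stores are preferable for production credentials.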

How to send logs to Zabbix from Dokku?

I would like to use Dokku for deploying my Rails apps, but I cannot find any method that allows me to send the logs to Zabbix. Does anyone have ideas? Thanks!
You can't send logs directly to Zabbix, because it is not a log collector.
You need a Zabbix Agent installed on your app machine to analyze logs and trigger events, or, if you are using a PaaS, you should implement web scenarios on your Zabbix Server to check specific URLs.
If you want to collect logs instead, you could implement an ELK stack.
Elasticsearch has its own alerting module, but it's paid, and IMHO Zabbix alerting is far better.
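If you do run a Zabbix Agent on the Dokku host, one sketch (the file path and pattern below are assumptions, not Dokku defaults) is to redirect the app container's output to a file and watch it with an active-check log item, using Zabbix's documented log[] item key:

    log[/var/log/dokku/myapp/web.log,ERROR]

The agent then ships matching lines to the server, where a trigger can alert on them.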

AWS authentication to Vault

We're using Vault to store our application secrets and config. When our app (Java) starts, a script does all the magic of getting the secrets and config from Vault and storing them locally for the application to read. The script is authenticating to Vault using AWS IAM role.
Now we're getting to a situation where the application needs to read secrets from Vault on the go, not just on startup. For that purpose, I need it to be able to do the authentication on pretty much every request. It's worth mentioning that the app might also run on a developer machine, so whatever authentication is used needs to work on the EC2 instance as well as in the local development environment.
I'm currently leaning towards creating a username and password and storing them in Vault for the application to get when starting up. Then the application could use that username/password to authenticate to Vault whenever it needs to.
I'm also considering AppRole, but can't really see any real advantage to it over simple user/password setup.
What's the best solution for this use case? Any advice would be highly appreciated!
Thanks,
Yosi
The AWS recommendation for storing secrets is to use AWS Systems Manager Parameter Store.
Software running on an Amazon EC2 instance with an assigned Role can use those credentials to access the Parameter Store to retrieve application secrets.
The Parameter Store can also be used outside of EC2, but some AWS credentials will still be needed to authenticate to the Parameter Store.
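On the AppRole consideration in the question: its advantage over a plain username/password is that the credential is split. The role_id can be baked into the deployment artifact, while the secret_id is injected at deploy time and can be short-lived or limited to a number of uses, and the same flow works on EC2 and on a developer machine. A minimal sketch of the login call against Vault's documented AppRole HTTP endpoint, assuming VAULT_ADDR, ROLE_ID and SECRET_ID come from the environment:

    // Exchange an AppRole role_id/secret_id pair for a Vault client token.
    async function vaultLogin(): Promise<string> {
      const res = await fetch(`${process.env.VAULT_ADDR}/v1/auth/approle/login`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          role_id: process.env.ROLE_ID,
          secret_id: process.env.SECRET_ID,
        }),
      });
      if (!res.ok) throw new Error(`Vault login failed: ${res.status}`);
      const body = await res.json();
      return body.auth.client_token; // send as X-Vault-Token on subsequent requests
    }

The returned token has its own TTL, so the app can re-login whenever the token expires rather than on every request.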

Using zabbix_sender for host discovery

I'm writing an application which delivers data from remote devices over an HTTP API. These devices are on a mobile data connection and have limited resources.
I wish to receive custom monitoring data over the HTTP API, relying on the security model designed in the application, and push that data to Zabbix directly (or indirectly) from node.js. I do not wish to use Zabbix Agent on the remote devices.
I see that I can use zabbix_sender to send data to a Zabbix server containing a pre-configured host. This works great. I intend to deliver monitoring data over my custom API and, when it is received, hand it to zabbix_sender inside the server network.
The problem is there are many devices in the field and more are being added all the time.
TL;DR:
When zabbix_sender provides a custom hostname that doesn't already exist in Zabbix, it fails.
I would like to auto-add discovered hosts, based upon new hostnames from zabbix_sender. How would I do this?
Also, extra respect if anyone can give examples of how to avoid zabbix_sender and send data directly from node.js to the Zabbix server. I mean: suggest an NPM package that you have experience using. (Update: found a working node.js package here: https://www.npmjs.com/package/node-zabbix-sender)
Zabbix configuration: I'm learning on Zabbix 2.4 installed in Docker, with no custom configuration, from this Docker Hub image: https://hub.docker.com/r/zabbix/zabbix-2.4/
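For reference, a minimal sketch of using that node-zabbix-sender package (the addItem/send API is what its README documents; the hostname and item key here are hypothetical, and the host must already exist in Zabbix as a trapper target):

    // Push a trapper value to Zabbix from node.js, without shelling out to zabbix_sender.
    const ZabbixSender = require("node-zabbix-sender");

    const sender = new ZabbixSender({ host: "zabbix.example.com" }); // Zabbix server or proxy
    sender.addItem("device-0001", "custom.telemetry.temp", 21.5);    // host, item key, value
    sender.send((err: Error | null, res: unknown) => {
      if (err) throw err;
      console.log(res); // server response, including processed/failed counts
    });

That handles the sending side; getting the host to exist in the first place is the remaining problem.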
Probably the best approach would be to use the Zabbix API to create hosts directly.
Alternatively, you could set up an action and emulate an active agent connection, which would make Zabbix create the host via active agent auto-registration.
You could also use low-level discovery (LLD) to send in JSON, which would result in hosts/items being created based on prototypes.
In all of these cases you have to wait for one minute (by default) for the hosts to appear in the Zabbix cache, then you can send the data.
Also note that Zabbix 2.4 is not supported anymore and will receive no fixes; it is not a "long-term support" release.
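As a sketch of the first suggestion, here is what creating a host through the Zabbix JSON-RPC API can look like. user.login and host.create are the documented method names; the URL, credentials and group id below are assumptions for illustration:

    // Create a host in Zabbix via its JSON-RPC API (api_jsonrpc.php endpoint).
    const api = "http://zabbix.example.com/api_jsonrpc.php";

    async function rpc(method: string, params: unknown, auth: string | null) {
      const res = await fetch(api, {
        method: "POST",
        headers: { "Content-Type": "application/json-rpc" },
        body: JSON.stringify({ jsonrpc: "2.0", method, params, auth, id: 1 }),
      });
      const body = await res.json();
      if (body.error) throw new Error(JSON.stringify(body.error));
      return body.result;
    }

    async function createHost(hostname: string) {
      const token = await rpc("user.login", { user: "Admin", password: "zabbix" }, null);
      return rpc("host.create", {
        host: hostname,
        interfaces: [{ type: 1, main: 1, useip: 1, ip: "127.0.0.1", dns: "", port: "10050" }],
        groups: [{ groupid: "2" }], // group id is an assumption; look yours up via hostgroup.get
      }, token);
    }

Your API gateway could call createHost the first time it sees an unknown device, then (after the cache delay mentioned above) start forwarding values.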

OpenShift and Let's Encrypt certificates

Is there any integration for Let's Encrypt in OpenShift (or is this planned)? Let's Encrypt is going to issue certs that expire in 90 days [1], and a big part of their plan is to have people who use their certs automate renewal so that they're always updated with new certs. Given this, some integration from OpenShift would be necessary.
Thanks,
[1] https://letsencrypt.org/2015/11/09/why-90-days.html
Currently, automating SSL certificate renewal and installation on OpenShift Online is not possible, because the SSL certificates are stored at the node level and SSL connections are terminated by the node-level proxy (reference this). If you would like to see it included in future versions, you should vote here and get people to vote on it. You could probably automate it locally somewhat (or build a module to do it) using the OpenShift Online API. Another suggestion would be to get a free SSL certificate from StartSSL that lasts for a year and install it either using the command line or the web console.
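For the command-line route mentioned above, OpenShift Online (v2) manages custom certificates through the rhc client's alias commands. A sketch, where the app name, alias and file names are hypothetical:

    rhc alias add myapp www.example.com
    rhc alias update-cert myapp www.example.com --certificate www.example.com.crt --private-key www.example.com.key

With 90-day certificates, the update-cert step is what a renewal script would have to re-run on a schedule.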