I have an OpenShift cluster set up with aggregated OpenShift logging (Elasticsearch, Fluentd and Kibana).
I have also set up an external Elasticsearch on a different server, and I want to forward my OpenShift cluster logs to this new Elasticsearch instance.
Please help me resolve this.
Thanks
I believe what you need is explained in https://docs.openshift.com/container-platform/3.11/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
From your question it's not clear what specific problem you are facing.
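In short, what that page describes is pointing the Fluentd daemonset at the external instance through its environment variables. A rough sketch, assuming the default logging-fluentd daemonset in the openshift-logging project and an external Elasticsearch at es.example.com:9200 (placeholders); the variable names come from the 3.x aggregated logging docs, so verify them against your exact version:

# Point Fluentd's application and operations outputs at the external Elasticsearch.
oc -n openshift-logging set env daemonset/logging-fluentd \
  ES_HOST=es.example.com \
  ES_PORT=9200 \
  OPS_HOST=es.example.com \
  OPS_PORT=9200
# If the external cluster requires TLS client auth, the same page covers pointing
# ES_CLIENT_CERT, ES_CLIENT_KEY and ES_CA at certificates mounted into the pods.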
I used to test installing a MongoDB shard cluster on Kubernetes with Helm, but I found that those Helm charts do not really produce a proper MongoDB sharded cluster. The charts correctly create Pods with names like mongos-1, mongod-server-1 and mongod-shard-1, which looks like a correct shard cluster layout, but the appropriate mongos and mongod server instances are not created on the corresponding Pods. They just run a plain mongod instance on each Pod, with no connection between them. Do I need to add scripts that execute commands similar to rs.addShard(config)? I encountered the same problem when installing a MySQL cluster using Helm.
What I want to know is: is it generally inappropriate to run a MySQL/MongoDB cluster on Kubernetes? Should the database be installed independently, or deployed on Kubernetes?
Yes, you can deploy MongoDB instances on Kubernetes clusters.
Use a standalone instance if you want to test and develop, and a replica set for production-like deployments.
Also, to make things easier, you can use the MongoDB Enterprise Kubernetes Operator:
The Operator enables easy deploys of MongoDB into Kubernetes clusters, using our management, monitoring and backup platforms, Ops Manager and Cloud Manager. By installing this integration, you will be able to deploy MongoDB instances with a single simple command.
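To give an idea of that "single simple command", deploying a replica set through the operator boils down to applying one custom resource, roughly like the sketch below. It assumes the operator is already installed and that you created the Ops/Cloud Manager project ConfigMap ("my-project") and API-key Secret ("my-credentials") as described in the docs; field names should be checked against your operator version.

kubectl apply -f - <<'EOF'
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
  namespace: mongodb
spec:
  type: ReplicaSet
  members: 3
  version: "4.2.2"
  opsManager:
    configMapRef:
      name: my-project         # ConfigMap describing the Ops/Cloud Manager project
  credentials: my-credentials  # Secret holding the Ops/Cloud Manager API key
EOF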
This guide references the official MongoDB documentation for further details regarding:
Install Kubernetes Operator
Deploy Standalone
Deploy Replica Set
Deploy Sharded Cluster
Edit Deployment
Kubernetes Resource Specification
Troubleshooting Kubernetes Operator
Known Issues for Kubernetes Operator
So basically everything you need to know on this topic.
Please let me know if that helped.
We have an OpenShift cluster (v3.11) with Prometheus collecting metrics as part of the platform. We need long-term storage of these metrics, and our hope is to use our InfluxDB time-series DB to store them.
The Telegraf agent (the T in the TICK stack) has an input plugin for Prometheus and an output plugin for InfluxDB, so this seems like a natural solution.
What I'm struggling with is how the Telegraf agent is set up to scrape the metrics within OpenShift; the config and docs seem to relate to Prometheus outside of OpenShift, and I can't see any references to how to set this up with OpenShift.
Does the Telegraf agent need to reside on OpenShift itself, or can it be set up to collect remotely via a published route?
If anyone has any experience setting this up or can provide some pointers, I'd be grateful.
It looks like the easiest way to get metrics from the OpenShift Prometheus into Telegraf is to use the default service that comes with OpenShift. The URL to scrape from is: https://prometheus-k8s-openshift-monitoring.apps.<your domain>/federate?match[]=<your conditions>
As Prometheus sits behind the OpenShift authentication proxy, the only challenge is authentication. You should add a new user to the prometheus-k8s-htpasswd secret and use those credentials for scraping.
To do this, run htpasswd -nbs <login> <password> and then add the output to the end of the prometheus-k8s-htpasswd secret, for example as sketched below.
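A rough sketch of that flow with oc; the user name, password and domain are placeholders, and in a default 3.11 install the secret's key is usually named auth (adjust if yours differs):

# Generate an htpasswd entry for a new scrape user.
htpasswd -nbs telegraf 'S3cretPassw0rd' > federate-user.htpasswd

# Pull the current htpasswd file out of the secret, append the new entry,
# and replace the secret with the combined file.
oc -n openshift-monitoring extract secret/prometheus-k8s-htpasswd --to=. --confirm
cat federate-user.htpasswd >> auth
oc -n openshift-monitoring create secret generic prometheus-k8s-htpasswd \
  --from-file=auth --dry-run -o yaml | oc -n openshift-monitoring replace -f -

# Restart the prometheus-k8s pods so the oauth proxy reloads the htpasswd file,
# then test the federate endpoint with the new credentials (match[] is just an example).
oc -n openshift-monitoring delete pod prometheus-k8s-0 prometheus-k8s-1
curl -k -u telegraf:S3cretPassw0rd \
  'https://prometheus-k8s-openshift-monitoring.apps.example.com/federate?match[]={job="kubelet"}'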
The other way is to disable authentication for the /federate endpoint. To do this, edit the command of the prometheus-proxy container inside the prometheus StatefulSet and add the -skip-auth-regex=^/federate option.
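A sketch of where that flag goes (note that the cluster monitoring operator manages this StatefulSet and may revert manual edits):

oc -n openshift-monitoring edit statefulset prometheus-k8s
# then, in the prometheus-proxy container spec, append the flag to its args:
#   - name: prometheus-proxy
#     args:
#       - ...existing oauth-proxy options...
#       - -skip-auth-regex=^/federate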
I made a REST API with Spring Boot, connected to an existing MySQL database. This database is not hosted on my local machine.
The API works fine on my local but I want to deploy it on AWS.
Is it possible to use this remote MySQL database or do I need to use a new one hosted on AWS?
If it is possible, can you guys link any tutorial or documentation? I can't find anything related to this particular issue.
Thank you!
Yes, AWS does not limit you to using only its RDS (Relational Database Service) offerings. Configuration of the DB connection will be the same as in your local environment (or similar, if you want to use a different instance than the one used for your local development).
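For example, if the app runs on an EC2 instance or Elastic Beanstalk, you can keep the same Spring datasource settings you use locally, or override them at deploy time. Host, schema and credentials below are placeholders:

# Spring Boot maps these environment variables to the spring.datasource.* properties.
export SPRING_DATASOURCE_URL='jdbc:mysql://db.example.com:3306/mydb'
export SPRING_DATASOURCE_USERNAME='api_user'
export SPRING_DATASOURCE_PASSWORD='changeme'
java -jar my-api.jar

# On Elastic Beanstalk the same variables can be set through the console or the EB CLI,
# e.g. eb setenv SPRING_DATASOURCE_URL=jdbc:mysql://db.example.com:3306/mydb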
An application hosted in AWS can connect to both a cloud DB and an on-prem DB. The only thing to check is the security groups configured for the EC2 instance, along with the other DB configuration.
I have set up an OpenShift 3.2 cluster using Ansible. Now I want to check the master logs. Is there a way I can increase the log level to get more info? If yes, how do I do that?
Thanks for the help.
In order to have a centralized place for OpenShift service and project logs, you can always deploy the EFK stack (which provides a Kibana UI for viewing the logs):
https://docs.openshift.com/enterprise/3.2/install_config/aggregate_logging.html
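For reference, on 3.2 the deployment is driven by the logging deployer template described on that page. A very rough sketch (hostnames are placeholders, and the prerequisite service account, secret and role steps from the docs still apply):

oc new-project logging
oc new-app logging-deployer-template \
  --param KIBANA_HOSTNAME=kibana.apps.example.com \
  --param ES_CLUSTER_SIZE=1 \
  --param PUBLIC_MASTER_URL=https://master.example.com:8443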
I tried to create two Memcached clusters in ElastiCache using Elastic Beanstalk in AWS. Both have been stuck in the 'creating' state for the past 3 hours.
Any help will be appreciated.
I also faced the same issue yesterday while creating two Redis clusters with Terraform at the same time.
It got resolved when I created one more Redis cluster in the same region using the AWS console.
Note: there seems to be an issue with creating multiple clusters at the same time in the same region/AZ.