I am new to OpenShift so apologies in advance if this question is not very clear.
I have a project starting in OpenShift and will use the provided Elasticsearch Docker image as a data store.
Elasticsearch is bound only to localhost by default when installed. If I were running the app on a server I would keep this configuration so as not to expose the Elasticsearch interface: connectivity is only required by the application, so there is no need to expose it outside of the project.
If I make a route for Elasticsearch without changing its default config, it is accessible to other pods in the project, but also outside of the project, like the main application. Is it possible to make a route that is internal to the project only, so that the Elasticsearch interface is not accessible outside of the project by any means? Or is there a way to have a common localhost address between pods/applications?
I tried grouping the services, but it is still not available.
Any support to point me in the right direction would be really appreciated.
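For reference, this is roughly the plain Service I have for Elasticsearch (names and ports are placeholders); my understanding is that a ClusterIP Service with no route attached should only be reachable from inside the cluster:

```
# Plain ClusterIP Service for Elasticsearch; no Route is created for it.
# Name, selector labels, and port are placeholders for illustration.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: ClusterIP          # internal-only virtual IP, not exposed by a Route
  selector:
    app: elasticsearch     # must match the labels on the Elasticsearch pods
  ports:
    - port: 9200           # other pods in the project reach elasticsearch:9200
      targetPort: 9200
```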
I developed an application using Angular, Spring Boot, and a MySQL database. I want to publish it to Docker Hub, but I'm still confused about whether I should create different images for each part (Angular, Spring Boot API, and MySQL) or just put it all in one Docker image.
I tried dockerizing only the Spring Boot API, but my doubts still remain about the whole app.
The backend and frontend should be in the same image. Depending on whether the backend or frontend is shared with other services, you can consider making separate images. If they are not shared, it doesn't make sense to make two images, because your frontend does not work without your backend and vice versa.
The database should be in a separate image: it is not part of your application, it is part of your data storage and could easily be shared with other applications.
Good practice is to keep them separate.
To make your application more flexible, you can define all access points as environment variables of the image.
That is to say, define the base URL of your backend as an ENV variable, and the access to your database as ENV variables as well.
After that, you can leverage docker-compose to orchestrate it all, as in the sketch below.
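A minimal docker-compose sketch of that layout (image names, ports, and variable names here are placeholders, not a definitive setup):

```
# Three separate images/services; access points are passed in as environment
# variables so each piece stays configurable. All names/values are placeholders.
version: "3"
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: appdb
  backend:
    image: myapp/backend:latest        # your Spring Boot API image
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/appdb
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: example
    depends_on:
      - db
  frontend:
    image: myapp/frontend:latest       # your Angular image (e.g. served by nginx)
    environment:
      API_BASE_URL: http://backend:8080
    ports:
      - "80:80"
```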
Has anyone deployed a Spring Boot app to a DigitalOcean droplet?
I previously created an app on Heroku.com, where I also ordered a MySQL database and deployed my Web API. Due to performance issues, I want to transfer my Spring Boot app to DigitalOcean, but there is a problem: I still want to use the DB I ordered on Heroku. I have all the required credentials, but can't find a way to connect from my droplet. In Heroku there is a very simple way to do that: all I need to do is change the config variable DATABASE_URL, but here I cannot find the same. I hope you understand my problem and can provide a simple solution.
Thank you in advance!
What you need is called environment variables; here are the docs from DO.
Specify app-level variables on the Environment screen when creating an app. For existing apps, go to the Apps section of the DigitalOcean Control Panel. Click your app, then click the Settings tab. Next to the App-Level Environment Variables heading, click the Edit link.
https://docs.digitalocean.com/products/app-platform/how-to/use-environment-variables/
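If you end up running the jar directly on a plain droplet (rather than on App Platform), the same idea works with ordinary shell environment variables; a rough sketch, assuming your app reads the standard Spring datasource properties (all values are placeholders):

```
# Spring Boot's relaxed binding maps these variables to spring.datasource.*.
# Host, database name, and credentials are placeholders for your Heroku DB.
export SPRING_DATASOURCE_URL="jdbc:mysql://<heroku-db-host>:3306/<db-name>"
export SPRING_DATASOURCE_USERNAME="<user>"
export SPRING_DATASOURCE_PASSWORD="<password>"
java -jar app.jar
```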
As far as I know, when deploying your web application to Heroku (from GitHub) you need to provide a requirements.txt file so that every library that is used can be installed. But you cannot install MySQL like that. I've used Python and Streamlit to create a web application, and I used MySQL to store data. I don't want the local machine's data to be exported, but I do want to store the data when it is deployed as a web app and someone fills in the details (it's basically a student DBMS).
How can I deploy such a web application that uses MySQL on Heroku?
I've read some docs and looked around, and found that PostgreSQL is more suitable, but I want to use MySQL because this is a school project.
Heroku has an add-on called ClearDB for MySQL:
https://devcenter.heroku.com/articles/cleardb
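A rough sketch of attaching it from the Heroku CLI (the app name is a placeholder, and the plan name is an assumption; check the linked article for current plans):

```
# Attach the ClearDB add-on to your app (ignite is the free tier, per the docs).
heroku addons:create cleardb:ignite --app <your-app>

# ClearDB then exposes the connection string as a config var for your code to read:
heroku config:get CLEARDB_DATABASE_URL --app <your-app>
# -> mysql://<user>:<password>@<host>/<database>?reconnect=true
```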
We have an OpenShift cluster (v3.11) with Prometheus collecting metrics as part of the platform. We need long-term storage of these metrics, and our hope is to use our InfluxDB time series DB to store them.
The Telegraf agent (the T in the TICK Stack) has an input plugin for Prometheus and an output plugin for InfluxDB, so this would seem like a natural solution.
What I'm struggling with is how the Telegraf agent is set up to scrape the metrics within OpenShift; I think the config and docs relate to Prometheus outside of OpenShift, and I can't see any references to how to set this up with OpenShift.
Does a Telegraf agent need to reside on OpenShift itself, or can it be set up to collect remotely via a published route?
If anyone has any experience setting this up or can provide some pointers I'd be grateful.
Looks like the easiest way to get metrics from the OpenShift Prometheus using Telegraf is to use the default service that comes with OpenShift. The URL to scrape from is: https://prometheus-k8s-openshift-monitoring.apps.<your domain>/federate?match[]=<your conditions>
As Prometheus sits behind the OpenShift authentication proxy, the only challenge is authentication. You should add a new user to the prometheus-k8s-htpasswd secret and use their credentials for scraping.
To do this you should run htpasswd -nbs <login> <password> and then add the output to the end of the prometheus-k8s-htpasswd secret.
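A rough sketch of those steps with oc (the login, password, and the secret's data key are placeholders/assumptions; check the key name in your cluster first):

```
# Generate an SHA htpasswd entry for a new scrape user (login/password are placeholders).
htpasswd -nbs scraper 'S3cretPass'

# The secret stores the htpasswd file base64-encoded; decode it, append the new
# entry, and re-create the secret. The data key ("auth" here) is an assumption,
# so verify it with `oc describe secret prometheus-k8s-htpasswd` first.
oc -n openshift-monitoring get secret prometheus-k8s-htpasswd \
  -o jsonpath='{.data.auth}' | base64 -d > htpasswd
htpasswd -bs htpasswd scraper 'S3cretPass'
oc -n openshift-monitoring create secret generic prometheus-k8s-htpasswd \
  --from-file=auth=htpasswd --dry-run -o yaml | oc apply -f -
```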
The other way is to disable authentication for the /federate endpoint. To do this, edit the command of the prometheus-proxy container inside the Prometheus stateful set and add the -skip-auth-regex=^/federate option.
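With basic auth in place, a minimal Telegraf config sketch for the federation endpoint could look like this (URL, match condition, and credentials are placeholders; the agent can run outside the cluster since the route is published):

```
# Scrape the OpenShift Prometheus federation endpoint with basic auth and
# forward the metrics to InfluxDB. All values below are placeholders.
[[inputs.prometheus]]
  urls = ["https://prometheus-k8s-openshift-monitoring.apps.<your domain>/federate?match[]=<your conditions>"]
  username = "scraper"
  password = "S3cretPass"
  insecure_skip_verify = true   # or point tls_ca at the router CA instead

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "openshift_metrics"
```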
I'm working on an architecture to deploy my webapp. I would like to use Google Managed Instance Groups because I have some strict requirements. I was wondering:
which is the best web container to deploy in a distributed environment?
I'm familiar with Tomcat; is Tomcat OK to deploy in an instance group?
my webapp running on Tomcat will generate logs that are stored on the machine currently hosting Tomcat. How should I handle distributed application logs?
I don't want to lose information, and I would like to have a single view of all logs of my webapp even though it is distributed. Is that possible?
Thanks
I have used Tomcat in GCP for over a year and it has worked without problems with the load balancer. To solve the issue of the logs, you must use an agent to save the logs in Stackdriver: https://cloud.google.com/logging/docs/view/service/agent-logs
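For the agent install, a sketch per Google's install script (worth confirming against the current docs):

```
# Download and run Google's install script for the Cloud Logging agent
# (google-fluentd) on each VM in the instance group; it ships local log
# files, including Tomcat's, to Stackdriver for a single consolidated view.
curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
sudo bash install-logging-agent.sh
```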