My question is quite simple: can't we create an OpenShift application using the oc client from a local code base when we don't have a Git reference for it? If not, why is that?
I'm building an application in Python and using GitHub Actions to automate the testing on push. However, I now want to connect my app to an existing MySQL database.
From searching the Marketplace, Google and YouTube, I can see the following options:
Use the MySQL instance supplied with the GitHub Actions Ubuntu virtual environment.
Set up a new MySQL database inside the GitHub Actions VM.
Set up MySQL inside a Docker container and connect to it from another Docker container containing my app.
What I can't see is how to connect out of the GitHub Actions VM to an existing database on my network. Is it possible, and should I expect to find a pre-built action for this in the Marketplace?
Sorry for such an obtuse question: old, out-of-date programmer new to both CI/CD and containerisation. Thank you.
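One thing worth noting: a GitHub-hosted runner has no route into a private network, so reaching an existing database generally means either making the database reachable from the internet or registering a self-hosted runner inside your own network. A minimal, hypothetical workflow sketch under that assumption (the secret names are placeholders):

name: CI
on: push
jobs:
  test:
    # ubuntu-latest (GitHub-hosted) only works if the database is publicly
    # reachable; for a database on a private LAN, register a self-hosted
    # runner on that network and use "runs-on: self-hosted" instead.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run tests against the existing MySQL database
        env:
          DB_HOST: ${{ secrets.DB_HOST }}          # placeholder secret names
          DB_USER: ${{ secrets.DB_USER }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
        run: python -m pytest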
As far as I know, while deploying your web application on Heroku (from GitHub) you need to provide a requirements.txt file so that every library used can be installed. But you cannot install MySQL like that. I've used Python and Streamlit to create a web application, and I used MySQL to store data. I don't want the local machine's data to be exported, but I do want to store data once it is deployed as a web app and someone fills in the details (it's basically a student DBMS).
How can I deploy such a web application that uses MySQL on Heroku?
I've read some docs and looked around and found that PostgreSQL is more suitable, but I want to use MySQL because this is a school project.
Heroku has an add-on called ClearDB for MySQL:
https://devcenter.heroku.com/articles/cleardb
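Once the add-on is attached, it exposes the connection string through the CLEARDB_DATABASE_URL config var. A minimal sketch of connecting from Python, assuming the pymysql driver (any MySQL client library would do):

import os
from urllib.parse import urlparse

import pymysql  # pure-Python MySQL client; add "pymysql" to requirements.txt

# ClearDB sets CLEARDB_DATABASE_URL, e.g. mysql://user:pass@host/heroku_db?reconnect=true
url = urlparse(os.environ["CLEARDB_DATABASE_URL"])

conn = pymysql.connect(
    host=url.hostname,
    user=url.username,
    password=url.password or "",
    database=url.path.lstrip("/"),
)

with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())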
How do I deploy a basic HTML website on GKE? What do I need other than the Dockerfile and the .html file itself? I have tried deploying applications that already include all the YAML files, but I don't know how to start from scratch. I don't have a lot of experience and haven't found anything online about this. Can anyone provide a step-by-step tutorial? What do I do after creating the cluster? Given that the website is called hey.html, is this Dockerfile enough?
FROM nginx:alpine
RUN apt-get update
RUN apt-get install -y ngin
COPY hey.html/usr/share/nginx/html
EXPOSE 80
To deploy any application on GKE you will need some Kubernetes and GCP knowledge. You can start with the official documentation, the Coursera path about GKE and Kubernetes in the cloud, or an introductory article that will walk you through the basic concepts.
I can start by recommending a good tutorial from the official Kubernetes documentation on deploying the example PHP Guestbook application with Redis; it should give you a practical example of how to deploy from scratch.
It also uses a Service of type LoadBalancer, which uses a controller to tell GCP to create a load balancer that exposes your application to the Internet, so you do not have to do anything extra to expose the app.
About your Dockerfile, the workflow will look something like this:
Build an image from your Dockerfile and push it to a registry (some useful materials here). You will put that image into a Deployment for easier future management, and then create a Service, because Pods are mortal and replaceable and a Service will make sure traffic is sent to the right Pods even when they are recreated. You might also need a PersistentVolume, but that will be specific to your application. And here you will find another good how-to by Google.
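To make that concrete, here is a minimal sketch of such a Deployment plus LoadBalancer Service; the image name gcr.io/PROJECT_ID/hey:v1 is a placeholder for wherever you push your built image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hey
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hey
  template:
    metadata:
      labels:
        app: hey
    spec:
      containers:
        - name: hey
          image: gcr.io/PROJECT_ID/hey:v1  # placeholder, use your registry path
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hey
spec:
  type: LoadBalancer  # asks GCP to provision an external load balancer
  selector:
    app: hey
  ports:
    - port: 80
      targetPort: 80

Apply it with kubectl apply -f hey.yaml, then kubectl get service hey will show the external IP once the load balancer is ready; the page is then served at http://EXTERNAL_IP/hey.html.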
Try this, and if you have issues, just ask another question with the details of the problems that occurred.
See below for a corrected Dockerfile. Note that nginx:alpine already includes nginx, and Alpine uses apk rather than apt-get, so the package installation steps can be dropped entirely:
FROM nginx:alpine
# nginx is already installed in the base image; no package manager steps needed
COPY hey.html /usr/share/nginx/html/
EXPOSE 80
I am new to OpenShift so apologies in advance if this question is not very clear.
I have a project starting in OpenShift and will use the official Elasticsearch Docker image as a data store.
Elasticsearch is bound only to localhost by default when installed, and if I were running the app on a server I would keep this configuration so as not to expose the Elasticsearch interface, since connectivity is only required by the application; there is no need to expose it outside of the project.
If I make a route for Elasticsearch without changing its default config, it is accessible to other pods in the project but also outside of the project, like the main application. Is it possible to make a route that is internal to the project only, so that the Elasticsearch interface is not accessible outside of the project or by other means? Or is there a way to have a common localhost address between pods/applications?
I tried grouping the services, but it is still not available.
Any support to point me in the right direction is really appreciated.
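For reference, a Service that has no Route attached only gets a cluster-internal address, so other pods in the same project can reach it by its service name while nothing is exposed through the router. A minimal sketch, with placeholder names:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  # No Route points at this Service, so it is reachable only inside the cluster
  selector:
    app: elasticsearch  # placeholder label on the Elasticsearch pods
  ports:
    - port: 9200        # Elasticsearch REST API port
      targetPort: 9200

Application pods in the same project can then talk to http://elasticsearch:9200 without Elasticsearch being reachable from outside the cluster; isolating it from other projects as well depends on the cluster's network plugin (e.g. OpenShift's multitenant SDN).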
I'd like to create a build chain for the open source projects I'm working on. I'm currently using GitHub, Travis and Coveralls. This is working fine, but I'd like to add some kind of static code analysis.
I was thinking about hosting SonarQube on OpenShift, but the problem is that OpenShift does not allow remote connections to the database.
I have come up with the following solutions, but none of them seems easy to achieve:
1. Any REST API for Sonar that could be used instead of raw DB access
2. Any alternative to Sonar that could be hosted on OpenShift
3. Migrate from Travis to Jenkins hosted on OpenShift and use this
4. Any other (free) alternative to OpenShift which would allow raw DB access
5. Any other option
Option 1 would be the ideal solution, but I've searched all the Sonar plugins I could find and haven't found any. :/
Am I missing something? Is there really no easy way to host Sonar without exposing DB access?
It looks like at least one person has gotten SonarQube running on OpenShift using the DIY cartridge:
http://majecek.wordpress.com/2013/12/06/how-to-run-sonarqube-4-0-on-openshift/
I was able to get SonarQube to start following those instructions.
EDIT: databases in OpenShift applications are only exposed publicly in scaled applications. You will want to create your sonar app with the -s option if you need to populate your database from outside OpenShift.
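For instance, a hypothetical invocation with the old rhc client (the cartridge names vary by OpenShift version, so check rhc cartridge list first):

rhc app create sonar diy-0.1 mysql-5.5 -s   # -s creates a scaled app, which exposes the database publicly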