Approach to deploying pods/services on Kubernetes - json

I know the concepts of Kubernetes. I'm able to do some basic stuff like creating a pod and creating a service on top of it, using kubectl commands. When I search for examples, I see a lot of .yaml and .json files.
Is it the general approach to create a .json or .yaml file which describes the setup of your pods/services (a bit like a docker-compose file)?
Maybe a strange question, but to me it seems a bit weird that everyone who uses Kubernetes is able to write their own .yaml templates for their applications.

Creating pods and services, scaling pods, etc. can all be done from kubectl using the appropriate commands; kubectl will internally create the necessary config and create the resources.
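For example, the same imperative workflow could look like this (a sketch; the name and image are placeholders):
kubectl create deployment my-app --image=nginx:alpine
kubectl expose deployment my-app --port=80 --type=NodePort
kubectl scale deployment my-app --replicas=3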
This is great for trying things out, but not ideal for anything more serious. You want to version control, inspect, and evolve your config. So it is better to start with a YAML or JSON config describing what you want to create, and use that with kubectl.
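For example, a minimal Pod manifest you could keep in version control might look like this (the name and image are placeholders, not anything specific to your setup):
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: my-app
    image: nginx:alpine
    ports:
    - containerPort: 80
You would then create it with kubectl apply -f pod.yaml, and the same file can be reviewed, diffed, and evolved like any other code.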

Related

How to deploy an HTML website on Kubernetes using GKE?

How do I deploy a basic HTML website on GKE? What do I need other than the Dockerfile and the .html application itself? I have tried deploying applications which already have all the YAML files included, but I don't know how to start from scratch. I don't have a lot of experience and I haven't found anything online about this. Can anyone provide a step-by-step tutorial? What do I do after creating the cluster? Given that the website is called hey.html, is this Dockerfile enough?
FROM nginx:alpine
RUN apt-get update
RUN apt-get install -y ngin
COPY hey.html/usr/share/nginx/html
EXPOSE 80
To deploy any application in GKE you will need some Kubernetes and GCP knowledge. You can start with the official documentation, the Coursera path about GKE and Kubernetes in the Cloud, or this article, which will introduce you to the basic concepts.
I can start by recommending a good tutorial from the official Kubernetes documentation on how to deploy the example PHP Guestbook application with Redis; it should give you a practical example of how to deploy from scratch.
It also uses a Service of type LoadBalancer, which uses a controller to tell GCP to create a load balancer that exposes your application to the Internet, so you do not have to deal with anything else to expose the app.
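A minimal sketch of such a Service (the name and selector are assumptions for illustration):
apiVersion: v1
kind: Service
metadata:
  name: hey-website
spec:
  type: LoadBalancer
  selector:
    app: hey-website
  ports:
  - port: 80
    targetPort: 80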
About your Dockerfile, the workflow will look something like this:
Build your image and push it to a registry (some useful materials here). You will put that Docker image into a deployment for easier future management, and then create a service, because pods are mortal and replaceable and a service will take care of sending traffic to the right pods even when they are recreated. You might also need a persistent volume, but this will be specific to your application. And here you will find another good how-to by Google.
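As a rough sketch of those steps on GKE (PROJECT_ID and the names are placeholders):
# Build the image and push it to Google Container Registry
docker build -t gcr.io/PROJECT_ID/hey-website:v1 .
docker push gcr.io/PROJECT_ID/hey-website:v1
# Put the image into a deployment and expose it through a LoadBalancer service
kubectl create deployment hey-website --image=gcr.io/PROJECT_ID/hey-website:v1
kubectl expose deployment hey-website --type=LoadBalancer --port=80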
Try this, and if you have issues, just ask another question with details of the problems that occurred.
See below for the changes to make in the Dockerfile (nginx:alpine is Alpine-based, so apt-get is not available there, and nginx is already part of the base image, so no package installation is needed):
FROM nginx:alpine
# nginx is already installed in the base image, so no RUN steps are needed
COPY hey.html /usr/share/nginx/html
EXPOSE 80
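You can verify the image locally before pushing it anywhere (a quick sketch; the tag is arbitrary):
docker build -t hey .
docker run -p 8080:80 hey
# then open http://localhost:8080/hey.html in a browser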

How does one get access to the MySQL DB created by graphcool cli deploy?

I am quite new to graphcool as a framework, and I like that I can run it locally now. To get access to the cluster from the backend side, what do I need to do?
gc deploy -dev
exposes some endpoints on localhost:60000.
and
lsof -i :60000
shows two containers. How could I possibly access them to manipulate my data stored with gc from a backend point of view?
Please provide manuals or useful links. Thanks in advance.
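One way to inspect the data, assuming the local graphcool cluster runs its MySQL instance in one of those Docker containers (an assumption for illustration, not something confirmed by the question), is to exec into that container:
# Find the MySQL container among the running ones
docker ps
# Open a MySQL shell inside it (the container name and credentials are assumptions)
docker exec -it <mysql-container> mysql -u root -p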

Multiple containers in one pod

I am migrating an application from OpenShift 2 which consists of a Java (Jetty) webserver and a Mongo database.
Both the webserver and Mongo need access to persistent storage, and the server also needs to access the database.
As the volume available to me can't (I believe) be accessed by two pods, my current goal is to include both the server and the DB in the same pod as separate containers.
I have tried copying the Mongo container definition into the deploy config for the server, but I just get an error saying the config is invalid, with no description of why.
Is this an approach that could work, and how can I find out why it isn't working?
It is possible to do this if you really need to, but it is not normally recommended for production systems.
In doing so, you are limited to a single replica and cannot scale your application; also, you can't use the Rolling deployment strategy and must use Recreate.
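As a rough sketch of what such a two-container pod could look like (shown here as a plain Kubernetes Deployment; on OpenShift you would typically use a DeploymentConfig with the same pod template, and all names and images are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1                  # limited to a single replica
  strategy:
    type: Recreate             # Rolling cannot be used with this setup
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: jetty
        image: my-jetty-app:latest   # placeholder application image
        ports:
        - containerPort: 8080
      - name: mongo
        image: mongo:3.6
        ports:
        - containerPort: 27017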
For some examples of templates which deploy a database with front end together in same pod which you might adapt, see the 'testing' variants of the templates at:
https://github.com/openshift-evangelists/wordpress-quickstart/tree/master/templates
For those templates, the build of the application image was done as a separate manual step and they only handle the deployment, so you will need to incorporate the build configuration into them yourself after you have copied and modified them for your own purposes.
UPDATE 1
Those templates now include build configurations, as I have been tweaking the way they work.

Go+MySql: how easy is to migrate to GKE (Google Cloud Container Engine)?

My project is currently hosted by an independent cloud provider.
I am using 2 Virtual Machines, with Linux:
one hosts a Go application
one hosts a MySql database
I would now like to move to the Google Cloud Platform.
Does it make sense to move to Google Container Engine (GKE), rather than to Google Compute Engine (which would have the same virtual machine model (IaaS) I am using with my current provider)?
I have never used Kubernetes or Docker. How easy would it be to make the migration? Am I going to complicate my life needlessly?
How difficult is the configuration for my simple setup?
"I have never used Kubernetes or Docker."
Moving to a platform that you have no experience with doesn't sound like a great idea. Instead, why not start by doing some tutorials about Docker and then Kubernetes?
After that, you might try Minikube (https://kubernetes.io/docs/getting-started-guides/minikube/) locally to start writing manifests for the components (which sound like maybe a DaemonSet or a single Pod with a PersistentVolume for MySQL, and a Deployment for the Go application).
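For instance, a first manifest for the Go application might be a Deployment along these lines (the image, labels, and port are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
      - name: go-app
        image: gcr.io/my-project/go-app:v1   # placeholder image
        ports:
        - containerPort: 8080                # assumed application port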
Once you have the pieces working locally, then it would probably make more sense to think about migrating. You would have a much better understanding of what you are getting into and if it is something you would want to undertake.

Docker-compose Kubernetes ENV properties interoperability

I'm building my staging environment using docker-compose, for an application that was previously run in Google Cloud using Kubernetes.
My application was configured using ENV properties provided inside the Kubernetes container, and now, after switching to docker-compose, I have a different naming convention for linked services.
I can think of a few solutions to my problem:
Change my application to support alternative configurations, so it would support both docker-compose and Kubernetes
Create aliases in docker-compose or Kubernetes so that the configuration would always be available in a single format in both environments, and I would not need to touch my application configuration
Maybe some other way which I don't see
I want to go with the second solution, but I don't know exactly how to configure it. Any ideas?
You could use the environment section in docker-compose to define variables like PARAM1=${PARAM2}. In this case, docker-compose will have the same variables that Kubernetes has.
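For example, Kubernetes injects service discovery variables such as DB_SERVICE_HOST and DB_SERVICE_PORT for a service named db, so the compose file could re-create those names (a sketch; the service and variable names are assumptions about your setup):
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credentials
  app:
    image: my-app:latest             # placeholder image
    depends_on:
      - db
    environment:
      # Re-create the variable names Kubernetes would inject for a
      # service named "db", pointing at the compose service instead
      DB_SERVICE_HOST: db
      DB_SERVICE_PORT: "3306"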