Entities in FIWARE orion disappear after some time passes - fiware

I created a project from the https://fiware-tutorials.readthedocs.io/en/latest/time-series-data.html tutorial and just changed the entity names and types, and everything worked fine. But after some time (usually a day) all entities in Orion disappear (although the data in QuantumLeap persists) and I cannot get the entity properties with this command:
curl -X GET \
--url 'http://localhost:1026/v2/entities?type=Temp'
What is the problem? Is there some restriction in tutorial projects?

The tutorials have been written as an introduction to NGSI, not as a robust architectural solution. The idea is just to get something "quick and dirty" up and running on a developer's machine, and various shortcuts have been taken. Indeed, the docker-compose files all carry the following disclaimer:
WARNING: Do not deploy this tutorial configuration directly to a production environment
The tutorial docker-compose files have not been written for production deployment and will not
scale. A proper architecture has been sacrificed to keep the narrative focused on the learning
goals, they are just used to deploy everything onto a single Docker machine. All FIWARE components
are running at full debug and extra ports have been exposed to allow for direct calls to services.
They also contain various obvious security flaws - passwords in plain text, no load balancing,
no use of HTTPS and so on.
This is all to avoid the need of multiple machines, generating certificates, encrypting secrets
and so on, purely so that a single docker-compose file can be read as an example to build on,
not used directly.
When deploying to a production environment, please refer to the Helm Repository
for FIWARE Components in order to scale up to a proper architecture:
see: https://github.com/FIWARE/helm-charts/
Perhaps the most relevant factor in answering your question: there is typically no volume persistence - the tutorials clean up after themselves where possible, to avoid leaving data on a user's machine unnecessarily.
If you have lost all your entity data when connecting to Orion, my guess here is that the MongoDB database has exited and restarted for some reason. Since there is deliberately no persistent volume set up, this would mean that all previous entities are lost on the restart.
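If you want to confirm that theory, check whether the MongoDB container has restarted. A quick sketch using the Docker CLI - the container name db-mongo is an assumption based on the tutorial's docker-compose file, so substitute whatever name docker ps shows on your machine:
# List all containers, including exited ones, to spot a recent restart
docker ps -a
# Show the restart count and last exit code of the Mongo container
docker inspect --format '{{.RestartCount}} {{.State.ExitCode}}' db-mongo
# Look through the recent logs for a crash or out-of-memory kill
docker logs --tail 50 db-mongo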
A solution on how to persist volumes and fix this behaviour can be found in answers to another question on this site - something like:
version: "3.9"
services:
mongodb:
image: mongo:4.4
ports:
- 27017:27017
volumes:
- type: volume
source: mongodb_data_volume
target: /data/db
volumes:
mongodb_data_volume:
external: true
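Note that because the volume is declared as external, it must already exist before docker-compose starts; a one-off command creates it (the name matches the compose file above):
docker volume create mongodb_data_volume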

Related

How to deploy an HTML website on Kubernetes using GKE?

How do I deploy a basic HTML website on GKE? What do I need other than the Dockerfile and the .html application itself? I have tried deploying applications which already have all the YAML files included, but I don't know how to start from scratch. I don't have a lot of experience and I haven't found anything online about this. Can anyone provide a step-by-step tutorial? What do I do after creating the cluster? Given the website is called hey.html, is this Dockerfile enough?
FROM nginx:alpine
RUN apt-get update
RUN apt-get install -y ngin
COPY hey.html/usr/share/nginx/html
EXPOSE 80
To deploy any application on GKE you will need some Kubernetes and GCP knowledge. You can start with the official documentation, the Coursera path about GKE and Kubernetes in the Cloud, or this article, which will introduce you to the basic concepts.
I can start by recommending a good tutorial from the official Kubernetes documentation on how to deploy the example PHP Guestbook application with Redis; it should give you a practical example of how to deploy from scratch.
It also uses a Service of type LoadBalancer, which relies on a controller to tell GCP to create a load balancer exposing your application to the Internet, so you do not have to do anything extra to expose the app.
About your Docker file, the workflow will look something like this:
Push your Docker image to a registry (some useful materials here), put that image into a Deployment for easier future management, and then create a Service: pods are mortal and replaceable, and the Service will take care of sending traffic to the right pods even when they are recreated. You might also need a persistent volume, but that is specific to your application. Here you will also find another good how-to by Google.
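As an illustration only, a minimal Deployment plus LoadBalancer Service for a static nginx site could look like the sketch below - the image gcr.io/PROJECT_ID/hey-site:v1, the name hey-site and the labels are placeholders, not values taken from your project:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hey-site
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hey-site
  template:
    metadata:
      labels:
        app: hey-site
    spec:
      containers:
        - name: hey-site
          # Placeholder image - build and push your own, then reference it here
          image: gcr.io/PROJECT_ID/hey-site:v1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hey-site
spec:
  type: LoadBalancer   # tells GCP to provision an external load balancer
  selector:
    app: hey-site
  ports:
    - port: 80
      targetPort: 80
Apply it with kubectl apply -f hey-site.yaml and wait for kubectl get service hey-site to show an external IP.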
Try this, and if you have issues just ask another question with details of the problems that occurred.
See below for the changes to make in the Dockerfile. The nginx:alpine base image already ships with nginx (and Alpine uses apk, not apt-get), so the package commands can be dropped, and the COPY instruction needs a space between the source and destination paths:
FROM nginx:alpine
COPY hey.html /usr/share/nginx/html
EXPOSE 80
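To get that image into GKE you would normally build it and push it to a registry first. A rough sketch using Google Container Registry, where PROJECT_ID is a placeholder for your own GCP project ID:
# Let the local Docker client authenticate against Google Container Registry
gcloud auth configure-docker
# Build and tag the image (PROJECT_ID is a placeholder)
docker build -t gcr.io/PROJECT_ID/hey-site:v1 .
# Push it so the cluster can pull it
docker push gcr.io/PROJECT_ID/hey-site:v1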

Multiple containers in one pod

I am migrating an application from OpenShift 2 which consists of a Java (Jetty) webserver and a Mongo database.
Both the webserver and Mongo need access to persistent storage, and the server also needs to access the database.
As the volume available to me can't (I believe) be accessed by two pods, my current goal is to include both the server and DB in the same pod as separate containers.
I have tried copying the Mongo container definition into the deployment config for the server, but I just get an error saying the config is invalid, with no description of why.
Is this an approach that could work and how can I find out why it isn't?
It is possible to do this if you really need to, but it is not normally recommended for production systems.
In doing it, you are limited to a single replica and cannot scale your application; also, you can't use the Rolling deployment strategy and must use Recreate.
For some examples of templates which deploy a database with front end together in same pod which you might adapt, see the 'testing' variants of the templates at:
https://github.com/openshift-evangelists/wordpress-quickstart/tree/master/templates
For those templates the build of the application image was done as a separate manual step and they only handle the deployment, so you will need to incorporate the build configuration into them yourself after you have copied and modified them for your own purposes.
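For illustration only, a two-container DeploymentConfig along these lines is roughly the shape the config needs to take - the image names, ports and claim name are placeholders rather than values from your application, and the key points are the single pod template listing both containers and the Recreate strategy:
apiVersion: v1            # on newer clusters: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: webapp
spec:
  replicas: 1             # a single replica - this setup cannot be scaled
  strategy:
    type: Recreate        # Rolling is not usable with an embedded database
  selector:
    app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: jetty
          image: your-jetty-image:latest   # placeholder
          ports:
            - containerPort: 8080
        - name: mongodb
          image: mongo:3.6                 # placeholder version
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
      volumes:
        - name: mongo-data
          persistentVolumeClaim:
            claimName: mongo-data          # placeholder claim name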
UPDATE 1
Those templates do now include build configurations, as I have been tweaking the way they work.

Approach of deploying pods/services on Kubernetes

I know the concepts of Kubernetes. I'm able to do some basic stuff like creating a pod and creating a service on top of it. I do this by using kubectl commands. When I'm searching for examples I see a lot of .yaml and .json files.
Is the general approach to create a .json or .yaml file which describes the setup of your pods/services (a bit like a docker-compose file)?
Maybe a strange question, but to me it seems a bit weird that everyone who's using Kubernetes is able to write their own .yaml templates for their applications.
Creating pods and services, scaling pods, etc. can all be done with kubectl using the appropriate commands. It will internally create the necessary config and create the resources.
This is great for trying out things, but not ideal for anything more serious. You want to version control, inspect and evolve your config. So it is better to start with a yaml or json config for what you want to create, and use that with kubectl.
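For example, kubectl can even generate that YAML for you as a starting point - a minimal sketch where the deployment name and image are just examples:
# Generate a Deployment manifest without creating anything on the cluster
# (older kubectl versions use plain --dry-run instead of --dry-run=client)
kubectl create deployment hello --image=nginx --dry-run=client -o yaml > deployment.yaml
# Review the file, keep it in version control, then apply it
kubectl apply -f deployment.yaml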

WordPress's database blue-green deployment on Azure

We have a WordPress application on an Azure Web App.
We have two environments there - DEV and Prod. Prod has a Staging swap slot.
DEV and PROD, obviously, use different MySQL databases (we are using MySQL In App now, but the same question applies to a ClearDB/MySQL setup).
So, question is - how to do Blue-green deployment? What to do with databases?
We have Travis configured to deploy code to the different environments. But Prod's database will be updated during its usage by visitors, and DEV will be updated by developers (and, of course, will not have the visitors' changes from Prod).
Are there any solutions/practices to achieve this?
P.S. And there is one more issue: "MySQL In App (preview)" does not allow having a separate database for the Web App and its Staging swap slot. This creates an additional headache for us.

Docker-compose Kubernetes ENV properties interoperability

I'm building my staging environment using docker-compose, with an application that previously ran in Google Cloud using Kubernetes.
My application was configured using ENV properties provided inside the Kubernetes container, and now, after switching to docker-compose, I have a different naming convention for linked services.
I can think of a few solutions to my problem:
Change my application to support alternative configurations, so it would support both docker-compose & Kubernetes
Create aliases in docker-compose or Kubernetes so that the configuration is always available in a single format in both environments, and I would not need to touch my application configuration.
Maybe some other way, which I don't see
I want to go with the second solution, but I don't know exactly how to configure it. Any ideas?
You could use the environment section to define docker-compose variables like PARAM1=${PARAM2}. In this case, docker-compose will expose the same variables that Kubernetes provides.
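A minimal sketch of that aliasing, assuming the application expects the Kubernetes-style variable DB_SERVICE_HOST while docker-compose runs the database as a service named db (all names here are illustrative, not taken from your setup):
version: "3.9"
services:
  app:
    image: my-app:latest        # placeholder image
    depends_on:
      - db
    environment:
      # Re-expose the compose service under the variable names
      # the application already expects from Kubernetes
      - DB_SERVICE_HOST=db
      - DB_SERVICE_PORT=5432
  db:
    image: postgres:13          # placeholder database image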