How to deploy ReportPortal into a production environment - reportportal

Based on the deployment instructions, we need to deploy ReportPortal to a production environment.
The instructions mention the following:
For production usage we recommend to:
- deploy the MongoDB database in a separate environment and connect the app to this server. MongoDB is a mandatory part.
- choose only the required Bug Tracking System integration service and exclude the rest.
Our question is:
how do we connect the first VM with dockerized ReportPortal to a second VM hosting the database?
Is there an environment variable which points the app to the database?

There are a couple of connection settings that should be applied to the services which use the database. Here is the list:
- rp.mongo.host=XXX
- rp.mongo.port=27017
- rp.mongo.dbName=reportportal
- rp.mongo.user=XXX
- rp.mongo.password=XXX
MongoDB is used by the following services: UAT (authorization), API, JIRA, RALLY. There is an example of a docker-compose YAML file which contains all of the mentioned properties.
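For illustration, here is a minimal sketch of how those properties could be passed to one of the services in the app-side docker-compose.yml on the first VM; the service name, image tag, host address and credentials below are placeholders, and the same environment block would be repeated for every service that needs the database:

  api:
    image: reportportal/service-api:4.3.0   # placeholder tag, use the version you deploy
    environment:                            # properties passed as environment entries
      - rp.mongo.host=10.0.0.12             # address of the VM that hosts MongoDB
      - rp.mongo.port=27017
      - rp.mongo.dbName=reportportal
      - rp.mongo.user=rp_user
      - rp.mongo.password=rp_password
    restart: always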

As I understand it, the MongoDB container should be removed from the docker-compose config, so we should create a second config with the DB (mongo) container:
mongodb:
  image: mongo:3.2
  ## Uncomment if needed
  # ports:
  #   - "27017:27017"
  volumes:
    - reportportal-data:/data/db
  restart: always
  ## Consider disabling smallfiles for production usage
  command: --smallfiles
And then set the DB connection settings in the first docker-compose.yml file?

Related

In a ROOTLESS podman setup, how to communicate between containers in different pods

I read all I could find, but documentation on this scenario is scant or unclear for podman. I have the following (contrived) ROOTLESS podman setup:
pod-1 name: pod1
Container names in pod1:
p1c1 -- This is also its assigned hostname within pod1
p1c2 -- This is also its assigned hostname within pod1
p1c3 -- This is also its assigned hostname within pod1
pod-2 name: pod2
Container names in pod2:
p2c1 -- This is also its assigned hostname within pod2
p2c2 -- This is also its assigned hostname within pod2
p2c3 -- This is also its assigned hostname within pod2
I keep certain containers in different pods specifically to avoid port conflict, and to manage containers as groups.
QUESTION:
Given the above topology, how do I communicate between, say, p1c1 and p2c1? In other words, step by step, what podman(1) commands do I issue to collect the necessary addressing information for pod1:p1c1 and pod2:p2c1, and then use that information to configure the applications in them so they can communicate with one another?
Thank you in advance!
EDIT: For searchers, additional information can be found here.
Podman doesn't have anything like the "services" concept in Swarm or Kubernetes to provide for service discovery between pods. Your options boil down to:
Run both pods in the same network namespace, or
Expose the services by publishing them on host ports, and then access them via the host
For the first solution, we'd start by creating a network:
podman network create shared
And then creating both pods attached to the shared network:
podman pod create --name pod1 --network shared
podman pod create --name pod2 --network shared
With both pods running on the same network, containers can refer to the other pod by name. E.g., if you were running a web service in p1c1 on port 80, in p2c1 you could curl http://pod1.
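A quick way to check this, with arbitrary placeholder images (nginx standing in for the web service, alpine for the client):

# Run a web server in pod1, then fetch it by pod name from a container in pod2
podman run -d --pod pod1 --name p1c1 docker.io/library/nginx:alpine
podman run --rm --pod pod2 --name p2c1 docker.io/library/alpine wget -qO- http://pod1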
For the second option, you would do something like:
podman pod create --name pod1 -p 1234:1234 ...
podman pod create --name pod2 ...
Now if p1c1 has a service listening on port 1234, you can access that from p2c1 at <some_host_address>:1234.
If I'm interpreting option 1 correctly: if the applications in p1c1 and p2c1 both use, say, port 8080, then there won't be any conflict anywhere (either within the pods or on the outer host) IF I publish using something like 8080:8080 for the app in p1c1 and 8081:8080 for the app in p2c1? Is this interpretation correct?
That's correct. Each pod runs with its own network namespace (effectively, its own IP address), so services in different pods can listen on the same port.
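For example, a sketch of the publishing scheme described in the question, where both apps listen on 8080 inside their pods and only the host ports differ:

podman pod create --name pod1 --network shared -p 8080:8080
podman pod create --name pod2 --network shared -p 8081:8080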
Can the network (not ports) of a pod be reassigned once running? REASON: I'm using podman-compose(1), which creates things for you in a pod, but I may need to change things (like the network assignment) after the fact. Can this be done?
In general you cannot change the configuration of a pod or a container; you can only delete it and create a new one. Assuming that podman-compose has relatively complete support for the docker-compose.yaml format, you should be able to set up the network correctly in your docker-compose.yaml file (you would create the network manually, and then reference it as an external network in your compose file).
Here is a link to the relevant Docker documentation. I haven't tried this myself with podman.
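As a sketch of what that external-network setup could look like in the compose file (the service name and image are placeholders; the network is the shared one created earlier with podman network create shared):

version: "3"
services:
  p1c1:
    image: docker.io/library/nginx:alpine
    networks:
      - shared
networks:
  shared:
    external: true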
The accepted answer from @larsks will only work for rootful containers, in other words, when you run every podman command with a sudo prefix. (For instance, when you connect to a postgres container from a Spring Boot application container, you will get a SocketTimeout exception.)
If the two containers run on the same host, get the IP address of the host and use <ipOfHost>:<port>. Example: 192.168.1.22:5432
For more information you can read this blog: https://www.redhat.com/sysadmin/container-networking-podman
Note: The above solution of creating networks only works in rootful mode. You cannot do podman network create as a rootless user.
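A rootless-friendly sketch of that host-port approach; the image names, the host address 192.168.1.22 and the credentials are all placeholders:

# Publish the database port on the host from one pod...
podman pod create --name dbpod -p 5432:5432
podman run -d --pod dbpod --name db -e POSTGRES_PASSWORD=secret docker.io/library/postgres:15
# ...and point the app in another pod at the host's address, not localhost
podman pod create --name apppod
podman run -d --pod apppod --name app -e DB_URL=jdbc:postgresql://192.168.1.22:5432/postgres myapp:latest   # myapp:latest is a placeholder image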

SonarQube on Kubernetes with MySQL via AWS RDS

I'm trying to deploy an instance of SonarQube on a Kubernetes cluster which uses a MySQL instance hosted on Amazon Relational Database Service (RDS).
A stock SonarQube deployment with built-in H2 DB has already been successfully stood up within my Kubernetes cluster with an ELB. No problems, other than the fact that this is not intended for production.
The MySQL instance has been successfully stood up, and I've test-queried it with SQL commands using the username and password that the SonarQube Kubernetes Pod will use. This is using the AWS publicly-exposed host, and port 3306.
To redirect SonarQube to use MySQL instead of the default H2, I've added the following environment variable key-value pair in my deployment configuration (YAML).
spec:
  containers:
    - name: sonarqube2
      image: sonarqube:latest
      env:
        - name: SONARQUBE_JDBC_URL
          value: "jdbc:mysql://MyEndpoint.rds.amazonaws.com:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true"
      ports:
        - containerPort: 9000
For test purposes, I'm using the default "sonar/sonar" username and password, so no need to redefine at this time.
The inclusion of the environment variable causes "CrashLoopBackOff". Otherwise, the default SonarQube deployment works fine. The official Docker Hub page for SonarQube says to use env vars to point to a different database. I'm trying to do the same, just Kubernetes-style. What am I doing wrong?
==== Update: 1/9 ====
The issue has been resolved. See comments below. SonarQube 7.9 and higher does not support MySQL. See full log below.
End of Life of MySQL Support : SonarQube 7.9 and future versions do not support MySQL.
Please migrate to a supported database. Get more details at
https://community.sonarsource.com/t/end-of-life-of-mysql-support
and https://github.com/SonarSource/mysql-migrator
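For anyone hitting the same wall, a minimal sketch of pointing a recent SonarQube image at PostgreSQL instead; the endpoint, database name, and credentials are placeholders, and note that recent images read SONAR_JDBC_* variables rather than the older SONARQUBE_JDBC_* names:

      env:
        - name: SONAR_JDBC_URL
          value: "jdbc:postgresql://MyEndpoint.rds.amazonaws.com:5432/sonar"   # placeholder endpoint
        - name: SONAR_JDBC_USERNAME
          value: "sonar"
        - name: SONAR_JDBC_PASSWORD
          value: "sonar"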

connecting jenkins hosted on kubernetes to MySQL on Google Cloud Platform

I have hosted Jenkins on a Kubernetes cluster which is hosted on Google Cloud Platform. I am trying to run a Python script through Jenkins. The script needs to read a few values from MySQL. The MySQL instance is run separately on one of the instances. I have been facing issues connecting from Kubernetes to the MySQL instance. I am getting the following error:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on '35.199.154.36' (timed out)
This is the document I came across
According to the document, I tried connecting via the private IP address.
I generated a secret which holds the MySQL username and password and included the host IP address in the following format, taking this document as reference:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  hostname: <MySQL external ip address>
kubectl create secret generic literal-token --from-literal user=<username> --from-literal password=<password>
This is the raw yaml file for the pod that I am trying to insert in the Jenkins Pod template.
Any help regarding how I can overcome this SQL connection problem would be appreciated.
You can't create a secret in the pod template field. You need to either create the secret before running the jobs and mount it from your pod template, or just refer to your user/password in your pod templates as environment variables, depending on your security requirements.
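A minimal sketch of the environment-variable option, assuming the literal-token secret from the question and a hypothetical python container in the pod template:

apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent
spec:
  containers:
    - name: python
      image: python:3.9
      env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: literal-token
              key: user
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: literal-token
              key: password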

What IP address should I use for communicating in Docker?

I have a MySQL container and another Docker container with a Jython app.
Inside the Jython app, this is the connection string used to connect to MySQL (it works on the host):
mysql_url_string jdbc:mysql://localhost/...
This does not work with the 2 Docker containers (1: MySQL, 2: Jython app).
What IP address should I use in the connection string (instead of localhost)?
Thanks.
Instead of using an IP address (container IPs may change unless you specifically define the network configuration), you can simply link the 2 containers together and refer to them by container name.
version: "3"
services:
mysql:
container_name: mysql
image: somethingsomething/mysql:latest
jython
container_name: jython
image: somethingsomething/jython:latest
links:
- mysql
environment:
jdbc_url: jdbc:mysql://mysql:3306
This linking can also be done via CLI (see: https://linuxconfig.org/basic-example-on-how-to-link-docker-containers)
If you simply must use IP addresses, you can obtain the IP address after linking by checking the /etc/hosts files inside the containers.
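For example, two ways to look the address up, using the container names from the compose file above:

# Hosts entries created by the link
docker exec jython cat /etc/hosts
# Or query the address directly from Docker
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql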
Edit Note:
There are alternative ways to approach this without 'linking', but without more detailed information about how your containers are already set up it's difficult to provide them,
i.e. whether they are standalone containers on the host network or a bridged network, or created as a Docker service with an overlay, or something else!
The different scenarios change the way addressing is created and used for inter-container communication, so the means of looking up the IP address won't be the same.

Configure a Wildfly 10 MySQL datasource in OpenShift v3

I have an application currently working on my local Dev machine. It uses Wildfly 10, MySQL 5.7 and Hibernate. My application looks for the 'AppDS' datasource from within Wildfly.
I've created a Wildfly 10 container and a MySQL container on OpenShift V3. Typically, I would log into Wildfly and configure a datasource, but all that configuration is lost when a container restarts. I thought it would be a matter of finding my connection environment settings, and using the pre-configured database connections, but I can't find what the variables should be set to, and the default connections don't work without them.
I downloaded and read OpenShift for Developers, but they side-step the issue by creating a direct database connection, rather than going through a datasource.
Exporting the environment variables failed because of 'no matches for apps.openshift.io/, Kind=DeploymentConfig'. Is the book out of date? Are they not using a deployment config to store environment variables?
I would appreciate it greatly if someone could point me in the right direction.
I have a project running locally on my machine that uses Wildfly 10, MySQL 5.7 and Hibernate. I found the documentation to be incomplete. After a few days of working with it, I have figured out how to deploy a simple J2EE project with this stack.
I am updating my question with the step-by-step guide I wish I'd had. I hope this saves someone some time in the future.
Create a new OpenShift user.
Create the project dbtest.
Add MySQL to the dbtest project:
The following service(s) have been created in your project: mysql:
Username: test
Password: test
Database Name: testdb
Connection URL: mysql://mysql:3306/
Add Wildfly to the project:
oc login https://api.starter-us-west-1.openshift.com
oc project dbtest
oc status
Scale the current Wildfly pod to 0 (you won't have enough CPU to run 3 pods, and a redeploy tries to start a new one and hot-swap them).
From the left menu: Applications -> Deployments -> (dbtest) Wildfly10 pod -> environment (tab) -> add the following variables (an equivalent oc CLI sketch follows these steps):
MYSQL_DATABASE=testdb
MYSQL_DB_ENABLED=true
MYSQL_USER=test
MYSQL_PASSWORD=test
Scale the Wildfly pod back to 1.
Use the terminal in the Wildfly pod to run ./add-user.sh.
oc port-forward wildfly10-6-rkr58 :9990 (replace wildfly10-6-rkr58 with your pod name, found by clicking on the running pod [circle with a 1 in it] and noting the pod name in the upper left corner)
Log in to Wildfly from 127.0.0.1:<forwarded port> and test the MySQLDS. It should now connect.
Go through the environment variables mentioned here to get a better understanding.
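If you prefer to script the scaling and environment steps instead of using the console, a rough oc equivalent could look like the sketch below; the deployment config name wildfly10 and the values are assumptions based on the steps above:

oc login https://api.starter-us-west-1.openshift.com
oc project dbtest

# Scale down, set the variables on the deployment config, then scale back up
oc scale dc/wildfly10 --replicas=0
oc set env dc/wildfly10 MYSQL_DATABASE=testdb MYSQL_DB_ENABLED=true MYSQL_USER=test MYSQL_PASSWORD=test
oc scale dc/wildfly10 --replicas=1

# Forward the management port and test the datasource from 127.0.0.1:<local port>
oc port-forward <wildfly-pod-name> :9990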