I'm trying to create a Docker installation of Jahia CMS (Digital Experience Manager).
I need:
a MySQL container
a Jahia container (embedded Tomcat)
The trick is that during the Jahia container build (product installation using Expect), I need to access the MySQL container (connection check required).
MySQL Dockerfile:
FROM mysql:5.6
Jahia Dockerfile:
FROM centos:centos6
# Install dependencies
RUN yum -y update && yum -y install ...
# Download Digital Experience Manager 7.1.1
RUN wget -q https://www.jahia.com/downloads/jahia/digitalexperiencemanager7.1.1/DigitalExperienceManager-EnterpriseDistribution-7.1.1.0-r53717.3663.jar -O /tmp/DigitalExperienceManager.jar
# Download MySQL connector (only needed for standalone db installation)
RUN wget -q http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.44/mysql-connector-java-5.1.44.jar -O /usr/lib/mysql-connector-java-5.1.44.jar
# Launch installation using Expect to automate user input
COPY jahia_conf.exp /tmp/configuration.exp
RUN expect /tmp/configuration.exp
# Start Jahia
CMD /opt/DigitalExperienceManager-EnterpriseDistribution-7.1.1.0/tomcat/bin/catalina.sh jpda run
Expect script (jahia_conf.exp):
#!/usr/bin/expect
spawn java -jar /tmp/DigitalExperienceManager.jar -console
# Installation directory
expect "Select target path"
send "/opt/DigitalExperienceManager-EnterpriseDistribution-7.1.1.0\r"
# MySQL connector JAR file
expect "Specify the path to the downloaded driver JAR file"
send "/usr/lib/mysql-connector-java-5.1.44.jar\r"
# Database configuration
expect "Database URL (*)"
send "jdbc:mysql://mysql:3306/jahia?useUnicode=true&characterEncoding=UTF-8&useServerPrepStmts=false\r"
Of course I get an error during the image build, because the installer checks the connection right after the database URL input:
An error occurred while establishing the connection to the database
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure
The last packet sent successfully to the server was 0 milliseconds
ago. The driver has not received any packets from the server..
Indeed, I'm just building the Jahia image, so the MySQL container is not yet accessible (even if it is running).
How do you deal with this kind of situation, when you need to access another container during the build?
As the MySQL server will also be in a container, I don't think you should configure it at build time: you can't assume the database will be up.
Unfortunately, I don't know how the 'expect' tool works, but ideally you should link the database to the Jahia container only at container startup. This can be done by injecting it through configuration (an environment variable or something else you can pass when you start the container).
That means the MySQL container should have the DB installed in a separate process. On our side, for example, we do this by running the SQL scripts provided in the Jahia code directly on the database.
With this solution, you don't need your database to be available while building.
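For example, a minimal sketch of that kind of startup-time injection (the image name, the variable names and the entrypoint that would consume them are assumptions, not something Jahia ships):
# hypothetical: pass the database settings only when the container starts
docker run -d --name jahia --link mysql:mysql \
  -e DB_URL="jdbc:mysql://mysql:3306/jahia?useUnicode=true&characterEncoding=UTF-8" \
  -e DB_USER=jahia -e DB_PASSWORD=secret \
  my/jahia
An entrypoint script inside the image would then write these values into the Jahia configuration before starting Tomcat.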
Edit: indeed, Jahia does some checks on the database at build time, but you can add an input so Jahia doesn't actually need to perform operations on the DB. It uses an IzPack auto-install file, which allows you to replay the installation.
The DB setup part is the following:
<com.izforge.izpack.panels.UserInputPanel id="dbSettings">
    <userInput>
        <entry key="dbSettings.connection.url.mssql" value="jdbc:sqlserver://DB_SERVER;DatabaseName=DB_NAME;"/>
        <entry key="dbSettings.dbms.createTables" value="false"/>
        <entry key="dbSettings.connection.username" value="DB_USER"/>
        <entry key="dbSettings.dbms.storeFilesInDB" value="false"/>
        <entry key="dbSettings.connection.driver.mssql" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
        <entry key="dbSettings.connection.password" value="DB_PASSWORD"/>
    </userInput>
</com.izforge.izpack.panels.UserInputPanel>
This assumes you have a DB server somewhere, unfortunately. On our side we use a fake instance, since we ask the installer not to do the installation during the build.
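For reference, once you have such an auto-install file, the IzPack installer can replay it non-interactively; a minimal sketch, assuming the file is saved as /tmp/auto-install.xml:
# replay the recorded installation without any prompts
java -jar /tmp/DigitalExperienceManager.jar /tmp/auto-install.xml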
Try using docker commit. You may have to run the configuration.exp script to set up Jahia by exec'ing into your running container, then use docker commit to save the filesystem changes into a new image. That image will persist the initial configuration.
Be mindful that volumes are not included in a docker commit, as they live outside Docker's union file system. It doesn't look like you're declaring any volumes in your Dockerfile, so it probably won't be a problem for you.
This answer elaborates on docker commit and database volumes, but the premise is the same for any container.
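A rough sketch of that workflow (the container and image names are placeholders, and it assumes Expect and the script are already in the base image, as in your Dockerfile):
# start MySQL first so the installer's connection check can succeed
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=jahia mysql:5.6
# start a container from the unconfigured Jahia image and link it to MySQL
docker run -d --name jahia-setup --link mysql:mysql my/jahia-base
# run the Expect script against the live database, then freeze the result as a new image
docker exec -it jahia-setup expect /tmp/configuration.exp
docker commit jahia-setup my/jahia:configured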
Related
I have a problem installing CYGNUS using Docker as a source; simply put, I cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf which has my specific setup into the container, it starts and runs but fails to copy it, and on top of that any change I make to the file inside the container won't stay; it reverts to the previous default state.
Meanwhile, I have no issues with grouping_rules.conf using the same approach.
I used docker and docker compose, both with the same results.
The path to which I try to copy it: /opt/apache-flume/conf/agent.conf
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it with their own config tell me whether I misunderstood the location of agent.conf or something? This is weird; I have used many Docker images and never had an issue where I was not able to copy a file from my machine into a Docker container.
Thanks in advance.
EDIT
Link to agent.conf
Did you copy the agent.conf file to your directory before starting the container?
As you can see here, when you define a volume with the "-v" option, Docker makes the content of the host path available inside the container at the mount point. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the
host, you're telling docker that you want to take a file or directory
from your host and use it in your container. Docker should not modify
those files/directories, unless you explicitly do so. For example, you
don't want -v /home/user/:/var/lib/mysql to result in your
home-directory being replaced with a MySQL database.
If you do not have access to the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out of the running Docker container, using the docker cp command:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. You can find in the official doc how to configure Cygnus to use different sinks like MongoDB, MySQL, etc.
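Putting those steps together, a minimal sketch (the template file name and the local path are assumptions; adjust them to your setup):
# start from the template, adjust it, then bind-mount the result
cp agent.conf.template /home/igor/Documents/cygnus/agent.conf
# edit the sinks, credentials, etc. in agent.conf, then:
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi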
I hope I have been helpful.
Best regards!
I am running my web server on Elastic Beanstalk and using Papertrail for logging. I am using the official .ebextensions script to get Papertrail set up during deployment, but I have a problem. I use environment variables as part of the hostname used as the sender when remote_syslog uploads logs to Papertrail. This works fine during deployment, when the 01_set_logger_hostname container command is triggered, but I run into problems whenever I change environment variables by modifying the environment's configuration: it seems an eb config call only restarts the application server and does not run any of the scripts run during deployment, including the .ebextensions container commands.
"/tmp/set-logger-hostname.sh":
mode: "00555"
owner: root
group: root
encoding: plain
content: |
#!/bin/bash
logger_config="/etc/log_files.yml"
appname=`{ "Ref" : "AWSEBEnvironmentName" }`
instid=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
myhostname=${SOME_VARIABLE}_${appname}_${instid}
if [ -f $logger_config ]; then
# Sub the hostname
sed "s/hostname:.*/hostname: $myhostname/" -i $logger_config
fi
As you can see, since my hostname depends on ${SOME_VARIABLE}, I need to refresh the hostname whenever ${SOME_VARIABLE} is modified following eb config.
Is there a way to trigger a script to be run whenever an eb config command is run, so that I can not only restart my web application but also reconfigure and restart remote_syslog with the updated hostname?
This is now possible on Amazon Linux 2 based environments with Configuration deployment platform hooks.
For example, you can make a shell script .platform/confighooks/predeploy/predeploy.sh that will run on all configuration changes. Make sure that you make this file executable according to git, or Elastic Beanstalk will give you a permission denied error.
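A hedged sketch of such a hook for your case (the get-config utility is the Amazon Linux 2 way to read environment properties; the remote_syslog service name is an assumption, adjust it to how the Papertrail agent is installed on your instances):
#!/bin/bash
# .platform/confighooks/predeploy/predeploy.sh
# re-read the environment property and re-apply the hostname substitution
SOME_VARIABLE=$(/opt/elasticbeanstalk/bin/get-config environment -k SOME_VARIABLE)
export SOME_VARIABLE
/tmp/set-logger-hostname.sh
# restart the log shipper so it picks up the new hostname
systemctl restart remote_syslog || true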
How can I set enabled="true" on the datasource in standalone.xml of the OpenShift v3 WildFly container, as below?
<datasource jndi-name="java:jboss/datasources/MySQLDS" enabled="true" use-java-context="true" pool-name="MySQLDS" use-ccm="true">
I set the OPENSHIFT_MYSQL_ENABLED environment variable to "true" but nothing happened.
The reference site for the answer is the URL below:
https://developer.jboss.org/wiki/DataserviceBuilderOnOpenShiftV3Online
I was dealing with the same problem: the environment variable OPENSHIFT_MYSQL_ENABLED is being ignored by the variable-substitution process, so I had to activate the data source by hand, and that's what I did:
(I'm going to assume you have the OC tools installed on your system)
log into OC: oc login
list all pods and find the WildFly instance: oc get pods
enter the container's SSH console: oc rsh <<pod-name>>
edit the standalone.xml file: vi /wildfly/standalone/configuration/standalone.xml
search for the word "datasource" by typing /datasource on vi editor then press enter
find the attribute "enabled" of your data source and update its value from false to true (to do so, press i to go to vi insert mode)
save the file by pressing esc then :x
I'm using the OpenShift community edition, so restarting the container is always a hassle: it takes a very long time to find available resources (like memory and CPU) and start the server again. However, you won't have your data source enabled unless you restart the server. For that, you don't need to restart the container; just reload WildFly using the jboss-cli.sh command-line tool. (I didn't try to kill the process and start it again, so if you did try, please comment whether it worked.)
The following steps should be executed in the container's terminal, using oc rsh <<podname>> or the terminal in the web console.
Enter jboss-cli using the command /wildfly/bin/jboss-cli.sh
Type connect to log into the WildFly console; you'll be prompted for a user and password. If you do not have credentials, exit this console and create a management user by executing the script /wildfly/bin/add-user.sh
Check your data source properties by typing data-source read-resource --name=<<YOUR_DATASOURCE_NAME>> --include-runtime=true --recursive=true and follow up on the "enabled" property.
If your data source is disabled, you should enable it by entering the command data-source enable --name=<<YOUR_DATASOURCE_NAME>>
Reload WildFly by entering the reload command. Once WildFly reboots, you'll need to access jboss-cli.sh and log into the console again.
Test your data source connection using the command data-source test-connection-in-pool --name=<<YOUR_DATASOURCE_NAME>>. If the command output is true, your data source is up and running.
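If you prefer to script these steps instead of typing them interactively, jboss-cli.sh also accepts a comma-separated --commands list; a minimal sketch (the data source name is a placeholder, and you may still be prompted for the management credentials):
/wildfly/bin/jboss-cli.sh --connect --commands="data-source enable --name=MySQLDS,reload"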
OpenShift v3 is based on Docker containers, so I'm afraid that if you do restart the container, this configuration will probably be lost. The most appropriate solution would be to include these actions in the Docker image's startup script, though I don't yet know how that works with the OpenShift platform.
Hope it helps!
I want to move containers from one host to another. The containers have updated data in their filesystem, so I do not want to move the original images (docker save) but the containers themselves (using docker export).
So I use
docker export l4bnode > l4bnode.tar
on the old host, copy the file to new host, and import image
cat l4bnode.tar | docker import - andi/l4bnode
on the new one. But it looks like all the configuration data I had in the Dockerfile (and that I could also specify/had specified on the command line when running the container) is lost. I tried
docker run andi/l4bnode
and get
docker: Error response from daemon: No command specified.
Using docker inspect, I see that all this data on the imported image is empty, though it is set on the exported running container. I am mainly missing the startup command, working directory, environment variables and exposed ports (some of which I have to change anyway due to the migration and the new environment).
How can I apply the original configuration on the new host, or preferably, migrate it properly?
You can commit the current container state as a new image, then use save/load on that new image.
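For example, a minimal sketch (the tag and the CMD value are placeholders; --change lets you re-apply or override metadata such as the start command if needed):
# on the old host: freeze the container's current state as an image
docker commit --change 'CMD ["/opt/app/start.sh"]' l4bnode andi/l4bnode:migrated
docker save andi/l4bnode:migrated > l4bnode-image.tar
# on the new host
docker load < l4bnode-image.tar
docker run -d andi/l4bnode:migrated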
That being said, this is something you should generally try to avoid. Runtime data should be kept in volumes, and any configuration changes should happen via Dockerfile rebuilds.
I have an application on the OpenShift free plan with only one gear. I want to change it to scalable and make use of all 3 free gears.
I read this blog post from OpenShift and found that there is a way to do it: I should clone my current application into a new, scalable one which will use the 2 remaining gears, and then delete the original application. Thus, the new one will have the 3 free gears.
The way that blog suggests is: rhc create-app <clone> --from-app <existing> --scaling
I get the following error: invalid option --from-app
Update
After running the command gem update rhc, I no longer get the error above, but... a new application with the given name has been created with the same starting package (Python 2.7), just like the existing one, but all the files are missing. It actually creates a blank application and not a clone of the existing one.
Update 2
Here is the structure of the folder:
-.git
-.openshift
-wsgi
---static
---views
---application
---main.py
-requirements.txt
-setup.py
From what we discussed on IRC, your problem was a missing SSH configuration on your Windows machine:
Creating application xxx ... done
Waiting for your DNS name to be available ...done
Setting deployment configuration ... done
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
I've double-checked it, and it appears to work without any problem.
The only requirement is to have the latest rhc client and PuTTY or any other SSH client. I'd recommend going through this tutorial once again and double-checking everything to make sure it is all working properly.
Make sure you are using the newest version of the rhc gem ("gem update rhc") so that you have access to that feature from the command line.
The --from-app option will essentially do an 'rhc snapshot save' and 'rhc snapshot restore' (among other things), as you can see here in the source:
if from_app
  say "Setting deployment configuration ... "
  rest_app.configure({:auto_deploy => from_app.auto_deploy, :keep_deployments => from_app.keep_deployments , :deployment_branch => from_app.deployment_branch, :deployment_type => from_app.deployment_type})
  success 'done'
  snapshot_filename = temporary_snapshot_filename(from_app.name)
  save_snapshot(from_app, snapshot_filename)
  restore_snapshot(rest_app, snapshot_filename)
  File.delete(snapshot_filename) if File.exist?(snapshot_filename)
  paragraph { warn "The application '#{from_app.name}' has aliases set which were not copied. Please configure the aliases of your new application manually." } unless from_app.aliases.empty?
end
However, this will not copy over anything in your $OPENSHIFT_DATA_DIR directory, so if you're storing files there you'll need to copy them over manually.
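A hedged example of that manual copy over scp (on OpenShift v2, $OPENSHIFT_DATA_DIR normally lives at app-root/data/ in the gear; the UUIDs and app URLs are placeholders you can look up with rhc app show):
# pull the data directory from the old application's gear
scp -r OLD_UUID@oldapp-namespace.rhcloud.com:app-root/data/ ./data-backup/
# push it into the new application's data directory
scp -r ./data-backup/. NEW_UUID@newapp-namespace.rhcloud.com:app-root/data/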