How can I set enabled="true" on the datasource in standalone.xml of an OpenShift v3 WildFly container, like below?
<datasource jndi-name="java:jboss/datasources/MySQLDS" enabled="true" use-java-context="true" pool-name="MySQLDS" use-ccm="true">
I set the OPENSHIFT_MYSQL_ENABLED environment variable to "true", but nothing happened.
The reference for this is the following URL:
https://developer.jboss.org/wiki/DataserviceBuilderOnOpenShiftV3Online
I was dealing with the same problem: the environment variable OPENSHIFT_MYSQL_ENABLED is ignored by the variable substitution process, so I had to activate the data source by hand. Here is what I did:
(I'm going to assume you have the OC tools installed on your system)
Log into OpenShift: oc login
List all pods and find the WildFly instance: oc get pods
Open a remote shell into the container: oc rsh <<pod-name>>
Edit the standalone.xml file: vi /wildfly/standalone/configuration/standalone.xml
Search for the word "datasource" by typing /datasource in the vi editor, then press Enter
Find the "enabled" attribute of your data source and change its value from false to true (press i to enter vi's insert mode)
Save the file by pressing Esc, then typing :x
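Put together, the session looks roughly like this (the pod name is a placeholder):
oc login                                              # authenticate against the cluster
oc get pods                                           # find the WildFly pod name
oc rsh <<pod-name>>                                   # open a shell inside the container
vi /wildfly/standalone/configuration/standalone.xml   # flip enabled="false" to enabled="true" on the datasource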
I'm using the OpenShift community edition, so restarting the container is always a hassle: it takes a very long time to find available resources (like memory and CPU) and start the server again. However, your data source won't be enabled unless you restart the server. Fortunately, you don't need to restart the whole container; just reload WildFly using the jboss-cli.sh command line tool. (I didn't try killing the process and starting it again, so if you did, please comment whether it worked.)
The following steps should be executed in the container's terminal, using oc rsh <<podname>> or the terminal in the web console.
Enter jboss-cli using the command /wildfly/bin/jboss-cli.sh
Type connect to log into the WildFly console; you'll be prompted for a user and password. If you do not have credentials, exit this console and create a management user by running the script /wildfly/bin/add-user.sh
Check your data source properties by typing data-source read-resource --name=<<YOUR_DATASOURCE_NAME>> --include-runtime=true --recursive=true and look at the "enabled" property.
If your data source is disabled, enable it with the command data-source enable --name=<<YOUR_DATASOURCE_NAME>>
Reload WildFly by entering the reload command. Once WildFly reboots, you'll need to access jboss-cli.sh and log into the console again.
Test your data source connection using the command data-source test-connection-in-pool --name=<<YOUR_DATASOURCE_NAME>>. If the command output is true, your data source is up and running.
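For reference, the whole jboss-cli session looks roughly like this (the data source name is a placeholder, and I'm assuming WildFly lives under /wildfly as above):
/wildfly/bin/jboss-cli.sh
connect
data-source read-resource --name=<<YOUR_DATASOURCE_NAME>> --include-runtime=true --recursive=true
data-source enable --name=<<YOUR_DATASOURCE_NAME>>
reload
# reconnect after the reload, then verify:
data-source test-connection-in-pool --name=<<YOUR_DATASOURCE_NAME>>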
OpenShift v3 is based on Docker containers, so I'm afraid that if you do restart the container, this configuration will probably be lost. The most appropriate solution would be to include these actions in the image's build scripts, though I don't yet know how that works with the OpenShift platform.
Hope it helps!
Related
I am running my web server on Elastic Beanstalk and using Papertrail for logging. I use the official .ebextensions script to set up Papertrail during deployment, but I have a problem. My hostname (used as the sender when remote_syslog uploads logs to Papertrail) is built from environment variables. This works fine during deployment, when the 01_set_logger_hostname container command is triggered, but I run into problems whenever I change environment variables by modifying the environment's configuration: an eb config call only restarts the application server and does not rerun the deployment scripts, including the ebextensions container commands.
"/tmp/set-logger-hostname.sh":
mode: "00555"
owner: root
group: root
encoding: plain
content: |
#!/bin/bash
logger_config="/etc/log_files.yml"
appname=`{ "Ref" : "AWSEBEnvironmentName" }`
instid=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
myhostname=${SOME_VARIABLE}_${appname}_${instid}
if [ -f $logger_config ]; then
# Sub the hostname
sed "s/hostname:.*/hostname: $myhostname/" -i $logger_config
fi
As you can see, since my hostname depends on ${SOME_VARIABLE}, I need to refresh the hostname whenever ${SOME_VARIABLE} is modified following eb config.
Is there a way to trigger a script to be run whenever an eb config command is run, so that I can not only restart my web application but also reconfigure and restart remote_syslog with the updated hostname?
This is now possible on AWS Linux 2 based environments with Configuration deployment platform hooks.
For example, you can make a shell script .platform/confighooks/predeploy/predeploy.sh that will run on all configuration changes. Make sure that you make this file executable according to git, or Elastic Beanstalk will give you a permission denied error.
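A minimal sketch of such a hook, assuming you want to re-apply the same Papertrail hostname substitution on every configuration change (the contents below are illustrative, not the exact .ebextensions script from the question):
#!/bin/bash
# .platform/confighooks/predeploy/predeploy.sh
# Illustrative sketch: re-apply the Papertrail hostname whenever the configuration is deployed.
logger_config="/etc/log_files.yml"
# On Amazon Linux 2, environment properties are typically read via get-config rather than the shell environment.
some_variable=$(/opt/elasticbeanstalk/bin/get-config environment -k SOME_VARIABLE)
instid=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
myhostname="${some_variable}_${instid}"
if [ -f "$logger_config" ]; then
  sed -i "s/hostname:.*/hostname: $myhostname/" "$logger_config"
fi
To mark the file executable in git, something like git update-index --chmod=+x .platform/confighooks/predeploy/predeploy.sh should do it.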
I'm trying to create a Docker installation of Jahia CMS (Digital Experience Manager).
I need :
a MySQL container
a Jahia container (embedded Tomcat)
The trick is that during the Jahia container build (product installation using Expect), I need to access the MySQL container (connection check required).
MySQL Dockerfile :
FROM mysql:5.6
Jahia Dockerfile:
FROM centos:centos6
# Install dependencies
RUN yum -y update && yum -y install ...
# Download Digital Experience Manager 7.1.1
RUN wget -q https://www.jahia.com/downloads/jahia/digitalexperiencemanager7.1.1/DigitalExperienceManager-EnterpriseDistribution-7.1.1.0-r53717.3663.jar -O /tmp/DigitalExperienceManager.jar
# Download MySQL connector (only needed for standalone db installation)
RUN wget -q http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.44/mysql-connector-java-5.1.44.jar -O /usr/lib/mysql-connector-java-5.1.44.jar
# Launch installation using Expect to automate user input
COPY jahia_conf.exp /tmp/configuration.exp
RUN expect /tmp/configuration.exp
# Start Jahia
CMD /opt/DigitalExperienceManager-EnterpriseDistribution-7.1.1.0/tomcat/bin/catalina.sh jpda run
Expect script (jahia_conf.exp)
#!/usr/bin/expect
spawn java -jar /tmp/DigitalExperienceManager.jar -console
# Installation directory
expect "Select target path"
send "/opt/DigitalExperienceManager-EnterpriseDistribution-7.1.1.0\r"
# MySQL connector JAR file
expect "Specify the path to the downloaded driver JAR file"
send "/usr/lib/mysql-connector-java-5.1.44.jar\r"
# Database configuration
expect "Database URL (*)"
send "jdbc:mysql://mysql:3306/jahia?useUnicode=true&characterEncoding=UTF-8&useServerPrepStmts=false\r"
Of course I get an error during the image build, because the installer checks the connection right after the database URL input:
An error occurred while establishing the connection to the database
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure
The last packet sent successfully to the server was 0 milliseconds
ago. The driver has not received any packets from the server..
Indeed I'm just building the Jahia image, so the mysql container is not yet accessible (even if running).
How do you deal with this kind of situation (when you need to access another container during the build)?
As the MySQL server will also be in a container, I don't think you should configure it at build time, as you can't assume the database will be up.
Unfortunately, I don't know how the 'expect' tool works, but ideally you should link the database to the Jahia container only at container startup. This can be done by injecting it through configuration (an environment variable or something else you can inject when you start the container).
That means the MySQL container should have the DB installed in a separate process. On our side, for example, we do this by running the SQL scripts provided in the Jahia code directly against the database.
With this solution, you ensure that you don't need your database preinstalled while building.
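As a concrete sketch of that idea (not our exact setup): the official mysql image executes any *.sql files placed under /docker-entrypoint-initdb.d on first startup, so the Jahia schema scripts could be loaded when the database container starts instead of during the Jahia image build. The jahia-sql directory name below is just an example:
# start MySQL and load the Jahia SQL scripts from ./jahia-sql on first run
docker run --name jahia-mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=jahia \
  -v "$(pwd)/jahia-sql":/docker-entrypoint-initdb.d \
  -d mysql:5.6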
Edit: indeed, Jahia does some checks on the database at build time, but you can provide an input file so Jahia doesn't actually need to perform operations on the DB. It uses an IzPack auto-install file, which allows you to replay the installation.
The DB setup part is the following:
<com.izforge.izpack.panels.UserInputPanel id="dbSettings">
    <userInput>
        <entry key="dbSettings.connection.url.mssql" value="jdbc:sqlserver://DB_SERVER;DatabaseName=DB_NAME;"/>
        <entry key="dbSettings.dbms.createTables" value="false"/>
        <entry key="dbSettings.connection.username" value="DB_USER"/>
        <entry key="dbSettings.dbms.storeFilesInDB" value="false"/>
        <entry key="dbSettings.connection.driver.mssql" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
        <entry key="dbSettings.connection.password" value="DB_PASSWORD"/>
    </userInput>
</com.izforge.izpack.panels.UserInputPanel>
This assumes you have a DB server somewhere, unfortunately. On our side we point it at a fake instance, since we ask the installer not to do the actual installation during the build.
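If you go the auto-install route, the Jahia Dockerfile would replay the recorded file instead of driving the console installer with Expect, along these lines (auto-install.xml is an assumed name for the recorded IzPack file):
# replay the recorded installation instead of the interactive console
COPY auto-install.xml /tmp/auto-install.xml
RUN java -jar /tmp/DigitalExperienceManager.jar /tmp/auto-install.xml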
Try using docker commit. You may have to run the configuration.exp script to set up Jahia by exec'ing into your container. Then use docker commit to save the changes to the file system into a new image. That image should persist the initial configuration.
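Roughly, the flow would be something like this (the container and image names are placeholders, and I'm assuming the image was built without running the installer):
# start the Jahia container linked to the already-running MySQL container
docker run -d --name jahia --link jahia-mysql:mysql jahia-base
# run the installer now that the database is reachable
docker exec -it jahia expect /tmp/configuration.exp
# snapshot the configured container as a new image
docker commit jahia jahia-configured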
Be mindful that volumes are not included in a docker commit, as they live outside Docker's union file system. It doesn't look like you're declaring any volumes in your Dockerfile, so it probably won't be a problem for you.
This answer elaborates on docker commit and database volumes, but the premise is the same for any container.
I have a chrome extension that communicates with a native messaging host to get some data.
The issue is, when I launch the Chrome browser via the shortcut or via the pinned shortcut in the taskbar, the extension is not able to connect to the host. I always get the error:
Failed to start native messaging host.
However, if I launch the chrome.exe via command prompt, everything works fine.
Things I tried with no success:
The taskbar shortcut has no extra flags. The target field has the following value: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"
I tried with the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Google\Chrome\NativeMessagingHosts\com.company.extension
I tried with the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Google\Chrome\NativeMessagingHosts\com.company.extension
I tried with the registry entry under
HKEY_CURRENT_USER\Software\Google\Chrome\NativeMessagingHosts\com.company.extension
Tried launching the chrome.exe as administrator from the file explorer.
Update:
I added the flag --enable-logging --v=1 to the shortcut to enable logging and when I launch it I get the following output in the console:
[11036:4160:0302/113902.866:ERROR:native_process_launcher_win.cc(140)] COMSPEC is not set
[11036:11856:0302/113902.882:ERROR:native_process_launcher_win.cc(140)] COMSPEC is not set
Update
Upon investigating the chrome.exe process via Process Monitor, I found that there is no COMSPEC environment variable available to it when it is spawned via explorer.
Is there anything else that I can try or something that I am missing here?
As the log shows, Chrome fails to start the external process because COMSPEC, an environment variable that normally points to cmd, is unset:
[11036:4160:0302/113902.866:ERROR:native_process_launcher_win.cc(140)] COMSPEC is not set
The behavior is different when Chrome is launched from cmd itself, since cmd sets the variable for itself (and for spawned processes).
This can be confirmed by inspecting the Chrome process with Process Explorer.
One can run rundll32 sysdm.cpl,EditEnvironmentVariables as admin (e.g. from admin command line) to open the environment variable settings.
Alternatively, the dialog can be navigated to from Control Panel > System and Security > System > Advanced system settings > Advanced > Environment Variables...
ComSpec is usually set in System variables to
C:\WINDOWS\system32\cmd.exe
Adjust as necessary for your system install. For this setting to apply, you need to log out and log back in, or better yet restart the system.
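If you prefer the command line, something like the following from an elevated command prompt should set it machine-wide (verify the path on your system first):
rem set ComSpec for all users; requires an elevated prompt
setx ComSpec "%SystemRoot%\system32\cmd.exe" /M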
I am using behat+mink. I wrote some features and am now running tests.
How can I enable Xdebug to stop on breakpoints in PHPStorm when running Behat tests?
I have not tried this with Mink yet, but this is the configuration that allows me to do step-through debugging of Behat (with Behat running on a remote server):
Configure your server with x-debug
Of note, since this is command line, you need to edit the CLI config under /etc/php5/cli/conf.d/20-xdebug.ini.
Set remote_host to the IP of the computer you're running PHPStorm on
Set autostart = 1
Disable connect_back; you will initiate debugging from the server, so there is nothing to connect back to
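Putting that together, the relevant part of /etc/php5/cli/conf.d/20-xdebug.ini would look roughly like this (Xdebug 2 setting names; the IP is a placeholder, and remote_enable also needs to be on):
; enable the CLI debugger and have it connect to the PHPStorm machine
xdebug.remote_enable = 1
xdebug.remote_host = <YOUR IP>
xdebug.remote_autostart = 1
xdebug.remote_connect_back = 0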
You can also do this without editing your ini by exporting values as env variables, just remember to do this each time you start a new shell (or add to your .bash_profile file):
export XDEBUG_CONFIG="remote_host=<YOUR IP>"
Configure PHPStorm's Debugger
It seems that by default PHPStorm doesn't understand remote CLI scripts, so we need to add a configuration that tells it to expect a CLI script to trigger Xdebug.
Open the Run Menu and select "Edit Configurations"
Click the Green "+" to to add a new configuration and select "PHP Remote Debug"
Name the Configuration (E.G. MyServer-Behat)
Under the Servers: menu, select your remote server.
If you haven't configured your remote server yet, then do this by clicking the "..." button on the right
Click the Green "+" to add a server configuration. Give it a name (E.G. MyServer) and fill in it's address under Host
Configure it's Path Mappings. This is important if the path to your source files is different on your PHPStorm computer from your server. You can see in my example that i'm relating my local checkout (~/Work/Symfony/) to my server install (/var/www/). I specifically added mappings for src, bin, web, app, and vendor by clicking in the space to the right under "Absolute path on the server" and typing in the path. I had issues just mapping the root's, so I had to add these paths to get my debugger to work.
Debug!
Once that is set up, select your configuration from the drop-down in the debugging toolbar and click the bug icon (you can also use the Run menu) to start the debugger listening. This is similar to the default telephone button (circled in yellow), but it tells PHPStorm to use your new configuration.
Now simply run behat like you normally would from your server and your debugger should connect and stop on any breakpoints you've placed.
If you're having doubts about whether it's working, try toggling "Break on First Line" in the Run menu, as this should make the debugger break immediately when you run behat (in the bin/behat file).
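For example, from the server's shell (assuming Behat is installed at bin/behat, as in a typical Composer setup), a run that triggers the debugger could look like:
# the feature file name is a placeholder
XDEBUG_CONFIG="remote_host=<YOUR IP>" bin/behat features/my_feature.feature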
I have an application on the OpenShift free plan with only one gear. I want to change it to scalable and make use of all 3 free gears.
I read this blog post from OpenShift and found that there is a way to do it: I should clone my current application to a new scalable one, which will use the 2 remaining gears, and then delete the original application. Thus, the new one will have 3 free gears.
The way the blog suggests is: rhc create-app <clone> --from-app <existing> --scaling
I get the following error: invalid option --from-app
Update
After running the command gem update rhc, I no longer get the error above, but... a new application with the given name was created with the same starter package (Python 2.7) as the existing one, yet all the files are missing. It actually creates a blank application, not a clone of the existing one.
Update 2
Here is the structure of the folder:
-.git
-.openshift
-wsgi
---static
---views
---application
---main.py
-requirements.txt
-setup.py
From what we talked about on IRC, your problem was around missing SSH configuration on your Windows machine:
Creating application xxx ... done
Waiting for your DNS name to be available ...done
Setting deployment configuration ... done
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
I've double-checked it, and it appears to work without any problems.
The only requirement is to have the latest rhc client and PuTTY or any other SSH client. I'd recommend going through this tutorial once again and double-checking everything to make sure it all works properly.
Make sure you are using the newest version of the rhc gem ("gem update rhc") so that you have access to that feature from the command line.
The --from-app option will essentially do an 'rhc snapshot save' and 'snapshot restore' (among other things), as you can see here in the source:
if from_app
  say "Setting deployment configuration ... "
  rest_app.configure({:auto_deploy => from_app.auto_deploy, :keep_deployments => from_app.keep_deployments , :deployment_branch => from_app.deployment_branch, :deployment_type => from_app.deployment_type})
  success 'done'

  snapshot_filename = temporary_snapshot_filename(from_app.name)
  save_snapshot(from_app, snapshot_filename)
  restore_snapshot(rest_app, snapshot_filename)
  File.delete(snapshot_filename) if File.exist?(snapshot_filename)

  paragraph { warn "The application '#{from_app.name}' has aliases set which were not copied. Please configure the aliases of your new application manually." } unless from_app.aliases.empty?
end
However, this will not copy anything in your $OPENSHIFT_DATA_DIR directory, so if you're storing files there, you'll need to copy them over manually.
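One way to do that manual copy, sketched under the assumption that you have the SSH URLs of both gears (rhc app show <app> prints them) and that OPENSHIFT_DATA_DIR lives at app-root/data as usual, is to scp through your local machine:
# pull the data directory from the old gear
scp -r <old-gear-ssh-url>:app-root/data ./data-backup
# push the files into the new gear's data directory
scp -r ./data-backup/. <new-gear-ssh-url>:app-root/data/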