Is it possible to convert the standalone.xml (or other configured Wildfly/JBoss profile XML file) to a series of commands or script that can be run by jboss-cli.sh? I have a Wildfly 11 instance that I've made config changes to. I'd like to be able to "templatize" it and have the configuration duplicated using shell scripts during my server deployment. Is there a way to export that config as jboss-cli.sh commands?
I haven't tried it on WildFly 11, but previously on WildFly 9 and 10 I've used https://github.com/tfonteyn/profilecloner to generate jboss-cli scripts that create a profile from scratch. The result still required manual intervention, because the generated CLI script sometimes broke the order of the added elements.
Also, due to bugs in WildFly 10, adding some subsystems from scratch in jboss-cli was not possible - the root element refused to be added without a subelement, and vice versa (unfortunately I've lost the ticket number where the issue was tracked).
Since we are using domain mode in my environment, we started copying a pre-configured profile with /profile=template-name:clone(to-profile=new-profile), but that's not applicable in the standalone case.
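For reference, that clone operation can be run non-interactively from jboss-cli.sh in domain mode - a minimal sketch, assuming the default management port and placeholder profile names:

$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=localhost:9990 \
    --command="/profile=template-name:clone(to-profile=new-profile)"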
I've been trying to follow the Setting Up Stackdriver Debugger for Java applications on Google Compute Engine guide, but am running into issues with Stackdriver Debug.
I'm building my .war file from a separate build server, then deploying it to my GCE server. I added the agent to the start command via /etc/defaults, and my app appears in the https://console.cloud.google.com/debug control panel. The version I set in the run command matches the revision that shows up in the source-context(s).json files.
However when I click open the app, I see the message that
No source version information was provided by the deployed application
I connected the app's git repo as a mirrored cloud repository, and can browse the source files in the sidebar of the Stackdriver Debug page. But if I browse to a file and add a breakpoint, I get the error "File was not found in the executable."
I have run the gcloud preview app gen-repo-info-file command, which created two basic json files storing my git repo and revision. Is it supposed to do anything else?
I have tried running Jetty in both normal and extracted modes. If I have Jetty first extract the war file, I can see the source-context.json files in the WEB-INF/classes directory.
What am I missing?
https://github.com/GoogleCloudPlatform/cloud-debug-java#extra-classpath mentions that you can extend the agent's class path so it includes your WEB-INF/classes directory:
-agentpath:/opt/cdbg/cdbg_java_agent.so=--cdbg_extra_class_path=/opt/tomcat/webapps/myapp/WEB-INF/classes
For multiple class paths:
-agentpath:/opt/cdbg/cdbg_java_agent.so=--cdbg_extra_class_path=/opt/tomcat/webapps/myapp/WEB-INF/classes:/another/path/with/classes
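If you start the JVM from an /etc/default file (as mentioned in the question), one way to wire this in is to append the flag to whatever JVM options variable your init script reads - a sketch, assuming a JAVA_OPTIONS-style variable and a placeholder webapp path:

# /etc/default/jetty (sketch - the variable name depends on your init script)
JAVA_OPTIONS="$JAVA_OPTIONS -agentpath:/opt/cdbg/cdbg_java_agent.so=--cdbg_extra_class_path=/path/to/myapp/WEB-INF/classes"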
There are a couple of things going on here.
First, it sounds like you are doing the correct thing with gen-repo-info-file. The debugger agent should pick up the json files from the WEB-INF/classes directory.
The debugger uses fuzzy matching to find source files, so as long as the name of the .java file matches a file in your executable, you should not get that error.
The most likely scenario given the information in your question is that you are attaching the debugger to a launcher process, rather than your actual application. Without further details, I can't absolutely confirm that, though.
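One quick way to check that (a sketch, not part of the original answer): confirm which java process actually carries the -agentpath flag, for example

ps -ef | grep -- '-agentpath:/opt/cdbg/cdbg_java_agent.so'

If the flag only shows up on a wrapper or launcher process, and not on the JVM that actually runs your webapp, the agent will never see your classes or the source-context files.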
If you send us more details at cdbg-feedback@google.com, we can look more closely at your case to see if we can understand exactly what's happening, and potentially improve our documentation, since it sounds like you followed the docs pretty closely.
I have been trying to generate code coverage for my web application using Cobertura and Junit but am running into problems. My webapp is a Java web application deployed on WebSphere Liberty Profile. I have followed the steps mentioned here: https://github.com/cobertura/cobertura/wiki/FAQ#using-cobertura-with-a-web-application
My steps are as follows:
Instrument classes using cobertura-instrument ant task.
Put cobertura.jar in the lib folder of my webapp (so that it is on the classpath)
Start Liberty
Run Junits (JUnit runs in a separate JVM other than the Liberty JVM)
The problem is that the cobertura.ser file is not generated when I stop Liberty. I have tried the "hack" mentioned here:
https://github.com/cobertura/cobertura/wiki/FAQ#using-cobertura-with-a-web-application
It seemed to work (I actually got some coverage info), but I was seeing that the cobertura.ser file was repeatedly being initialized to zero size and then growing back to some size, so I am a little hesitant to use this. Moreover, it requires a change in the code itself and depends on the logout code being called, which is not ideal for automation.
But I am more interested in a setting for WebSphere Liberty such as the one described for JBoss:
-Djboss.shutdown.forceHalt=false
In particular, I'm looking for a JVM setting that would let Cobertura's shutdown hook run when the Liberty profile is stopped. Is there such a setting for WebSphere Liberty?
The Liberty profile doesn't ever call Runtime.halt, so all shutdown hooks should be called appropriately. I thought I'd take a look and try to reproduce it, and I think I managed to get it working (I say "I think" because none of the command-line scripts worked, so I may still have done something wrong). What I did:
Wrote a simple servlet war
Downloaded Cobertura and put cobertura-2.1.1.jar and all the jars from the download's lib folder into the WEB-INF/lib of my war
Ran the java InstrumentMain class to instrument the classes in WEB-INF/classes (choosing to overwrite them)
Started the server
Accessed the application
Shutdown the server
At the end I looked in WEB-INF/classes and there was a cobertura.ser file that was 1480 bytes (i.e. non-zero). When I ran the report tool on it, it said I had no coverage, so I deleted the file and went back to reproduce. This time the cobertura.ser file in WEB-INF/classes wasn't regenerated, so I looked in the server working directory (in case it was being written there instead) and it was; when I generated the report on that file I got coverage.
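For reference, the instrumentation step was run roughly like this - a sketch, assuming Cobertura 2.1.1 and that its jars have already been copied into WEB-INF/lib as described above:

java -cp "WEB-INF/lib/*" net.sourceforge.cobertura.instrument.InstrumentMain \
    --datafile cobertura.ser \
    WEB-INF/classes
# note: a relative --datafile path is resolved against the JVM's working directory
# at run time, which is one reason the .ser file can end up in the server's
# working directory rather than next to the classes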
So some possible things to look for:
Is this the cobertura.ser you are looking for? Look in the usr/servers/ folder to see if there is one there.
Did the instrumented classes end up in the app prior to, or instead of, the non-instrumented ones?
Were the Cobertura dependencies available?
So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things such as this and there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json
I would recommend that all of the packages in your composer.json be locked down to the exact version number of the item in Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say nothing updates out of the packages when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and that you don't accidentally deploy untested code if one of the modules happens to get updated between your testing and your deployment.
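As a sketch of what that locking can look like in composer.json (the package names and the commit-hash placeholder are illustrative, not from the question):

"require": {
    "example/stable-package": "1.2.3",
    "example/github-package": "dev-master#<commit-hash>"
}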
Actual deployments
Possible Method 1
My opinion is slightly controversial in that when it comes to Composer for many of my projects that don't go through a CI system, I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage; it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers will be enough to handle any remote system failures, that it clogs up the VCS tree, and so on - I won't go into these now; there are arguments for and against (a lot of them opinion-based). But as you mentioned it in your question, I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
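A minimal sketch of that workflow (assuming vendor/ is currently listed in .gitignore):

# remove the vendor/ line from .gitignore (by hand or with your editor), then:
git add .gitignore vendor
git commit -m "Commit vendor directory so the branch is deployable as-is"
# a deployment on the server is then simply:
git pull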
Possible Method 2
By pointing your document root at a symlink on your server, you can let the build complete in a separate directory and only switch the symlink over to the new directory once you have confirmed the build completed.
This is the path of least resistance towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the methods above and below).
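A rough sketch of that switch (directory names are placeholders):

# build into a fresh, timestamped release directory
RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
git clone /path/to/your/repo "$RELEASE" && cd "$RELEASE"
composer update
# only switch the document-root symlink once everything above succeeded
ln -sfn "$RELEASE" /var/www/current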
Possible Method 3
Composer can use "artifacts" rather than a remote server. This means you will basically be creating a "repository folder" of zipped vendor packages, as an alternative to adding the entire vendor folder into your VCS - it also protects you against GitHub/Packagist outages, files being removed, and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip files rather than being fetched from a remote server - and this folder can itself be stored remotely. Think of it as a poor man's private Packagist (which is another option, by the way).
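A sketch of what that looks like in composer.json (the path is a placeholder; the zipped packages are dropped into that folder):

"repositories": [
    {
        "type": "artifact",
        "url": "path/to/artifact/files/"
    }
]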
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to:
run tests on your application (If they exist)
run composer update
generate an artifact of these files (if the above items succeed)
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail). It can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
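As a very rough sketch of what such a Jenkins "Execute shell" build step might run (the paths, server name, and the assumption that phpunit is installed as a dev dependency are placeholders, not from the answer):

composer update
vendor/bin/phpunit                                          # fail the build if the tests fail
tar -czf build-${BUILD_NUMBER}.tar.gz --exclude='.git' .    # package the tested code as an artifact
scp build-${BUILD_NUMBER}.tar.gz deploy@your-server:/var/www/artifacts/    # optional push to the server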
Hope this helps :)
We are trying to configure a deployment of ASP.NET application using Octopus deploy.
All is working fine, but sometimes the step fails while trying to overwrite files, saying that the file is already locked by some other process.
We already stop IIS before the deployment starts, so not sure what we can try here.
Sometimes the error is in the application's custom log folder (txt files), sometimes it's in the bin folder for some DLL, etc.
Exact error is:
Unable to copy the package to the specified directory 'D:\Apps\XYZ_Stage'. One or more files in the directory may be locked by another process. You could use a PreDeploy.ps1 script to stop any processes that may be locking the file. Error details follow.
Access to the path 'D:\Apps\XYZ_Stage\bin\XYZ.Business.dll' is denied.
System.UnauthorizedAccessException: Access to the path 'D:\Apps\XYZ_Stage\bin\ACA.Business.dll' is denied.
Any suggestions?
If you're using Octopus 2.0 or higher, you can leverage the "IIS web site and application pool" deployment option which causes Octopus Deploy to handle all the complexities of deploying to IIS without you performing manual steps.
Here's some information: http://docs.octopusdeploy.com/display/OD/IIS+Websites+and+Application+Pools
I have an application on the OpenShift free plan with only one gear. I want to change it to scalable and make use of all 3 free gears.
I read this blog post from OpenShift and found that there is a way to do it: I should clone my current application to a new, scalable one which can use the 2 remaining gears, and then delete the original application. Thus, the new one will have all 3 free gears available.
The way that blog suggests is: rhc create-app <clone> --from-app <existing> --scaling
I get the following error: invalid option --from-app
Update
After running the command gem update rhc, I no longer get the error above, but... a new application with the given name is created with the same starting package (Python 2.7) as the existing one, and all the files are missing. It actually creates a blank application and not a clone of the existing one.
Update 2
Here is the structure of the folder:
-.git
-.openshift
-wsgi
---static
---views
---application
---main.py
-requirements.txt
-setup.py
From what we've talked about on IRC, your problem was a missing SSH configuration on your Windows machine:
Creating application xxx ... done
Waiting for your DNS name to be available ...done
Setting deployment configuration ... done
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
I've double-checked it, and it appears to be working without any problem. The only requirement is to have the latest rhc client and PuTTY or any other SSH client. I'd recommend going through this tutorial once again and double-checking everything to make sure it is all working properly.
Make sure you are using the newest version of the rhc gem ("gem update rhc") so that you have access to that feature from the command line.
The --from-app option will essentially do an 'rhc snapshot save' and 'snapshot restore' (among other things), as you can see here in the source:
if from_app
  say "Setting deployment configuration ... "
  rest_app.configure({:auto_deploy => from_app.auto_deploy, :keep_deployments => from_app.keep_deployments , :deployment_branch => from_app.deployment_branch, :deployment_type => from_app.deployment_type})
  success 'done'
  snapshot_filename = temporary_snapshot_filename(from_app.name)
  save_snapshot(from_app, snapshot_filename)
  restore_snapshot(rest_app, snapshot_filename)
  File.delete(snapshot_filename) if File.exist?(snapshot_filename)
  paragraph { warn "The application '#{from_app.name}' has aliases set which were not copied. Please configure the aliases of your new application manually." } unless from_app.aliases.empty?
end
However, this will not copy over anything in your $OPENSHIFT_DATA_DIR directory, so if you're storing files there you'll need to copy them over manually.
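One way to copy them by hand (a sketch, not from the answer; the gear SSH URLs are placeholders, app-root/data is where $OPENSHIFT_DATA_DIR lives on a gear, and this assumes rsync is available on both gears):

# pull the data directory down from the old gear, then push it to the new one
rsync -avz OLD_GEAR_SSH_URL:app-root/data/ ./data-backup/
rsync -avz ./data-backup/ NEW_GEAR_SSH_URL:app-root/data/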