CICS Bundle and JSON web service

I am doing a POC for a client account where I am trying to set up a request-response model of JSON web service in which CICS acts as the client. I have created two separate bundles, one for the request jsbind files and one for the response jsbind files. The problem is that only one of my bundles is active at a time (either request or response), and every time I have to discard one bundle and install the other. Is there a way I can install multiple bundles simultaneously in a CICS region? Or can the application program itself dynamically discard one bundle and install another?

You can absolutely install multiple CICS bundles simultaneously in a CICS region.
The first thing to check is the CICS region's job log for messages explaining why the second bundle failed to install (or failed to enable). These messages will likely start with DFHRL.
If you have installed each of the bundles successfully (albeit independently), then it could be something as simple as a naming clash. Make sure each bundle has a unique name.
This Redbooks publication (especially chapter 11) should be useful:
Implementing IBM CICS JSON Web Services for Mobile Applications

Also, make sure the bundle-id is unique. The bundle-id is generated from the bundle directory name, and can be found inside the META-INF/cics.xml file.
The CICS region job log will mention "The CICS resource lifecycle manager has failed to create the BUNDLE resource", but it does not give a reason why the creation failed.
There is, however, a line stating "BUNDLE resource is being created with BUNDLEID and version". You could check whether the bundle-ids are the same for both bundles.
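For reference, a minimal sketch of what a META-INF/cics.xml manifest looks like; the id value below is only an example and the exact attribute names are from memory of the CICS bundle schema (they may vary by release), but the id attribute is the bundle-id that has to differ between your request and response bundles:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- id is the bundle-id; give each bundle (request/response) a different one -->
<manifest xmlns="http://www.ibm.com/xmlns/prod/cics/bundle"
          id="com.example.request.bundle"
          bundleMajorVer="1" bundleMinorVer="0" bundleMicroVer="0"
          bundleVersion="1" bundleRelease="0">
  <!-- one <define> entry per packaged resource, e.g. the jsbind file -->
</manifest>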

Failed to pull image "image-registry.openshift-image-registry.svc:5000/..." OpenShift CRC

I am trying to install a sample application using the git option in OpenShift 4.7.2 (CodeReady Containers 1.24), and I keep getting the error below while OpenShift tries to build the image to be deployed.
Failed to pull image
"image-registry.openshift-image-registry.svc:5000/employee-ecosys/person-service:latest": rpc error:
code = Unknown
desc = Error reading manifest latest in image-registry.openshift-image-registry.svc:5000/employee-ecosys/person-service:
manifest unknown: manifest unknown
The application person-service is a simple CRUD application built using spring-boot, with an in-memory H2 database. The GitHub repo is here.
Some checks to perform:
Are the image registry pods running?
oc get pods -n openshift-image-registry
Is your specific image created?
oc get images | grep "person-service"
Do you get any images?
oc get images
"latest" is kind of a special tag. You should never manually tag an image as "latest". Openshift will consider the "latest" tag to be the newest image, regardless of what tag it has.
I am not familiar with the git deploy method, and I personally have very little experience with s2i builds. I normally use one git repo for the OpenShift/Kubernetes resources and one for the code (they can be the same repo, separated by folder structure), build the image with a pipeline or manually, push it to a registry somewhere, and then let OpenShift pull it from there.

Karaf how to exclude specified bundle from log

I would like to exclude my bundle from the root Karaf log. The JSON sent by this bundle is too large, and the log is no longer readable.
I suppose I should change osgi:* in the line:
rootLogger=INFO,out,osgi:*
Which value should I put there?
Edit: the problem is more complicated than I thought.
The JSON is injected into the logs by org.apache.cxf.cxf-rt-features-logging, which is also used by other bundles. I would like to suppress only the JSON sent and received by my bundle.
How can I do this?
If you want to exclude a specific bundle from logging, just turn off logging for the bundle within the pax logging config.
log4j.logger.mybundle = OFF
If you want to fine tune CXF message logging please check http://cxf.apache.org/docs/message-logging.html.
Some things to note:
The logger name is ..; Karaf by default cuts it down to just the type.
A lot of the details are in the MDC values.
You need to change your pax logging config to make these visible.
You can use the logger name to fine tune which services you want to log this way. For example, set the level to WARN for noisy services so they are not logged, or send some services to another file.
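As a rough illustration in the same log4j 1.x style syntax used above (the service name is a placeholder; the per-service logger naming follows the convention described on the CXF message-logging page linked above):

# keep CXF message logging in general at INFO
log4j.logger.org.apache.cxf.services = INFO
# but silence the one noisy service (placeholder name) by raising its level
log4j.logger.org.apache.cxf.services.MyJsonService = WARN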

Managing composer and deployment

So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things such as this and there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json
I would recommend that all of your packages in composer.json be locked down to the exact version number of the item in Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say nothing updates out of the packages when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and to ensure that you don't accidentally deploy untested code if one of the modules happens to get updated between your testing and your deployment.
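A small, hypothetical composer.json fragment to illustrate that kind of locking (the package "acme/site-theme", its repository URL and the commit hash are placeholders; the dev-branch#hash syntax pins a VCS package to a specific commit):

{
    "repositories": [
        { "type": "vcs", "url": "https://github.com/acme/site-theme" }
    ],
    "require": {
        "monolog/monolog": "1.17.2",
        "acme/site-theme": "dev-master#8f2c1e7d"
    }
}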
Actual deployments
Possible Method 1
My opinion is slightly controversial in that when it comes to Composer for many of my projects that don't go through a CI system, I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage, it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers is enough to handle any remote system failures, that it clogs up the VCS tree, and so on. I won't go into these now; there are arguments for and against (a lot of them opinion based). But since you mentioned it in your question, I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
Possible Method 2
By pointing your document root at a symlink on your server, you can run the build into a new directory and only switch the symlink over to that directory once you have confirmed the build completed.
This is the path of least resistance towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
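A rough sketch of that flow, with purely illustrative repository URL and paths:

# build a new release alongside the live one (placeholder paths)
cd /var/www/releases
git clone git@example.com:acme/site.git release-42
cd release-42 && composer update
# only once the build has completed successfully, switch the live symlink
ln -sfn /var/www/releases/release-42 /var/www/current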
Possible Method 3
Composer can use "artifacts" rather than a remote server, this will mean that you will basically be creating a "repository folder" of your vendor files, this is an alternative to adding the entire vendor folder into your VCS - but it also protects you against Github / Packagist outages / files being removed and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip file rather than being retrieved from a server - this folder can be stored remotely - think of it as a poor mans private packagist (another option btw).
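The composer.json side of that looks roughly like this (the url is a placeholder pointing at your local folder of package zip files):

{
    "repositories": [
        { "type": "artifact", "url": "path/to/artifacts/" }
    ]
}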
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to:
run tests on your application (If they exist)
run composer update
generate an artifact of these files (if the above items succeed)
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail), it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
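As a very rough illustration, the build stage of such a job might boil down to a script along these lines (tool paths and the artifact name are placeholders, and it assumes phpunit is installed as a dev dependency):

#!/bin/sh
set -e
composer update                    # resolve and fetch the pinned dependencies
vendor/bin/phpunit                 # run the application tests, if they exist
tar -czf build-artifact.tar.gz .   # package the tested tree as the deployable artifact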
Hope this helps :)

Clone an OpenShift application as scalable

I have an application on the OpenShift free plan with only one gear. I want to change it to scalable and make use of all 3 free gears.
I read this blog post from OpenShift and found that there is a way to do it: I should clone my current application to a new, scalable one which will use the 2 remaining gears, and then delete the original application. Thus, the new one will have all 3 free gears.
The way the blog suggests is: rhc create-app <clone> --from-app <existing> --scaling
I get the following error: invalid option --from-app
Update
After running the command gem update rhc, I no longer get the error above, but... a new application with the given name has been created with the same starting package (Python 2.7) as the existing one, and all the files are missing. It actually creates a blank application and not a clone of the existing one.
Update 2
Here is the structure of the folder:
-.git
-.openshift
-wsgi
---static
---views
---application
---main.py
-requirements.txt
-setup.py
From what we discussed on IRC, your problem was a missing SSH configuration on your Windows machine:
Creating application xxx ... done
Waiting for your DNS name to be available ...done
Setting deployment configuration ... done
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
I've double-checked it, and it appears to be working without any problem.
The only requirement is to have the latest rhc client and PuTTY or any other SSH client. I'd recommend going through this tutorial once again and double-checking everything to make sure it is all set up properly.
Make sure you are using the newest version of the rhc gem ("gem update rhc") so that you have access to that feature from the command line.
The --from-app option will essentially do an 'rhc snapshot save' and 'rhc snapshot restore' (among other things), as you can see here from the source:
if from_app
  say "Setting deployment configuration ... "
  rest_app.configure({:auto_deploy => from_app.auto_deploy, :keep_deployments => from_app.keep_deployments, :deployment_branch => from_app.deployment_branch, :deployment_type => from_app.deployment_type})
  success 'done'
  snapshot_filename = temporary_snapshot_filename(from_app.name)
  save_snapshot(from_app, snapshot_filename)
  restore_snapshot(rest_app, snapshot_filename)
  File.delete(snapshot_filename) if File.exist?(snapshot_filename)
  paragraph { warn "The application '#{from_app.name}' has aliases set which were not copied. Please configure the aliases of your new application manually." } unless from_app.aliases.empty?
end
However, this will not copy over anything in your $OPENSHIFT_DATA_DIR directory, so if you're storing files there you'll need to copy them over manually.
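One way to do that copy manually, assuming placeholder SSH strings for the two gears (rhc app show <app> prints the real SSH URL for each application, and app-root/data is where $OPENSHIFT_DATA_DIR normally points on a gear):

# download the old gear's data directory, then upload its contents to the new gear
scp -r olduser@oldapp-namespace.rhcloud.com:app-root/data ./gear-data
scp -r ./gear-data/* newuser@newapp-namespace.rhcloud.com:app-root/data/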

After Java 7u45, Java Web Start gives JARSigningException for some jars in my JNLP

I am completely new to JAR signing, but I have been struggling with this for a few days and have searched and learned a lot.
(This topic is not related to my problem: Why does Java Web Start say a signed jar file is unsigned?)
I use org.apache.commons packages in my Java Web Start project, and it worked like a charm for years. Last week, after Java 7 Update 45, our users were unable to run our application, which is launched via JNLP. The error they receive is a "JARSigningException" related to some jars that belong to Apache. When I removed the lines corresponding to these 6 jars from the JNLP, we were able to start the Web Start application, but at runtime we got some exceptions.
All jars are self-signed using the same certificate (I did not sign them myself, but since we did not have a problem until a couple of days ago, they should have been signed with the same certificate).
When I verify a problematic jar file as below:
jarsigner -verify commons-digester-1.7.jar
I receive the message below:
jar verified.
Warning:
This jar contains unsigned entries which have not been integrity-checked.
This jar contains entries whose signer certificate has expired.
This jar contains entries whose certificate chain is not validated.
Re-run with the -verbose and -certs options for more details.
If the jar file is verified, why do I still receive the JARSigningException when I try to run the JNLP file?
Any help would be appreciated.
Thanks in advance.
See http://www.oracle.com/technetwork/java/javase/7u45-relnotes-2016950.html for more information on the security changes in Java 7 Update 45. The changes in 7u51 close the door on the security problems even further: http://java.com/en/download/faq/release_changes.xml
The general rule was that all jars within your downloaded JNLP application needed to be signed with the same certificate, but like I said, I think there are tighter restrictions now on self-signed certificates.
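If the Apache jars turn out to be signed with a different (or expired) certificate, one workaround I have seen is to re-sign those jars with the same certificate used for your own jars; a hedged sketch, with placeholder keystore path, password and alias:

# see exactly which entries and certificates jarsigner is unhappy about
jarsigner -verify -verbose -certs commons-digester-1.7.jar
# re-sign the third-party jar with the same certificate as your own jars
jarsigner -keystore /path/to/codesign.jks -storepass changeit commons-digester-1.7.jar mysigningalias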
I also found this blog: https://blogs.oracle.com/java-platform-group/entry/updated_security_baseline_7u45_impacts
The security slider setting (on Windows: Control Panel -> Java, then the Security tab) will block self-signed and unsigned applications when it is set to High or Very High.