LegStar module for mainframe connectivity - ESB

We don't have CICS Transaction Gateway installed, but we do have CICS Transaction Server. I want to know whether LegStar can connect from an ESB (Mule, Camel, JBoss ESB, WSO2) to CICS Transaction Server on the z/OS mainframe using this module. Alternatively, if there are other options to connect to the mainframe, feel free to suggest them.
TIA.

I think I saw in one of your several questions on this topic that you are using CICS TS 3.2; there is a section in that documentation germane to this discussion, called Connecting CICS to the Web. Documentation for later versions and releases of CICS TS is available here.
As Joe Zitzelberger indicates, you can use raw TCP/IP sockets (we have done this since CICS TS version 1, more than a decade ago), REST/POX (we have done this since version 2.1, which we started using in 2006), and SOAP (available via a support pack in 2.3 and built into the product since 3.1).
You can also talk to CICS via MQ.
More discussion here.
Update per comment from OP: These are real-time, synchronous interactions. In our case, response time is subsecond.

You can do a direct connection to CICS/TS with SOAP over HTTP to execute your CICS transactions and to return your data.
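If it helps to see the shape of such a call, here is a minimal sketch in Java of invoking a CICS web service with SOAP over HTTP. The endpoint URL, port, and request payload are hypothetical; the real values come from your CICS URIMAP/PIPELINE definitions and the WSDL generated for the target program.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CicsSoapCall {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint exposed by a CICS web service; the actual
        // host, port, and path come from your URIMAP/PIPELINE configuration.
        URL url = new URL("http://mainframe.example.com:3080/myservice");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("SOAPAction", "");

        // Hypothetical request body; the real structure is defined by the
        // WSDL for the target CICS program.
        String envelope =
              "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soapenv:Body><MyProgramRequest><field1>value</field1></MyProgramRequest></soapenv:Body>"
            + "</soapenv:Envelope>";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```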

Is there documentation regarding exceptions thrown by the Kubernetes API server? Java would be ideal, but any language will do

We have a use case to monitor Kubernetes clusters, and I am trying to find the list of exceptions thrown by Kubernetes that reflect the status of the k8s API server (in a namespace) when trying to submit a job from the UI.
Example: if the k8s server throws a ClusterNotFound exception, that means we cannot submit any more jobs to that API server.
Is there such a comprehensive list?
I came across this in Go. Will this be it? Does Java have something like this?
The file you are referencing is part of a Kubernetes library used by many Kubernetes components to validate the fields of API requests. As all Kubernetes components are written in Go, and I couldn't find any plans to port Kubernetes to Java, a Java version of that file is unlikely.
However, there is an officially supported Kubernetes client library written in Java, so you can check for the proper modules to validate API requests and process API responses in the java-client repository or on the javadoc site.
For example, the objects used to represent successful or failed HTTP replies from the Kubernetes apiserver are V1Status and ApiException (repository link).
Please consider checking the java-client usage examples for a better understanding.
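For instance, a minimal sketch of catching ApiException with the official java-client might look like the following. It assumes a recent client release; the package layout (io.kubernetes.client.openapi) and the exact listNamespacedPod signature vary across versions.

```java
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.ApiException;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.util.Config;

public class ApiErrorProbe {
    public static void main(String[] args) throws Exception {
        // Load kubeconfig from the default location (~/.kube/config).
        ApiClient client = Config.defaultClient();
        Configuration.setDefaultApiClient(client);

        CoreV1Api api = new CoreV1Api();
        try {
            // Signature varies by client version; this matches the 11.x-13.x line.
            api.listNamespacedPod("my-namespace", null, null, null, null,
                    null, null, null, null, null, null);
        } catch (ApiException e) {
            // The HTTP status code and the serialized V1Status body carry the
            // failure reason (e.g. 401 Unauthorized, 404 NotFound).
            System.err.println("HTTP status: " + e.getCode());
            System.err.println("Response body: " + e.getResponseBody());
        }
    }
}
```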
A detailed Kubernetes RESTful API reference can be found on the official page.
For example: Deployment create request
If you are really interested in the monitoring and logging aspects of a Kubernetes cluster, please consider reading the following articles first:
Metrics For Kubernetes System Components
Kubernetes Control Plane monitoring with Datadog
How to monitor Kubernetes control plane
Logging Architecture
A Practical Guide to Kubernetes Logging

Couchbase CLI/REST to get cluster current settings

I'm trying to find a way to monitor Couchbase cluster settings such as memory, email configuration, etc.
Ideally it would be a CLI/REST command that describes the entire cluster configuration or particular components of it.
Couchbase version: 4.5.1 Community Edition
I will appreciate any advice.
In current CB versions, you can get the email info using http://hostname:8091/settings/alerts and memory info using http://hostname:8091/pools/nodes
For some reason, I cannot seem to access the CB archived documentation to confirm this. Try it out and see if these APIs are available in 4.5.x. The pools API should be available. Not sure on the alerts API.
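If you want to script the check, here is a minimal sketch in Java that probes both endpoints with HTTP basic auth; the host name and admin credentials are placeholders.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class ClusterSettingsProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials; use your cluster's admin user.
        String auth = Base64.getEncoder()
                .encodeToString("Administrator:password".getBytes("UTF-8"));
        for (String path : new String[] {"/settings/alerts", "/pools/nodes"}) {
            URL url = new URL("http://hostname:8091" + path);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Authorization", "Basic " + auth);
            System.out.println(path + " -> HTTP " + conn.getResponseCode());
            // Dump the JSON body so you can see what each endpoint returns.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }
}
```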

Fiware: Can we use cygnus on Raspberry Pi

Can we install Cygnus on a Raspberry Pi?
I am thinking of using it with Cepheus to add persistence at the gateway level.
Thanks in advance for your help!
I've never tried, but it is a Java application, so with a JVM and the proper libraries it should work. Most probably memory will be a problem, so you probably could not handle a high throughput of notifications. In any case, as said, it is a matter of trying.
The key point for using it with Cepheus is whether Cepheus sends notifications in the same format as Orion Context Broker. If not, Cygnus will not understand the notifications.
Another important thing is the storage intended to be used for persistence. I don't know if any of the storages supported by Cygnus, for instance MySQL or MongoDB, can run within a Raspberry Pi! In that case, the best option may be to install the storage on a remote machine.

Issue when trying to connect to the cluster after updating the version of Java SDK

We are experiencing the issue when trying to connect to the cluster after updating the version of Java SDK.
The setup of the system is as follows:
We have a web application that is using the Java SDK, and a Couchbase cluster. In between we have a VIP (Virtual IP Address). We realise that isn't ideal, but we're not able to change it immediately since the VIP was mandated by Tech Ops. The VIP is basically only there to reroute the initial request on application startup. That way we can make modifications on the cluster and ensure that when the application starts it can find the cluster regardless of the actual nodes in the cluster and their IPs.
Prior to the issue we used Java SDK version 1.4.4. Our application would start and the Java SDK would initiate a request on port 8091 to the VIP. Please note that port 8091 is the only port open on the VIP. The VIP would reroute the request to one of the cluster nodes currently in use, and the cluster would respond to the Java SDK. At that point the Java SDK would discover all the nodes in the cluster and the application would run fine. During uptime, if we added or removed a node from the cluster, the Java SDK would update automatically and everything would run without issue.
In the last sprint we updated the Java SDK to version 2.1.3. Our application would start and the Java SDK would initiate a request on port 11210 to the VIP. Since this port is not open, the request would fail and the Java SDK would throw an exception:
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:93)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:108)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:99)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:89)
No further request would be made on any port.
It appears the order in which ports are used has changed between versions. Could somebody please confirm, or dispute, that the order in which ports are used for cluster discovery has changed between versions? Also, could somebody please provide some advice on how we could resolve the issue? We are trying to understand the client's behavior: if we opened all those ports on the VIP, would the client then still function correctly and at full performance?
The issue is happening in our production environment, which we cannot use for testing out potential solutions since that would interfere with our products.
In v2.x of the Java SDK, it defaults to 11210 to get the cluster map to bootstrap the application. This is a huge improvement actually as now the map comes from the managed cache and not the cluster manager (8091). The SDK should use 8091 as a fall back if it cannot get the map on 11210 though. Regardless, you really want to get that map from 11210, trust me. It cleans up a lot of problems.
To resolve this long term and follow Couchbase best practices, upgrade to the Java 2.2.x SDK, get rid of the VIP entirely and go with a DNS SRV record instead. That gives you one DNS entry for the SDK connection object, and you just manage the node list in DNS. It works great. I say SDK 2.2 as the DNS SRV record solution is fully supported there; in 2.1 it is experimental. VIPs are specifically recommended against by Couchbase these days. In older versions of the SDKs it was fine to do this, and it helped with limiting the number of connections from the app to the DB nodes, but that is no longer necessary and can actually be a bad thing.
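As a rough sketch of that setup with the 2.2.x SDK (the host name below is a placeholder; the SDK looks up the SRV record _couchbase._tcp.<hostname>):

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class DnsSrvBootstrap {
    public static void main(String[] args) {
        // Enable DNS SRV lookups; the SDK resolves
        // _couchbase._tcp.couchbase.example.com to the current node list.
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .dnsSrvEnabled(true)
                .build();

        // One stable DNS name instead of a VIP or a hardcoded node list.
        CouchbaseCluster cluster = CouchbaseCluster.create(env, "couchbase.example.com");
        Bucket bucket = cluster.openBucket("default");
    }
}
```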
In addition to Kirk's long-term answer (which I also advise you to follow), a shorter-term solution may be to deactivate the 11210 bootstrapping (carrier bootstrap) through the CouchbaseEnvironment by calling bootstrapCarrierEnabled(false) on the builder.
I don't guarantee that it'll work with a VIP even after that, but it may be worth a try if you're in a hurry.
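A minimal sketch of that environment setting, assuming the 2.x SDK (the VIP host and bucket name are placeholders):

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class HttpOnlyBootstrap {
    public static void main(String[] args) {
        // Disable carrier (port 11210) bootstrap so the SDK falls back to
        // HTTP bootstrap on port 8091, the only port open on the VIP.
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .bootstrapCarrierEnabled(false)
                .build();

        CouchbaseCluster cluster = CouchbaseCluster.create(env, "vip-hostname");
        Bucket bucket = cluster.openBucket("default");
    }
}
```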

How to connect to my MQTT Broker in Openshift

Following these two tutorials (https://www.anavi.org/article/182/ and http://wei-meilin.blogspot.tw/2014/05/red-hat-openshift-xpaas-simple-mqtt.html) I have installed a MQTT Broker using JBoss Fuse.
Although my mqtt-container disappears after a while (I don't know why), I can set up port forwarding and test the broker.
But I would like to know how to connect directly to the broker. Do you know how to do it?
I have tried this tutorial (http://training.runcloudrun.com/advanced/16-Network-and-Protocols.md.html - AMQ Example) but I don't have access to "/var/lib/openshift/.httpd.d/sniproxy.cfg"
I am the author of the first tutorial that you pointed out. If you want to use MQTT without local port forwarding, please have a look at the remark in the article on my blog and at the AMQ cartridge that demonstrates the SNI features:
The port forwarding is not convenient for real-life cases, especially if the MQTT clients are running on embedded devices such as microcontrollers, and it is recommended to use an SNI proxy as explained here: http://training.runcloudrun.com/advanced/16-Network-and-Protocols.md.html
I was using OpenShift Online, and that feature is only available in the Enterprise edition.
Why doesn't OpenShift have this feature (completely) in Online mode?
One way to work around this is to use the MQTT-over-WebSocket feature with a DIY cartridge. See the SO question "How can I access socket through Openshift" for some pointers to further details about how to run WebSocket on OpenShift.
Mosquitto seems to have implemented the WebSocket feature, though I have not verified this by testing it out.
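For what it's worth, here is a minimal sketch of an MQTT-over-WebSocket client using the Eclipse Paho Java library (my choice for illustration, not something from the tutorials above). The broker URL is a placeholder, and Paho needs version 1.1.0+ for ws:// URIs.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class WsMqttPublish {
    public static void main(String[] args) throws Exception {
        // ws:// instead of tcp://, so the connection rides over plain HTTP
        // on port 80, which OpenShift routes without exposing extra ports.
        MqttClient client = new MqttClient(
                "ws://myapp-mydomain.example.com:80",
                MqttClient.generateClientId());
        client.connect(new MqttConnectOptions());
        client.publish("test/topic", new MqttMessage("hello".getBytes("UTF-8")));
        client.disconnect();
    }
}
```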