IBM MQ doesn't run as mqm on OpenShift 4

Hi guys.
I have an IBM MQ image deployed on OpenShift 4 and, for some reason, the processes don't run as the mqm user but as the one randomly generated by OpenShift itself.
As a result, I have a Java application that tries to connect to the queues and fails, because authentication fails since it uses mqm as the user.
The exact same image running on OpenShift 3 behaves as expected. For more details:
Custom image:
FROM ibmcom/mq
ENV HOME /root
COPY config.mqsc /etc/mqm/
and, in the config.mqsc:
DEFINE CHANNEL(PASSWORD.SVRCONN) CHLTYPE(SVRCONN)
SET CHLAUTH(PASSWORD.SVRCONN) TYPE(BLOCKUSER) USERLIST('nobody') DESCR('Allow privileged users on this channel')
SET CHLAUTH('*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) DESCR('BackStop rule')
SET CHLAUTH(PASSWORD.SVRCONN) TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) CHCKCLNT(REQUIRED)
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) ADOPTCTX(YES)
REFRESH SECURITY TYPE(CONNAUTH)
DEFINE QLOCAL(MYQUEUE.IN ) DEFPSIST(YES) MAXDEPTH(500000)
DEFINE QLOCAL(MYQUEUE.OUT ) DEFPSIST(YES) MAXDEPTH(500000)
DEFINE QLOCAL(CS.ERROR) DEFPSIST(YES) MAXDEPTH(500000)
ALTER QMGR CHLAUTH(DISABLED) CONNAUTH(' ')
ALTER CHANNEL('SYSTEM.DEF.SVRCONN') CHLTYPE(SVRCONN) MCAUSER('mqm')
REFRESH SECURITY TYPE(CONNAUTH)
The process running on OpenShift 4 looks like:
1000790+ 232 0.0 0.1 2308688 45776 ? Ssl 09:39 0:00 /opt/mqm/bin/amqzxma0 -m QM1 -x -u 1000790000
but on OpenShift 3 it looks like:
1000100+ 152 0.0 0.0 2324200 33812 ? Ssl May03 0:06 /opt/mqm/bin/amqzxma0 -m QM1 -x -u mqm
Other differences are the capabilities and the security attributes that the MQ container has on startup.
On Openshift 3:
Capabilities (bounding set): chown,dac_override,fowner,fsetid,setpcap,net_bind_service,net_raw,sys_chroot,audit_write,setfcap
Process security attributes: system_u:system_r:container_t:s0:c0,c15
On Openshift 4:
Capabilities (bounding set): chown,dac_override,fowner,fsetid,setpcap,net_bind_service,net_raw,sys_chroot
Process security attributes: system_u:system_r:container_t:s0:c17,c28
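(For reference, a hedged sketch of how to read these values from inside the running container; the pod name is a placeholder, and capsh, which comes from libcap, may not be installed in the image:)
# Run against the MQ pod (pod name is a placeholder):
oc rsh my-mq-pod grep CapBnd /proc/1/status      # bounding set of the queue manager process, as a hex mask
oc rsh my-mq-pod cat /proc/1/attr/current        # SELinux process security attributes
# On a Linux host with libcap installed, decode the hex mask into capability names:
# capsh --decode=<CapBnd-hex-value>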
Stacktrace produced by the application:
Caused by: org.springframework.jms.JmsSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'QM1' with connection mode 'Client' and host name 'my-mq(1414)'.; nested exception is com.ibm.msg.client.jms.DetailedJMSSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'QM1' with connection mode 'Client' and host name 'my-mq(1414)'.
Please check if the supplied username and password are correct on the QueueManager to which you are connecting.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2035' ('MQRC_NOT_AUTHORIZED').
at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:286)
at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:185)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:507)
at org.springframework.jms.core.JmsTemplate.browseSelected(JmsTemplate.java:1029)
at org.springframework.jms.core.JmsTemplate.browse(JmsTemplate.java:991)
... 78 more
Caused by: com.ibm.msg.client.jms.DetailedJMSSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'QM1' with connection mode 'Client' and host name 'my-mq(1414)'.
Please check if the supplied username and password are correct on the QueueManager to which you are connecting.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:531)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:424)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createV7ProviderConnection(WMQConnectionFactory.java:8475)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:7815)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl._createConnection(JmsConnectionFactoryImpl.java:303)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createConnection(JmsConnectionFactoryImpl.java:236)
at com.ibm.mq.jms.MQConnectionFactory.createCommonConnection(MQConnectionFactory.java:6016)
at com.ibm.mq.jms.MQQueueConnectionFactory.createQueueConnection(MQQueueConnectionFactory.java:111)
at com.ibm.mq.jms.MQQueueConnectionFactory.createConnection(MQQueueConnectionFactory.java:187)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:196)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:494)
... 80 more
Caused by: com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2035' ('MQRC_NOT_AUTHORIZED').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:203)
... 90 more
Any idea on what the issue could be?

To ensure compliance with the security constraints required in a multi-tenant containerized environment, the IBM MQ certified containers do not support the use of IDs defined in the operating system libraries inside the container. There is no mqm user ID or group defined in the container.
For more details, read User authentication and authorization for IBM MQ in containers.
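As a quick check (a hedged sketch; the project and pod names below are placeholders), you can confirm that OpenShift 4's restricted SCC assigns an arbitrary UID from the project's annotated range, which is what appears in place of mqm in the process listing above:
# UID range the restricted SCC picks from for this project:
oc get namespace my-mq-project -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}{"\n"}'
# UID the queue manager process is actually running under inside the pod:
oc rsh my-mq-pod id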

Related

GCP deployment fails on "Updating service"

I have an ASP.NET Core application hosted on GCP App Engine. When I try to deploy the application, it fails on the last step:
Updating service [name] (this may take several minutes)... ...failed
ERROR: (gcloud.app.deploy) Error Response: [9] An internal error occurred while processing task /app-engine-flex/flex_await_healthy/flex_await_healthy>blablabla.wm.1
The exception stack trace shows that the service running in the background couldn't find a MySQL table (a table that obviously exists).
my app.yaml file:
service: XXX
runtime: custom
env: flex
automatic_scaling:
  max_concurrent_requests: 80
  min_num_instances: 1
  max_num_instances: 1
resources:
  cpu: XXX
  memory_gb: XXX
beta_settings:
  cloud_sql_instances: "XXX:XXXX:XXXX=tcp:3306"
It looks like the application is deployed properly despite the error. This is the only error, and the background service doesn't throw any exceptions at a later point. In fact, it works properly and can connect to the database.
My guess was that maybe GCP is checking health while the application is not yet connected to the database. So I tried to add liveness_check and readiness_check to app.yaml and configured a dedicated /healthcheck endpoint in my application, but it didn't make any difference.
Any ideas how to fix it and what the cause might be?
Deploying the app with a new version fixed the issue.
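For reference, a hedged sketch of that redeployment (the project ID and version label are placeholders):
# Deploy the same code as a new version and route traffic to it:
gcloud app deploy app.yaml --project my-gcp-project --version v2 --promote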

Issue in making Remote powershell connection from windows 7

I am trying to make a persistent remote PowerShell connection from Windows 7 (32-bit) to Exchange Server 2013, so that I can run PowerShell commands from my Windows 7 machine as if I were running them on the server.
I am following the steps from this article: https://technet.microsoft.com/en-us/library/dd335083(v=exchg.150).aspx
I have already installed:
.NET Framework 4.5,
Windows Management Framework 4.0,
and Windows already has SP1 installed.
Now, the issue is with running this command:
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://ServerName.domain.com/PowerShell/ -Authentication Kerberos -Credential $UserCredential
Every time, it results in the following error:
New-PSSession : [ex13r.corp.local] Connecting to remote server ex13r.corp.local failed with the following error
message : WinRM cannot process the request. The following error with errorcode 0x80090311 occurred while using
Kerberos authentication: There are currently no logon servers available to service the logon request.
Possible causes are:
-The user name or password specified are invalid.
-Kerberos is used when no authentication method and no user name are specified.
-Kerberos accepts domain user names, but not local user names.
-The Service Principal Name (SPN) for the remote computer name and port does not exist.
-The client and remote computers are in different domains and there is no trust between the two domains.
After checking for the above issues, try the following:
-Check the Event Viewer for events related to authentication.
-Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or
use HTTPS transport.
Note that computers in the TrustedHosts list might not be authenticated.
-For more information about WinRM configuration, run the following command: winrm help config. For more
information, see the about_Remote_Troubleshooting Help topic.
At D:\path.ps1:1 char:10
+ $session=New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotingTransportException
+ FullyQualifiedErrorId : AuthenticationFailed,PSSessionOpenFailed
NOTE:
The client machine (Windows 7) is not connected to any domain.
I added the server to the list of TrustedHosts.
Set-ExecutionPolicy is set to RemoteSigned.
The WinRM service is also running.
Windows Firewall is turned off on both systems.
My PowerShell configuration after the framework 4.0 installation is as follows:
Name Value
---- -----
PSVersion 4.0
WSManStackVersion 3.0
SerializationVersion 1.1.0.1
CLRVersion 4.0.30319.18408
BuildVersion 6.3.9600.16406
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0}
PSRemotingProtocolVersion 2.2
My efforts are going nowhere; the error remains the same. Any help would be greatly appreciated.
Thanks!

go-ethereum - geth - puppeth - ethstat remote server : docker: command not found

I'm trying to set up a private Ethereum test network using Puppeth (as Péter Szilágyi demoed at Ethereum Devcon3 in 2017). I'm running it on a MacBook Pro (macOS Sierra).
When I try to set up the ethstats network component, I get a "docker configured incorrectly: bash: docker: command not found" error. I have Docker running and I can use it fine in the terminal, e.g. docker ps.
Here are the steps I took:
What would you like to do? (default = stats)
1. Show network stats
2. Manage existing genesis
3. Track new remote server
4. Deploy network components
> 4
What would you like to deploy? (recommended order)
1. Ethstats - Network monitoring tool
2. Bootnode - Entry point of the network
3. Sealer - Full node minting new blocks
4. Wallet - Browser wallet for quick sends (todo)
5. Faucet - Crypto faucet to give away funds
6. Dashboard - Website listing above web-services
> 1
Which server do you want to interact with?
1. Connect another server
> 1
Please enter remote server's address:
> localhost
DEBUG[11-15|22:46:49] Attempting to establish SSH connection server=localhost
WARN [11-15|22:46:49] Bad SSH key, falling back to passwords path=/Users/xxx/.ssh/id_rsa err="ssh: cannot decode encrypted private keys"
The authenticity of host 'localhost:22 ([::1]:22)' can't be established.
SSH key fingerprint is xxx [MD5]
Are you sure you want to continue connecting (yes/no)? yes
What's the login password for xxx at localhost:22? (won't be echoed)
>
DEBUG[11-15|22:47:11] Verifying if docker is available server=localhost
ERROR[11-15|22:47:11] Server not ready for puppeth err="docker configured incorrectly: bash: docker: command not found\n"
Here are my questions:
Is there any documentation / tutorial describing how to set up this remote server properly, or on Puppeth in general?
Can I not use localhost as the "remote server address"?
Any ideas on why the docker command is not found (it is installed and running, and I can use it fine in the terminal)?
Here is what I did.
For Docker, you have to use the docker-compose binary. You can find it here.
Furthermore, you have to make sure that an SSH server is running on your localhost and that keys have been generated.
I didn't find any documentation for Puppeth whatsoever.
I think I found the root cause of this problem. The SSH daemon is compiled with a default path. If you ssh to a machine with a specific command (other than a shell), you get that default path. This does not include /usr/local/bin, for example, which is where docker lives in my case.
I found the solution here: https://serverfault.com/a/585075:
edit /etc/ssh/sshd_config and make sure it contains PermitUserEnvironment yes (you need to edit this with sudo)
create a file ~/.ssh/environment with the path that you want, in my case:
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
When you now run ssh localhost env you should see a PATH that matches whatever you put in ~/.ssh/environment.
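For reference, the steps above can be scripted roughly as follows (a hedged sketch; paths assume a standard OpenSSH setup, and the sshd restart command varies by OS):
# 1. Allow user-supplied environment files (requires root):
echo 'PermitUserEnvironment yes' | sudo tee -a /etc/ssh/sshd_config
# 2. Give non-interactive SSH sessions a PATH that includes /usr/local/bin:
echo 'PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin' > ~/.ssh/environment
# 3. Restart sshd so the change takes effect
#    (e.g. sudo systemctl restart sshd on Linux; on macOS, toggle Remote Login in System Preferences).
# 4. Verify that a non-interactive session now sees the extended PATH:
ssh localhost env | grep ^PATH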

OpenShift 3, 503 Error (No server is available to handle this request)

I have created a web application using JSP/Tiles/Struts/MySQL/Tomcat. I created a new project on the OpenShift 3 console (OpenShift Online) https://console.preview.openshift.com/console/ and then added Tomcat/MySQL. I was sometimes getting a 503 error, and at other times the same page was working as expected. The 503 error came randomly for any page in my project. When I get a 503 error, I refresh a number of times, it goes away, and my page is displayed correctly.
Error that I see is:
"503 Service Unavailable
No server is available to handle this request. "
I did some research.
What I understand from this OpenShift 2 link:
https://blog.openshift.com/how-to-host-your-java-ee-application-with-auto-scaling/
is that to correct the 503 error:
SSH into your application gear using rhc ssh --app <app_name>
Change directory to haproxy/conf
Change the following in haproxy.cfg: option httpchk GET / to option httpchk GET /api/v1/ping
Restart the HAProxy cartridge from your local machine using RHC: rhc cartridge-restart --cartridge haproxy
I don't know if this is also applicable to OpenShift 3. In OpenShift 3, where are haproxy.log, haproxy.cfg, and haproxy/conf, or is it slightly different in OpenShift 3? (But thanks to Warren's comments: yes, he saw the 503 error in OpenShift related to HAProxy.)
Now, one week after posting this question:
I am getting a Quota Reached error. I am able to build my project, but all deployments are failing. I wonder if the 503 error that I was getting earlier (either completely or partially) was related to the quota being reached. How should I proceed now?
curl -i localhost:8080/GEA
HTTP/1.1 302 Found
Server: Apache-Coyote/1.1
Location: http://localhost:8080/GEA/
Transfer-Encoding: chunked
Date: Tue, 11 Apr 2017 18:03:25 GMT
Tomcat logs do not show any application error.
Will readiness and liveness probes help me? I have not set them yet, nor do I know how to set them (a minimal example is sketched below).
Will scaling help me? (I don't know how to set that either.)
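(For context, a minimal hedged sketch of setting probes with the oc client, assuming a deployment config named myapp serving HTTP on port 8080:)
# Add readiness and liveness probes to the deployment config (name, port and delays are placeholders):
oc set probe dc/myapp --readiness --get-url=http://:8080/ --initial-delay-seconds=15
oc set probe dc/myapp --liveness --get-url=http://:8080/ --initial-delay-seconds=30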
Do I have to set memory and the other resources to the maximum allowed to ensure the project runs smoothly?
For me, I had a similar situation of getting 503s sometimes and sometimes getting my actual page. The reason is that you have HAProxy on the front end handling the requests. Depending on your setup you may even have a few HAProxy pods, and your request could be funneled to any one of them. So, as in my case, one pod was working and the other was not.
So basically
oc get pods -n default
NAME READY STATUS RESTARTS AGE
docker-registry-7-i02rh 1/1 Running 0 75d
registry-console-12-wciib 1/1 Running 0 67d
router-1-533cg 1/1 Running 3 76d
router-1-9utld 1/1 Running 1 76d
router-1-uwf64 1/1 Running 1 76d
As you can see in my output, the default namespace is where my router (HAProxy) pods live. If I change to that namespace:
oc project default
Then run
oc logs -f router-1-533cg
on each of the pods, you will most likely find a specific pod that is behaving badly. You can simply delete it (see the sketch below), and the replication controller will create a new one.
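A hedged sketch of that delete step (the pod name is whichever router pod is misbehaving in your output):
# Delete the bad router pod; the replication controller recreates it automatically:
oc delete pod router-1-533cg -n default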

deploy router status Pending

The environment was built with the following composition for OpenShift Origin, using the Ansible playbook.
[node]
openshift-master.example.com
openshift-node01.example.com
openshift-node02.example.com
openshift-etcd.example.com
[/etc/ansible/hosts]
[OSEv3:children]
masters
nodes
etcd
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
[masters]
openshift-master.example.com
[etcd]
openshift-etcd.example.com
# host group for nodes, includes region info
[nodes]
openshift-master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
openshift-node01.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
openshift-node02.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
Login to OpenShift was done with the following command:
[login command]
oc login -u system:admin -n default
And the router replicas were created with the following command:
[create router command]
oc scale dc/router --replicas=2
The following event occurs, and the router replica cannot be created:
[error event]
Failed scheduling
pod (router-2-ievkl) failed to fit in any node
fit failure on node (openshift-node01.example.com): CheckServiceAffinity
fit failure on node (openshift-node02.example.com): CheckServiceAffinity
fit failure on node (openshift-master.example.com): PodFitsHostPorts
This is the situation; what should I do so that the router replica can be created correctly?
I got the same issue after a clean install of Origin.
Uncordoning the masters did the trick (see the sketch below). Thanks to lorenzvth7.
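(A hedged sketch of making a master schedulable again on OpenShift 3 / Origin; the node name matches the inventory above:)
# Mark the master node schedulable so router pods can be placed on it:
oc adm manage-node openshift-master.example.com --schedulable=true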
During advanced installation, the openshift_hosted_router_selector and openshift_registry_selector Ansible settings are set to region=infra by default. The default router and registry will only be automatically deployed if a node exists that matches the region=infra label.
Also, regarding the "PodFitsHostPorts" error in the topic starter's case:
Routers directly attach to port 80 and 443 on all interfaces on a host. Restrict routers to hosts where port 80/443 is available and not being consumed by another service, and set this using node selectors and the scheduler configuration. As an example, you can achieve this by dedicating infrastructure nodes to run services such as routers.
So, this means that you should re-label e.g. openshift-node01.example.com as region=infra, as sketched below.
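A hedged sketch of that relabeling (run with cluster-admin rights; the node name matches the inventory above):
# Add the infra region label so the default router/registry node selector matches this node:
oc label node openshift-node01.example.com region=infra --overwrite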