Redhat Couchbase Server image/container doesn't respond to http://localhost:8091 - openshift

I am using the Couchbase Server 6.0.2 image from Red Hat
https://access.redhat.com/containers/?tab=overview&get-method=registry-tokens#/registry.connect.redhat.com/couchbase/server
in OpenShift.
The pod is running but does not respond on http://localhost:8091. The logs show the error below.
I have 3 questions:
Why is whoami failing in the entrypoint?
Why isn't the server responding on port 8091?
Does the Couchbase Server image require root permissions?

It seems the couchbase/server image expects to be run as root, and then creates its own couchbase user and couchbase group.
At the end it runs an entrypoint script, and in there it checks whether the user running the whole thing is actually the couchbase user, by executing the whoami command.
This is not the case if you just run it in OpenShift, as the container will be run as some "random" unprivileged UID.
This leads to a set of consecutive failures:
Here you will find the evaluation that is done in entrypoint.sh.
The whoami command fails because that random UID has no actual user behind it; the failure leaves the first part of the evaluation blank, which in turn makes the check itself fail.
This is a bug in the couchbase/server image, and as such you should, if time allows, contribute to fixing it by opening an issue against that repo.
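To illustrate, here is a hypothetical sketch of the kind of check involved (not the actual entrypoint.sh from the image):
#!/bin/bash
# Hypothetical sketch, not the real entrypoint.sh.
# Run as root, whoami prints "root"; run under OpenShift's arbitrary UID,
# whoami finds no passwd entry for that UID and prints nothing, so the
# left-hand side of the test is empty and the evaluation itself breaks.
if [ $(whoami) != "couchbase" ]; then
    echo "must run as the couchbase user" >&2
    exit 1
fi
Quoting the substitution ("$(whoami)") would avoid the shell error, but the container would still refuse to start, because the arbitrary UID is not the couchbase user; the real fix has to land in the image itself.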

Related

VerneMQ plugin_chain_exhausted Authentication MySQL

I have a running instance of VerneMQ (a cluster of 2 nodes) on Google Kubernetes, using MySQL (Cloud SQL) for auth. The server accepts connections over TLS.
It works fine, but after a few days I start seeing this message in the log:
can't authenticate client {[],<<"Client-id">>} from X.X.X.X:16609 due to plugin_chain_exhausted
The client app (Paho) complains that the server refused the connection as "not authorized" (code=5 in the Paho error).
After a few retries it finally connects, but every time it gets harder and harder until it just won't connect anymore.
If I restart VerneMQ, everything goes back to normal.
I have at most 3 clients connected at the same time.
Clients that are already connected have no issues with pub/sub.
In my configuration I have (among other things):
log.console.level=debug
plugins.vmq_diversity=on
vmq_diversity.mysql.* = all of them set
allow_anonymous=off
vmq_diversity.auth_mysql.enabled=on
It's like the server degrades over time; the status web page reports no problem.
My VerneMQ server was built from the git repository about a month ago and runs in a Docker container.
What could be the cause?
What else could I check to find possible causes? Maybe a vmq_diversity misconfiguration?
Thanks
To quickly explain the plugin_chain_exhausted log: with Verne you can run multiple authentication/authorization plugins, and they will be checked in a chain. If one plugin allows the client, it will be let in. If no plugin allows the client, you'll see the log above.
This does not explain the behaviour you describe, though. I don't think I have seen that.
In any case, the first thing to check is whether you actually run multiple plugins. For instance: have you disabled the vmq_passwd and the vmq_acl plugins?
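For example, you can list the plugins currently loaded with vmq-admin plugin show. To leave authentication to vmq_diversity alone, the corresponding vernemq.conf lines would look like this (a sketch, assuming the stock file-based plugins are the ones still in the chain):
plugins.vmq_passwd = off
plugins.vmq_acl = off
plugins.vmq_diversity = on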

Internal 500 error on Google Compute Engine, installing littlest jupyter

"Internal 500 server error" after VM runs for a day or two.
This is the second time it has happened, I start the instance, install littlest Jupyterhub
(see details below). I can login to the external ip, for a day, but then it stops
with internal 500 error. I cannot ssh or get into the instance, only alternate is to
create a new instance and re-do. What is the problem?
I have installed littlest jupyterhub using on this instance, using
#!/bin/bash
curl https://raw.githubusercontent.com/jupyterhub/the-littlest-jupyterhub/master/bootstrap/bootstrap.py | sudo python3 - --admin master
I would recommend you enable access to the serial console on your instance [1].
You will also need to set up a password for your user following this documentation [2].
With these two steps done, you should be able to reconnect to your instance once you are locked out, as you mentioned, by following this [3].
You should then be able to investigate what is going on in the instance:
verify whether your application is still running, whether the SSH server is still running, etc. (A gcloud sketch of steps [1] and [3] follows the links below.)
Frederic
[1] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#enable_instance_access
[2] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#setting_up_a_local_password
[3] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#connectserialconsole
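If you prefer the command line, steps [1] and [3] can also be done with gcloud (a sketch; the instance name and zone are placeholders):
gcloud compute instances add-metadata INSTANCE_NAME --zone ZONE --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port INSTANCE_NAME --zone ZONE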

Google Cloud instance can't be accessed via SSH after cloning

I'm desperate for help here. I have a compute engine instance that hosts a lot of websites. These are the steps that I took:
Go to Compute Engine > Snapshots and take a snapshot of my instance
Click on the newly created snapshot and click Create Instance.
The new instance has all the configs of the current running instance
Then when I tried to access the new instance via SSH, it wouldn't work. Error message:
"Connection Failed
We are unable to connect to the VM on port 22. Learn more about possible causes of this issue."
Clicking on Learn more gets me to https://cloud.google.com/compute/docs/ssh-in-browser#ssherror
The instance is booting up and sshd is not yet running - Not sure how to check this
The instance is not running sshd - Not sure how to check this either
sshd is listening on a port other than the one you are connecting to - My current instance has SSH running on port 22, so I guess this is fine?
There is no firewall rule allowing SSH access on the port - Again, my current instance has SSH running, so I don't think it's because of a firewall, right?
The firewall rule allowing SSH access is enabled, but is not configured to allow connections from GCP Console services. - Same as above
The instance is shut down - The instance is still running.
The strange thing is that if I create a fresh instance from scratch and then do the steps above to clone it, the new instance can be accessed normally via SSH.
Can anyone show me how to fix this if possible? Or show me how to see logs, check what went wrong, etc.? I tried to Google it but got pretty confused with all the jargon and where to find particular things. Sorry for the wall of text. Thanks.
Edit #1: I got technical support from Google. The steps below might help someone else, but not me, as when I reached step 7 I waited forever and couldn't get to the login page.
1.) Go to the VM instances page and click on the Instance name of your VM.
2.) Click the Edit button at the top of the page.
3.) Under Custom metadata, click Add item.
4.) Set 'Key' to 'startup-script' and set 'Value' to this script:
#! /bin/bash
useradd -G sudo USERNAME
echo 'USERNAME:PASSWORD' | chpasswd
NOTE: change the value of USERNAME and PASSWORD to the name and password of your choice.
5.) Enable "Enable connecting to serial ports" by checking the box below the SSH button.
6.) Click Save and then click RESET on the top of the page. Wait for some time for the instance to reboot.
7.) Click on 'Connect to serial port' on the page. In the new window, you might need to wait a bit and press Enter once; then you should see the login prompt.
8.) Login using the USERNAME and PASSWORD you provided.
Note: for your data security, please do not share your username or password.
As those steps couldn't help me, and the Google support representative looked at the log but didn't see anything wrong, she suggested debugging SSH by following this guide https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-ssh#use_your_disk_on_a_new_instance, which I will do when I have time. Feels like I'm writing an essay. Will keep this posted.
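For reference, that guide essentially boils down to moving the boot disk to a fresh debug instance, roughly like this (a sketch with placeholder names; check the guide for the exact procedure):
gcloud compute instances delete BROKEN_INSTANCE --zone ZONE --keep-disks boot
gcloud compute instances create DEBUG_INSTANCE --zone ZONE
gcloud compute instances attach-disk DEBUG_INSTANCE --zone ZONE --disk BROKEN_BOOT_DISK
# then SSH into DEBUG_INSTANCE, mount the attached disk and inspect its logs and sshd configuration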
The troubleshooting steps that you can follow are:
Use the serial console to view your instance logs and check whether the new instance you created from the snapshot failed to start to the appropriate run level where the ssh daemon would get started. If sshd was not started you would not have ssh access to your instance.
You can try restarting the instance if it doesn’t affect production and try to gain ssh access again. Might be that some issue prevented the instance from starting up properly and restarting it could fix it.
You can try creating another VM instance from the snapshot in case the previous instance wasn’t created properly.
If creating a new VM instance from the snapshot doesn’t fix the issue, it might be that the snapshot itself wasn’t created properly. You can read this documentation guide, section Understanding snapshot best practices, and try creating another snapshot and VM instances from it.
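To view the instance logs mentioned in the first step without logging in, something along these lines should work (a sketch; the names are placeholders):
gcloud compute instances get-serial-port-output NEW_INSTANCE --zone ZONE
# and, if a restart is acceptable:
gcloud compute instances reset NEW_INSTANCE --zone ZONE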
I had the same problem and after a lot of searching, I found an answer from user Peripheral on ServerFault that worked for me.
I found the fix for me. A recent update has a known issue where it removes the default gateway from the routing table. To fix it, I have to go to the instance and select Edit. Scroll down, and under Custom Metadata put the following:
key: startup-script
value: route add default gw <gatewayIP> eth0
Save and restart the VM.
Source
All credit to them; I just want to share this to help others find their solution faster.
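If you'd rather not click through the console, the same metadata can be set with gcloud (a sketch; the instance name, zone and gateway IP are placeholders):
gcloud compute instances add-metadata INSTANCE_NAME --zone ZONE --metadata startup-script='#! /bin/bash
route add default gw GATEWAY_IP eth0'
gcloud compute instances reset INSTANCE_NAME --zone ZONE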
I had the same issue. I eventually figured out that it was because attaching a persistent disk had added an entry to the /etc/fstab file. This entry is supposed to automatically mount the attached disk upon restart of the instance.
However, when I created a snapshot of the boot disk, I didn't remove the /etc/fstab entry. So creating a new instance from this snapshot will always cause a boot error, as boot tries to mount a disk that is not attached.
This is covered in the documentation.
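A way to avoid this in the first place is to give the secondary disk's fstab entry the nofail option, so a missing disk no longer blocks booting. An example line (the UUID and mount point are placeholders):
UUID=xxxx-xxxx  /mnt/disks/data  ext4  defaults,nofail  0  2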

Difference between the phpMyAdmin MySQL web client and the terminal client

I get the error (#2006 MySQL server has gone away) while connecting to MySQL and performing some operations through the web browser.
The operations are listed below:
When executing a big procedure
When importing a database dump
When accessing some particular tables, it immediately throws "Server gone away".
Refer to this question for the scenarios: Record Not Inserted - #2006 Mysql server gone away
Note: the above operations work fine when I perform them through the terminal.
I tried some configuration changes that Googling suggested, i.e. setting wait_timeout and max_allowed_packet. I checked for the bin_log but it is not available.
But the issue is still not rectified.
What is the problem, and how can I figure out and fix the issue?
What is the difference between accessing the phpMyAdmin MySQL server from the web browser and from the terminal?
Where can I find the MySQL server log file?
Note: if you know about any one of the above questions, please post here. It would be helpful for tracing this down.
Please help me to figure this out.
Thanks in advance.
Basically nothing, except that phpMyAdmin is limited by PHP's timeout and resource limits (limits that keep a runaway script from bogging down your entire machine for all eternity; see the docs for details of those values). In some cases you might be authenticating through a different user account (for instance, root@localhost and root@127.0.0.1 aren't the same user), but as long as you're using a user with the same permissions, the differences are minimal.
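The limits in question are the usual php.ini ones, for example (the values here are only illustrative):
max_execution_time = 300
max_input_time = 300
memory_limit = 512M
post_max_size = 128M
upload_max_filesize = 128M
The last two also cap the size of a dump you can import through the browser.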
You can read more about logs in the MySQL manual, note that "By default, no logs are enabled (except the error log on Windows)".
Below are answers to the questions.
1. From my research, the problem is that the browser side has a limit after which it disconnects the connection, i.e. a timeout on the connection. That is why the above problem is raised.
To resolve this problem:
Go to /opt/lampp/phpmyadmin and open config.inc.php
Add the line $cfg['ExecTimeLimit'] = 0;
Restart the XAMPP server. Now you can perform any operations.
2. The web client differs from the terminal because the terminal client does not get a timeout; it maintains the connection until the operation completes. I recommend using the command line to import/export/run long processes, as it is the safer way.
3. Basically phpMyAdmin does not have a log file of its own. If you want to see warnings and errors, you should configure MySQL's logging.
Configuration steps:
Go to /opt/lampp/etc/my.cnf
Add log_bin = /opt/lampp/var/mysql/filename.log (note that log_bin enables the binary log, which records data changes)
Restart the XAMPP server. You can then get the log information.
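For warnings and errors specifically, a my.cnf sketch along these lines would enable the error log and the general query log (the paths are examples for a XAMPP layout):
[mysqld]
log_error        = /opt/lampp/var/mysql/error.log
general_log      = 1
general_log_file = /opt/lampp/var/mysql/queries.log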

Unable to create indexes in Sphinx after an emergency server restart [Can't create TCP/IP socket]

I'm trying to execute the command in the Windows console:
C:\SphinxSearch\bin\indexer --all --config C:\SphinxSearch\sphinx.conf
But I get an error:
ERROR: index 'indexname': sql_connect: Can't create TCP/IP socket
(10093) (DSN=mysql://root:*#localhost:3306/test).
The data source is MySQL. Before the server restart everything worked fine.
How can I fix it?
I'm having the same error 10093. It's a Windows error code, by the way. In my case it occurs when trying to run the indexer through the SYSTEM account via a scheduled task. If I run it directly as administrator, there's no problem.
According to the site above:
Either your application hasn't called WSAStartup(), or WSAStartup() failed, or--possibly--you are accessing a socket which the current active task does not own (i.e. you're trying to share a socket between tasks).
In my case I'm thinking it might be the last one, some security problem due to the SYSTEM user being used in my scheduled task. I was able to solve it by using my admin user instead: in the scheduled task, I set it to use my local admin account with the options "Run whether user is logged on or not" and "Do not store password". I've also checked "Run with highest privileges". This seems to have done the trick, as now my indexes are rotating on schedule.
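For reference, a scheduled task along those lines can also be created from the command line (a sketch; the task name, schedule and account are placeholders, the paths are taken from the question, --rotate is assumed so searchd picks up the rebuilt indexes, and you will be prompted for the account's password):
schtasks /Create /TN "SphinxIndexer" /SC DAILY /ST 03:00 /RL HIGHEST /RU MYPC\AdminUser /TR "C:\SphinxSearch\bin\indexer --all --rotate --config C:\SphinxSearch\sphinx.conf"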