I've installed Moxi from the Couchbase homepage and ran:
./moxi -Z usr=*,pwd=*,port_listen=11211,concurrency=1024,wait_queue_timeout=200,connect_timeout=400,connect_max_errors=3,connect_retry_interval=30000,auth_timeout=100,downstream_conn_max=16,downstream_timeout=5000,cycle=200,default_bucket_name=test http://192.168.20.101:8091/pools/default/saslBucketsStreaming
(the username and password are replaced with * for privacy)
and I have Couchbase Server up and running on 192.168.20.101.
When I run this, it seems to get stuck in the middle of something... I've waited for an hour and it still hasn't printed any messages. Is it supposed to behave like this, or is something wrong?
It should be working fine. By default moxi only logs errors, so if there are no errors it won't log anything to stdout. To figure out whether there might be an issue, I would try connecting your client to moxi and making sure everything makes it to Couchbase. On another note, none of the Couchbase SDKs require moxi, so unless you're using a non-Couchbase SDK you don't need to set up any extra moxi processes.
Also, you can try adding the -vvv parameter so that moxi logs at debug level. This will print a lot more to stdout.
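As a quick sanity check (a minimal sketch, assuming moxi is listening on 127.0.0.1:11211 and nc is available), you can speak the memcached text protocol to the moxi port directly and see whether a set/get round-trips into the test bucket:

printf 'set smoke 0 0 5\r\nhello\r\nget smoke\r\nquit\r\n' | nc 127.0.0.1 11211

If everything is wired up you should see STORED followed by the VALUE/END lines; if the connection hangs or is refused, the problem sits between moxi and the cluster rather than in your client.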
I have a running instance of VerneMQ (a cluster of 2 nodes) on Google Kubernetes, using MySQL (Cloud SQL) for auth. The server accepts connections over TLS.
It works fine, but after a few days I start seeing this message in the log:
can't authenticate client {[],<<"Client-id">>} from X.X.X.X:16609 due to plugin_chain_exhausted
The client app (Paho) complains that the server refused the connection as "not authorized" (code=5 in the Paho error).
After a few retries it finally connects, but every time it gets harder and harder until it just won't connect anymore.
If I restart VerneMQ, everything goes back to normal.
I have only 3 clients connected at most at the same time.
Clients that are already connected have no issues with pub/sub.
In my configuration I have (among other things):
log.console.level=debug
plugins.vmq_diversity=on
vmq_diversity.mysql.* = all of them set
allow_anonymous=off
vmq_diversity.auth_mysql.enabled=on
It's like the server degrades over time. The status web page reports no problems.
My VerneMQ server was built from the git repository about a month ago and runs in a Docker container.
What could be the cause?
What else could I check to find possible causes? Maybe a vmq_diversity misconfiguration?
Thanks
To quickly explain the plugin_chain_exhausted log: with Verne you can run multiple authentication/authorization plugins, and they will be checked in a chain. If one plugin allows the client, it will be in. If no plugin allows the client, you'll see the log above.
This does not explain the behaviour you describe, though. I don't think I have seen that.
In any case, the first thing to check is whether you actually run multiple plugins. For instance: have you disabled the vmq_passwd and vmq_acl plugins?
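For illustration (treat this as a sketch of vernemq.conf rather than a drop-in config; vmq_passwd and vmq_acl are the stock file-based plugins, the other keys are the ones from your question), leaving only the MySQL-backed diversity plugin in the auth chain would look roughly like this:

allow_anonymous=off
plugins.vmq_passwd=off
plugins.vmq_acl=off
plugins.vmq_diversity=on
vmq_diversity.auth_mysql.enabled=on

With the file-based plugins switched off, a plugin_chain_exhausted message means the MySQL lookup itself rejected the client (or failed), which narrows down where to look.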
I am using the Couchbase Server 6.0.2 image from Red Hat
https://access.redhat.com/containers/?tab=overview&get-method=registry-tokens#/registry.connect.redhat.com/couchbase/server
in OpenShift.
The Pod is running but does not respond on http://localhost:8091. The logs show the error below.
I have 3 questions:
Why is whoami failing in the entrypoint?
Why isn't the server responding on port 8091?
Does the couchbase server image require root permissions?
It seems the couchbase/server image expects to be run as root; it then creates its own couchbase user and couchbase group.
At the end it runs an entrypoint script which checks whether the user running the whole thing is actually the couchbase user, by executing the whoami command.
This is not the case if you just run it in OpenShift, as the container will be run as some "random" unprivileged user.
This leads to a set of consecutive failures:
Here you will find the evaluation that is done in entrypoint.sh.
The whoami command fails because there is no actual user, just said random UID. That failure leaves the first part of the evaluation blank, which results in the check failing.
This is a bug in the couchbase/server image, and as such you should, if time allows, contribute to fixing it by opening an issue against that repo.
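For reference, the check in question is roughly of this shape (a simplified sketch, not the literal script shipped in the image):

# simplified sketch of the kind of evaluation done in entrypoint.sh
if [ "$(whoami)" = "couchbase" ]; then
    echo "running as couchbase, starting the server"
else
    # under OpenShift's arbitrary UID, whoami prints nothing to stdout,
    # so the left-hand side is empty and this branch is taken
    echo "not running as couchbase"
fi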
Right now I am connecting to a cluster endpoint that I have set up for an Aurora MySQL-compatible cluster, and after I do a failover from the AWS console, my web application is unable to connect to the DB instance that should be writable.
My setup is like this:
Java web app (Tomcat 8) with HikariCP as the connection pool and Connector/J as the MySQL driver. I am evaluating Aurora MySQL to see if it will satisfy some of the needs the application has. The web app sits on an EC2 instance that is in the same VPC and security group as the Aurora MySQL cluster. I am connecting through the cluster endpoint to get to the database.
After a failover, I would expect HikariCP to break connections (it does) and then attempt to reconnect (it does); however, the application must be connecting to the wrong server, because any time a write hits the database, a SQLException is thrown that says:
The MySQL server is running with the --read-only option so it cannot execute this statement
What is the solution here? Should I rework my code to flush DNS after all connections go down, or after I start receiving this error, and then try to re-initiate connections after that? That doesn't seem right...
I don't know why I keep asking questions if I just end up answering them myself (I should really be more patient), but here's an answer in case anyone stumbles upon this in a Google search:
RDS uses DNS changes when working with the cluster endpoint to make the failover look "seamless". Since the IP behind the hostname can change, if there is any sort of DNS caching going on, you can see pretty quickly how a change won't be reflected. Here's a page from the AWS docs that goes into it a bit more: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-jvm-ttl.html
To resolve my issue, I went into the JVM's security file and changed the DNS cache TTL to 0 just to verify that this was what was happening. Seems correct. Now I just need to figure out how to do it properly...
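For reference, the setting that AWS page describes is the JVM-wide DNS cache TTL, networkaddress.cache.ttl, in the JDK's java.security file (typically $JAVA_HOME/jre/lib/security/java.security on Java 8, $JAVA_HOME/conf/security/java.security on newer JDKs); a small positive value such as 60 seconds is usually a saner production choice than 0:

# java.security - cache successful DNS lookups for at most 60 seconds
networkaddress.cache.ttl=60

The same property can also be set programmatically at application startup, before any connections are opened, via java.security.Security.setProperty("networkaddress.cache.ttl", "60"), which avoids editing the JDK installation.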
I am trying to hit 350 users, but JMeter fails the script with Connection timed out.
I have added the following:
http.connection.stalecheck$Boolean=true in hc.parameter file
httpclient4.retrycount=1
hc.parameter.file=hc.parameter
Is there anything else that I am missing?
This normally indicates a problem on the application under test side, so I would recommend checking the logs of your application for anything suspicious.
If everything seems to be fine there, check the logs of your web and database servers; for instance, Apache HTTP Server allows 150 connections by default, MySQL 100, etc., so you may need to identify whether you are running into this kind of limit and what needs to be done to raise it.
And finally, it may simply be a lack of CPU or free RAM on the application under test side, so next time you run your test keep an eye on baseline OS health metrics, as an application may respond slowly or even hang if it doesn't have any spare headroom. You can use the JMeter PerfMon plugin to integrate this kind of monitoring with your load test.
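As an illustration of the kind of limits mentioned above (directive names vary with the Apache MPM and MySQL version; the numbers are placeholders, not recommendations):

# Apache 2.4, prefork MPM - raise the concurrent-request limit
ServerLimit          400
MaxRequestWorkers    400    # called MaxClients before Apache 2.4

# MySQL, my.cnf [mysqld] section - raise the connection limit
max_connections = 400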
I am currently trying to move my web server (PHP, Zend Framework based) from Ubuntu to FreeBSD. Both servers have the same hardware configuration. After the migration I ran a JMeter test (HTTP request (JSON), concurrency = 200) against the server, and the throughput of the FreeBSD server was double that of the Ubuntu server, which is amazing.
However, when I increase the concurrency to 500, I see almost 50% of requests fail with "java.net.SocketException: Connection reset". On the Ubuntu server it works as normal.
After many rounds of testing, I found that Ubuntu can handle 1500 concurrent HTTP requests without error, while the FreeBSD server can handle 200 concurrent requests at double the speed without error, but cannot handle more. To verify the result, I tried the ab command: ab -c 200 -n 5000 127.0.0.1/responseController. It fails and terminates if the -c parameter is over 200, but works fine on Ubuntu.
For debugging I did the following:
1. Adjusted httpd.conf, /boot/loader.conf and /etc/sysctl.conf in various ways, but nothing seemed to change.
2. Switched to mpm_worker_module in the Apache configuration (and its relevant PHP configuration). Nothing changed, but the failure log was different: it showed "request failure to respond" rather than "java.net.SocketException: Connection reset".
I did a lot of searching but couldn't find the cause of this failure. I thought the JSON request would wait until a response or a timeout?
I am not sure which configuration file or parameter will make it work.
Please help.
Thanks to Michael Zhilin: yes, "ipfw" was doing something that caused this, and yes, "kern.ipc.soacceptqueue" was the bottleneck in this case.
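For anyone hitting the same wall: kern.ipc.soacceptqueue is FreeBSD's limit on the TCP listen (accept) queue (the newer name for kern.ipc.somaxconn, default 128), so once the client load outruns what the queue absorbs, new connections get reset. Roughly (the value is illustrative):

# inspect and raise the accept-queue limit at runtime
sysctl kern.ipc.soacceptqueue
sysctl kern.ipc.soacceptqueue=1024

# persist it across reboots in /etc/sysctl.conf
kern.ipc.soacceptqueue=1024

Apache's own ListenBacklog directive also caps the backlog it asks for, so it may need raising too, and any ipfw rules that throttle or reset new connections are worth ruling out (ipfw list shows the active rule set).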