Jetty starting on first available port - JUnit

I have several JUnit tests that need Jetty; every test starts its own Jetty instance. Tests can be added ad hoc, and if two Jetty servers use the same port, the test fails because the port is already in use. The error is:
[ERROR] Failed to execute goal org.mortbay.jetty:maven-jetty-plugin:6.1.26:run (start-jetty) on project petproject1: Failure: Address already in use -> [Help 1]
So what I am looking for is a way to start Jetty on the first available port starting from some port X (8080 or higher?) instead of maintaining a big table of start ports for every test.

a) You can give your tests a common superclass and implement a port counter there, incremented in each @Before (which, I suppose, manages the setup of a Jetty instance).
b) You can start Jetty with port 0 (which makes it bind to a random free port), and then ask the Jetty instance for the actual port number in each test (if you have access to it in your test; if not, use a @Rule).
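A minimal sketch of option (b), assuming embedded Jetty 9.x started from the test itself (the question's Jetty 6 Maven plugin exposes the port differently) and JUnit 4:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class JettyPortTest {
    private Server server;
    private int port;

    @Before
    public void startJetty() throws Exception {
        server = new Server(0);  // port 0: the OS picks a free port
        server.start();
        port = ((ServerConnector) server.getConnectors()[0]).getLocalPort();
    }

    @After
    public void stopJetty() throws Exception {
        server.stop();
    }

    @Test
    public void serverIsUp() {
        // build request URLs from the discovered port, e.g. "http://localhost:" + port
    }
}

Because each test asks the running server for its actual port, no central table of port assignments is needed.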


VerneMQ plugin_chain_exhausted Authentication MySQL

I have a running instance of VerneMQ (a cluster of 2 nodes) on Google Kubernetes Engine, using MySQL (Cloud SQL) for authentication. The server accepts connections over TLS.
It works fine, but after a few days I start seeing this message in the log:
can't authenticate client {[],<<"Client-id">>} from X.X.X.X:16609 due to plugin_chain_exhausted
The client app (Paho) complains that the server refused the connection as "not authorized" (code=5 in the Paho error).
After a few retries it finally connects, but each time it gets harder and harder, until it just won't connect anymore.
If I restart VerneMQ, everything goes back to normal.
I have at most 3 clients connected at the same time.
Clients that are already connected have no issues with pub/sub.
In my configuration I have (among other things):
log.console.level=debug
plugins.vmq_diversity=on
vmq_diversity.mysql.* = all of them set
allow_anonymous=off
vmq_diversity.auth_mysql.enabled=on
It's as if the server degrades over time; the status web page reports no problems.
My VerneMQ server was built from the Git repository about a month ago and runs in a Docker container.
What could be the cause?
What else could I check to find possible causes? Maybe a vmq_diversity misconfiguration?
Thanks
To quickly explain the plugin_chain_exhausted log: with Verne you can run multiple authentication/authorization plugins, and they will be checked in a chain. If one plugin allows the client, it will be in. If no plugin allows the client, you'll see the log above.
This does not explain the behaviour you describe, though. I don't think I have seen that.
In any case, the first thing to check is whether you actually run multiple plugins. For instance: have you disabled the vmq_passwd and vmq_acl plugins?
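For reference, a hedged sketch of the relevant vernemq.conf lines for a chain where vmq_diversity's MySQL auth is the only authentication plugin (assuming the stock plugin names):
plugins.vmq_passwd = off
plugins.vmq_acl = off
plugins.vmq_diversity = on
vmq_diversity.auth_mysql.enabled = on
allow_anonymous = off
With vmq_passwd and vmq_acl off, a plugin_chain_exhausted log means the MySQL lookup itself rejected the client (or failed to answer for it).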

Will AWS Lambda automatically close MySQL connections?

If we don't close the MySQL connection at the end of the handler function in Lambda, will the MySQL connection close automatically when the Lambda dies, and reconnect at the next cold start?
The connections won't be closed immediately, but eventually they will be. By default, the connection timeout on MySQL is 8 hours, and on this instance the maximum number of connections is capped at 66.
show variables like "wait_timeout"; -- 28800
show variables like "max_connections"; -- 66
When you create a connection to the MySQL server, it creates a thread on the server to serve that connection:
show status where variable_name = 'threads_connected';
select * from information_schema.processlist;
After a Lambda executes a request and sends a response, the execution environment is not removed immediately; the same one may be used to serve further requests. This is your warm/hot Lambda, and in this case an already-open MySQL connection is genuinely useful for your function execution, which is only possible if you did not close the connection in the previous invocation.
Eventually, when there are no more requests, the Lambda execution environment is shut down and its resources are returned to the pool of AWS compute resources. When the execution environment shuts down, the TCP connection from the Lambda to the MySQL server also terminates. Now the MySQL server can remove the thread associated with that connection, reducing the pool of active connections on the server. This also takes a bit of time. So if you are getting many concurrent requests and the maximum number of connections is already in use, requests will start failing.
I did some tests to see how long it really takes to reclaim the connections; here is a snapshot. The X axis is in minutes and the Y axis is on a scale of 0-70, where each line parallel to the X axis is 10 units from the next.
It roughly takes 10-15 minutes to reclaim the connections, but again, it depends on the Lambda usage pattern as well.
So should you close the connection on every invocation? Well, it depends!
Take a look at Lambda Runtime extensions and see if you can use the shutdown hook to close the connection (a sketch follows). If you can, it means that while the Lambda execution environment was serving multiple requests you used a cached connection, and just before the execution environment is taken away from you, you close the connection.
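A minimal Java sketch of that pattern, assuming aws-lambda-java-core, a MySQL JDBC driver on the classpath, and a hypothetical DB_URL; note that Lambda only delivers SIGTERM to the runtime (which is what triggers JVM shutdown hooks) when at least one extension is registered:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class Handler implements RequestHandler<String, String> {
    // Hypothetical settings; in practice read them from environment variables.
    private static final String DB_URL = "jdbc:mysql://example-host:3306/mydb";

    // Cached across warm invocations of the same execution environment.
    private static Connection connection;

    static {
        // Close the cached connection when the JVM shuts down.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                if (connection != null && !connection.isClosed()) {
                    connection.close();
                }
            } catch (SQLException ignored) {
            }
        }));
    }

    private static Connection getConnection() throws SQLException {
        if (connection == null || connection.isClosed()) {
            connection = DriverManager.getConnection(DB_URL, "user", "password");
        }
        return connection;
    }

    @Override
    public String handleRequest(String input, Context context) {
        try (Statement stmt = getConnection().createStatement()) {
            stmt.execute("SELECT 1"); // sample query reusing the warm connection
            return "ok";
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}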
RDS Proxy is also an alternative, as mentioned in another answer, but it is not free. Before you take the RDS Proxy route, consider another serverless solution like AWS Fargate; in that case you would probably use a connection pool, just like any long-running server-side application.
No, they will not be closed automatically, unless you are doing something with your MySQL client that implicitly closes the connection when it goes out of scope.
The connection will stay open until it times out. Many people have reported problems in the past with poorly written Lambdas creating tons of open sessions/connections to relational databases, because the connections were not properly closed and had to wait to be timed out.
One feature that came out a year or so ago is RDS Proxy, an intermediary between clients and the MySQL server that implements connection pooling. This solves the problem of Lambdas not being able to use connection pooling effectively, since the RDS Proxy service can do that for serverless clients.

ECS EC2 Launch Type: Service database connection string

I am trying out a small POC (learning experiment) with Docker. I have 3 Docker images, one each for a storefront, a search engine, and a database engine, called storefront, solr, and docmysql respectively. I have tried running them in a Docker swarm (on a single node) on EC2 and it works fine.
For the POC, I next needed to move this to AWS ECS using the EC2 launch type on a single non-Amazon-ECS-optimized AMI, on which I have installed and started an ecs-agent. I have created 3 services, with one task for each of the 3 images configured as containers within the task. The question is about connecting to the database from the storefront.
The storefront has a property file where the database connection is typically defined as
"jdbc:mysql://docmysql/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false".
This worked when I ran it as a Docker swarm. Once I moved it to ECS (EC2 launch type), I had to expose port 3306 from my task/container for the docmysql service. This gave me a service endpoint of docmysql.local, with 'local' being a private namespace. I tried changing the connection string to
"jdbc:mysql://docmysql.local/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false"
in the property file, and it always fails with "Name or service not known". What should my connection string be? When the service is created I see 2 entries in Route 53: one SRV record and one A record. The A record has as its name <task-id>.docmysql.local. If I use this in the database connection string, it works, but hardcoding the task id is obviously not the right thing to do. I have read about AWS Cloud Map (service discovery) but am still not very clear how to go about it. I will not be putting a load balancer in front of my DB task in the service; there will always be only one task for the DB.
So what is the best way to generate a connection string that works? Also, why did I not have these issues when I ran it as a Docker swarm?
I know I can use RDS instead of running my own database, and I will try that, but for now I need this working as it is how I started. Thanks for any help.
Well, a few points I considered before arriving at my own solution:
Do you need your instance to scale using ECS? If not, migrate it to RDS.
Do you need to deploy it with the EC2 launch type? If not, use Fargate; it is simpler to handle.
Now, I've faced that issue on Fargate and discovered that, depending on your container/task definitions, the database can live inside the same task for testing purposes, in which case 127.0.0.1 is the answer.
Across different tasks you need to work with the awsvpc network mode, so you will have this:
Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it. (FROM AWS)
My suggestion is to create a Lambda function that discovers your network interface dynamically (see the sketch after the links below).
Read these for a deeper understanding:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/
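A minimal sketch of that discovery Lambda's core logic, assuming the AWS SDK for Java v2 (software.amazon.awssdk:ecs); the cluster and service names are placeholders, and with awsvpc the task's private IP can be read from its ElasticNetworkInterface attachment:

import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.DescribeTasksResponse;
import software.amazon.awssdk.services.ecs.model.KeyValuePair;
import software.amazon.awssdk.services.ecs.model.ListTasksResponse;

public class TaskIpDiscovery {
    public static String discoverPrivateIp(String cluster, String serviceName) {
        try (EcsClient ecs = EcsClient.create()) {
            // Find the running task(s) of the service (only one task for the DB here).
            ListTasksResponse tasks = ecs.listTasks(b -> b.cluster(cluster).serviceName(serviceName));
            DescribeTasksResponse described =
                    ecs.describeTasks(b -> b.cluster(cluster).tasks(tasks.taskArns()));
            // An awsvpc task carries an ElasticNetworkInterface attachment whose
            // details include the private IPv4 address.
            return described.tasks().get(0).attachments().stream()
                    .filter(a -> "ElasticNetworkInterface".equals(a.type()))
                    .flatMap(a -> a.details().stream())
                    .filter(kv -> "privateIPv4Address".equals(kv.name()))
                    .map(KeyValuePair::value)
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("no ENI attachment found"));
        }
    }
}

The storefront could then assemble its JDBC URL from the discovered IP instead of a hardcoded <task-id>.docmysql.local name.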

Google Compute Engine: internal DNS server and resolution issues

Since Google Compute Engine does not provide an internal DNS server, I created 2 CentOS BIND machines which do the resolving for the machines on GCE and forward the queries over VPN to my private cloud, and vice versa.
As the Google Cloud help docs suggest, you can have this kind of scenario and edit resolv.conf on each instance to do the resolving.
What I did was edit ifcfg-eth0 to disable PEERDNS, and in /etc/resolv.conf I added the search domain and my 2 nameserver instances at the top.
Now, after an instance gets rebooted, it won't start again because it is searching for the metadata.google.internal domain:
Jul 8 10:17:14 instance-1 google: Waiting for metadata server, attempt 412
What is the best practice in this kind of scenario?
Thanks.
Also, I need the internal DNS to do poor man's round-robin failover, since GCE does not provide internal load balancers.
As mentioned at https://cloud.google.com/compute/docs/networking:
Each instance's metadata server acts as a DNS server. It stores the DNS entries for all network IP addresses in the local network and calls Google's public DNS server for entries outside the network. You cannot configure this DNS server, but you can set up your own DNS server if you like and configure your instances to use that server instead by editing the /etc/resolv.conf file.
So you should be able to just use 169.254.169.254 for your DNS server. If you need to define external DNS entries, you might like Cloud DNS. If you set up a domain with Cloud DNS, or any other DNS provider, the 169.254.169.254 resolver should find it.
If you need something more complex, such as customer internal DNS names, then your own BIND server might be the best solution. Just make sure that metadata.google.internal. resolves to 169.254.169.254.
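For instance, a minimal /etc/resolv.conf that keeps the metadata resolver as the DNS server (the search domain is a placeholder for your project's internal domain):
search c.your-project-id.internal google.internal
nameserver 169.254.169.254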
OK, I just ran into this, but unfortunately there was no timeout after 30 minutes that got it working. Fortunately nelasx had correctly diagnosed it and given the fix. I'm adding the steps I had to take, based on his excellent question and commented answer; I've just pulled together the info I had to gather in one place, to get to a solution.
Symptoms: on startup of the Google instance, you get connection refused.
After inspecting the serial console output, you will see:
Jul 8 10:17:14 instance-1 google: Waiting for metadata server, attempt 412
You could try waiting; that didn't work for me, and inspection of https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google-startup-scripts/usr/share/google/onboot
# Failed to resolve host or connect to host. Retry indefinitely.
6|7) sleep 1.0
log "Waiting for metadata server, attempt ${count}"
led me to believe that it would not work.
So, the solution was to fiddle with the disk and add in nelasx's solution:
"edit ifcfg-eth0 and change PEERDNS=no; edit /etc/resolv.conf and put your nameservers + search domain on top; edit /etc/hosts and add: 169.254.169.254 metadata.google.internal"
To do this,
Best to create a snapshot backup before you start in case it goes awry
Uncheck "Delete boot disk when instance is deleted" for your instance
Delete the instance
Create a micro instance
Mount the disk
sudo ls -l /dev/disk/by-id/* # this lists the attached disks by name
sudo mkdir /mnt/new
sudo mount /dev/disk/by-id/scsi-0Google_PersistentDisk_instance-1-part1 /mnt/new
where instance-1 will be changed as per your setup
Go in and edit as per nelasx's solution (the exact file contents are sketched after these steps) - idiot trap I fell for: use the path under the mount point. Don't just sudo vi /etc/hosts; use /mnt/new/etc/hosts. That cost me 15 more minutes as I went through the got-depressed, scratched-head, kicked-myself cycle.
Delete the debug instance, ensuring your attached disk delete option is unchecked
Create a new instance matching your original with the edited disk as your boot disk and fire it up.
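For reference, a hedged sketch of the three edited files under the mount point; the nameserver addresses and search domain are placeholders for your own BIND setup:
In /mnt/new/etc/sysconfig/network-scripts/ifcfg-eth0, set:
PEERDNS=no
In /mnt/new/etc/resolv.conf, put your servers on top:
search example.internal
nameserver 10.0.0.2
nameserver 10.0.0.3
In /mnt/new/etc/hosts, append:
169.254.169.254 metadata.google.internal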

Elasticsearch: bind multiple network interfaces (NICs)

Introduction
When configuring Elasticsearch, I ran into a problem with binding the listening interfaces. Somehow the documentation does not explain how to set up multiple network interfaces (the network and bind definitions).
Problem description
My intention is to set network.bind_host to _eth1:ipv4_ and _local_. Even when setting bind_host to _local_ only, the Elasticsearch port 9200 is still only reachable via eth1 (of course I restarted the server).
Solutions tried
I have tested the firewall configuration by setting up a netcat server, which works perfectly on that port.
So this results in 2 questions:
How do I configure multiple NICs? (What's the notation?)
Would I need to change network.publish_host?
Any other pointers?
current configuration:
network.bind_host: _eth1:ipv4_
network.publish_host: _eth1:ipv4_
network.host: _eth1:ipv4_
also tested configuration:
network.bind_host: _local_
network.publish_host: _eth1:ipv4_
network.host: _local_
PS:
AFAIK publish_host is the NIC used for inter-node communication.
Using a YAML list for the desired property:
network.bind_host:
  - _local_
  - _en0:ipv4_
If I understand this answer correctly, publish_host should be _eth1:ipv4_. Your publish_host has to be one of the interfaces to which Elasticsearch binds via the bind_host property.
The above linked answer is actually great, so I have to cite it here:
"bind_host" is the host that an Elasticsearch node uses in the socket
bind call when starting the network. Due to socket programming model,
you can "bind" to an address. By referencing an "address", the socket
allows access to one or all underlying network devices. There are
several addresses with predefined semantics, e.g. 0.0.0.0 is reserved
for "bind to all network devices". So the "bind_host" address does not
necessarily reflect a single unique address.
"publish_host" must be a single unique network address. It is used for
connect calls by other nodes, not for socket bind call by the node
itself. By using "publish_host" all nodes and clients can be sure they
can connect to this node. Declaring this single unique address to the
outside can be interpreted as "publishing", so it is called
"publish_host".
You can not set "bind_host" and "publish_host" to arbitrary values,
the values must adhere to the underlying socket model.
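Putting it together for the eth1-plus-loopback case in the question, a hedged elasticsearch.yml sketch (interface names as in the question; adjust to your host):
network.bind_host:
  - _local_
  - _eth1:ipv4_
network.publish_host: _eth1:ipv4_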