OK, I have a web app on AWS that uses load balancing and autoscaling across two regions. When I check my connections, it seems as if connection pooling is not being used. Or is it just me not understanding it properly? I have attached an image, and I have the following questions:
Just FYI: the colors indicate matching values, so the yellow ones are the same IP, and so on. I just wanted to blur them, because who knows, right?
Also: the green connections come from my computer, i.e. me using Workbench. The red/yellow ones I assume are my two servers, and the blue one I am not sure about. I will check these IPs.
1) Why do the bottom red and bottom yellow entries each have their own connection? It's the same IP, with the same user, calling the same DB.
2) Why are ports being added to the host IP? Is this what is causing new connections to spawn?
3) The top two green connections are not sharing the same connection either. Why two separate ones?
4) Why is the Id (purple outline) the same for all connections?
So I set up a FreeRADIUS server, and right now I have around 1k users who connect to it through a hotspot. When more than 400 users are active, the local hotspot has problems: users cannot log in to the server, and sometimes FreeRADIUS writes a duplicate entry into the radacct table, which causes the user's session time to be deducted twice as fast as it should.
Sorry for my English... I know that we need to launch FreeRADIUS in debug mode to find the exact cause, but right now we can't really do that because the server is in use most of the time.
Below are screenshots of the server and of one of the hotspots being used.
I would like to update this: I ran debug mode and found that one of my NASes (a Mikrotik connected to my RADIUS server) was sending multiple requests, and those requests were being processed again and again by the RADIUS server. I reset the Mikrotik and set a timeout so the router won't send the repeated requests that were hammering the RADIUS server.
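For anyone hitting the same thing, the change on the Mikrotik side looks roughly like this (a sketch, not the exact commands I ran; the 3s value is just an example, and the parameter name may differ between RouterOS versions):

```
# RouterOS CLI: give the RADIUS server more time to answer before
# the router retransmits the same request
/radius set [find] timeout=3s
```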
I deployed two different versions of an application to two EKS clusters, blue and green. I plan to use a Route 53 weighted routing policy to switch between blue and green. The microservices inside the blue and green EKS clusters start reading and updating the databases as soon as I deploy the application to them, but we can only have one application accessing the database at a time. How do I do that?
The image below shows the current activity as 99 connections. How exactly is this counted?
RDS is accessed through Node.js web services and a PHP website. Every time I perform some operation I close the connection, yet the count never decreases after closing; it only keeps increasing. Eventually I got the "too many connections" error message once the count reached 608. I restarted the instance and it worked again, but I have never seen the count decrease.
So what is the best way to handle this?
Below is an image showing the output when I run SHOW FULL PROCESSLIST;
PHP-based web pages that use a MySQL connection generally exit as soon as they're done rendering page content, so the connection gets closed whether you explicitly call a mysqli or PDO close method or not.
The same is not true of Node services, which run for a long time and can therefore easily leak resources. It's probable that you're opening connections, but not closing them, in your Node service, which would produce the sort of behavior you're seeing here. (This is an easy mistake to make, especially for those of us whose background is largely in more ephemeral PHP scripts.)
One good way to identify the problem is to connect to the MySQL instance via Workbench or the console monitor and issue SHOW FULL PROCESSLIST; to get a list of currently active connections, their originating hosts, and the queries (if any) they are executing. This may help you narrow down the source of the leaking connections, so that you can identify the code at fault and repair it.
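If the leak does turn out to be in the Node service, the usual fix is to create one shared pool at startup and let it hand out connections per query. A minimal sketch using the mysql2 package (host, credentials, table, and pool size are all placeholders):

```js
// Minimal sketch using mysql2's promise API; all credentials are placeholders.
const mysql = require('mysql2/promise');

// Create ONE pool for the whole process, at startup -- not per request.
const pool = mysql.createPool({
  host: '127.0.0.1',
  user: 'appuser',
  password: 'secret',
  database: 'mydb',
  connectionLimit: 10, // caps total connections instead of letting them grow
});

async function getUser(id) {
  // pool.query() borrows a connection and returns it to the pool
  // automatically, even if the query throws -- nothing to leak.
  const [rows] = await pool.query('SELECT * FROM users WHERE id = ?', [id]);
  return rows[0];
}
```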
I am making a webserver with scapy, which is going pretty well. However, it's a pain in the butt for scapy to maintain different connections at the same time, so I want the client to make a persistent connection to the webserver, which serves an HTML page with an image.
I have the client successfully initiating a TCP handshake and obtaining the HTML page; however, it opens a new connection to download the image, which I do not want.
I understand that in HTTP/1.1 it is not necessary to send the keep-alive header, as it's the default. How come Chrome and Firefox still open more connections to download separate files?
I am not sending a Connection: close header whatsoever, so I think it's weird that they do not maintain the same connection for all files on the webpage.
EDIT: I tried using the actual Keep-Alive: timeout=n, max=n header. Still no result.
What could be the problem? Feel free to ask for details!
Persistent connections do not forbid the use of parallel connections; they only allow the same connection to be reused for further requests. But with persistent connections you can only issue multiple requests on the same connection one after the other. This means that to fetch lots of resources it is usually faster to open multiple connections in parallel and use each of them for several resources: e.g. using 4 connections in parallel to get 12 images (3 images per connection) is faster than getting all 12 images one after the other over a single connection.
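You can reproduce exactly this behavior from Node, for instance: the agent below reuses connections (keep-alive) but still allows several sockets in parallel, which is what the browsers are doing (the host, paths, and socket limit are just example values):

```js
// Sketch: keep-alive reuse combined with parallel fetching, like a browser.
const http = require('http');

// Allow up to 4 persistent connections to the same origin.
const agent = new http.Agent({ keepAlive: true, maxSockets: 4 });

for (const path of ['/a.png', '/b.png', '/c.png', '/d.png']) {
  http.get({ host: 'example.com', path, agent }, (res) => {
    res.resume(); // drain the body so the socket can be reused
  });
}
```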
I have a JSP page. It used to work well, but it became very slow (15 seconds to load) after the systems team upgraded the Solaris 10 server.
I checked all the queries on that page and every one works fine. In fact, they are all very simple queries, and there are only about 300 rows in each related table.
The only unusual thing is that the page opened about 60 connections. Once I noticed the slowness I managed to cut that by 30, which brought the load time down to 6 seconds. Still very slow! What's worse, I can't reduce the connections any further without restructuring half of the application.
Another JSP page (in a different application) used to work well but has become slow too. It opens only one connection, but the page is very time-sensitive, so I can see that it has slowed down.
Can anyone tell me how to configure MySQL and/or Tomcat to decrease the MySQL connection time?
Are you saying "connection" when you mean "query"? Or are you making a new database connection for each query? You should not do that. In a Tomcat app server, you should use connection pooling, which will reduce the overhead of connections a lot.
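A sketch of what that might look like in Tomcat, as a JNDI DataSource in META-INF/context.xml (names, credentials, and pool sizes are placeholders; on older Tomcat versions the size attributes are maxActive/maxWait instead of maxTotal/maxWaitMillis):

```xml
<!-- META-INF/context.xml (sketch): a pooled DataSource looked up via JNDI -->
<Context>
  <Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
            driverClassName="com.mysql.cj.jdbc.Driver"
            url="jdbc:mysql://127.0.0.1:3306/mydb"
            username="appuser" password="secret"
            maxTotal="20" maxIdle="10" maxWaitMillis="10000"/>
</Context>
```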
Another common issue is that the MySQL Server tries to resolve your client's hostname from its IP address. This is called DNS hostname resolution, and it can be the source of a lot of overhead when making a new connection. You can configure your MySQL Server to skip DNS hostname resolution without much downside.
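If you want to rule DNS out, you can turn that resolution off server-side. A minimal my.cnf sketch (on older MySQL versions the option is spelled skip-name-resolve):

```ini
# /etc/my.cnf (sketch): skip the reverse-DNS lookup on each new connection.
# Caveat: accounts granted by hostname will no longer match; use IPs in grants.
[mysqld]
skip_name_resolve
```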
Why are you making more than one database connection per page load? That shouldn't be necessary in the general case. Make one connection and use it for all the rendering of that page.
How are you referencing the server in your connection string? Are you using "localhost" or something similar? Try replacing it with "127.0.0.1", for instance, which will skip name resolution.
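For a JSP app that usually just means changing the host in the JDBC URL (the database name here is a placeholder):

```
# Before: the driver resolves the hostname on each new connection
jdbc:mysql://localhost:3306/mydb
# After: connects straight to the IP, no lookup
jdbc:mysql://127.0.0.1:3306/mydb
```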