FreeRADIUS server using 100% CPU - MySQL

I set up a FreeRADIUS server, and right now around 1,000 hotspot users connect to it. When more than 400 users are active, the local hotspot has problems: users cannot log in to the server, and sometimes FreeRADIUS writes a duplicate entry into the radacct table, which causes a user's session time to be deducted twice as fast as it should be.
Sorry for my English. I know we need to launch FreeRADIUS in debug mode to find the exact cause, but right now we can't really do that because the server is in use by users most of the time.
Below are screenshots of the server and of one of the hotspots being used.

I would like to update this: I ran debug mode and found that one of my NAS devices (a Mikrotik connected to my RADIUS server) was sending multiple requests, and those requests were being processed again and again by the RADIUS server. I reset the Mikrotik and set a timeout so the router won't send repeated requests that hammer the RADIUS server.

How to force close Access application when user lost connection to back-end

The Question:
Is there some way to force close access so it doesn't need access to the back-end server in order to exit?
The Situation:
I have an Access 2016 DB. The back-end is on a networked share drive which is only accessible when connected to the LAN or on VPN. On load there is a ping test to the server; if found, it copies the tables to local tables, and if not, it just tells the user it can't connect and continues on using the old data. The users travel a lot and can't always be on the VPN, so the idea is that the data they have isn't more than a few days old. BTW, did I mention the users are only consumers of information and not contributors, so I don't care that they can't write to the back-end. The tables contain a few hundred thousand records; the application just presents them in nice, easy-to-search, cross-referenced reports.
The Problem:
While this loads and runs really nicely regardless of them being connected to the lan or not, it will NOT close if they don't have a connection to the server. It doesn't produce an error which I could easily handle, it just hangs. Task Manager won't even close it.
Attempted Solutions:
I tried to unlink the tables and just use a temporary connection to the back-end to load the tables at the beginning when I need them. However, this meant the user was prompted by the Microsoft Trust Center about 8 times every single time the application loaded, unless I had each of them open the back-end DB themselves and gave them the password to do so, and none of that is practical.
Access doesn't play well with a remote BE. If you want to be on the remote side with Access, you have two options:
Connect via RDS: the user connects to the server via Remote Desktop, so everything is "local" and there are no issues with lost connections. As long as the RDP connection holds, everything is smooth, and more importantly you don't get BE disconnects that cause corruption or data loss. (Hint: using the RemoteApp technology, it will seem to the end user like he/she is working locally; I am using it and it's great.)
Switch the BE: as I said, it is not wise to use an Access BE over a remote connection. Switching to MSSQL/MySQL/PostgreSQL etc. will give you true remote connection capability.
After playing with all the settings for a few days, I finally figured out what my problem was.
In an effort to test different settings to see if I could reduce the file size, at one point I turned on "clear cache on exit" in the Current Database settings. Turning this off fixed the problem. I had forgotten it was on, so it turned out not to be a programming issue after all.

Google VM Instance becomes unhealthy on its own

I have been using Google Cloud for quite some time and everything worked fine. I was using a single VM instance to host both the website and the MySQL database.
Recently, I decided to move the website to autoscaling so that on high-traffic days the website doesn't go down.
So I moved the database to Cloud SQL and created a VM group to host the PHP, HTML, and image files. Then I set up a load balancer to distribute traffic across the VM instances in the group.
The problem is that the backend service (the VM group behind the load balancer) becomes unhealthy on its own after working fine for 5-6 hours, and then becomes healthy again after 10-15 minutes. I have also seen the problem occur when I run a rather lengthy script with many MySQL queries.
I checked the health check and it was returning a 200 response. During the 10-15 minute down period, the VM instance is still accessible from its own IP address.
Everything else is the same; I have just added a load balancer in front of the VM instance, and the problem has started.
Can anybody help me troubleshoot this problem?
It sounds like your server is timing out (blocking?) on the health check during the times the load balancer reports it as down. A few things you can check:
The logs (I'm presuming you're using Apache?) should include a duration along with the request status. The default health check timeout is 5s, so if your health check returns a 200 after 6s, the health checker will have timed out at 5s and treated the host as down.
You mention that a heavy mysql load can cause the problem. Have you looked at disk I/O statistics and CPU to make sure that this isn't a load-related problem? If this is CPU or load related, you might look at increasing either CPU or disk size, or moving your disk from spindle-backed to SSD-backed storage.
Have you checked that you have sufficient threads available? Ideally, your health check would run fairly quickly, but it might be delayed (for example) if you have 3 threads and all three are busy running some other PHP script that's waiting on the database.
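One common mitigation for the thread-starvation case above is to serve the health check from an endpoint that does no database work at all, so it stays fast even when MySQL is under heavy load. A minimal sketch using Python's standard library (the /health path is an assumption; an Apache/PHP setup would do the equivalent in its own stack, but the idea is the same):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond immediately from memory: no DB query, no disk I/O,
        # so the load balancer's 5s timeout is never at risk.
        if self.path == "/health":
            body = b"OK"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def serve(port: int = 0):
    """Bind to the given port (0 = pick a free one) and serve in the background."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Serving the check from a separate lightweight handler also makes it easier to tell "the app is overloaded" apart from "the machine is down" in the load balancer's view.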

MySQL Protocol Client/Server Authentication - Token Generation for the Authentication Packet from the Client

I am currently building a client without using any libraries, mostly to understand the protocol, and I am confused by an access-denied reply when I send my computed auth packet to the MySQL server. The MySQL server is just a local server running on my computer for testing purposes. Here is the information I am sending:
The test password is 'peanut'.
Stage 1 Hash = b14ab480028768cb748fd97de56144a304eb8a1a
Stage 2 Hash = fd62797ed464c2843942a9167cc0521779d68862 - This is correct; the database stores the same value with a leading *.
Salt & Stage 2 Hash = rC8/$a?Vr\W|.jN)~cVcfd62797ed464c2843942a9167cc0521779d68862
SHA1(Salt + Stage 2 Hash) XOR Stage 1 Hash = 4B19199ECEB929469EA89C0E942D8D5B9ACBE237
String Sent to Server For Authentication in hex:
\x3A\x00\x00\x01 - Standard header (payload length / sequence number)
\x02\x04\x80\x00 - Capability flags
\x00\x00\x00\x01 - Maximum packet size
\x08 - Charset
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 - 23 byte filler
\x72\x6F\x6F\x74\x00 - username zero terminated
\x14 - length of token (20 bytes)
\x4B\x19\x19\x9E\xCE\xB9\x29\x46\x9E\xA8\x9C\x0E\x94\x2D\x8D\x5B\x9A\xCB\xE2\x37 - token
When I send this string, the server just responds with error #28000, access denied.
Could this be an access-rights issue? As a remote user trying to gain root access, is there something I need to enable?
I have changed the connection timeout settings (wait_timeout / connect_timeout etc.) and still no joy; these are set to 60 seconds.
I am not sure whether I should compute SHA1(Salt + Stage 2 Hash) with the asterisk or not, since the database shows an * before the password hash. I have tried both ways and it still doesn't authenticate.
I am running out of ideas now, the only other thing I can think to do is to write another program which will process the client token as the Mysql Server would, but I thought I would double check here first.
I have been working on this for a while now and am stumped.
Any help greatly appreciated. I don't normally post on forums, so it's a new experience; sorry if I haven't followed etiquette.
Regards
James
Without looking into the protocol documentation myself it is hard to see any errors here. It could be anything from endianness to wrong padding, lengths and other calculations etc. The server rightfully does not provide more details as to why the login might have failed, because that would be leaking information to a potential attacker.
I suggest you grab the MySQL source code and look at how they do it in their command line application. See http://dev.mysql.com/doc/internals/en/guided-tour.html for some introduction and download links.
Apart from that, there is always the option of logging the network traffic with Wireshark to see what the stock mysql client sends and compare that to what you have. That might help to figure out padding etc.
Note that if you are working on Windows there might be problems to capture traffic on localhost, because IIRC Windows shortcuts the traffic somehow, so it does not get past the network interface for Wireshark to see. In that case you might have to set up the MySQL server in a VM or on a different machine.
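For reference, the mysql_native_password scramble the questioner is computing can be sketched in a few lines of Python. One pitfall worth checking: the concatenation SHA1(salt + stage2) must use the raw 20-byte stage-2 digest, not its hex string (the question's "Salt & Stage 2 Hash" line shows hex appended to the salt), and the salt must not include a trailing NUL. The sketch below uses raw bytes throughout; the salt value is taken from the question:

```python
import hashlib

def scramble(password: bytes, salt: bytes) -> bytes:
    """mysql_native_password token: SHA1(pw) XOR SHA1(salt + SHA1(SHA1(pw))).

    Note: the inner concatenation uses the RAW 20-byte stage-2 digest,
    not its hex representation.
    """
    stage1 = hashlib.sha1(password).digest()          # SHA1(password)
    stage2 = hashlib.sha1(stage1).digest()            # stored in mysql.user as *HEX(stage2)
    mix = hashlib.sha1(salt + stage2).digest()        # SHA1(salt + stage2), raw bytes
    return bytes(a ^ b for a, b in zip(stage1, mix))  # XOR with stage 1

salt = rb"rC8/$a?Vr\W|.jN)~cVc"  # the 20-byte salt quoted in the question
token = scramble(b"peanut", salt)
```

The * prefix in the mysql.user table is only a marker for the hash format and is not part of the stage-2 digest, so it should not be included in the concatenation.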

PHP/MySQL maximum connections reached

I have an application that connects to a 3rd party. They fire web-hooks simultaneously, sometimes 1,000 of them and over. The problem is that my script connects to a database to save each hook, and 1,000 queries fired simultaneously take the system down. How can I handle the web-hooks effectively?
thanks.
What you can do is close the connection after every query. This ensures that no connection stays open or stacks on top of another.
You will have to re-initiate the connection before launching a new query, of course.
You could also increase the number of connections allowed by your MySQL server. This is changed in your MySQL configuration (usually at /etc/mysql/my.conf ( http://dev.mysql.com/tech-resources/articles/mysql_intro.html )) via the "max_connections" variable. ( http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_connections )
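Beyond raising limits, a more robust approach for bursty web-hooks is to decouple receiving a hook from writing it: acknowledge the hook immediately, push it onto a queue, and let a small fixed number of workers drain the queue over one persistent connection each. A minimal sketch of the idea in Python, with the database write stubbed out by a hypothetical save_hook function (a PHP application would typically reach for a job queue, e.g. database- or Redis-backed, to get the same effect):

```python
import queue
import threading

hook_queue: "queue.Queue" = queue.Queue()
saved = []  # stands in for the database table in this sketch

def save_hook(payload: dict) -> None:
    # In a real application this would be a single INSERT executed over a
    # persistent connection, instead of opening a new connection per hook.
    saved.append(payload)

def worker() -> None:
    while True:
        payload = hook_queue.get()
        if payload is None:  # sentinel: shut the worker down
            hook_queue.task_done()
            break
        save_hook(payload)
        hook_queue.task_done()

def receive_hook(payload: dict) -> None:
    """Called by the HTTP handler: enqueue and return immediately."""
    hook_queue.put(payload)

# One worker thread => at most one DB connection, no matter how many
# hooks arrive at once; add more workers for more throughput.
t = threading.Thread(target=worker, daemon=True)
t.start()
```

The key property is that the number of simultaneous database connections is bounded by the number of workers, not by the number of incoming hooks.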
Good luck and if you have any questions don't hesitate to ask.
Best regards,

How to fix: mysql_connect(): Too many connections

I am getting the following error:
mysql_connect(): Too many connections
It has completely shut down my site, which has been running seamlessly for several years.
Note: I have shared hosting with GoDaddy.
How do I fix this?
ALSO: is there a way to close all connections and restart when on a shared hosting plan?
This is a Technical Response
You will get this "too many connections" error upon connecting to MySQL when the MySQL server has reached its software configurable artificial limit of maximum concurrent client connections.
So, the proper way to fix this is:
Directly connect to the MySQL server and run the query SET GLOBAL max_connections = 1024; to change the connection limit at runtime (no downtime).
Make your change permanent: edit /etc/my.cnf (or similar) and add the line max_connections = 1024 within the [mysqld] section; then restart MySQL if you couldn't make the live change.
The chosen limit of 1024 is purely arbitrary; use whatever limit you want. You can inspect your current limit with the query SHOW VARIABLES LIKE "max_connections";. Keep in mind that these limits exist for good reason, to prevent unnecessary overload of your backend database, so choose your limit wisely.
However, for those steps you are required to have direct access to your database MySQL server.
As you said, you are using GoDaddy (I do not know them that well), so you are left with the option of contacting your service provider (i.e. GoDaddy). They should be able to see this in their logs as well.
Possible Root Causes
This of course means that too many clients are attempting to connect to the MySQL server at the same time, exceeding the artificial software limit specified in the configuration.
Most probably, you have been a subject of a DDoS attack.
People on this forum complain about exactly the same thing with exactly the same provider.
The answer is this:
VB told me it was a DOS attack - here is their message:
This is not an 'exploit'. This is a DoS attack (Denial of Service). Unfortunately there is nothing we can do about this. DoS attacks can only be fought at the server or router level, and this is the responsibility of your host. Instead of doing this they have decided to take the easy way out and suspend your account.
If you cannot get them to take this seriously, then you should look for another host. Sorry for the bad news.
A possible workaround is this: if your connection fails with mysql_connect(): Too many connections, you don't give up; instead, sleep() for half a second and try to connect again, exiting only when 10 attempts have failed.
It's not a solution, it's a workaround.
This of course will delay your page loading, but it's better than an ugly too many connections message.
You could also come up with some method of telling bots and browsers apart.
For example, set a salted SHA1 cookie, redirect to the same page, and then check that cookie, connecting to MySQL only if the user agent has passed the test.
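The retry workaround above can be sketched generically: wrap the connect call, sleep between failures, and give up after a fixed number of attempts. A minimal sketch in Python, where connect_fn and the TooManyConnections exception are placeholders for your real driver's connect call and its "Too many connections" error (in PHP this would wrap mysql_connect()):

```python
import time

class TooManyConnections(Exception):
    """Stands in for the driver's 'Too many connections' error."""

def connect_with_retry(connect_fn, attempts: int = 10, delay: float = 0.5):
    """Try connect_fn up to `attempts` times, sleeping `delay` seconds
    after each failure; re-raise the last error if all attempts fail."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect_fn()
        except TooManyConnections as exc:
            last_error = exc
            time.sleep(delay)
    raise last_error
```

As the answer notes, this delays page loads rather than fixing the underlying limit, but it turns a hard error into a short wait for most users.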
Another thing that can cause this error is if the Database has run out of space. I recently had this occur, and the issue wasn't connections, it was Disk space. Hope this helps someone else!
Do you close your connections when you're done with them? Are you using some type of connection pooling? It sounds like you're opening connections and not closing them.
EDIT: Already answered by Quassnoi. In case it is a DDoS and you're using shared hosting, you may be left with just contacting your host and working it out with them. Unfortunately, this is a risk when you don't have control of your whole system.
Consider using mysql_pconnect(). Your host may have added some sort of throttling for connections, like a maximum of 100 per 20 minutes or something weird.
First, check your database connections:
show variables like 'max_connections';
to check all variables
show variables;
Then connect to your MySQL server and update the connection limit:
SET GLOBAL max_connections = 1001;