log4net sometimes stops logging on the server - configuration

I'm using log4net to log errors and debug messages, but sometimes it just stops logging for a while.
Is there anything I'm missing, or is it a server problem?

If it stops momentarily but later resumes logging and you don't lose any messages, the most likely cause is that the appender's buffer hasn't been flushed and written to the file yet.
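If buffering turns out to be the issue, the relevant log4net settings look roughly like this (a sketch; appender names, the file path, and the layout pattern are placeholders, not from the original question):

```xml
<!-- File-based appenders flush per event when immediateFlush is true. -->
<appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
  <file value="app.log" />
  <immediateFlush value="true" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %level %message%newline" />
  </layout>
</appender>

<!-- If events pass through a buffering appender, a small bufferSize
     makes it flush to the wrapped appender much sooner. -->
<appender name="Buffered" type="log4net.Appender.BufferingForwardingAppender">
  <bufferSize value="1" />
  <appender-ref ref="RollingFile" />
</appender>
```

Note the trade-off: flushing per event is safer against lost messages but slower under heavy logging load.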

Related

Rails Delayed job workers crash due to malformed yml in handler

In a Rails application (Rails 5.1, Ruby 2.5.7, MySQL), the current design pushes the CSV import operation into the background via Delayed::Job. Because of the size limit of the handler column (TEXT), the content is only partially stored; when workers try to execute the job they crash, and the job never gets cleared automatically even though max attempts is set to 3.
This failure blocks the entire pipeline and keeps everything in a pending state. The MySQL version is 5.7.
When the same operation is performed locally with the same data payload (MySQL 5.6), the job is gracefully cleared with no crash even when malformed YAML is present in the handler. What could be the issue here? Any pointers, please?
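One likely fix, assuming the serialized payload exceeds the 64 KB limit of a MySQL TEXT column and is being truncated into malformed YAML: widen the column so the handler is stored whole. The table and column names below are Delayed Job's defaults; verify them against your schema before running this.

```sql
-- Assumption: delayed_jobs.handler is TEXT (max 65,535 bytes) and large
-- CSV-import jobs exceed it. LONGTEXT removes that ceiling.
ALTER TABLE delayed_jobs MODIFY handler LONGTEXT;
```

The 5.6 vs. 5.7 difference may also come down to sql_mode: MySQL 5.7 enables strict mode by default, so truncation behavior (and therefore whether a truncated handler ever gets stored) can differ between the two environments. That is a hypothesis worth checking with `SELECT @@sql_mode;` on both servers, not a confirmed diagnosis.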

MySQL, recover database after mysqlbinlog loading error?

I am copying a MySQL DB from an initial dump file and the set of binlogs created after the dump.
The initial load from the dump is fine. Then, while loading the binlogs using mysqlbinlog, what happens is that one of the files will fail, for example, with a "server has gone away" error.
Is there any way to recover from a failed mysqlbinlog run, or is the database copy now irreparably corrupted? I know which log has failed, but I can't just rerun that log since the error could have occurred at any query within the log.
Is there a way to handle this moving forward?
I can look into minimizing the chances that there will be an error in the first place, but it doesn't seem like much of a recovery process (or master/slave process) if any MySQL issue during the loading completely ruins the database. I feel that I must be missing something.
I'd check the configuration value for max_allowed_packet. It is fairly small by default (4MB or 64MB depending on the MySQL version), so you might need to increase it.
Note that you need to increase that option both in the server and in the client that is applying the binlogs. The effective limit on packet size is the lesser of the server's and the client's configured value.
Even if an event succeeded when it was originally replicated, it might fail when you replay the binlog, because you need to replay it with mysql while specifying the --max-allowed-packet option yourself.
See https://dev.mysql.com/doc/refman/8.0/en/gone-away.html for more explanation of the error you got.
If you don't know the binlog coordinates of the last binlog event that succeeded, you'll have to start over: remove the partially-restored instance and restore from the backup again, then apply the binlog.
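The replay described above looks roughly like this (a sketch; the hostname, binlog file name, and positions are placeholders, and the 1 GB value is just an example):

```shell
# Raise the server-side packet limit (also settable in my.cnf under [mysqld]):
mysql -h replica-host -e "SET GLOBAL max_allowed_packet = 1073741824;"

# Replay the binlog, raising the client-side limit too. If you do know the
# coordinates of the last event that succeeded, resume from there with
# --start-position instead of replaying the whole file.
mysqlbinlog --start-position=4 binlog.000042 \
  | mysql -h replica-host --max-allowed-packet=1073741824
```

Remember that the effective limit is the smaller of the server's and the client's setting, so raising only one of them won't help.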

Read other threads' warnings in MySQL

I am facing a possible memory leak in MariaDB, and I hope reading the warning messages from all the threads could lead me to what is causing it. The problem is that the warnings are not logged anywhere. I can see from a network traffic dump that some threads have a high number of warnings, but I cannot access them. The application using the database is not logging anything database-related either. How can I read the warnings from all threads? I have full access to the server.
I want access to these warnings; I don't have any control over the app node to log them there.
I found the cause of the leak. Whenever there was a jump in memory usage, there was also a jump in the number of subqueries. Disabling the subquery cache solved the problem, though the root cause is still unknown.
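For reference, the workaround the poster describes corresponds to a MariaDB optimizer switch (this is MariaDB-specific and not available in stock MySQL):

```sql
-- Takes effect for new connections; persist it in my.cnf under [mysqld]
-- as  optimizer_switch='subquery_cache=off'  to survive restarts.
SET GLOBAL optimizer_switch = 'subquery_cache=off';
```

This trades some query performance (repeated subqueries are re-evaluated) for stable memory use, so it's a mitigation rather than a root-cause fix.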

MySQL has suddenly started regularly opening unsuccessful sockets

I've desperately tried to figure out what's happened here, but haven't seen this particular problem anywhere. I've 'inherited' (as in, not built any of it myself) management of a database server (remote, in a data warehouse, accessed by ssh) where some php daemons are running on a Linux server acting as data crawlers, inserting and processing information in a relatively steady stream into mysql.
A couple of days ago, the server crashed and came back up again. I logged in and restarted the mysql server and the crawlers, thinking no more of it. A day and a half later, the mysql server stopped working, and I couldn't diagnose it since I couldn't log into it, nor did it respond to "/etc/init.d/mysql stop" or varieties thereof. According to the log file, it kept throwing errors very regularly (once every four minutes and 16 seconds) and said that it had too many open file handles. When I shut down the crawlers, however, I could log in again, but mysql kept throwing the errors. I checked lsof and it showed a lot of open sockets with a "can't identify protocol" error.
mysqld 28843 mysql 1990u sock 0,4 2856488 can't identify protocol
mysqld 28843 mysql 1989u sock 0,4 2857220 can't identify protocol
^Thousands of these rows
I thought it was something the crawlers had done, and when I restarted mysql the failed sockets disappeared. But I was surprised to see that mysql kept opening new ones, even when the crawlers weren't running. It did this very regularly, about two new failed sockets a minute, regardless of whether the crawlers were active or not. I increased the maximum number of file handles allowed for mysql to buy some time, but I'm obviously looking for a diagnosis and a permanent solution.
All descriptions of such errors (socket leaks) that I've found on forums seem to be about your own software leaking by not closing its sockets. But this seems to be mysql itself doing it, and there has been no change in any of the code from when it worked fine, just a server crash and restart.
Any ideas?
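While diagnosing, it helps to track the leak rate over time; a sketch (assumes lsof and pidof are available and mysqld runs as a single process):

```shell
# Count mysqld descriptors stuck in "can't identify protocol" (leaked sockets).
lsof -p "$(pidof mysqld)" 2>/dev/null | grep -c "can't identify protocol"

# Stopgap while investigating: raise the descriptor limit in my.cnf
# under the [mysqld] section, then restart mysqld:
#   open_files_limit = 65535
```

Running the count in a loop (e.g. under watch) shows whether the "two sockets a minute" rate holds steady, which can help correlate the leak with a cron job, monitoring probe, or other periodic connection attempt.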

How can I guarantee the w3wp process exists before turning on perfmon logging?

I have a batch script I run before our performance tests that does some pre-test setup on our server; it clears log files, starts the proper services, restores the database, sets some app settings and turns on perfmon logging.
My problem: the w3wp process we need to monitor is not always present at the time we turn on perfmon logging. It's pretty much hit-or-miss whether this process appears in the log. The test takes anywhere from 4 to 18 hours to complete, and I don't know until the test is done whether or not w3wp was monitored (perfmon doesn't seem to detect new processes even though my log file is configured to monitor Process(*)), which ends up wasting a lot of time.
Is there a way to force w3wp to get loaded? Is there some command I can call just prior to starting the perfmon logs?
Or, is it possible to configure the perfmon log to monitor processes that may not exist at the time the log is started?
If you install the IIS admin tools, you can use a command-line app called TinyGet. You can request any page on your web server to initialize it; that starts the w3wp process so perfmon can capture it.
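The key point is that any HTTP request to the site forces IIS to spin up the application pool's w3wp process, so TinyGet isn't strictly required. A sketch of a warm-up line for the pre-test batch script, using PowerShell so it works without extra tooling (the URL is a placeholder for a page in the application under test):

```shell
powershell -Command "Invoke-WebRequest -UseBasicParsing http://localhost/ | Out-Null"
```

Run this just before starting the perfmon log so the process instance exists when the counters are bound.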