Configuring log4j in a clustered environment

We are using log4j for logging. The application is running in a clustered environment. How can I configure log4j properties such that all the instances log to the same log file?

One solution is to have a directory dedicated to logging. That directory can be on a network share (NFS, etc.) that is mounted at the same location for every process. This could be as simple as mounting to the identical spot in the file structure, or it could be done using an environment variable ($LOGDIR) so each host can point to a different location in its local file structure.
The important thing is that the folder be shared so that multiple processes write to the same file. The normal shared-resource restrictions apply, though: make sure one host doesn't hold a lock on the file that prevents the others from writing, and so on. Also, use an output pattern that includes the hostname, process name, and thread id.
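For example, a minimal log4j 1.x properties sketch along those lines. The LOGDIR and hostname system properties are assumptions here, set per host (e.g. -DLOGDIR=/mnt/shared/logs -Dhostname=node1); log4j substitutes ${...} references from system properties when it parses the file:

# Shared appender: every instance resolves ${LOGDIR} to the same mount.
log4j.rootLogger=INFO, SHARED
log4j.appender.SHARED=org.apache.log4j.FileAppender
log4j.appender.SHARED.File=${LOGDIR}/app.log
log4j.appender.SHARED.layout=org.apache.log4j.PatternLayout
# %t is the thread name; the hostname is not a built-in conversion
# character, so it is injected via the assumed system property.
log4j.appender.SHARED.layout.ConversionPattern=%d{ISO8601} [${hostname}] [%t] %-5p %c - %m%n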
Another approach I've used is a database appender that writes to a log table. No network share is needed, but you still need to design the table with the issues of multi-process logging in mind.
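A hedged sketch with log4j 1.x's built-in JDBCAppender (the connection details and the app_log table are assumptions; note that this appender builds the INSERT through the pattern layout and does no escaping, so treat it as a starting point only):

log4j.appender.DB=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.DB.driver=com.mysql.jdbc.Driver
log4j.appender.DB.URL=jdbc:mysql://dbhost/logdb
log4j.appender.DB.user=logger
log4j.appender.DB.password=secret
# Record host and thread in each row so the writers can be told apart.
log4j.appender.DB.sql=INSERT INTO app_log (logged_at, host, thread, level, message) VALUES ('%d{yyyy-MM-dd HH:mm:ss}', '${hostname}', '%t', '%p', '%m')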

Related

What is the use of DESKTOP-7JFP5MF-bin.000001 files in the data folder of a MySQL database?

I just wanted to know the use of files like "DESKTOP-7JFP5MF-bin.000001" in the data folder of MySQL Server on Windows 10.
What is the use of these files?
What if we delete these files?
Thanks
These files are the binary log, which is a sequential log of changes to your database.
It is used primarily for replication. A replica MySQL Server instance downloads these files and applies the same changes in the same order to its own copy of the data. The replica may be restarted or go offline temporarily, and when it reconnects to the source it resumes downloading where it left off. So it can be handy to keep the binary log files around for some time.
Typically they are set to expire automatically after a day (this is configurable). If a given log file is expired and deleted on the source instance before the replica has downloaded it, the replica will miss some updates. It needs all the logs to be a true replica, so if it misses some, it's essentially a failed replica and needs to be scrapped and reinitialized with a fresh copy from the source.
So if you delete these files, you'll spoil the replica.
Read https://dev.mysql.com/doc/refman/8.0/en/binary-log.html and its subsections for more information on the binary log.
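If you do need to reclaim space, do it through the server rather than by deleting files from disk; a sketch of the relevant MySQL 8.0 statements (the retention periods are assumptions, pick values your replicas can tolerate):

SHOW BINARY LOGS;                                 -- list binlog files and sizes
SET GLOBAL binlog_expire_logs_seconds = 604800;   -- auto-expire after 7 days
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY;  -- drop logs older than 3 days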

Is my MySQL being monitored by monit using nginx?

I'm unsure if my MySQL is actually being monitored by monit. See the screenshot.
As you can see, under Processes the mysqld process is not being monitored (it failed a few times first), but under Files there are mysql_bin and mysql_rc, both of which are OK.
Is it safe to remove the mysql monitoring symbolic link, or do I need it anyway?
Thanks
Short answer is no. Some more info:
That both entries in the Files section are OK does not relate to any kind of working or running application. The Files section of monit simply checks file state information, such as last modification time, size, and file hashes.
So basically the two OKs in the Files section just tell you that the MySQL files are present and have not been changed.
The entry in the Processes section is what you want to focus on. It checks for the presence of a running mysqld process on your system. You need to check whether the configuration of that entry inside monitrc (or included files) is looking for the right parameters. It should identify the process via a pidfile and could additionally check whether a connection can be established.
See monit docs on MySQL and/or paste your monit config for any in-depth help regarding monit.
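For orientation, a typical monitrc process entry looks roughly like this; the pidfile path and init scripts are assumptions for a Debian-style layout:

check process mysqld with pidfile /var/run/mysqld/mysqld.pid
  start program = "/etc/init.d/mysql start"
  stop program = "/etc/init.d/mysql stop"
  # optionally verify that the server actually answers on its port
  if failed port 3306 protocol mysql then restart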

What if my log file size exceeds the claimed persistent volume?

I am working on logging my application's output to a persistent volume.
I am using OpenShift. I created storage (a persistent volume under the nas-thin class) and allocated 64GiB to it. I added a mount path to this PV for one of my pods, where my application is running and generating logs in a folder named "logs".
My mount path is "/logs", so anything inside this folder is at the root of my PVC.
I am appending my logs to a single file inside the logs folder.
I tried to read about expanding PVs but couldn't understand much.
What would happen if my log file size exceeds the allocated PV size (64GiB)?
That will depend on the persistent storage actually being used by the cluster. Some persistent volume providers will let you write more data than you actually requested; others will fail writes with "no space left on device" errors once the volume is full. So you'll have to test how your storage and your application actually behave on your particular cluster.
That being said, it is generally a bad idea to have container workloads log to a persistent volume. I would strongly recommend logging to STDOUT and then using an external logging system to manage your logs instead of writing to a file.
How will you deal with multiple replicas of your application running? Do you really want to go into each container to get the log files? How will you correlate logs between different replicas?
Applications running on OpenShift / Kubernetes should not manage their logs in files but write to STDOUT.
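As for expanding the volume: if the storage class has allowVolumeExpansion enabled, growing a PVC is usually just a matter of raising the requested size and waiting for the resize to complete. A sketch, with the claim name assumed:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logs-pvc          # assumed name of your existing claim
spec:
  storageClassName: nas-thin
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Gi      # raised from 64Gi to trigger expansion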

Are docker-hosted databases somehow exempt from backup best practices?

As far as I was aware, for MS SQL, PostgreSQL, and even MySQL databases (so, I assumed, for RDBMS engines in general), you cannot simply back up the file system they are hosted on; you need to do an SQL-level backup to have any hope of internal consistency and therefore the ability to actually restore.
But then answers like this and indeed the official docs referenced seem to suggest that one can just tar away on database data:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
These two ideas seem at odds with one another. Is there something special about how Docker works that makes it unnecessary to use SQL-level backups? If not, what am I missing in my understanding? (Why is something used as the official example when you can't use it to back up a production database? That can't be right...)
Under certain circumstances, it should be safe to use the image of a database on a disk:
The database server is not running.
All persistent data is on the disk system(s) being backed up (logs, tablespaces, temporary storage).
All components are restored together.
You are restoring the image to the same server on the same path.
The last condition is important, because some aspects of the database configuration may be stored in operating system files.
Whenever the server is running, you need to do the backup through the database itself. The server is responsible for the internal consistency of the data, and a disk image taken while it runs may not be complete or recoverable. If the server is not running, then the state of the database in persistent storage should be consistent.
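For the running-server case, a minimal example of an SQL-level backup of a containerized MySQL, in the spirit of the docker command above (the container name and credentials are assumptions):

# Dump through the server so it can guarantee a consistent snapshot.
docker exec mydb mysqldump --single-transaction --all-databases -u root -p"$MYSQL_ROOT_PASSWORD" > backup.sql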

Infinispan 5.3.0 - Change cluster configuration

I have a cluster with four nodes.
If I stop one node and edit its configuration file (adding a new replicated cache), will the cluster have the new replicated cache when I start the node again?
Is it necessary to change the configuration file on the other three nodes?
Regards.
a) Yes, the new replicated cache will be created on the node. However, if you have the same cache (name) with different configurations across nodes, you're asking for trouble.
b) No, the configuration on the other nodes will not change. You have to change it manually, either by stopping the nodes or by doing a rolling upgrade.
You may also look into the JMX operations for starting/stopping a cache, but these do not allow you to change the configuration (I am not 100% sure whether starting a cache with an unknown name would create a new cache with the default configuration).
If you have programmatic access to the CacheManager, you can start a cache with a configuration provided programmatically, for example:
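A sketch against the Infinispan 5.x embedded API; the cache name and REPL_SYNC mode are assumptions:

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.EmbeddedCacheManager;

public class AddReplicatedCache {
    // Define and start a replicated cache on an already-running manager.
    static Cache<String, String> addCache(EmbeddedCacheManager manager) {
        Configuration replicated = new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.REPL_SYNC)
            .build();
        // Register the configuration under a name, then start the cache.
        manager.defineConfiguration("newReplicatedCache", replicated);
        return manager.getCache("newReplicatedCache");
    }
}

Keep in mind that this only defines the cache on the node where it runs; for a truly replicated cache, every node still needs the same definition, whether from its configuration file or from the same programmatic call.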