PostgreSQL on CentOS: database wrongly updated after machine crash, despite fsync configuration

I am using CentOS 6 and PostgreSQL 9.4. In my configuration file, the fsync and full_page_writes options are enabled. At some point my machine crashed and restarted. When I checked my database afterwards, the entries were all wrongly updated; my entire database had changed. From what I have read about fsync, it should let the database recover automatically to a consistent point. So why is my database wrongly updated?
I also need to know at what time my Postgres server shut down and restarted.
Which checkpoint is then used to recover the Postgres database?
How can I make my database safe from a machine crash or power failure using fsync?
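For reference, the crash-safety machinery involved lives in a handful of postgresql.conf settings. A sketch of their usual values (note that fsync can only protect you if the disks actually honor flush requests, e.g. no lying write caches):
# postgresql.conf -- durability-related settings (PostgreSQL 9.4)
fsync = on                   # flush WAL to disk before acknowledging a commit
full_page_writes = on        # write whole pages to WAL after each checkpoint
synchronous_commit = on      # make commits wait for the WAL flush
wal_sync_method = fdatasync  # the usual default on Linux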
Output of this command:
sudo -u postgres /usr/pgsql-9.4/bin/pg_controldata /var/lib/pgsql/9.4/data/
pg_control version number: 942
Catalog version number: 201409291
Database system identifier: 6229523677101845380
Database cluster state: in production
pg_control last modified: Thu 05 May 2016 05:52:04 AM EST
Latest checkpoint location: 0/2B7A1518
Prior checkpoint location: 0/2B79D5C0
Latest checkpoint's REDO location: 0/2B7A1518
Latest checkpoint's REDO WAL file: 00000001000000000000002B
Latest checkpoint's TimeLineID: 1
Latest checkpoint's PrevTimeLineID: 1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID: 0/1141605
Latest checkpoint's NextOID: 78367
Latest checkpoint's NextMultiXactId: 1
Latest checkpoint's NextMultiOffset: 0
Latest checkpoint's oldestXID: 1800
Latest checkpoint's oldestXID's DB: 1
Latest checkpoint's oldestActiveXID: 0
Latest checkpoint's oldestMultiXid: 1
Latest checkpoint's oldestMulti's DB: 1
Time of latest checkpoint: Thu 05 May 2016 05:52:04 AM EST
Fake LSN counter for unlogged rels: 0/1
Minimum recovery ending location: 0/0
Min recovery ending loc's timeline: 0
Backup start location: 0/0
Backup end location: 0/0
End-of-backup record required: no
Current wal_level setting: minimal
Current wal_log_hints setting: off
Current max_connections setting: 100
Current max_worker_processes setting: 8
Current max_prepared_xacts setting: 0
Current max_locks_per_xact setting: 64
Maximum data alignment: 8
Database block size: 8192
Blocks per segment of large relation: 131072
WAL block size: 8192
Bytes per WAL segment: 16777216
Maximum length of identifiers: 64
Maximum columns in an index: 32
Maximum size of a TOAST chunk: 1996
Size of a large-object chunk: 2048
Date/time type storage: 64-bit integers
Float4 argument passing: by value
Float8 argument passing: by value
Data page checksum version: 0

Related

Asynchronous operations with Hangfire

I've set up a simple .NET Core 3.1 web application on Ubuntu 16.04 Server, with MySQL 8.0.25 and Hangfire 1.7.30 for scheduled activities.
I'm actually struggling to understand how scheduling activities should be made. In the current version of the software, I've created up to 8 asynchronous scheduled tasks managed by Hangfire, all of them structured as follows:
// One of the 8 recurring jobs registered with Hangfire; it only awaits a DB call.
public static async Task ScheduledTaskAsync_UpdateTransferList()
{
    await TaskScheduling.UpdateTransferList(SavedAppServices);
}
All these scheduled activities query the MySQL DB; that's why I've made them asynchronous.
One of these 8 services is quite frequent (Cron.Minutely()) and takes a few seconds to execute.
I'm wondering if I'm doing something wrong, because I see many consecutive messages like this in the app log:
dbug: Hangfire.Server.RecurringJobScheduler[0]
1000 recurring job(s) processed by scheduler.
sometimes for a total of 9,000-10,000 recurring jobs processed.
Furthermore, after moving from MySQL 5.7.3 to 8.0.25, I've noticed a sharp increase in MySQL binlogs (up to 6 GB generated in two days), all of it caused by multiple entries like this (generated within the same second):
# at 9683673
#220617 5:25:45 server id 1 end_log_pos 9683704 CRC32 0x08936f4a Xid = 25774476
COMMIT/*!*/;
# at 9683704
#220617 5:25:45 server id 1 end_log_pos 9683783 CRC32 0xdd8b5fe2 Anonymous_GTID last_committed=26277 sequence_number=26278 rbr_only=yes original_committed_timestamp=1655443545028260 immediate_commit_timestamp=1655443545028260 transaction_length=332
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
# original_commit_timestamp=1655443545028260 (2022-06-17 05:25:45.028260 UTC)
# immediate_commit_timestamp=1655443545028260 (2022-06-17 05:25:45.028260 UTC)
/*!80001 SET @@session.original_commit_timestamp=1655443545028260*//*!*/;
/*!80014 SET @@session.original_server_version=80025*//*!*/;
/*!80014 SET @@session.immediate_server_version=80025*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 9683783
#220617 5:25:45 server id 1 end_log_pos 9683862 CRC32 0x8b83c028 Query thread_id=110 exec_time=0 error_code=0
SET TIMESTAMP=1655443545/*!*/;
BEGIN
/*!*/;
# at 9683862
#220617 5:25:45 server id 1 end_log_pos 9683940 CRC32 0x29c7f191 Table_map: `Hangfire`.`Hangfire.DistributedLock` mapped to number 92
# at 9683940
#220617 5:25:45 server id 1 end_log_pos 9684005 CRC32 0x6f8c7b99 Delete_rows: table id 92 flags: STMT_END_F
Note that the Hangfire Dashboard reports no issues. I know I could simply deactivate binlog generation, but I would like to understand whether I'm doing something wrong with Hangfire/MySQL, and how I can make everything work without such a huge waste of disk space on binlog generation.
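One mitigation sketch, assuming MySQL 8.0 (where retention is governed by the binlog_expire_logs_seconds variable): cap how long binlogs are kept. This bounds disk usage, though it does not reduce the write volume produced by Hangfire's lock churn.
-- keep roughly two days of binlogs (MySQL 8.0)
SET GLOBAL binlog_expire_logs_seconds = 172800;
-- to persist across restarts, set the same value in my.cnf under [mysqld]:
--   binlog_expire_logs_seconds = 172800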

can't run '/etc/init.d/rcS': No such file or directory

I am trying to emulate a firmware image using QEMU. During boot, I get the following errors:
can't run '/etc/init.d/rcS': No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
.
.
.
This is the content of the inittab file:
# Startup the system
null::sysinit:/etc/init.d/rc.sysinit
# now run any rc scripts
::sysinit:/etc/init.d/rcS
# Put a getty on the serial port
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
# Stuff to do before rebooting
null::shutdown:/bin/umount -a -r
It is able to run rc.sysinit, but not rcS.
I have checked the permissions of rcS. Also, the filesystem is mounted as a read-only cramfs. Could this be causing an issue?
This is the command I am running:
QEMU_AUDIO_DRV=none qemu-system-arm -m 256M -M versatilepb \
    -kernel ~/linux-2.6.23/arch/arm/boot/zImage \
    -append "console=ttyAMA0,115200 root=/dev/ram rdinit=/sbin/init" \
    -initrd ~/tmpcramfs2 \
    -nographic
These are the boot messages obtained on running the command:
Linux version 2.6.23 (hsailer@SvanteArrhenius) (gcc version 4.0.2) #1 Thu May 27 09:31:10 EDT 2021
CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=00093177
Machine: ARM-Versatile PB
Memory policy: ECC disabled, Data cache writeback
CPU0: D VIVT write-through cache
CPU0: I cache: 4096 bytes, associativity 4, 32 byte lines, 32 sets
CPU0: D cache: 65536 bytes, associativity 4, 32 byte lines, 512 sets
Built 1 zonelists in Zone order. Total pages: 65024
Kernel command line: console=ttyAMA0,115200 root=/dev/ram rdinit=/sbin/init
PID hash table entries: 1024 (order: 10, 4096 bytes)
Console: colour dummy device 80x30
Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
Memory: 256MB = 256MB total
Memory: 249600KB available (2508K code, 227K data, 100K init)
Mount-cache hash table entries: 512
CPU: Testing write buffer coherency: ok
NET: Registered protocol family 16
NET: Registered protocol family 2
Time: timer3 clocksource has been installed.
IP route cache hash table entries: 2048 (order: 1, 8192 bytes)
TCP established hash table entries: 8192 (order: 4, 65536 bytes)
TCP bind hash table entries: 8192 (order: 3, 32768 bytes)
TCP: Hash tables configured (established 8192 bind 8192)
TCP reno registered
checking if image is initramfs...it isn't (bad gzip magic numbers); looks like an initrd
Freeing initrd memory: 7184K
NetWinder Floating Point Emulator V0.97 (double precision)
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
JFFS2 version 2.2. (NAND) © 2001-2006 Red Hat, Inc.
JFS: nTxBlock = 2007, nTxLock = 16063
io scheduler noop registered
io scheduler anticipatory registered (default)
io scheduler deadline registered
io scheduler cfq registered
CLCD: Versatile hardware, VGA display
Clock CLCDCLK: setting VCO reg params: S=1 R=99 V=98
Console: switching to colour frame buffer device 80x60
Serial: AMBA PL011 UART driver
dev:f1: ttyAMA0 at MMIO 0x101f1000 (irq = 12) is a AMBA/PL011
console [ttyAMA0] enabled
dev:f2: ttyAMA1 at MMIO 0x101f2000 (irq = 13) is a AMBA/PL011
dev:f3: ttyAMA2 at MMIO 0x101f3000 (irq = 14) is a AMBA/PL011
fpga:09: ttyAMA3 at MMIO 0x10009000 (irq = 38) is a AMBA/PL011
RAMDISK driver initialized: 16 RAM disks of 8192K size 1024 blocksize
smc91x.c: v1.1, sep 22 2004 by Nicolas Pitre <nico@cam.org>
eth0: SMC91C11xFD (rev 1) at d098e000 IRQ 25 [nowait]
eth0: Ethernet addr: 52:54:00:12:34:56
armflash.0: Found 1 x32 devices at 0x0 in 32-bit bank
Intel/Sharp Extended Query Table at 0x0031
Using buffer write method
RedBoot partition parsing not available
afs partition parsing not available
armflash: probe of armflash.0 failed with error -22
mice: PS/2 mouse device common for all mice
input: AT Raw Set 2 keyboard as /class/input/input0
TCP cubic registered
NET: Registered protocol family 1
NET: Registered protocol family 17
VFP support v0.3: implementor 41 architecture 1 part 10 variant 9 rev 0
input: ImExPS/2 Generic Explorer Mouse as /class/input/input1
RAMDISK: cramfs filesystem found at block 0
RAMDISK: Loading 7184KiB [1 disk] into ram disk... done.
VFS: Mounted root (cramfs filesystem) readonly.
Freeing init memory: 100K
can't run '/etc/init.d/rcS': No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
.
.
.
The errors about /dev/ttyS0 appear because your inittab specifies the wrong device name for the serial port of the (emulated) hardware you're running on. Your QEMU command specifies the 'versatilepb' board, whose serial devices are PL011s, which appear in /dev as /dev/ttyAMA0, /dev/ttyAMA1, etc. (/dev/ttyS0 is what the serial ports on an x86 PC appear as.) You need to fix that line of the inittab to refer to ttyAMA0 instead.
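For example, the getty line from your inittab would become:
# Put a getty on the serial port
ttyAMA0::respawn:/sbin/getty -L ttyAMA0 115200 vt100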
For the rcS error, I would suggest you start by double-checking all the things listed in all the responses to this older question.

Is it possible to check how many files a particular query opens in MySQL?

I have a large open-files limit in MySQL: I have set open_files_limit to 150000, but MySQL still uses almost 80% of it.
I also have low traffic, at most around 30 concurrent connections, and no query has more than 4 joins.
The files opened by the server are visible in the performance_schema.
See table performance_schema.file_instances.
http://dev.mysql.com/doc/refman/5.5/en/file-instances-table.html
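For example, to see which files the server has open and how often each has been opened:
select file_name, event_name, open_count
from performance_schema.file_instances
order by open_count desc
limit 20;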
As for tracing which query opens which file, it does not work that way, due to caching in the server itself (table cache, table definition cache).
MySQL shouldn't open that many files, unless you have set a ludicrously large value for the table_cache parameter (the default is 64, the maximum is 512K).
You can reduce the number of open files by issuing the FLUSH TABLES command.
Otherwise, the appropriate value of table_cache can be roughly estimated (in Linux) by running strace -c against all MySQLd threads. You get something like:
# strace -f -c -p $( pidof mysqld )
Process 13598 attached with 22 threads
[ ...pause while it gathers information... ]
^C
Process 13598 detached
...
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 58.82    0.040000          51       780           io_getevents
 29.41    0.020000         105       191        93 futex
 11.76    0.008000         103        78           select
  0.00    0.000000           0        72           stat
  0.00    0.000000           0        20           lstat
  0.00    0.000000           0        16           lseek
  0.00    0.000000           0        16           read
  0.00    0.000000           0         9         3 open
  0.00    0.000000           0         5           close
  0.00    0.000000           0         6           poll
...
------ ----------- ----------- --------- --------- ----------------
...and see whether there's a meaningful difference in the impact of the open() and close() calls; those are the calls that table_cache affects, and they determine how many files are open at any given point.
If the impact of open() is negligible, then by all means reduce table_cache. It is mostly needed on slow I/O subsystems, and there aren't many of those left around.
If you're running on Windows, you'll have to use ProcMon from SysInternals, or some similar tool.
Once you have table_cache at a manageable level, your query that now opens too many files will simply close and re-open many of those same files. You may notice an impact on performance, which in all likelihood will be negligible. Chances are that a smaller table cache might actually get you results faster, as fetching an item from a modern, fast I/O subsystem cache may well be faster than searching for it in a really large cache.
If you're into optimizing your server, you may want to look at this article too. The take-away is that, as caches go, larger is not always better (this also applies to indexing).
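As a sketch, checking and adjusting the cache looks like this (on MySQL 5.1.3 and later the variable is called table_open_cache; older servers call it table_cache):
show global status like 'Opened_tables';
show global variables like 'table_open_cache';
-- lower it and watch whether Opened_tables starts climbing quickly:
set global table_open_cache = 400;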
Inspecting a specific query on Linux
On Linux you can use strace (see above) and verify what files are opened and how:
$ sudo strace -f -p $( pidof mysqld ) 2>&1 | grep 'open("'
Meanwhile from a different terminal I run a query, and:
[pid 8894] open("./ecm/db.opt", O_RDONLY) = 39
[pid 8894] open("./ecm/prof2_people.frm", O_RDONLY) = 39
[pid 8894] open("./ecm/prof2_discip.frm", O_RDONLY) = 39
[pid 8894] open("./ecm/prof2_discip.ibd", O_RDONLY) = 19
[pid 8894] open("./ecm/prof2_discip.ibd", O_RDWR) = 19
[pid 8894] open("./ecm/prof2_people.ibd", O_RDONLY) = 20
[pid 8894] open("./ecm/prof2_people.ibd", O_RDWR) = 20
[pid 8894] open("/proc/sys/vm/overcommit_memory", O_RDONLY|O_CLOEXEC) = 39
...these are the files that the query used (be sure to run the query on a "cold-started" MySQL to prevent caching), and I can see that the highest file handle assigned was 39, so at no point were there more than 40 open files.
The same files can be checked from /proc/$PID/fd or from MySQL:
select * from performance_schema.file_instances where open_count > 1;
but the count from MySQL is slightly lower, since it does not take into account socket descriptors, log files, and temporary files.
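For example, a quick shell check of the descriptors the server holds at a given moment:
# count every file descriptor currently held by mysqld
sudo ls /proc/$(pidof mysqld)/fd | wc -l
# or list what each descriptor points to
sudo ls -l /proc/$(pidof mysqld)/fd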
This would only be possible by adjusting the source code and adding logging at that level.
Alternative: run a test using this scenario. You will have to set up an automated test to make it possible:
Log your queries;
Create a script which preloads your heap with a normal dataset (otherwise you are testing against empty memory), and take a snapshot of the number of open tables;
Run every query and take a snapshot of the open tables (see the sketch below). In retrospect, I think you could do this without restarting MySQL every time, so just run every query and record the results. Debugging is tedious work: not impossible, just really tedious.
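A minimal sketch of that snapshot step, assuming the logged queries sit one per line in a hypothetical queries.sql (credentials omitted):
# replay each query and record how the open-tables counter moves
while IFS= read -r q; do
  mysql -e "$q" > /dev/null
  mysql -N -e "show global status like 'Opened_tables'" >> opened_tables.log
done < queries.sql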
Personally, I would start differently:
Install Cacti and the Percona Cacti plugin
Record a week of normal workload
Then hunt down high-load queries (slow log > 0.1 second, run through a script to find repeating queries)
Monitor for another week
Then hunt down additional queries with a high repeat count: this is often inefficient code firing a large number of queries where fewer would do (such as retrieving the keys and then the values for each key, one by one: it happens a lot when programmers use an ORM)

Sphinx indexing fails and asks me to restart the server with query_cache_type=1 to enable it

My MySQL config my.ini has the default query_cache_type=0.
I have already set sql_query_pre = SET SESSION query_cache_type=OFF in sphinx.conf; I think it is not good to have the query cache on while indexing. But Sphinx is still asking me to turn the cache on...
Error detail (Windows 7 x64, Sphinx 2.1.7):
I:\sphinx\bin>I:\sphinx\bin\indexer.exe --all --config I:\sphinx\bin\sphinx.conf
Sphinx 2.1.7-id64-release (r4638)
Copyright (c) 2001-2014, Andrew Aksyonoff
Copyright (c) 2008-2014, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file 'I:\sphinx\bin\sphinx.conf'...
indexing index 'test1'...
ERROR: index 'test1': sql_query_pre[1]: Query cache is disabled; restart the server with query_cache_type=1 to enable it
(DSN=mysql://root:***@localhost:3306/test).
total 0 docs, 0 bytes
total 0.018 sec, 0 bytes/sec, 0.00 docs/sec
skipping non-plain index 'rt'...
total 0 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 0 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
The 'message' you are receiving is coming from MySQL, not from Sphinx. indexer just runs the commands as provided and reports/uses the results.
Basically, MySQL is telling you the query cache is already disabled; it's not enabled globally.
So trying to turn it off for just the (indexing) session fails, because it's not on. If it's not enabled in the first place, you can't disable it!
http://www.big.info/2013/04/error-code-1651-query-cache-is-disabled.html
It's telling you that you NEED to turn it on globally first before you are ABLE to turn it off.
Maybe MySQL could just silently fail to turn it off rather than raising an error, but that's a different story.
I had a case where I was seeing this error, and it was actually preventing the indexer --all command from generating indices. I went to the XAMPP Control Panel and clicked on the Config button for the MySQL module. This opened the file my.ini in Notepad. I added the following line to the [mysqld] section in the file:
query_cache_type = 1
Then I restarted the MySQL service. The value of query_cache_type was now displayed as ON, and the indexer --all command successfully generated indices.
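To confirm the change after the restart, something like:
show variables like 'query_cache_type';
-- expected: query_cache_type = ON
-- Sphinx's sql_query_pre can then legitimately turn it off per session:
SET SESSION query_cache_type = OFF;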

Is the mytop Perl monitor for MySQL no longer valid in terms of Queries/Questions since MySQLd 5.1.63?

We have Jeremy Zawodny's Perl mytop (version 2009-04-06) installed via apt-get on Debian Squeeze, with MySQLd version 5.1.63. Re-running "apt-get install mytop" indicates that no newer version of mytop is available.
This version of mytop seems outdated, as it systematically reports a very low number of queries executed by MySQLd. In fact, it uses this status query to get the total number of queries since uptime:
SHOW STATUS LIKE 'Questions';
This yields a misleading result on the newer mysqld. In effect, to get the total number of queries since uptime, the newer server reports the count as 'Queries' instead of 'Questions':
SHOW STATUS LIKE 'Queries';
You can see an enormous difference between the two variables by:
mysql> SHOW STATUS LIKE 'Que%';
+---------------+--------+
| Variable_name | Value  |
+---------------+--------+
| Queries       | 135903 |
| Questions     | 160    |
+---------------+--------+
2 rows in set (0.00 sec)
which gives both values, 'Queries' and 'Questions'.
Running the original mytop:
mytop -uJohnDoe2 -ppassword
Here is its output:
MySQL on localhost (5.1.63-0+squeeze1-log) up 0+01:42:35 [13:36:44]
Queries: 265.0 qps: 0 Slow: 0.0 Se/In/Up/De(%): 14760/00/00/00
qps now: 0 Slow qps: 0.0 Threads: 5 ( 1/ 5) 1500/00/00/00
Key Efficiency: 100.0% Bps in/out: 0.9/173.8 Now in/out: 8.3/ 1.5k
Id User Host/IP DB Time Cmd Query or State
-- ---- ------- -- ---- --- --------------
28019 root localhost 0 Query show full processlist
....
I copied mytop to mytop.pl, replaced the string "Questions" with "Queries" in the Perl source, and ran:
mytop.pl -uJohnDoe2 -ppassword
The modified mytop.pl gives more realistic monitoring:
MySQL on localhost (5.1.63-0+squeeze1-log) up 0+01:42:23 [13:36:32]
Queries: 136.1k qps: 23 Slow: 0.0 Se/In/Up/De(%): 28/00/00/00
qps now: 18 Slow qps: 0.0 Threads: 5 ( 1/ 5) 27/00/00/00
Key Efficiency: 100.0% Bps in/out: 0.1/ 18.3 Now in/out: 8.4/ 1.5k
Id User Host/IP DB Time Cmd Query or State
-- ---- ------- -- ---- --- --------------
30789 root localhost 0 Query show full processlist
....
Have you observed this problem on your system? That is, is the Perl monitor of MySQL now invalid in terms of Queries/Questions since MySQLd 5.1.63?
ADDED:
After reading Shlomi Noach's answer, I added this link to the modified Perl script file:
mytop.pl.
I have indeed noticed this change, and blogged about it: questions or queries?
Apparently it also came as a surprise to the developers of other monitoring tools (Innotop, MonYOG).
With regard to your case, you have two very simple options:
Switch to innotop instead
Modify the source code for mytop and replace Questions with Queries (a sketch follows below)
The change was made in 5.1.31; in the aforementioned post you can also read a comment by an Oracle employee.
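A minimal sketch of the second option, assuming mytop is installed as a plain Perl script on your PATH:
# copy the installed script and swap the status counter it samples
cp "$(command -v mytop)" mytop.pl
sed -i 's/Questions/Queries/g' mytop.pl
./mytop.pl -uJohnDoe2 -ppassword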