Modify text file with math expression - language-agnostic

I have a file called POSCAR that looks like this.
Pt-FCC
3.975
3.975000 0.000000 0.000000
0.000000 3.975000 0.000000
0.000000 0.000000 3.975000
I need to modify the 3x3 matrix several times so that it takes the following shape and values, where d ranges from 0.005 to 0.025 in increments of 0.005.
Pt-FCC
3.975
1+d 0.000000 0.000000
0.000000 1-d 0.000000
0.000000 0.000000 1/(1-d^2)
For example, for d=0.005:
Pt-FCC
3.975
1.005000 0.000000 0.000000
0.000000 0.995000 0.000000
0.000000 0.000000 1.000025
I cannot assign a variable inside the file and evaluate it with expr and echo, because the simulation program does not understand that. Instead I am trying to loop over all the values of d, copy the original POSCAR file, and then use perl, sed, or awk to modify the matrix while keeping the spacing constant.
for i in $(seq 0.005 0.005 0.025)
do
cp POSCAR "POSCAR_pure_shear/POSCAR_pure_$i"
perl -pi .................. "POSCAR_pure_shear/POSCAR_pure_$i"
done
I understand this is a long question and I appreciate any help that might point me in the right direction. I am still a beginner!

You can do something like this:
awk -v d=0.005 'FNR==3 {$1=sprintf("%0.6f", 1+d)}
FNR==4 {$2=sprintf("%0.6f", 1-d)}
FNR==5 {$3=sprintf("%0.6f", 1/(1-d^2))}
1' POSCAR
Prints:
Pt-FCC
3.975
1.005000 0.000000 0.000000
0.000000 0.995000 0.000000
0.000000 0.000000 1.000025
Plugging that into your loop:
for i in $(seq 0.005 0.005 0.025)
do
awk -v d="$i" 'FNR==3 {$1=sprintf("%0.6f", 1+d)}
FNR==4 {$2=sprintf("%0.6f", 1-d)}
FNR==5 {$3=sprintf("%0.6f", 1/(1-d^2))}
1' POSCAR > "POSCAR_pure_shear/POSCAR_pure_$i"
done
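Since you mentioned perl as an option, here is an equivalent sketch using a perl in-place edit instead of awk. This is not the only way to do it; the directory name POSCAR_pure_shear is taken from your loop, and the sample POSCAR is recreated first purely so the snippet runs stand-alone:

```shell
# Recreate the sample POSCAR so the sketch is self-contained
# (in real use, your existing POSCAR would be the input).
printf 'Pt-FCC\n3.975\n3.975000 0.000000 0.000000\n0.000000 3.975000 0.000000\n0.000000 0.000000 3.975000\n' > POSCAR
mkdir -p POSCAR_pure_shear

for i in $(LC_ALL=C seq 0.005 0.005 0.025)   # LC_ALL=C guards against comma decimals
do
  cp POSCAR "POSCAR_pure_shear/POSCAR_pure_$i"
  # Edit the copy in place; $. is perl's current line number.
  d=$i perl -i -pe '
    my $d = $ENV{d};
    s{^\S+}{sprintf("%.6f", 1 + $d)}e              if $. == 3;
    s{^(\S+\s+)\S+}{$1 . sprintf("%.6f", 1 - $d)}e if $. == 4;
    s{\S+$}{sprintf("%.6f", 1 / (1 - $d**2))}e     if $. == 5;
  ' "POSCAR_pure_shear/POSCAR_pure_$i"
done
```

Passing d through the environment avoids quoting headaches with perl's -i switch; the three substitutions replace the diagonal entries on lines 3, 4, and 5 without touching the surrounding spacing.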

How to list the DDL statements (CREATE, ALTER, DROP) processed in a period of time when a general log was recorded?
I was able to use pt-query-digest (a free command-line tool, part of the Percona Toolkit) this way:
pt-query-digest --type genlog --no-report --output slowlog \
--filter '$event->{fingerprint} =~ /^create|alter|drop/' \
/usr/local/var/mysql/bkarwin.log
Output:
# Time: 2021-11-18T16:16:53.452721Z
# Thread_id: 24
# Query_time: 0.000000 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
create table foo (i int);
# Time: 2021-11-18T16:16:54.805065Z
# Thread_id: 24
# Query_time: 0.000000 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
drop table foo;
Those two statements are just test DDL statements I ran after enabling the general query log on my localhost MySQL instance.
The pt-query-digest tool also has options for restricting a date range, --since and --until. But I got an error when I tried it. It seemed to have a problem with the date format.
Pipeline process 5 (since) caused an error: Argument "2021-11-18T16:16:53.452721Z" isn't numeric in numeric ge (>=) at /usr/local/bin/pt-query-digest line 13660, <$fh> line 2.
Perhaps you can just skip the date range options and eyeball the output yourself.
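As a workaround until the --since/--until parsing is sorted out, you could filter the converted output yourself: ISO-8601 timestamps compare correctly as plain strings, so a small awk script over the # Time: lines is enough. A sketch (the sample input mirrors the output above, and the range bounds are hypothetical):

```shell
# Recreate the sample converted output so the sketch is self-contained.
cat > converted-slow.log <<'EOF'
# Time: 2021-11-18T16:16:53.452721Z
# Thread_id: 24
# Query_time: 0.000000 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
create table foo (i int);
# Time: 2021-11-18T16:16:54.805065Z
# Thread_id: 24
# Query_time: 0.000000 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
drop table foo;
EOF

# Keep only entries whose "# Time:" stamp falls inside the range;
# ISO-8601 timestamps sort lexicographically, so string comparison works.
awk -v since="2021-11-18T16:16:54" -v until="2021-11-18T23:59:59" '
  /^# Time: / { keep = ($3 >= since && $3 <= until) }
  keep
' converted-slow.log > filtered.log
```

Each entry starts with a # Time: line, so the keep flag set there carries through the rest of that entry's lines.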

mysql-slow.log analysis with goaccess Nothing valid to process

To visually analyze mysql-slow.log, I ran goaccess -f mysql-slow.log in my terminal and got this error:
Nothing valid to process. Verify your date/time/log format.
My mysql-slow.log looks like this:
# User@Host: towfiqpiash[towfiqpiash] @ ip-xx-xx-xxx-xxx.ec2.internal [xx.xx.xxx.xxx] Id: 396
# Query_time: 0.000180 Lock_time: 0.000056 Rows_sent: 38 Rows_examined: 39
SET timestamp=1482221404;
SELECT AQ.answer_id FROM `answer_quality_log` as AQ WHERE AQ.active=1;
I need the appropriate date-format, time-format and log-format for the goaccess configuration. Any help appreciated.

MySQL Slow Query Lock_time = years?

I have seen a lot of slow query logs, but never one like this:
/usr/sbin/mysqld, Version: 5.1.46-log (SUSE MySQL RPM). started with:
Tcp port: 3306 Unix socket: /var/run/mysql/mysql.sock
Time Id Command Argument
# Time: 160627 9:10:05
# User@Host: sysop[sysop] @ [127.0.0.1]
# Query_time: 3.768728 Lock_time: 0.034402 Rows_sent: 734 Rows_examined: 734
use asterisk;
SET timestamp=1467033005;
select PostID, lead_id, list_id from vdlist_temp where Posted is null;
# Time: 160627 10:35:11
# User@Host: sysop[sysop] @ [192.168.0.248]
# Query_time: 35.563521 Lock_time: 0.000054 Rows_sent: 1222017 Rows_examined: 2444034
SET timestamp=1467038111;
SELECT `vicidial_list`.`lead_id`, `vicidial_list`.`source_id` FROM `vicidial_list` ORDER BY `vicidial_list`.`source_id`;
# User@Host: sysop[sysop] @ [127.0.0.1]
# Query_time: 0.000095 Lock_time: 18446744073699.406250 Rows_sent: 2 Rows_examined: 1
SET timestamp=1467038111;
call spUpdate_VDList_from_temp(0, 0, 1324903);
# Time: 160627 10:35:12
# User@Host: sysop[sysop] @ [127.0.0.1]
# Query_time: 0.000055 Lock_time: 18446744073699.359375 Rows_sent: 0 Rows_examined: 0
SET timestamp=1467038112;
call spMoveXDrop(10376163);
# Time: 160627 11:26:14
# User@Host: sysop[sysop] @ [127.0.0.1]
# Query_time: 0.000057 Lock_time: 18446744073697.218750 Rows_sent: 3 Rows_examined: 0
SET timestamp=1467041174;
call spUpdate_VDList_from_temp(10795520, 616062301, 1955758);
This seems to show I have queries waiting over 500,000 years for a lock. (I wrote the stored procedure and I'm not quite that old!) Somehow I don't think that's right. These are all MyISAM tables. (Not my choice.) I did a mysqldump and restore of the database, rebooted the server, and I'm still seeing lock times like this.
Can anyone give me a clue where to look for the problem? (Server times are all good.)
EDIT: This is MySQL version 5.1.46-log, which comes with the open-source project VICIdial. It seems clear that the Lock_time is a bug. The problem is that I'm looking at the slow query log to track down user complaints of slow web server response, and I was hoping someone would know what triggers this bug so I can locate the actual problem. As you can see from the log, most slow queries have sane Lock_times. Both stored procedures and PHP-generated queries produce the insane Lock_time. The only thing I see in common is that they all select or update from the table vicidial_list. I dumped, dropped and recreated that table to no avail.
Sometimes the clock seems to run backwards. This has been a problem for more than a decade. It seems to be harmless. Ignore it as a bogus value.
Note that if a -1 is stored in a BIGINT UNSIGNED, you get a value very similar to the 18446744073699.406250 that you are seeing.
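The arithmetic behind that note checks out: store a small negative lock time (measured in microseconds) in an unsigned 64-bit column, print it as seconds, and you reproduce the magnitude in the log above exactly. A quick sketch in Python (the -10.145-second figure is simply the hypothetical value that yields the logged number):

```python
WIDTH = 2 ** 64  # range of a BIGINT UNSIGNED

# -1 wraps around to the column's maximum value:
print(-1 % WIDTH)  # 18446744073709551615

# A lock time of roughly -10.145 seconds, in microseconds,
# wraps the same way and is then reported as seconds:
negative_lock_us = -10_145_366
wrapped = negative_lock_us % WIDTH
print(wrapped / 1_000_000)  # ~18446744073699.406, matching the log
```

So a Lock_time of 18446744073699.406250 corresponds to a clock that ran backwards by about ten seconds, consistent with the "clock seems to run backwards" explanation.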
Those queries are not part of stock VICIdial, so I might suggest some query optimization for those custom queries you're trying to run. Also, in more recent versions of our VICIbox ISO installer we are using MariaDB, and a much newer version as well, which has a lot of bug fixes and optimizations compared to the older version of MySQL you are using.

mysql: slow queries log: sleep(60)?

I am trying to understand what has been causing my LAMP server to time out a few times a day for the past four days, after running fine for 417 days.
Looking at the running processes with top points to mysql, so I looked in the slow query log and found only this.
Can you make sense of it for me? The line select sleep(60) particularly worries me, especially since it is not in my website's codebase.
# Time: 150125 17:16:43
# User@Host: brujobs_live[brujobs_live] @ localhost []
# Query_time: 12.912479 Lock_time: 0.000000 Rows_sent: 1 Rows_examined: 0
use brujobs_live;
SET timestamp=1422202603;
select sleep(60);
# Time: 150125 17:20:17
# User@Host: root[root] @ localhost []
# Query_time: 60.000274 Lock_time: 0.000000 Rows_sent: 1 Rows_examined: 0
SET timestamp=1422202817;
select sleep(60);
It seems that your event_scheduler is ON.
That would at least explain "sleep(60)".
Read these; they may help you:
http://dev.mysql.com/doc/refman/5.1/en/events-configuration.html
http://mysqlopt.blogspot.co.at/2012/03/mysql-event-scheduler.html

Cannot see processlist anymore

I am importing a large CSV file (30 million rows) into MySQL, and I had another terminal open to watch the process list. I used to be able to see the row count in the process list, but now, whenever I enter SHOW PROCESSLIST, the command hangs. I had 20 million records imported. Do I have to start all over again?
iostat:
[user@gggggg ~]$ iostat
Linux 2.6.18-308.4.1.el5
avg-cpu: %user %nice %system %iowait %steal %idle
5.02 0.00 0.16 0.87 0.00 93.95
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 2.15 5.31 55.58 8644514 90425752
sda1 0.00 0.00 0.00 2138 5592
sda2 2.15 5.31 55.57 8640178 90419984
sda3 0.00 0.00 0.00 1582 176
top - 14:18:55 up 18 days, 20:00, 2 users, load average: 2.02, 2.09, 2.06
Tasks: 106 total, 3 running, 103 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.0%sy, 0.0%ni, 49.9%id, 49.8%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 8174532k total, 7656780k used, 517752k free, 105904k buffers
Swap: 4257216k total, 88k used, 4257128k free, 6958020k cached
First, check the filesize from the OS
If the table is called mydb.mytable and it is MyISAM, do this:
cd /var/lib/mysql/mydb
watch ls -l mytable.*
If the filesize keeps growing, you are fine
Don't forget to check for diskspace
df -h
If you ran out of disk space while loading a MyISAM table, mysqld just sits there. Why? According to the MySQL 5.0 Certification Study Guide, pages 408-409, section 29.2, bullet point 11:
If you run out of disk space while adding rows to a MyISAM table, no
error occurs. The server suspends the operation until space becomes
available, and then completes the operation.
Therefore, if the data partition is out of disk space, you must free up some space so mysqld can continue the LOAD DATA INFILE.
If everything seems frozen, you may have to kill mysqld as follows:
IDTOKILL=`ps -ef | grep mysqld_safe | grep -v grep | awk '{print $2}'`
kill -9 ${IDTOKILL}
IDTOKILL=`ps -ef | grep mysqld | grep -v grep | awk '{print $2}'`
kill -9 ${IDTOKILL}
Then, check your diskspace