RAMDISK Partition not getting registered by the kernel - partitioning

Hi all!
So I've been sleuthing over this problem for the past two days.
I have a ramdisk, and I tried partitioning it using both parted and fdisk. I also tried to register it using partprobe and kpartx.
lsblk is displaying the partition I made for cow_ram0 as cow_ram0p1. However, the /dev/cow_ram0p1 file doesn't exist.
Has anyone experienced this before? If so, how'd you solve it?
It doesn't even have to be ramdisk specific. Has anyone ever had the /dev/ file not appear upon partitioning?
Now, for some details (don't want to get downvoted to oblivion):
The ramdisk is called cow_ram0.
So I tried this first:
fdisk /dev/cow_ram0
I hit n for new partition.
Then, I just hit enter twice to get the first sector number at 2048 and the last one at whatever the last sector is.
Then I hit w to write all these actions.
Then I called lsblk. I don't see a partition for the ramdisk.
So I call partprobe and then kpartx -u /dev/cow_ram0 (calling both because partprobe alone didn't work, and since I'm just investigating an issue and this won't go to master, the redundancy won't hurt).
Now the lsblk output actually contains cow_ram0p1.
So I try to mount it, and I get a /dev/cow_ram0p1 file doesn't exist error.
I repeat the process above with parted -a opt /dev/cow_ram0 mkpart primary ext4 0% 100% instead of the fdisk steps above. Same result.
Has anyone experienced this before?

OK, the problem was that I had forgotten to set a flag (max_part) in the kernel module when inserting it. Hopefully you, the reader, made the same mistake, and this saves you the sleuthing.
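For reference, this is roughly what that looks like with the stock brd ramdisk module; the module name, sizes, and partition count below are only placeholders for whatever module you are actually inserting:
# Reinsert the ramdisk module with partition support enabled
sudo rmmod brd
sudo modprobe brd rd_nr=1 rd_size=1048576 max_part=16   # 1 GiB ramdisk, up to 16 partitions per device
# For an out-of-tree module the parameter is passed to insmod instead, e.g.:
# sudo insmod cow_ram.ko max_part=16
With max_part set, the kernel creates the /dev/<disk>p<N> nodes itself as soon as the partition table is reread, so partprobe alone should be enough afterwards.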

Zabbix Agent 3.4.9 Active Monitoring Log file, Not supported: too many parameters

I'm trying to monitor the log file: /var/log/neo4j/debug.log
I'm looking for the text: Application threads blocked for ######ms
I have devised a regular expression for this as: Application threads blocked for (\d+)ms
We want to skip old info: add skip as the mode.
I want to pull out the number of ms so that the trigger will alert on blockages > 150 ms: \1 must be set as the output parameter.
I constructed the key as:
log[/var/log/neo4j/debug.log,Application threads blocked for (\d+)ms,,,skip,\1]
in accordance with
log[/path/to/file/file_name,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
Type of Information is: Log
Update interval: 30s
History storage period: 90d
Timestamps appear in the log file as: 2018-10-03 13:29:20.460+0000
My timestamp appears as: yyyypMMpddphhpmmpss
I have tried a bunch of different things over the past week trying to get it to stop showing a "Too Many Parameters" error in the GUI without success. I'm completely lost at this point. We have 49 other items working correctly (all others are passive). Active checks are enabled in zabbix_agentd.conf.
I know this is an old thread but it took me a while to solve this problem, so I'd like to share and hope it helps...
According to the official Zabbix documentation, the parameter usage for the log (and logrt) keys should be:
logrt[file_regexp,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
So, if we were to use only the "skip" parameter, the item key should look like:
logrt[MyLogFile.log,,,,skip,,]
Nevertheless, it triggers the error "too many parameters".
In fact, to solve this issue I configured this key in my environment with only one comma after the parameter, like this:
logrt["MyLogFile.log","MyFilter",,,skip,]
That's it... hope it helps someone else.
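Applied back to the original question, my guess is that the same fix would make the log[] key look something like this (untested; I'm only quoting the path and the regexp and keeping the \1 output parameter from the question):
log["/var/log/neo4j/debug.log","Application threads blocked for (\d+)ms",,,skip,\1]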

Pascal results window

I have written a programme which merges two 1D arrays containing names. I print the list of arr1, arr2 and arr3.
I am using Lazarus Free Pascal v1.0.14. I was wondering if anyone knows how to pause the results in the DOS-like window, because the list is so long that I can only see the last few names in the returned results. The rest go by too fast to read.
I know I can save the results to a file, and I also use the Delay command, but I would like to know if there is a way to somehow break up the results or slow them down, or even edit the output console.
I appreciate your help.
This isn't really a programming question, because your console application should output the values without pause. Otherwise your program would become useless if you ever wanted it to run as part of another pipeline in an automated fashion.
Instead you need a tool that you wrap around your program to paginate the output if, and when, you so desire. Such tools are known as terminal pagers and the basic one that ships with Windows is called more. You execute your program and pipe the output to the more program. Like this:
C:\SomeDir>MyProject.exe <input_args> | more
You can change the code of your loop in the following way. Say you print the results with the following loop:
for i := 0 to 250 do
  WriteLn(ArrUnited[i]);
you can replace it with:
for i := 0 to 250 do
begin
  WriteLn(ArrUnited[i]);
  if (i mod 25) = 24 then // wait for the user to press Enter every 25 rows
    ReadLn;
end;
For the future, please post an MCVE in your questions; otherwise everyone has to guess what your code is.

How to test whether log compaction is working or not in Kafka?

I have made changes in the server.properties file in Kafka 0.8.1.1, i.e. added log.cleaner.enable=true, and also enabled cleanup.policy=compact while creating the topic.
Now when I am testing it, I pushed the following messages to the topic as (Key, Message) pairs:
Offset: 1 - (123, abc);
Offset: 2 - (234, def);
Offset: 3 - (345, ghi);
Offset: 4 - (123, changed)
Now I pushed the 4th message with the same key as an earlier input, but changed the message. Here log compaction should come into the picture. Using the Kafka tool, I can see all 4 offsets in the topic. How can I know whether log compaction is working or not? Should the earlier message be deleted, or is log compaction working fine since the new message has been pushed?
Does it have anything to do with the log.retention.hours, topic.log.retention.hours, or log.retention.size configurations? What is the role of these configs in log compaction?
P.S. - I have thoroughly gone through the Apache Documentation, but still it is not clear.
Even though this question is a few months old, I just came across it while doing research for my own question. I had created a minimal example for seeing how compaction works with Java; maybe it is helpful for you too:
https://gist.github.com/anonymous/f78184eaeec3ee82b15182aec24a432a
Furthermore, consulting the documentation, I used the following configuration on a topic level for compaction to kick in as quickly as possible:
min.cleanable.dirty.ratio=0.01
cleanup.policy=compact
segment.ms=100
delete.retention.ms=100
When run, this class shows that compaction works - there is only ever one message with the same key on the topic.
With the appropriate settings, this would be reproducible on the command line as well.
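Roughly, something like the following should do it, assuming a local broker and the stock console scripts that ship with Kafka (topic name and addresses are placeholders):
# Create a compacted topic with aggressive settings so compaction kicks in quickly
kafka-topics.sh --create --zookeeper localhost:2181 --topic compaction-test \
  --partitions 1 --replication-factor 1 \
  --config cleanup.policy=compact --config min.cleanable.dirty.ratio=0.01 \
  --config segment.ms=100 --config delete.retention.ms=100

# Produce keyed messages; key and value are separated by ':'
kafka-console-producer.sh --broker-list localhost:9092 --topic compaction-test \
  --property parse.key=true --property key.separator=:

# Read the topic from the beginning, printing keys; after compaction only the
# latest value for each key should come back
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic compaction-test \
  --from-beginning --property print.key=true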
Actually, log compaction is visible only when the number of logs reaches a very high count, e.g. 1 million. So, if you have that much data, it's good. Otherwise, using configuration changes, you can reduce this limit to, say, 100 messages, and then you can see that out of the messages with the same keys, only the latest message will be there and the previous one will be deleted. It is better to use log compaction if you have a full snapshot of your data every time; otherwise you may lose the previous logs with the same associated key, which might be useful.
In order to check a topic's properties from the CLI, you can do it using the kafka-topics command:
https://grokbase.com/t/kafka/users/14aev0snbd/command-line-tool-for-topic-metadata
It is also a good idea to take a look at log.roll.hours, which by default is 168 hours. In simple words: even if you have a not-so-active topic and you are not able to fill the max segment size (by default 1 GB for normal topics and 100 MB for the offsets topic) in a week, you will have a closed segment with a size below log.segment.bytes. This segment can be compacted on the next turn.
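If the topic already exists, the per-topic overrides mentioned above can be applied with the kafka-configs tool, roughly like this (the topic name is a placeholder):
kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics \
  --entity-name my-compacted-topic \
  --add-config cleanup.policy=compact,segment.ms=100,min.cleanable.dirty.ratio=0.01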
You can do it with the kafka-topics CLI. I'm running it from Docker (confluentinc/cp-enterprise-kafka:6.0.0).
$ docker-compose exec kafka kafka-topics --zookeeper zookeeper:32181 --describe --topic count-colors-output
Topic: count-colors-output PartitionCount: 1 ReplicationFactor: 1 Configs: cleanup.policy=compact,segment.ms=100,min.cleanable.dirty.ratio=0.01,delete.retention.ms=100
Topic: count-colors-output Partition: 0 Leader: 1 Replicas: 1 Isr: 1
But don't get confused if you don't see anything in the Configs field; that happens if default values were used. So, unless you see cleanup.policy=compact in the output, the topic is not compacted.

I deleted my HKCU\Software\Microsoft\Windows NT\CurrentVersion\ProductName

I know I made a mistake ...
I deleted my HKCU\Software\Microsoft\Windows NT\CurrentVersion\ProductName registry key ...
How can I retrieve the information that was in it?
I did search before posting ...
Try using System Restore; even if you don't have a specific checkpoint, you can still roll back a little bit.
If that doesn't work, then you might as well either
A) save your stuff onto an HDD and reformat, or
B) try to get a new registry key so that it will be re-added (there are a few sites that'll give out keys).
I hope this helps you!
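If System Restore isn't an option, one more thing that may be worth trying: on most installs the same ProductName value also exists under HKLM, so you may be able to read it from there and re-create the deleted value by hand. The edition string below is only an example of what the value might contain:
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ProductName
reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion" /v ProductName /t REG_SZ /d "Windows 10 Pro"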

Mutexes in Perl & MySQL

I am trying to ensure that only one instance of a Perl script can run at one time. The script performs some kind of db_operation depending on the parameters passed in. The script does not necessarily live in one place or on one machine, and possibly runs on multiple OSs, though the file system is automounted across the various machines.
My first approach was to just create a .lock file and do the following:
use warnings;
use strict;
use Fcntl qw(:DEFAULT :flock);
...
open(FILE,">>",$lockFilePath);
flock(FILE,LOCK_EX) or die("Could not lock ");
do_something();
flock(FILE,LOCK_UN) or die("Could not unlock ");
close(FILE);
but I keep getting the following errors:
Bareword "LOCK_EX" not allowed while "strict subs" in use
Bareword "LOCK_UN" not allowed while "strict subs" in use
So I am looking for another way to approach the problem. Locking the DB itself is also not practical, since the db could be used by other scripts (which is acceptable); I am just trying to prevent this script from running more than once. And locking a table for writes is not practical, since my script is not aware of which table the operation is taking place on; it just launches another Perl script supplied as a parameter.
I am thinking of adding a table to the db with just one value, and using that as a mutex, but I don't know how practical/reliable that is (a lot of red flags go up in my head). I have a DBI connection to a db that this script uses.
Thanks
The Bareword error you are getting sounds like you've done something in that "..." to confuse Perl with regard to the imported Fcntl constants. There's nothing wrong with using those constants like that. You might try something like LOCK_UN() to see what error that gets you.
If you are using MySQL, you can use the GET_LOCK() and RELEASE_LOCK() mechanism. It works reasonably well for cases like this:
SELECT GET_LOCK("script_lock", 10);
(the second argument is the number of seconds to wait for the lock)
and then when you are finished:
SELECT RELEASE_LOCK("script_lock");
See http://dev.mysql.com/doc/refman/4.1/en/miscellaneous-functions.html for details.
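From Perl, with the DBI handle the question already mentions, this could look roughly as follows; the connection details, lock name, and timeout are placeholders:
use strict;
use warnings;
use DBI;

# Placeholder connection; reuse the $dbh the script already has.
my $dbh = DBI->connect('DBI:mysql:database=mydb;host=localhost',
                       'user', 'password', { RaiseError => 1 });

# Wait up to 10 seconds for a named, server-side advisory lock.
my ($got_lock) = $dbh->selectrow_array(q{SELECT GET_LOCK('my_script_lock', 10)});
die "Another instance is already running\n" unless $got_lock;

# ... do the actual db_operation here ...

# Release the lock explicitly; it is also released if the connection drops.
$dbh->selectrow_array(q{SELECT RELEASE_LOCK('my_script_lock')});
$dbh->disconnect;
Because the lock lives in the MySQL server, it works across machines and doesn't depend on the automounted filesystem at all.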
You may want to avoid the file locking; from what I remember it's notoriously unreliable on non-local filesystems. Your better bet is to just use the existence of the file itself as the indicator that the script is already running (similar to a UNIX PID file). Granted, this won't be 100% reliable, but it should work reasonably reliably with very low overhead, provided the script isn't getting invoked incessantly.
If you need better reliability than that, using the database for the mutex is a good solution.
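A minimal sketch of that PID-file idea, assuming a placeholder path on the shared mount (no flock involved, and with the usual caveat that a crashed run leaves a stale file behind):
use strict;
use warnings;
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

my $pidfile = '/path/to/shared/myscript.pid';   # placeholder path
my $owns_pidfile = 0;

# O_CREAT|O_EXCL makes "create only if it does not already exist" a single atomic
# step on a local filesystem; on NFS it is only mostly atomic, hence the caveat above.
sysopen(my $fh, $pidfile, O_WRONLY | O_CREAT | O_EXCL)
    or die "Another instance appears to be running (or a stale $pidfile is left over)\n";
$owns_pidfile = 1;
print {$fh} "$$\n";
close $fh;

# Remove the file on exit, but only if this process created it.
END { unlink $pidfile if $owns_pidfile }

# ... do the actual work here ...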