Good day,
Using SGE 8.6.18, I get correct output with qconf -sq q#hostname.fqdn...com
I want to change the qtype of the queue with the -mq option, as in qconf -mq q#hostname.fqdn...com, but I get this error:
Cluster queue entry "q#hostname.fqdn...com" does not exist
I checked the output of qconf -sm and I am listed as a manager of the farm.
Thanks for any guidance.
The 'does not exist' error simply means there is no individual queue for each machine. I found out there is only one queue, named 'q', so I must use qconf -mq q.
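If only a single attribute such as qtype needs changing, a non-interactive form along these lines may also work; the syntax below is recalled from qconf's -mattr option rather than taken from this thread, so check man qconf before relying on it:
qconf -mattr queue qtype "BATCH INTERACTIVE" q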
I'm trying to monitor the log file: /var/log/neo4j/debug.log
I'm looking for the text: Application threads blocked for ######ms
I have devised a regular expression for this as: Application threads blocked for (\d+)ms
We want to skip old info, so skip is set as the mode.
I want to pull out the number of milliseconds so that the trigger can alert on blockages > 150 ms, so \1 is set as the output parameter.
I constructed the key as:
log[/var/log/neo4j/debug.log,Application threads blocked for (\d+)ms,,,skip,\1]
in accordance with
log[/path/to/file/file_name,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
Type of Information is: Log
Update interval: 30s
History storage period: 90d
Timestamps appear in the log file as: 2018-10-03 13:29:20.460+0000
My timestamp appears as: yyyypMMpddphhpmmpss
Over the past week I have tried a bunch of different things to get it to stop showing a "Too Many Parameters" error in the GUI, without success. I'm completely lost at this point. We have 49 other items working correctly (all others are passive), and active checks are enabled in zabbix_agentd.conf.
I know this is an old thread but it took me a while to solve this problem, so I'd like to share and hope it helps...
According to the official Zabbix documentation, the parameter usage for the log (and logrt) keys should be:
logrt[file_regexp,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
So, if we wanted to use only the "skip" parameter, the item key would look like:
logrt[MyLogFile.log,,,,skip,,]
Nevertheless, it triggers the error "too many parameters".
In fact, to solve this issue I configured this key in my environment with only one comma after the parameter, like this:
logrt["MyLogFile.log","MyFilter",,,skip,]
That's it... hope it helps someone else.
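Applied to the item from the original question, a quoted version of the key might look like the line below; this is only a sketch following the same quoting pattern, and it assumes the agent version supports the <output> parameter:
log["/var/log/neo4j/debug.log","Application threads blocked for (\d+)ms",,,skip,\1]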
I have a test plan with 12 thread groups; each one is one test scenario. I want to use unique login credentials for each thread group, so I've created a CSV file, added a CSV Data Set Config element to each thread group, and selected "All Threads" in "Sharing mode". Whenever I execute the test plan (all thread groups concurrently), the thread groups do not take the variable rows sequentially. Based on the post JMeter test plan with different parameter for each thread, I expected that the 1st thread group in the test plan would get the 1st row of variables in the CSV file.
But it is not happening and I am unable to understand the pattern of variable allocation. Please help me resolve my issue.
My CSV file looks like below:
userName,password,message
userone,sample123,message1
usertwo,sample123,message2
.
.
so on...
Refer below for the configuration of the CSV Data Set Config element:
Thanks!
Threads and thread groups are different things. Choosing "All Threads" in "Sharing mode" just means that all threads in the same thread group will share the CSV file; thread groups are always independent.
You have 2 simple options:
Use one thread group and control what users are doing with controllers. For example, a Throughput Controller lets you control how many threads run one or another scenario within the same thread group.
Split your CSV so that each thread group has its own CSV file.
And many more complicated options, for example:
Use the __CSVRead or __StringFromFile function, which reads one line at a time. That way you can assign each thread group its own file or range of lines to read, rather than reading the entire file (see the sketch after this list).
If your usernames and passwords are predictable (e.g. user1, user2, etc), you could use a counter and a range for each thread group.
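As a rough sketch of the __CSVRead approach (the file name credentials_tg1.csv is just an example, not from the question), the login sampler in thread group 1 could use:
Username: ${__CSVRead(credentials_tg1.csv,0)}
Password: ${__CSVRead(credentials_tg1.csv,1)}${__CSVRead(credentials_tg1.csv,next)}
Each thread group points at its own file, so the groups no longer compete for the same rows.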
I have made changes in the server.properties file in Kafka 0.8.1.1, i.e. added log.cleaner.enable=true, and also enabled cleanup.policy=compact while creating the topic.
Now, to test it, I pushed the following messages to the topic with the following (Key, Message) pairs:
Offset: 1 - (123, abc);
Offset: 2 - (234, def);
Offset: 3 - (345, ghi);
Offset: 4 - (123, changed)
The 4th message was pushed with the same key as an earlier input but with a changed message, so log compaction should come into the picture. Yet, using Kafka Tool, I can still see all 4 offsets in the topic. How can I know whether log compaction is working or not? Should the earlier message be deleted, or is log compaction working fine as long as the new message has been pushed?
Does it have anything to do with the log.retention.hours, topic.log.retention.hours, or log.retention.size configurations? What is the role of these configs in log compaction?
P.S. - I have thoroughly gone through the Apache Documentation, but still it is not clear.
Even though this question is a few months old, I just came across it while doing research for my own question. I had created a minimal Java example to see how compaction works; maybe it is helpful for you too:
https://gist.github.com/anonymous/f78184eaeec3ee82b15182aec24a432a
Furthermore, consulting the documentation, I used the following configuration on a topic level for compaction to kick in as quickly as possible:
min.cleanable.dirty.ratio=0.01
cleanup.policy=compact
segment.ms=100
delete.retention.ms=100
When run, this class shows that compaction works - there is only ever one message with the same key on the topic.
With the appropriate settings, this should be reproducible on the command line.
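For example, a command-line sketch could look like the following; the topic name, ZooKeeper/broker addresses, and exact script names are placeholders that may differ per distribution:
$ kafka-topics.sh --create --zookeeper localhost:2181 --topic compacted-test --partitions 1 --replication-factor 1 --config cleanup.policy=compact --config min.cleanable.dirty.ratio=0.01 --config segment.ms=100 --config delete.retention.ms=100
$ kafka-console-producer.sh --broker-list localhost:9092 --topic compacted-test --property parse.key=true --property key.separator=:
$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic compacted-test --from-beginning --property print.key=true
Producing several values for the same key (e.g. 123:abc, then 123:changed) and re-reading the topic after the segments roll should show only the latest value per key.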
Actually, log compaction is visible only when the number of logs reaches a very high count, e.g. 1 million. So, if you have that much data, it's good. Otherwise, using configuration changes, you can reduce this limit to, say, 100 messages, and then you can see that of the messages with the same keys, only the latest message will remain; the previous one will be deleted. It is better to use log compaction if you have a full snapshot of your data every time, otherwise you may lose the previous logs with the same associated key, which might be useful.
To check a topic's properties from the CLI, you can use the kafka-topics command:
https://grokbase.com/t/kafka/users/14aev0snbd/command-line-tool-for-topic-metadata
It is also worth taking a look at log.roll.hours, which by default is 168 hours. In simple words: even if your topic is not very active and you are not able to fill the max segment size (by default 1G for normal topics and 100M for the offsets topic) within a week, you will still get a closed segment with a size below log.segment.bytes. This segment can be compacted on the next turn.
You can do it with kafka-topics CLI.
I'm running it from Docker (confluentinc/cp-enterprise-kafka:6.0.0).
$ docker-compose exec kafka kafka-topics --zookeeper zookeeper:32181 --describe --topic count-colors-output
Topic: count-colors-output PartitionCount: 1 ReplicationFactor: 1 Configs: cleanup.policy=compact,segment.ms=100,min.cleanable.dirty.ratio=0.01,delete.retention.ms=100
Topic: count-colors-output Partition: 0 Leader: 1 Replicas: 1 Isr: 1
But don't get confused if you don't see anything in the Configs field; that happens when default values were used. So, unless you see cleanup.policy=compact in the output, the topic is not compacted.
I'm making a tool with SQLAlchemy that copies entries from one database to another. I want to add a "dry run" option, so that instead of actually committing, it would just print the number of entries that would be committed:
session.add(foo)
session.add(bar)
if dry_run:
    print session.number_of_items_to_commit  # <-- should print "2"
else:
    session.commit()
How do I get the number of items that are about to be committed? I didn't see any appropriate method in the Session class.
You could probably use len(session.new) for your task:
The set of all instances marked as ‘new’ within this Session.
If you also need to track modified objects, use session.dirty.
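For example, a minimal sketch of the dry-run branch using these attributes (foo and bar are the question's own placeholders) might be:
session.add(foo)
session.add(bar)
if dry_run:
    # pending INSERTs, plus modified and deleted objects tracked by the session
    pending = len(session.new) + len(session.dirty) + len(session.deleted)
    print("%d items would be committed" % pending)
    session.rollback()
else:
    session.commit()
Note that session.dirty is an approximation: it lists objects whose attributes were modified, which does not always translate into an actual UPDATE at flush time.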
These are my first steps in Erlang, so sorry for this newbie question :) I'm spawning a new Erlang process for every Redis request, which is not what I want (I hit "Too many processes" at 32k Erlang processes). How can I throttle the number of processes to, e.g., a maximum of 16?
-module(queue_manager).
-export([add_ids/0, add_id/2]).
add_ids() ->
    {ok, Client} = eredis:start_link(),
    do_spawn(Client, lists:seq(1,100000)).

do_spawn(Client, [H|T]) ->
    Pid = spawn(?MODULE, add_id, [Client, H]),
    do_spawn(Client, T);
do_spawn(_, []) -> none.

add_id(C, Id) ->
    {ok, _} = eredis:q(C, ["SADD", "todo_queue", Id]).
Try using the Erlang pg2 module. It allows you to easily create process groups and provides an API to get the 'closest' (or a random) PID in the group.
Here is an example of a process group for the eredis client:
-module(redis_pg).
-export([create/1,
         add_connections/1,
         connection/0,
         connections/0,
         q/1]).

create(Count) ->
    % create process group using the module name as the reference
    pg2:create(?MODULE),
    add_connections(Count).

% recursive helper for adding +Count+ connections
add_connections(Count) when Count > 0 ->
    ok = add_connection(),
    add_connections(Count - 1);
add_connections(_Count) ->
    ok.

add_connection() ->
    % start redis client connection
    {ok, RedisPid} = eredis:start_link(),
    % join the redis connection PID to the process group
    pg2:join(?MODULE, RedisPid).

connection() ->
    % get a random redis connection PID
    pg2:get_closest_pid(?MODULE).

connections() ->
    % get all redis connection PIDs in the group
    pg2:get_members(?MODULE).

q(Argv) ->
    % execute redis command +Argv+ using random connection
    eredis:q(connection(), Argv).
Here is an example of the above module in action:
1> redis_pg:create(16).
ok
2> redis_pg:connection().
<0.68.0>
3> redis_pg:connection().
<0.69.0>
4> redis_pg:connections().
[<0.53.0>,<0.56.0>,<0.57.0>,<0.58.0>,<0.59.0>,<0.60.0>,
<0.61.0>,<0.62.0>,<0.63.0>,<0.64.0>,<0.65.0>,<0.66.0>,
<0.67.0>,<0.68.0>,<0.69.0>,<0.70.0>]
5> redis_pg:q(["PING"]).
{ok,<<"PONG">>}
You could use a connection pool, e.g., eredis_pool. This is a similar question which might be interesting for you.
You can use a supervisor to launch each new process (for your example it seems that you should use a simple_one_for_one strategy):
supervisor:start_child(SupRef, ChildSpec) -> startchild_ret().
You can then access the process count using the function
supervisor:count_children(SupRef) -> PropListOfCounts.
The result is a proplist of the form
[{specs,N1},{active,N2},{supervisors,N3},{workers,N4}] (the order is not guaranteed!)
If you want more information about active processes, you can also use
supervisor:which_children(SupRef) -> [{Id, Child, Type, Modules}], but this is not recommended when a supervisor manages a "large" number of children.
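For instance, a sketch of gating new children on that count could look like the following; the 16-worker limit and the helper name are made up for illustration, not part of the supervisor API:
% refuse to start another worker once 16 are active
maybe_start_child(SupRef, Args) ->
    Counts = supervisor:count_children(SupRef),
    case proplists:get_value(active, Counts) of
        Active when Active >= 16 ->
            {error, too_many_workers};
        _ ->
            supervisor:start_child(SupRef, Args)
    end.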
You are basically "on your own" when you implement limits. There are certain tools which will help you, but I think the general question "how do I avoid spawning too many processes?" still holds. The trick is to keep track of the process count somewhere.
An Erlang-idiomatic way would be to have a process which contains a counter. Whenever you want to spawn a new process, you ask it if you are allowed to do so by registering a need for tokens against it. You then wait for the counting process to respond back to you.
The counting process is then a nice modular guy maintaining a limit for you.
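A rough sketch of such a counting process follows; the module and message names are invented for illustration, and it is deliberately minimal (no monitoring of workers that crash before releasing their token):
-module(throttle).
-export([start/1, acquire/1, release/1, loop/2]).

% start a counter process that allows at most Max concurrent workers
start(Max) ->
    spawn(?MODULE, loop, [Max, 0]).

% block until the counter grants a token
acquire(Counter) ->
    Counter ! {acquire, self()},
    receive granted -> ok end.

% give the token back when the worker is done
release(Counter) ->
    Counter ! release,
    ok.

loop(Max, InUse) when InUse < Max ->
    receive
        {acquire, From} ->
            From ! granted,
            loop(Max, InUse + 1);
        release ->
            loop(Max, InUse - 1)
    end;
loop(Max, InUse) ->
    % at the limit: only accept releases; pending acquires wait in the mailbox
    receive
        release ->
            loop(Max, InUse - 1)
    end.
In the question's code this would mean calling throttle:acquire(Counter) before each spawn in do_spawn/2 and throttle:release(Counter) at the end of add_id/2.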