We are using ejabberd_16.01-0_amd64.deb and we want to set max number of users per room to 10000. According to doc: (https://docs.ejabberd.im/admin/configuration/#modmuc)
max_users: Number: This option defines at the service level, the
maximum number of users allowed per room. It can be lowered in each
room configuration but cannot be increased in individual room
configuration. The default value is 200.
On the other hand,
https://github.com/processone/ejabberd/blob/master/src/mod_muc_room.erl#L58
says, it could be also 5000.
We have tried 10000, but it didn't work (values lower than 200 did work, of course).
Can anyone please advise us what to do?
Ok, we tried to set max users per room to 5000 and that worked.
max_users: Number: This option defines at the service level, the
maximum number of users allowed per room. It can be lowered in each
room configuration but cannot be increased in individual room
configuration. The default value is 200.
It looks like I misunderstood what the doc says: the maximum number of users per room is set globally, at the service level. It can only be lowered per room; it cannot be increased above the global maximum.
Note: we would expect the server to log an error, or at least a warning, explaining why the value 10000 can't be set, but we couldn't find anything.
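For reference, a minimal sketch of the service-level setting, assuming the YAML configuration format used by ejabberd 16.x (the option name matches the documentation quoted above; the 5000 ceiling is what the mod_muc_room.erl source linked earlier suggests and what worked in practice):

```yaml
modules:
  mod_muc:
    host: "conference.@HOST@"
    ## Service-level cap on occupants per room. Values above 5000
    ## appear to be silently ignored, per the source link above.
    max_users: 5000
```

Individual rooms can then configure any value up to this cap, but not above it.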
I am using GCE in Asia-Southeast-b with 2 vCPUs and 10 GB of memory for my website.
I am trying to test the CDN and load balancer, so I was halfway through creating an instance group in the US, but it kept throwing an error no matter what I tried.
Instance Group : Exceeded limit 'QUOTA_FOR_INSTANCES' on resource 'us-instance-group-1'. Limit: 8.0
https://prnt.sc/tzyyrk
The documentation at https://cloud.google.com/compute/quotas led me to think it could be due to the zone I chose, so I tried multi-zone setups in different regions, and even single zones, but it wouldn't let me create the group no matter what I selected (I can't say I tried every combination, but almost all of them).
I chose the instance template with the lowest spec, N1-standard, with CentOS 7 and a 20 GB standard disk.
Under this project, I have the following four service accounts associated with it:
Compute Engine default service account, Google APIs Service Agent, Compute Engine Service Agent, and myself as Owner.
I went to IAM & Admin > Quotas, and everything is green-checked.
Is it because I am building this with the free $300 credit?
How do I check which zones are available, so I know where I should create the instance group?
What could be the reason? What did I do wrong?
Thank you
It seems to be the configuration you're setting for the Maximum number of instances.
For example, when you create an instance group, you set the Minimum number of instances and the Maximum number of instances. Even if you set the minimum to 1 instance and leave the default value for Maximum number of instances (which is 10), creation will always fail, because it checks the precondition that the Maximum number of instances never exceeds the quota for the region.
I reproduced this by setting Maximum number of instances to a value greater than my quota limit.
I suggest changing the value of Maximum number of instances to 3 and checking whether you can deploy the instance group.
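As a sketch, the same constraint applies when creating the group from the CLI; the group and template names below are hypothetical, and the upper bound of 3 is sized to stay under the quota limit of 8 from the error message:

```
# Create the managed instance group with a single instance.
gcloud compute instance-groups managed create us-instance-group-1 \
    --zone=us-central1-a \
    --template=my-n1-standard-template \
    --size=1

# Keep the autoscaler's upper bound at or below the regional quota.
gcloud compute instance-groups managed set-autoscaling us-instance-group-1 \
    --zone=us-central1-a \
    --min-num-replicas=1 \
    --max-num-replicas=3
```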
I would like to know how to set a maximum limit on message length in ejabberd. I want my users to send messages limited to 2000 characters.
I've searched a lot, but I have not found anything useful to solve this problem.
Thanks in advance.
The closest thing I can think of is this ejabberd_c2s listener option, which you probably already noticed:
max_stanza_size: Size: This option specifies an approximate maximum
size in bytes of XML stanzas. Approximate, because it is calculated
with the precision of one block of read data. For example
{max_stanza_size, 65536}. The default value is infinity. Recommended
values are 65536 for c2s connections and 131072 for s2s connections.
s2s max stanza size must always be much higher than the c2s limit. Change
this value with extreme care, as it can cause unwanted disconnects if
set too low.
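As a sketch, in YAML-based ejabberd configs the option goes on the c2s listener; the port and value below are only assumptions. Note that this limits the whole XML stanza in bytes (including tags and attributes), not the message body in characters, so it is a coarse cap rather than an exact 2000-character limit:

```yaml
listen:
  -
    port: 5222
    module: ejabberd_c2s
    ## Approximate cap on the full XML stanza, in bytes.
    max_stanza_size: 65536
```

Enforcing an exact per-message character limit would likely require server-side filtering logic (e.g. a custom module), since no stock option counts body characters.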
I have an item with vm.memory.size[used] as its key. This returns the memory used, which also includes the cache and the buffers.
I need to subtract vm.memory.size[cached] and vm.memory.size[buffers] from vm.memory.size[used] to get the value that I need.
How can I do this? I cannot find a way; this is what I tried lately, but it does not work.
If you want to calculate it in a separate item, you must already have used, cached, and buffers monitored as normal items. Once you have them, the calculated item formula would be last(vm.memory.size[used])-last(vm.memory.size[cached])-last(vm.memory.size[buffers]).
You can also calculate that directly in a trigger, removing the need for the calculated item.
And maybe even simpler than that - vm.memory.size[available] and vm.memory.size[pavailable] item keys can give you the (raw and percentage, respectively) amount of the available memory - already excluding cache & buffers - that you might be able to alert on directly.
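Putting the calculated-item approach together, the item definition could look like this (the key name vm.memory.real_used is an assumption for illustration; the formula uses the same keys as above):

```
Type:    Calculated
Key:     vm.memory.real_used
Formula: last(vm.memory.size[used])
         - last(vm.memory.size[cached])
         - last(vm.memory.size[buffers])
```

The three referenced items must exist and be collecting data on the same host, or the calculated item will go unsupported.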
We have a website with many users. To manage users who transacted on a given day, we use Redis and stored a list of binary numbers as the values. For instance, if our system had five users, and user 2 and 5 transacted on 2nd January, our key for 2nd January will look like '01001'. This also helps us to determine unique users over a given period and new users using simple bit operations. However, with growing number of users, we are running out of memory to store all these keys.
Is there any alternative database that we can use to store the data in a similar manner? If not, how should we store the data to get similar performance?
Redis' memory usage can be affected by many parameters, so I would also try looking at INFO ALL for starters.
With every user represented by a bit, 400K daily visitors should take at least 50KB per value, but due to sparsity in the bitmap index it could be much larger. I'd also suspect that since newer users are more active, the majority of each bitmap's "active" flags are towards its end, causing it to reach close to its maximal size (i.e. the total number of users). So the question you should be trying to answer is how to store these 400K visits efficiently without sacrificing the functionality you're using. That depends on what you're doing with the recorded visits.
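The 50KB figure comes from one bit per user; a minimal sketch of the arithmetic (pure illustration, no Redis required):

```python
def bitmap_bytes(num_users: int) -> int:
    """Bytes needed for a dense bitmap with one bit per user."""
    return (num_users + 7) // 8

# 400K users -> 50,000 bytes, i.e. roughly 50 KB per daily key.
print(bitmap_bytes(400_000))
```

Because Redis allocates a dense string up to the highest set bit, a single active high-numbered user forces the key toward the full total-user size, which is why sparse bitmaps waste memory here.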
For example, if you're only interested in total counts, you could consider using the HyperLogLog data structure to count your transacting users with a low error rate and small memory/resources footprint. On the other hand, if you're trying to track individual users, perhaps keep a per user bitmap mapped to the days since signing up with your site.
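For the total-counts case, the HyperLogLog commands are built into Redis; the key names below are hypothetical:

```
PFADD   visitors:2020-01-02 user:2 user:5
PFCOUNT visitors:2020-01-02
PFMERGE visitors:week visitors:2020-01-02 visitors:2020-01-03
PFCOUNT visitors:week
```

Each HyperLogLog key caps out at roughly 12 KB regardless of cardinality, at the cost of an approximate (~0.81% standard error) count and the inability to ask whether a specific user is in the set.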
Furthermore, there are bitmap compression techniques you could consider implementing in your application code, in Lua scripting, or by hacking Redis. The best answer depends, of course, on what you're trying to do.
In my production server I was getting the below exception
weblogic.socket.MaxMessageSizeExceededException: Incoming message of size: '10000080' bytes exceeds the configured maximum of: '10000000' bytes for protocol: 't3'.
To resolve this I increased the value of -Dweblogic.MaxMessageSize.
My question is: what should the optimum size of this flag be? I can't just keep increasing
it every time the issue recurs. Is there another flag that will help me set this one
to a particular value so that the application runs without any issues?
There is no global optimum size. They probably chose 10000000 as the default because they assume that will cover most people's maximum. Realistically, it will be limited by whatever your producer sends as a maximum. Is there a limit on what the producer can send?
In general, you want to avoid large objects, but you can't always.
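For reference, the flag is passed as a JVM argument when starting the server; the 20 MB value below is only an example sized comfortably above the failing 10000080-byte message, not a recommendation:

```
-Dweblogic.MaxMessageSize=20000000
```

If you can establish an upper bound on the producer side, set this flag slightly above that bound rather than repeatedly bumping it after each failure.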