How to set maximum message characters limit in ejabberd - configuration

I would like to know how to set a maximum limit on message length in ejabberd. I want my users to send messages limited to 2000 characters.
I've searched a lot, but I haven't found anything useful to solve this problem.
Thanks in advance.

The closest thing I can think of is the max_stanza_size option of the ejabberd_c2s listener, which you probably already noticed:
max_stanza_size: Size: This option specifies an approximate maximum
size in bytes of XML stanzas. Approximate, because it is calculated
with the precision of one block of read data. For example
{max_stanza_size, 65536}. The default value is infinity. Recommended
values are 65536 for c2s connections and 131072 for s2s connections.
The s2s max stanza size must always be much higher than the c2s limit. Change
this value with extreme care as it can cause unwanted disconnect if
this value with extreme care as it can cause unwanted disconnect if
set too low.
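As a sketch, in a YAML-based ejabberd.yml the option goes on the c2s listener (listener layout varies by ejabberd version; older releases used the Erlang ejabberd.cfg syntax with the {max_stanza_size, 65536} tuple quoted above):

```yaml
listen:
  -
    port: 5222
    module: ejabberd_c2s
    max_stanza_size: 65536
```

Note that, per the quoted doc, this caps the whole XML stanza in bytes (approximately), not the message body in characters, so a 2000-character limit can only be approximated this way.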

Related

What's the best way to read in an entire LOB using ODBC?

Reading in an entire LOB whose size you don't know beforehand (without a max allocation + copy) should be a fairly common problem, but finding good documentation and/or examples on the "right" way to do this has proved utterly maddening for me.
I wrestled with SQLBindCol but couldn't see any good way to make it work. SQLDescribeCol and SQLColAttribute return column metadata that seemed to be a default or an upper bound on the column size and not the current LOB's actual size. In the end, I settled on using the following:
1) Put any / all LOB columns as the highest numbered columns in your SELECT statement
2) SQLPrepare the statement
3) SQLBindCol any earlier non-LOB columns that you want
4) SQLExecute the statement
5) SQLFetch a result row
6) SQLGetData on your LOB column with a buffer of size 0 just to query its actual size
7) Allocate a buffer just big enough to hold your LOB
8) SQLGetData again on your LOB column with your correctly sized allocated buffer this time
9) Repeat Steps 6-8 for each later LOB column
10) Repeat Steps 5-9 for any more rows in your result set
11) SQLCloseCursor when you are done with your result set
This seems to work for me, but also seems rather involved.
Are the calls to SQLGetData going back to the server or just processing the results already sent to the client?
Are there any gotchas where the server and/or client will refuse to process very large objects this way (e.g. - some size threshold is exceeded so they generate an error instead)?
Most importantly, is there a better way to do this?
Thanks!
I see several possible improvements.
If you need to allocate a buffer, then you should do it once for all the records and columns. So you could use the technique suggested by @RickJames, improved with MAX, like this:
SELECT MAX(LENGTH(blob1)) AS max1, MAX(LENGTH(blob2)) AS max2, ...
You could use max1 and max2 to allocate the buffers up front, or maybe allocate only one buffer of the largest size for all columns.
The length returned by the query above might be too large for your application, so you can decide at runtime how large the buffer should be. In any case, SQLGetData is designed to be called multiple times for each column: calling it again with the same column number fetches the next chunk. The count of available bytes is written where StrLen_or_IndPtr (the last argument) points, and that count decreases after each call by the number of bytes fetched.
And certainly there will be roundtrips to the server for each call because the purpose of all this is to prevent the driver from fetching more than the application can handle.
The trick of passing NULL as the buffer pointer in order to get the length is not allowed in this case; see SQLGetData in Microsoft's docs.
However, you could allocate a minimal buffer, say 8 bytes, and pass it along with its length. The function will return the count of bytes written (7 in our case, because it appends a null character) and will put the count of remaining bytes at StrLen_or_IndPtr. But you probably won't need this if you allocate the buffer as explained above.
Note: The LOBs need to be at the end of the select list and must be fetched in that order precisely.
SQLGetData
SQLGetData retrieves data from a row that has already been fetched. For example, once you have SQLFetch'ed the first row of your result set, SQLGetData gives you that row's column data. It is useful when you don't know in advance whether you can SQLBindCol the result.
But how this is handled depends on your driver and is not described in the standard. Since a forward-only cursor cannot go backward, the already-fetched result may still be held in client memory.
Large object query
The server or the ODBC driver may refuse to process very large objects, but any such limits are implementation-specific and are not described in the ODBC standard.
To avoid a max-allocation, doing an extra copy, and to be efficient:
Getting the size first is not a bad approach -- it takes virtually no extra time to do
SELECT LENGTH(your_blob) FROM ...
Then do the allocation and actually fetch the blob.
If there are multiple BLOB columns, grab all the lengths in a single pass:
SELECT LENGTH(blob1), LENGTH(blob2), ... FROM ...
In MySQL, the length of a BLOB or TEXT is readily available in front of the bytes. But, even if it must read the column to get the length, think of that as merely priming the cache. That is, the overall time is not hurt much in either case.

What is the max number of users per room on ejabberd?

We are using ejabberd_16.01-0_amd64.deb and we want to set the max number of users per room to 10000. According to the docs (https://docs.ejabberd.im/admin/configuration/#modmuc):
max_users: Number: This option defines at the service level, the
maximum number of users allowed per room. It can be lowered in each
room configuration but cannot be increased in individual room
configuration. The default value is 200.
On the other hand,
https://github.com/processone/ejabberd/blob/master/src/mod_muc_room.erl#L58
suggests it could also be 5000.
We have tried 10000, but it didn't work (of course, values lower than 200 did work).
Can anyone please advise us what to do?
Ok, we tried to set max users per room to 5000 and that worked.
It looks like I misunderstood what the doc says: the max-users limit is set globally and can only be lowered in individual room configuration, never raised above the global maximum.
Note: we would have expected the server to log an error, or at least a warning, about why the value 10000 can't be set, but we couldn't find anything.
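A sketch of the working setting in ejabberd.yml (ejabberd has used YAML configuration since 13.10, so this applies to 16.01; all surrounding options omitted):

```yaml
modules:
  mod_muc:
    max_users: 5000
```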

Best Data Type for Storing AWS ARNs in MySQL?

What's the best datatype to store an ARN in MySQL? I'm guessing a VARCHAR with a large character limit would be best. Is there a limit to how long ARNs can be? How long of a VARCHAR should I have?
I have found no documentation on the overall maximum length of an ARN. More often than not it's service-specific, with maximum lengths for each element of the ARN presumably combining, within each service, to define the maximum, as this forum answer suggests.
A quick search indicates that you'll see a maximum of 2048 here or 256 here or the oddly-sized non-power-of-2 111 here or... You get the idea. It varies by service.
The longest ARNs I have encountered have been from S3, where an ARN can include a key prefix, so those could theoretically exceed 1,024 bytes, though I've not encountered any that actually approached that length.
Bearing in mind that ARNs, or at least many of their elements, are case-sensitive, I tend to go with VARBINARY with a length suited to the expected size range for the service in question. I would expect many applications to be quite comfortable somewhere below 255, but that's an architectural decision specific to your application and the AWS services involved.
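As a sketch of that choice (the table and column names are hypothetical, and 2048 is simply the largest per-element limit mentioned above, not an official overall maximum):

```sql
CREATE TABLE aws_resource (
  -- VARBINARY compares byte-by-byte, i.e. case-sensitively,
  -- matching how many ARN elements behave
  arn VARBINARY(2048) NOT NULL,
  PRIMARY KEY (arn)
);
```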

DBCC SHOWCONTIG : Row Count size

The DBCC SHOWCONTIG command reports the minimum, maximum, and average size of a row.
Just to make sure: the unit of measurement is bytes, right?
Yes, the unit of measurement is bytes.
I use it, but I haven't found any official information confirming that. I'll keep searching and will post a link if I find anything interesting.
EDIT:
Bytes is also the unit used here:
Row size overhead

Optimum Size of Message in weblogic

In my production server I was getting the below exception
weblogic.socket.MaxMessageSizeExceededException: Incoming message of size: '10000080' bytes exceeds the configured maximum of: '10000000' bytes for protocol: 't3'.
To resolve this I increased the value of -Dweblogic.MaxMessageSize.
My question is: what should be the optimum size for this flag? I can't just keep increasing it to resolve this issue in the future. Is there another flag that will help me set this one to a particular value so that the application runs without any issues?
There is no global optimum size. The default is probably 10000000 because that's assumed to cover most people's maximum. Realistically, it will be limited by whatever your producer sends as a maximum. Is there a limit on what the producer can send?
In general, you want to avoid large messages, but you can't always.
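Whichever ceiling you settle on, for reference the flag is passed as a JVM start argument, typically appended to the server start options (the exact file, e.g. setDomainEnv.sh, varies by installation; 20000000 is an illustrative value, not a recommendation):

```shell
# Append to the WebLogic server start options; the value is in bytes.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.MaxMessageSize=20000000"
```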