On my production server I was getting the exception below:
weblogic.socket.MaxMessageSizeExceededException: Incoming message of size: '10000080' bytes exceeds the configured maximum of: '10000000' bytes for protocol: 't3'.
To resolve this I increased the value of -Dweblogic.MaxMessageSize.
My question is: what is the optimum value for this flag? I cannot just keep increasing
it every time this issue recurs. Is there another setting that would let me fix this flag
at a particular value and still have the application run without issues?
There is no global optimum size. The default is presumably 10000000 bytes because that covers most people's maximum. Realistically, the right value is bounded by the largest message your producer will send. Is there a limit on what the producer can send?
In general you want to avoid large objects, but you can't always.
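If you do pin the limit, here is a rough sketch of what that looks like (20000000 is an arbitrary illustrative value, and JAVA_OPTIONS is assumed to be the variable your startWebLogic.sh reads):

JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.MaxMessageSize=20000000"

The property is read per JVM, so set it on the server and on any t3 clients; otherwise the side with the smaller limit will still reject the message.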
Devs!
I have a very common but tricky problem with the sqflite database. I'm getting a large amount of data from the server, which is why we use sqflite to save the data locally.
But, as mentioned, the amount of data is large, so when we retrieve it from the database we get the following errors:
W/CursorWindow(15261): Window is full: requested allocation 1310926 bytes, free space 904042 bytes, window size 2097152 bytes
E/SQLiteQuery(15261): exception: Row too big to fit into CursorWindow required Pos=0, totalRows=1; query: SELECT * FROM my_package
While investigating, I found that we are retrieving more than 1 MB of data from the table in a single query, and because of the CursorWindow size limit we are facing this issue.
So my question is: how do we increase this limit in sqflite?
I believe there is no way to increase the CursorWindow size limit, as mentioned here: issue raised in sqflite, with a comment saying the limit cannot be changed.
Feel free to try any of the suggested workarounds in this StackOverflow comment: various ways to circumvent the issue in the question.
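One common workaround is to read an oversized value in slices so that no single query result outgrows the window. A sketch, assuming my_package has an integer id key and its large column is named payload (both names hypothetical):

SELECT id, LENGTH(payload) FROM my_package;                          -- see how big each row is
SELECT SUBSTR(payload, 1, 500000)      FROM my_package WHERE id = 1; -- first slice
SELECT SUBSTR(payload, 500001, 500000) FROM my_package WHERE id = 1; -- next slice, and so on

Reassemble the slices in Dart after reading them; each individual query now stays well under the window size.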
I would like to know how to set a maximum limit on message length in ejabberd. I want my users to send messages limited to 2000 characters.
I've searched a lot but have not found anything useful to solve this problem.
Thanks in advance.
The closest thing I can think of is this ejabberd_c2s listener option, which you have probably already noticed:
max_stanza_size: Size: This option specifies an approximate maximum
size in bytes of XML stanzas. Approximate, because it is calculated
with the precision of one block of read data. For example
{max_stanza_size, 65536}. The default value is infinity. Recommended
values are 65536 for c2s connections and 131072 for s2s connections.
The s2s max stanza size must always be much higher than the c2s limit. Change
this value with extreme care, as it can cause unwanted disconnects if
set too low.
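For what it's worth, a minimal sketch in the old-style ejabberd.cfg syntax that the quote above uses (newer releases accept the same max_stanza_size option in the listen section of ejabberd.yml):

{listen, [
  {5222, ejabberd_c2s, [
    {access, c2s},
    {max_stanza_size, 65536}
  ]}
]}.

Keep in mind this caps the whole XML stanza in bytes, not the message body in characters, so a 2000-character rule can only be approximated this way; an exact per-message character limit would need a custom module or packet-filtering hook.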
Reading in an entire LOB whose size you don't know beforehand (without a max allocation + copy) should be a fairly common problem, but finding good documentation and/or examples on the "right" way to do this has proved utterly maddening for me.
I wrestled with SQLBindCol but couldn't see any good way to make it work. SQLDescribeCol and SQLColAttribute return column metadata that seemed to be a default or an upper bound on the column size and not the current LOB's actual size. In the end, I settled on using the following:
1) Put any / all LOB columns as the highest numbered columns in your SELECT statement
2) SQLPrepare the statement
3) SQLBindCol any earlier non-LOB columns that you want
4) SQLExecute the statement
5) SQLFetch a result row
6) SQLGetData on your LOB column with a buffer of size 0 just to query its actual size
7) Allocate a buffer just big enough to hold your LOB
8) SQLGetData again on your LOB column with your correctly sized allocated buffer this time
9) Repeat Steps 6-8 for each later LOB column
10) Repeat Steps 5-9 for any more rows in your result set
11) SQLCloseCursor when you are done with your result set
This seems to work for me, but also seems rather involved.
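In code, the core of steps 5-8 looks roughly like this (a sketch assuming <sql.h>, <sqlext.h>, and <stdlib.h>; hstmt is the executed statement handle from steps 2-4, the LOB is column 2 read as binary, and error handling is trimmed):

SQLRETURN rc;
while (SQL_SUCCEEDED(rc = SQLFetch(hstmt))) {            /* step 5 */
    SQLCHAR dummy[1];
    SQLLEN  lobLen = 0;
    /* step 6: zero-sized read; the driver reports the LOB's total
       byte count through the last argument */
    SQLGetData(hstmt, 2, SQL_C_BINARY, dummy, 0, &lobLen);
    if (lobLen != SQL_NULL_DATA && lobLen != SQL_NO_TOTAL) {
        unsigned char *buf = malloc((size_t)lobLen);     /* step 7 */
        /* step 8: this call resumes where the probe stopped and
           returns the whole value into the right-sized buffer */
        SQLGetData(hstmt, 2, SQL_C_BINARY, buf, lobLen, &lobLen);
        /* ... use buf ... */
        free(buf);
    }
}                                                        /* steps 9-10 */
SQLCloseCursor(hstmt);                                   /* step 11 */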
Are the calls to SQLGetData going back to the server or just processing the results already sent to the client?
Are there any gotchas where the server and/or client will refuse to process very large objects this way (e.g. - some size threshold is exceeded so they generate an error instead)?
Most importantly, is there a better way to do this?
Thanks!
I see several possible improvements.
If you need to allocate a buffer, you should do it once for all the records and columns. To that end, you could use the technique suggested by @RickJames, improved with a MAX, like this:
SELECT MAX(LENGTH(blob1)) AS max1, MAX(LENGTH(blob2)) AS max2, ...
You could use max1 and max2 to allocate the buffers up front, or maybe just one buffer of the largest size for all columns.
The buffer length returned at step 1 might be too large for your application, and you can decide at runtime how large a buffer to use instead. In any case, SQLGetData is designed to be called multiple times for each column: just call it again with the same column number and it will fetch the next chunk. The count of available bytes is stored where StrLen_or_IndPtr (the last argument) points, and it decreases after each call by the number of bytes fetched, as sketched below.
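A sketch of that chunked loop (hypothetical names; assumes hstmt is positioned on a fetched row, the LOB is column 2 read as binary, and error handling is mostly omitted):

SQLCHAR   chunk[8192];
SQLLEN    ind;
SQLRETURN rc;

while ((rc = SQLGetData(hstmt, 2, SQL_C_BINARY, chunk, sizeof(chunk), &ind)) != SQL_NO_DATA) {
    if (!SQL_SUCCEEDED(rc) || ind == SQL_NULL_DATA) break;
    /* ind holds the bytes still available before this call, or
       SQL_NO_TOTAL if the driver cannot tell; it shrinks every pass */
    SQLLEN got = (ind == SQL_NO_TOTAL || ind > (SQLLEN)sizeof(chunk))
                 ? (SQLLEN)sizeof(chunk) : ind;
    /* ... append got bytes from chunk to the application's output ... */
    if (rc == SQL_SUCCESS) break;   /* SQL_SUCCESS_WITH_INFO means more is coming */
}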
And there will certainly be round trips to the server for each call, because the whole purpose of this is to prevent the driver from fetching more than the application can handle.
The trick of passing NULL as the buffer pointer in order to get the length is prohibited in this case; see SQLGetData in Microsoft's docs.
However, you could allocate a minimal buffer, say 8 bytes, and pass it along with its length. The function will write 7 bytes of data in our case (it adds a null terminator) and will store the count of remaining bytes where StrLen_or_IndPtr points. But you probably won't need this if you allocate the buffer as explained above.
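In other words, a probe like this (same assumptions as above, but reading character data):

SQLCHAR probe[8];
SQLLEN  remaining = 0;

SQLRETURN rc = SQLGetData(hstmt, 2, SQL_C_CHAR, probe, sizeof(probe), &remaining);
/* on SQL_SUCCESS_WITH_INFO, probe holds 7 characters plus the null
   terminator, and remaining reports the bytes that were still unread
   before this call, telling you how much is left */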
Note: The LOBs need to be at the end of the select list and must be fetched in that order precisely.
SQLGetData
SQLGetData retrieves data from a row that has already been fetched. For example, if you have SQLFetched the first row of your table, SQLGetData will hand you back that first row's columns. It is used when you don't know in advance whether you can SQLBindCol the result.
But exactly how it is handled depends on your driver and is not described in the standard. With a typical forward-only SQL cursor, the cursor cannot go backward, so the fetched result may still be sitting in memory on the client.
Large object query
The server may refuse to process a large object, depending on the server's own limits and your ODBC driver's limits; this is not covered by the ODBC standard.
To avoid a maximum-size allocation, avoid an extra copy, and stay efficient:
Getting the size first is not a bad approach -- it takes virtually no extra time to do
SELECT LENGTH(your_blob) FROM ...
Then do the allocation and actually fetch the blob.
If there are multiple BLOB columns, grab all the lengths in a single pass:
SELECT LENGTH(blob1), LENGTH(blob2), ... FROM ...
In MySQL, the length of a BLOB or TEXT is readily available in front of the bytes. But, even if it must read the column to get the length, think of that as merely priming the cache. That is, the overall time is not hurt much in either case.
We are reading information from a cookie and storing the cookie's value in a SQL Server database. Currently we are using varchar(max) as the data type; however, it feels as if we could get away with a smaller datatype.
My question is: what is the ideal datatype and size in SQL Server 2008 for storing a cookie value, considering the client can utilize the maximum allowed size of a cookie value?
I'd base the decision on the answer to the following questions:
What is the absolute maximum size (in bytes) that a cookie can be?
No, really, think it through. What might some bozo developer out there saddle you with in a week, a year, three years?
Might they ever contain unicode characters? Binary data?
Is it acceptable to ever store less than all the data in the cookie?
If we choose to make the maximum size stored less than the maximum size possible, what data do we lose? Truncate the first or last N characters? Search for and dump specific contents (and you may still be over max storable size)?
The lack of any true control over the size of data you are required to store can be a killer.
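For illustration only (hypothetical table and column names, and assuming you settle on the 4096-byte per-cookie minimum that RFC 6265 asks user agents to support as your hard cap):

CREATE TABLE dbo.CookieCapture (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    CookieValue VARCHAR(4096) NOT NULL  -- widen, or switch to nvarchar(max)/varbinary(max), if the answers above demand it
);

If any of those answers come back "unknown" or "unbounded", varchar(max) remains the safe choice.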
I am having difficulty storing more than 64000 bytes in a column defined as mediumtext in MySQL. I ran into this limitation with the text datatype earlier and decided to drop the column and recreate it as a mediumtext field. Trouble is, my data is still getting truncated at 64000 bytes.
I double-checked that the field is now a mediumtext field. As best I can tell, you don't have to specify a length when creating the column as you would with a varchar field.
Any ideas why this would be limited to 64000 and how to change it?
There's an option in the CF Admin datasource advanced section to set the maximum column size, and it defaults to 64000, so it seems very likely this is your problem.
If you don't have access to CF Administrator yourself, you'll need to contact your hosting provider and ask them to increase it for you.
I would try inserting something very long using the MySQL client if you can, just to double check that things work. If it doesn't, "SHOW WARNINGS" should tell you what happened.
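For example, a quick test along these lines (my_table and body are hypothetical stand-ins for your table and mediumtext column):

INSERT INTO my_table (body) VALUES (REPEAT('x', 100000));
SHOW WARNINGS;
SELECT MAX(LENGTH(body)) FROM my_table;

If the SELECT reports 100000, MySQL itself is fine and the truncation is happening somewhere upstream.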
I can't help but wonder if this is some kind of ColdFusion thing (I have no experience with it). Mediumtext should be long enough, and you verified that the column changed.
Gabriel suggested a maximum packet size limitation. It's a good idea, but I kind of doubt that's it. The default size is 1MB, which shouldn't be a problem unless you are sending multiple inserts/updates at a time.
You can ask your hosting provider what the current size is. If it is very small, you can always ask if they would be willing to increase it. On the other hand if it's 8MB, 16MB or more, I doubt that would be the problem (again, unless you are batching up many large changes).
What exactly does the table definition look like when you do a describe? Unless it says something like "MEDIUMTEXT(65536)", that shouldn't be your problem.
You should set max_allowed_packet (the actual MySQL option name, not max_packet_size) in my.cnf.
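For example (a sketch; 16M is an arbitrary illustrative value):

[mysqld]
max_allowed_packet=16M

Restart mysqld, or use SET GLOBAL max_allowed_packet, for the change to take effect.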
There's a thread about that... here.
Regards.