Trouble storing more than 64,000 bytes in a MySQL MEDIUMTEXT column

I am having difficulty storing more than 64,000 bytes in a column defined as MEDIUMTEXT in MySQL. I ran into this limit with the TEXT datatype earlier and decided to drop the column and recreate it as a MEDIUMTEXT field. The trouble is, my data is still getting truncated at 64,000 bytes.
I did double-check that the field is now a MEDIUMTEXT field. As best I can tell, you don't have to specify a length when creating the column like you would with a VARCHAR field.
Any ideas why this would be limited to 64,000 bytes and how to change it?

There's an option in the CF Admin datasource advanced section to set the maximum column size, and it defaults to 64000, so it seems very likely this is your problem.
If you don't have access to CF Administrator yourself, you'll need to contact your hosting provider and ask them to increase it for you.

I would try inserting something very long using the MySQL client if you can, just to double check that things work. If it doesn't, "SHOW WARNINGS" should tell you what happened.
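For example (a sketch; my_table and my_mediumtext_col are placeholders for your own names):

INSERT INTO my_table (my_mediumtext_col) VALUES (REPEAT('x', 100000));
SHOW WARNINGS;
-- If all 100,000 bytes survived, the column itself is fine:
SELECT MAX(LENGTH(my_mediumtext_col)) FROM my_table;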
I can't help but wonder if this is some kind of ColdFusion thing (I have no experience with it). MEDIUMTEXT should be long enough, and you verified that the column changed.
Gabriel suggested a maximum packet size limitation. It's a good idea, but I kind of doubt that's it. The default size is 1MB, which shouldn't be a problem unless you are sending multiple inserts/updates at a time.
You can ask your hosting provider what the current size is. If it is very small, you can always ask if they would be willing to increase it. On the other hand if it's 8MB, 16MB or more, I doubt that would be the problem (again, unless you are batching up many large changes).
What exactly does the table definition look like when you do a describe? Unless it says something like "MEDIUMTEXT(65536)", that shouldn't be your problem.
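For instance (again with a placeholder table name):

DESCRIBE my_table;
-- The Type column should read plain "mediumtext" with no length suffix;
-- MEDIUMTEXT holds up to 16,777,215 bytes.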

You should set max_allowed_packet in my.cnf (note the variable is max_allowed_packet, not max_packet_size). There's a thread about that here.
Regards
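To check and raise it at runtime (a sketch; SET GLOBAL needs the SUPER privilege, and only a my.cnf change survives a server restart):

SELECT @@max_allowed_packet;
SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;  -- 16MB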

Related

How can you add data into a SET column taken from another table?

I'm building a database in phpMyAdmin and I'm asking myself whether something is possible, and how I could implement it.
I'm building lists through a website and saving them in my database, but each list is composed only of items I already have stored in another table of my database.
I thought that a column with a SET datatype holding all the selected items would save memory and improve clarity, instead of creating x rows linked to the created list by an ID column.
So the question I'm asking is: can I create this kind of SET for a column, such that it updates by itself when I add items to the other table? If yes, can I do it through the phpMyAdmin interface, or do I have to work on the MySQL server itself?
Finally, it won't be possible to use the SET datatype in my application, because it can only store up to 64 items and I'll be manipulating around a thousand.
I'm still interested if any of you have an idea of how to do it, because a table with x rows of (ID, wordID#) (see my situation, explained a bit higher in this post, in the answers part) doesn't seem like an optimized, lightweight option.
Have a nice day :)
It is possible to simulate a SET in a BLOB (max 65,535 bytes, i.e. over 500,000 bits) or MEDIUMBLOB (max 16,777,215 bytes, over 134 million bits), but it takes a bit of code -- find the byte, modify it using & or |, stuff it back in.
Before MySQL 8.0, bitwise operations (e.g. ANDing two SETs) were limited to 64 bits. With 8.0, BLOBs can be operated on that way.
If your SETs tend to be sparse, then a list of bit numbers (in a commalist or in a table) may be more compact. However, "dictionary" implies to me that your SETs are likely to be somewhat dense.
If you are doing some other types of operations, clue us in.
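A minimal sketch of that find-the-byte approach (the word_list table, its columns, and the 0-based numbering are all hypothetical; @n is the item number to set):

-- One bitmap per list; 125 zero bytes hold 1,000 item flags.
CREATE TABLE word_list (
    list_id INT UNSIGNED PRIMARY KEY,
    bits    MEDIUMBLOB NOT NULL
);
INSERT INTO word_list VALUES (1, REPEAT(CHAR(0), 125));

SET @n = 999;
-- Set bit @n: OR the byte at offset @n DIV 8 with the mask 1 << (@n MOD 8).
UPDATE word_list
SET bits = CONCAT(
        LEFT(bits, @n DIV 8),
        CHAR(ASCII(SUBSTRING(bits, @n DIV 8 + 1, 1)) | (1 << (@n MOD 8))),
        SUBSTRING(bits, @n DIV 8 + 2))
WHERE list_id = 1;

-- Test bit @n:
SELECT (ASCII(SUBSTRING(bits, @n DIV 8 + 1, 1)) >> (@n MOD 8)) & 1 AS is_member
FROM word_list
WHERE list_id = 1;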

Why Specify a Length for an Auto Increment ID

I have been working with SQL for about 2 years now, and it has always been on my mind.
Best practices say to assign the length of the column to what you are expecting.
SQL wants a specific column dedicated as a primary key, and best practice is to make it an AUTO_INCREMENT field... But what length do you assign it? If left blank it defaults to 11, which would represent 99,999,999,999.
Which seems fine, but best practices also state never to actually delete anything from a database; instead, set a flag (0 or 1) to mark rows as deleted, for archival/recovery purposes. It can also be used for auditing what users wanted to delete.
Take this example:
I have a website which has been around for years, following the best practice of not deleting anything from the database; my database/website traffic is very heavy, with tons of unique users/visitors per day.
Now, if I leave the default length of 11, what happens when my table reaches the maximum value and another user decides to register? It would throw an error and not continue, which means downtime for new users, because a database administrator has to log in and change the length. That is not much effort, but it IS effort which could have been avoided during early development.
What I do when creating a table is give a length of 255, although something in the back of my mind is telling me "this is not good practice"; it does, however, avoid the very slim possibility of the example stated above.
A text field does not have a specified length, so why can't the same apply to an AUTO_INCREMENT field?
Don't get me wrong, I completely understand the data types available.
I have done a fair amount of research, both on Google and on SO, but the results point to questions about altering a table to increase the current length. That is not what I'm asking.
Overall:
So what I am trying to ask is: what is the ideal length for an AUTO_INCREMENT field, one that minimizes the slim risk of an error being thrown when the value maxes out, while also keeping best practices in mind?
The reason is simple: as a primary key, the ID should be just large enough for what you are expecting.
If you specify a VARCHAR instead, the drawback is a bigger index, which can slow down both read and write performance.
INT(11) does not store up to 99,999,999,999. It only stores up to 2,147,483,647; the (11) is just a display width, not a storage limit. If you make the column UNSIGNED, it allows 4,294,967,295 records (over 4 billion!).
Facebook has just over 1 billion users, so I don't see anyone getting a user base four times that size anytime soon...
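A quick illustration (hypothetical tables; note that the (11) changes nothing about the range):

CREATE TABLE t1 (id INT(11) AUTO_INCREMENT PRIMARY KEY);  -- max 2,147,483,647
CREATE TABLE t2 (id INT AUTO_INCREMENT PRIMARY KEY);      -- exactly the same range
-- If you genuinely expect more than 4 billion rows:
CREATE TABLE t3 (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY);  -- max 18,446,744,073,709,551,615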
A couple of the best practices are explained very well in this article:
http://net.tutsplus.com/tutorials/other/top-20-mysql-best-practices/
Smaller Columns Are Faster (integers are fixed length; varchars are not)
Index and Use the Same Column Types for Joins
Analyze your application or system. Estimate how many users will register per day, and per year. Once you know this, decide how "safe" you want to be, in terms of how many years the system should run without needing to modify this. Say 100 years is enough: multiply the expected number of annual user registrations by 100 and make sure the PK is large enough to accommodate that many values. For example, 100,000 registrations per year over 100 years is 10 million rows, comfortably within even a signed INT.

Find out how much storage a row is taking up in the database

Is there a way to find out how much space (on disk) a row in my database takes up?
I would love to see it for SQL Server CE, but failing that, SQL Server 2008 works (I am storing roughly the same data in both).
The reason I ask is that I have an image column in my SQL Server CE db (it is a varbinary(max) in the SQL 2008 db) and I need to know how many rows I can store before I max out the memory on my device.
Maybe not 100% what you wanted, but if you want to know how much space an image takes, just do:
SELECT [RaportID]
      ,DATALENGTH([RaportPlik]) AS 'FileSize'  -- size of the image data, in bytes
      ,[RaportOpis]
      ,[RaportDataOd]
      ,[RaportDataDo]
FROM [Database]
Any additional calculation (such as prediction) you will need to do yourself.
A varbinary(max) column could potentially contain up to 2GB of data by itself for each row. For estimated use based on existing data, perhaps you could do some analysis using the DATALENGTH function to work out what space a typical one of your images is taking up, and extrapolate from there.
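For example (a sketch reusing the table from the answer above; the 256MB budget is just an assumed figure for illustration):

SELECT AVG(DATALENGTH([RaportPlik])) AS AvgImageBytes,
       256.0 * 1024 * 1024 / AVG(DATALENGTH([RaportPlik])) AS RowsInBudget
FROM [Database]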
You can only make rough guesses - there is no exact answer to the question "how many rows I can store before I max out the memory on my device" since you do not have exclusive use of your device - other programs take resources too, and you can only know how much storage is available at the present time, not at some time in the future. Additionally, your images are likely compressed and therefore take variable amounts of RAM.
For guessing purposes, simply the size of your image is a good approximation of the row size; the overhead of the row structure is negligible.

MySQL error when inserting too-long varchar: when was it introduced?

There was a version jump in MySQL, I don't know whether it was from 4 to 5 or between 4.x releases, where the default behaviour for over-long input changed. Before, strings that didn't fit into their varchar column were silently cut off. After, an error was raised.
I'm having a hard time finding anything about this in the documentation or the change logs. Could somebody point me in the right direction to find info on this?
You may be running under strict mode, which behaves differently from past versions. You can change this... See:
http://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html
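For instance (clearing the mode entirely is the bluntest way to restore the old truncate-and-warn behavior; you may prefer to remove only the STRICT flags):

SELECT @@sql_mode;  -- STRICT_TRANS_TABLES or STRICT_ALL_TABLES raises the error
SET SESSION sql_mode = '';  -- old behavior: truncate silently with a warning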

How to work around unsupported unsigned integer field types in MS SQL?

Trying to make a MySQL-based application support MS SQL, I ran into the following issue:
I keep MySQL's auto_increment columns as unsigned integer fields (of various sizes) in order to make use of the full range, as I know there will never be negative values. MS SQL does not support the unsigned attribute on its integer types, so I have to choose between ditching half the value range or creating some workaround.
One very naive approach would be to put some code in the database abstraction layer, or in a stored procedure, that converts between negative values on the db side and values from the upper half of the unsigned range. This would mess up sorting, of course, and it would not work with the auto-id feature (or would it, somehow?).
I can't think of a good workaround right now; is there any? Or am I just being a fanatic and should simply forget about half the range?
Edit:
@Mike Woodhouse: Yeah, I guess you're right. There's still a voice in my head saying that maybe I could reduce the field's size if I optimized its utilization. But if there's no easy way to do this, it's probably not worth worrying about.
When is the problem likely to become a real issue?
Given current growth rates, how soon do you expect signed integer overflow to happen in the MS SQL version?
Be pessimistic.
How long do you expect the application to live?
Do you still think the factor of 2 difference is something you should worry about?
(I have no idea what the answers are, but I think we should be sure that we really have a problem before searching any harder for a solution)
I would recommend using the BIGINT data type, as this goes up to 9,223,372,036,854,775,807.
SQL Server has no unsigned integer types; its integer types are signed only.
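One common workaround is to map each MySQL unsigned type to the next-larger signed type in SQL Server, so the full range still fits (a sketch; the users table is hypothetical):

-- MySQL SMALLINT UNSIGNED (0..65,535)        -> SQL Server INT
-- MySQL INT UNSIGNED      (0..4,294,967,295) -> SQL Server BIGINT
CREATE TABLE users (
    id BIGINT IDENTITY(1,1) PRIMARY KEY  -- covers the whole INT UNSIGNED range
);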
I would say this: "How do we normally deal with differences between components?"
Encapsulate what varies.
You need to create an abstraction layer within your data access layer to get it to the point where it doesn't care whether the database is MySQL or MS SQL.