MySQL error when inserting too-long varchar: when was it introduced?

There was a version jump in MySQL, I don't know whether it was from 4 to 5 or between 4.x releases, where the default behaviour for too-long input changed. Before, strings that didn't fit into their varchar column were silently truncated. After, an error was raised.
I'm having a hard time finding anything about this in the documentation or the change logs. Could somebody point me in the right direction?

You may be running under strict mode, which differs from the old default behaviour. You can change this... See:
http://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html
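If it helps, here is one quick way to check which modes are in effect and, for a single session, restore the old silently-truncating behaviour; the temporary table below is only there to demonstrate the warning:

    -- Show the modes currently in effect.
    SELECT @@SESSION.sql_mode, @@GLOBAL.sql_mode;

    -- Clear strict mode for this session only (restores pre-strict truncation).
    SET SESSION sql_mode = '';

    -- Demonstration: the oversized value is cut to fit and a warning is raised.
    CREATE TEMPORARY TABLE t (s VARCHAR(3));
    INSERT INTO t VALUES ('abcdef');
    SHOW WARNINGS;    -- Data truncated for column 's' at row 1
    SELECT s FROM t;  -- 'abc'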

Related

MySQL TRUNCATE accuracy is off by one

The MySQL server I'm using is 5.5.41. I also want to note I did not design this database.
The problem I'm running into is that MySQL's TRUNCATE function seems to be off by one, i.e. not accurate. See the attached screenshot for what I mean.
If changing the table structure is not an option, is there a way to work around this and return the correct number?
Floating point numbers are not exact. The actual stored value of 70.85 is probably something like 70.84999999, but it's shown rounded to the nearest 2 decimal places. TRUNCATE takes the actual value and simply discards every decimal place beyond what you requested; it always truncates rather than rounding to the nearest value, so the result becomes 70.84.
If you don't want to lose accuracy like this, use the DECIMAL datatype instead of FLOAT. You could also use ROUND(reserve_amount, 2) instead of TRUNCATE(reserve_amount, 2).
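To make the difference concrete, a small sketch along these lines (the column names are made up for the example):

    CREATE TEMPORARY TABLE demo (
        f FLOAT,           -- binary floating point: 70.85 is stored approximately
        d DECIMAL(10,2)    -- exact: 70.85 is stored as exactly 70.85
    );
    INSERT INTO demo VALUES (70.85, 70.85);

    SELECT TRUNCATE(f, 2),   -- likely 70.84, since f is really about 70.8499984...
           ROUND(f, 2),      -- 70.85, rounds to the nearest value
           TRUNCATE(d, 2)    -- 70.85, DECIMAL holds the exact value
    FROM demo;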

ABAP TVRO field TRAZTD, Route Customizing Data

A customer of mine is looking to mass-create some customizing data related to routes, and as such I have a small program which reads in a CSV file with all of the fields as they would appear in the customizing transaction.
I'm having a particular problem wrapping my head around a field TVRO-TRAZTD for a couple of reasons.
The user is only filling in a number which represents a number of days.
There is a conversion exit on TRAZTD, except it's obsolete; use CONVERT TIMESTAMP, they say
I don't have a timestamp, I have a decimal number representing a part of a day
For example, TRAZTD would be entered as 0,58 from the CSV file, so why is it represented in the table as 135.512?
I tried it the old-fashioned way and multiplied 0,58 * 24, which gives me 13,92. If I take 13,92 * 10 I get 139.200, which isn't the same but it's the closest I can get, and I don't get it: why 10?
Using the conversion exit, even though it's obsolete, doesn't give me a result either; no matter what number I give it, I always get 0 back. I can't use CONVERT TIMESTAMP either, because it's not a timestamp, or maybe I didn't look up carefully enough how to use it (I didn't see anything other than strings and characters).
The other thing I tried was just saying "screw it" and placing the data from the CSV directly into the field, hoping the conversion routine would take care of the work, but that doesn't happen either.
Is there anybody out here that can maybe shed some light on where the number after the conversion comes from?
Everybody, I came to a solution, just in case anybody stumbles upon this same problem.
I took the value from the Excel document and multiplied it by 24 to get the number of hours, and then multiplied it by 10000 because, I don't know, I picked it randomly.

MySQL InnoDB auto_increment value increases by 2 instead of 1. Virus?

There's an InnoDB table for storing comments for blog posts used by a custom built web application.
Recently I noticed that the auto incremented primary key values for the comments are incrementing by 2 instead of just 1.
I also noticed that in another MySQL table, which is used for remembering the last few commenters' footprint signatures (e.g. IP, session id, user agent string, etc.), the name of the PHP session starts with "viruskinq", which is weird because I thought it should always be a hexadecimal md5-like string.
Google yields only a couple of results for "viruskinq", all in Turkish. It is interesting because approximately a year ago the website in question was defaced by Turkish villains. (I'm 100% sure that the attackers didn't succeed because of any security holes in my app, because other websites, hosted by the same company, were defaced too at that time.)
The site is on a shared host, using Linux.
Do you think it is possible that the server itself may still be under the influence of those hackers? Examining the comments' id values revealed that this doubling phenomenon has existed since this May, but the defacing happened almost a year ago.
What other causes could there be to explain the weird behaviour of the auto increment value? The application hasn't been changed, and for older comments the auto-incremented primary key values are sequential.
Edit: Summary of the solution
The hosting company informed me that the reason for the doubled auto increment value is that they use a master-slave MySQL architecture, and according to them this phenomenon is normal.
They also admitted that various hackers are constantly attacking their servers, "especially the sessions" and they cannot do anything about it.
I think I better start packing my things and move to a better webhost.
I really, really doubt this is a virus. Double-check whether that really is the session ID that starts with that string (which would indeed be reason for some concern). My guess would be this is a kid who discovered how to alter the User Agent string in the browser, and you are seeing the results of that, which is entirely harmless.
In regards to the increment problem.
First, check the auto_increment_increment setting of your MySQL server. Maybe it was set to 2 for some reason? (See the sketch after these suggestions.)
Second, if it's not that, I would look at all DELETE operations that the comment system runs on the table. Do comments recognized as spam get deleted? Can you log deletions for a while, or switch to soft deletions?
Also, try to create some subsequent comments yourself. Does the same phenomenon occur? What if you add records manually using MySQL?
Look through the PHP code inserting a submitted comment making really sure there is nothing that could lead to this behaviour.
Try moving the comment system to a different server - preferably a local one, maybe freshly set up - to see whether the behaviour persists there.
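As mentioned in the first suggestion, checking the increment settings is quick (the comments table name below is a placeholder for your actual table):

    -- Server-wide step and offset; replication setups often set the step to 2.
    SHOW VARIABLES LIKE 'auto_increment%';
    -- auto_increment_increment | 2   <-- would explain ids growing by 2
    -- auto_increment_offset    | 1

    -- The table's next AUTO_INCREMENT value, for comparison with the latest id.
    SHOW TABLE STATUS LIKE 'comments';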
Could it just be that the server's auto_increment_increment is set to 2?
See: MySQL autoincrement column jumps by 10 - why?

Trouble storing more than 64000 in MySQL mediumtext column

I am having difficulty storing more than 64000 in a column defined as mediumtext within mysql. I ran into this limitation with the text datatype earlier and decided to drop the column and recreate it as a mediumtext field. Trouble is, my data is getting truncated at 64000 bytes.
I did double-check that the field is now a mediumtext field. As best I can tell, you don't have to specify a length when creating the column like you would with a varchar field.
Any ideas why this would be limited to 64000 and how to change it?
There's an option in the CF Admin datasource advanced section to set the maximum column size, and it defaults to 64000, so it seems very likely this is your problem.
If you don't have access to CF Administrator yourself, you'll need to contact your hosting provider and ask them to increase it for you.
I would try inserting something very long using the MySQL client if you can, just to double check that things work. If it doesn't, "SHOW WARNINGS" should tell you what happened.
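For example, something along these lines from the mysql command-line client would bypass ColdFusion entirely (the table and column names are placeholders for your schema):

    INSERT INTO my_table (my_mediumtext_col) VALUES (REPEAT('x', 200000));
    SHOW WARNINGS;  -- empty if nothing was truncated
    SELECT MAX(LENGTH(my_mediumtext_col)) FROM my_table;  -- expect 200000, well past the 65,535-byte TEXT limit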
I can't help but wonder if this is some kind of ColdFusion thing (I have no experience with it). Mediumtext should be long enough, and you verified that the column changed.
Gabriel suggested a maximum packet size limitation. It's a good idea, but I kind of doubt that's it. The default size is 1MB, which shouldn't be a problem unless you are sending multiple inserts/updates at a time.
You can ask your hosting provider what the current size is. If it is very small, you can always ask if they would be willing to increase it. On the other hand if it's 8MB, 16MB or more, I doubt that would be the problem (again, unless you are batching up many large changes).
What exactly does the table definition look like when you do a describe? Unless it says something like "MEDIUMTEXT(65536)", that shouldn't be your problem.
You should set max_allowed_packet in my.cnf.
There is a thread about that here.
Regards
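In case it helps, one way to check the current value and raise it, assuming you can edit my.cnf or have the SUPER privilege for the global variable:

    -- Current limit in bytes; 1 MB was the old default.
    SHOW VARIABLES LIKE 'max_allowed_packet';

    -- Raise it until the next restart (affects new connections only, needs SUPER):
    SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;

    -- Or permanently, under [mysqld] in my.cnf, then restart the server:
    --   max_allowed_packet = 16M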

How to work around unsupported unsigned integer field types in MS SQL?

Trying to make a MySQL-based application support MS SQL, I ran into the following issue:
I keep MySQL's auto_increment as unsigned integer fields (of various sizes) in order to make use of the full range, as I know there will never be negative values. MS SQL does not support the unsigned attribute on all integer types, so I have to choose between ditching half the value range or creating some workaround.
One very naive approach would be to put some code in the database abstraction layer or in a stored procedure that converts between negative values on the db side and values from the upper portion of the unsigned range. This would mess up sorting, of course, and it would also not work with the auto-id feature (or would it, somehow?).
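To make that idea concrete, this is roughly what the bit-pattern style mapping would do (the numbers are chosen only for illustration):

    -- An unsigned value u in [0, 2^32) is stored as-is when u < 2^31,
    -- and as u - 2^32 (a negative number) otherwise; reading reverses the mapping.
    SELECT 2147483647               AS fits_directly,  -- 2^31 - 1 stays positive
           3000000000 - 4294967296  AS wrapped;        -- 3,000,000,000 is stored as -1294967296
    -- This is why sorting breaks (large unsigned values order as negatives)
    -- and why it doesn't combine with identity/auto-increment columns.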
I can't think of a good workaround right now, is there any? Or am I just being fanatic and should simply forget about half the range?
Edit:
@Mike Woodhouse: Yeah, I guess you're right. There's still a voice in my head saying that maybe I could reduce the field's size if I optimize its utilization. But if there's no easy way to do this, it's probably not worth worrying about.
When is the problem likely to become a real issue?
Given current growth rates, how soon do you expect signed integer overflow to happen in the MS SQL version?
Be pessimistic.
How long do you expect the application to live?
Do you still think the factor of 2 difference is something you should worry about?
(I have no idea what the answers are, but I think we should be sure that we really have a problem before searching any harder for a solution)
I would recommend using the BIGINT data type as this goes up to 9,223,372,036,854,775,807.
SQL Server does not distinguish between signed and unsigned integer types.
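For example, a rough MS SQL equivalent of an unsigned int auto_increment key, using BIGINT instead (the table and column names are only illustrative):

    CREATE TABLE comments (
        id   BIGINT IDENTITY(1,1) PRIMARY KEY,  -- up to 9,223,372,036,854,775,807
        body NVARCHAR(MAX) NOT NULL
    );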
I would say this: "How do we normally deal with differences between components?"
Encapsulate what varies.
You need to create an abstraction layer within your data access layer to get it to the point where it doesn't care whether the database is MySQL or MS SQL.