Can I check if data was truncated after a query? - mysql

If I have a table with some varchar columns, whose lengths are necessarily limited, then I have to show an error on the front end whenever the insertion of a too-long value fails. For example, if the limit on the name column is 20 but someone enters a name that is 30 characters long, I should notify them of the error.
This gets to be a lot of work when the application becomes big.
What I would like, to make life a bit easier and skip handling individual limits at every step of the user's journey, is to carry on with the normal functioning of the application but show a warning that the data was not saved in its entirety because it was too long. So if MySQL provides some method that lets me ask whether all data was saved in full, or whether some strings were truncated because their respective varchar fields were too short (or maybe a property on the MySQLi object that I can check), then my main method for saving data in the database could check it after any inserts or updates have been executed and just issue a warning on the next page load.
Does MySQL provide such functionality?

Sure you can. MySQL throws a warning when data is truncated.
You can check whether any warning occurred by reading @@warning_count:
SELECT @@warning_count;
Or:
SHOW COUNT(*) WARNINGS;
To see which warnings occurred:
SHOW WARNINGS [LIMIT [offset,] row_count]
More info:
http://dev.mysql.com/doc/refman/5.0/en/show-warnings.html
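As a minimal sketch of the whole round trip (the people table and its VARCHAR(20) name column here are hypothetical, and strict mode must be off for the truncation to be a warning rather than an error):

CREATE TABLE people (name VARCHAR(20));

SET SESSION sql_mode = '';                          -- non-strict: oversize values are truncated, not rejected
INSERT INTO people (name) VALUES (REPEAT('x', 30));

SELECT @@warning_count;  -- 1
SHOW WARNINGS;           -- Warning 1265: Data truncated for column 'name' at row 1

On the PHP side, the same counter is exposed as the warning_count property of the MySQLi object, so you can check it right after each INSERT or UPDATE without an extra query.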

Related

Changing nvarchar(MAX) to nvarchar(n) in database

I just changed an nvarchar(MAX) field in a table to nvarchar(250).
Could someone please tell me what happens to the data if there was an entry larger than 250 characters?
My concern is not with the visible data, but with what happens behind the scenes:
What is done with the data that overshoots the new limit?
I read in a few places that the table has to be deleted and re-created. Is this true, and why? I didn't see any of the errors that others received.
Is there a way to recover the truncated data after making this change? (I don't want to do it, but I'm curious)
If you altered the column from nvarchar(MAX) to nvarchar(250) and did not receive any error, it means that none of the rows contained more than 250 characters, which is why SQL Server changed the column length successfully and your data is complete.
If any row had contained more than 250 characters, SQL Server would have raised an error and the ALTER statement would have failed, meaning the column length would not have been changed:
Msg 8152, Level 16, State 13, Line 12: String or binary data would be truncated. The statement has been terminated.
However, if ANSI_WARNINGS is OFF while altering the column length, SQL Server changes the length without any warning and the extra data is truncated.
By default it is SET ANSI_WARNINGS ON, to warn the user.
Once data is truncated, I don't think it can be recovered later.
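A minimal T-SQL sketch of the behaviour described above, using a hypothetical table t with an nvarchar(MAX) column c (not from the question):

SET ANSI_WARNINGS ON;                        -- the default
ALTER TABLE t ALTER COLUMN c nvarchar(250);  -- fails with Msg 8152 if any row exceeds 250 characters

SET ANSI_WARNINGS OFF;
ALTER TABLE t ALTER COLUMN c nvarchar(250);  -- succeeds; longer values are silently truncated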
The system should prevent you, or at least warn you of, possible data loss when changing a column's length if any row exceeds the new length.
Depending on the DBMS and version, you may not even be able to change the column length.
However, if you don't have any rows exceeding 250 characters, as you said, then there should be no problem.
There is no way to recover truncated data unless you have access to a database backup taken just before the change.
On a side note, regardless of what you intend to do with that change, I would suggest avoiding variable-length columns where you don't need them.
When MySQL builds internal in-memory temporary tables (and in the MEMORY engine generally), it allocates the maximum possible length for a variable-length column, regardless of whether a row holds 15 characters or 45 or 250.
This, as you can imagine, can eventually lead to bottlenecks in the system.
(Maybe you don't have a database large enough for this to show effects, but my motto is "forewarned is forearmed".)

Best methods to avoid MySQL 1406 errors on VARCHAR

My server is using a MySQL DB, connecting to it via the C++ connector. I'm nearing production and I've been spending some time trying to break things as part of hardening the server.
One action item I had was to see what would happen if I execute a statement with a string that is longer than the VARCHAR column allows. For example, if I have a column defined as VARCHAR(4) and then set it to the string "hello".
This of course throws an exception with the error code 1406 (Data too long for column).
What I was wondering was whether there is a good or standard way to defend against this. Obviously one option is to check the string length and truncate manually. I can do this, but there are many tables and several columns with VARCHAR, so my worry is having to update server code whenever one of the VARCHAR columns has its length increased (i.e., code maintainability).
Note that the server does do some validation up front. I'm just trying to defend against a subtle bug or corner case that lets something slip through.
A couple of other options on the table are to disable strict mode, so it will give a warning and truncate, or to convert the VARCHAR columns to TEXT.
I was wondering a few things.
Is there a recommended method to handle this situation?
What are the disadvantages of disabling strict?
Is it worthwhile (and is it possible) to query the DB at runtime for the VARCHAR lengths? Note that I'm using the C++ connector. I suppose I could also write a tool, run before compiling, that extracts the VARCHAR lengths from the SQL code used to generate the tables. But that then makes me wonder if I'm over-engineering this.
I'm just sorting through the possible approaches now and thought I'd seek advice from those with more experience with MySQL.
As an experienced database engineer I would recommend a combination of the following two strategies:
1) If you know that there is a chance, however small, that the data for your varchar(4) could grow beyond 4 characters, then make the field larger than 4. For example, if you expect the field could reach 8 characters, set it to varchar(10). The beauty of a varchar field, as opposed to a char, is that it only uses whatever storage it needs.
2) If there is a real issue with data constantly being larger than the varchar field length, then you should write your own exception handler to trap the 1406 error. For the handler to work properly you will need some strategy for exactly how you want to handle the exception. For example, you could send an error to the user and ask them to fix the problem, you could accept the data but truncate it so it fits into the field, or you could send the error to a log file to be fixed at a later time.
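For the third question (reading the lengths at runtime): they are available from the information_schema, so the server can fetch them once at startup instead of hard-coding them. A minimal sketch, assuming a hypothetical mydb schema and orders table:

SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'
  AND TABLE_NAME = 'orders'
  AND DATA_TYPE = 'varchar';

Caching that result keeps the validation logic in one place, and it keeps working if a column is widened later.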

What can cause a sporadic "-2147352567 The data has been changed" error?

One of our forms keeps generating this error message sporadically. The issue occurs on our Order form, which is bound to a linked SQL Server 2008 table. After an advice note has been printed (using a report), the order status is set to 'Printed Order'. At this point, I sporadically see a "-2147352567 The data has been changed" error. I would say 95% of the time this doesn't occur, but it's the other 5% that's causing us headaches (and numerous support calls).
Oddly, closing the form and trying the same action on the order causes the same error message, but closing the database and trying again works fine.
It's as if there are some uncommitted changes to the table/record the form is bound to, which exist even when there are no forms, reports, etc open.
The code looks like this:
' Mark new or unprinted orders as printed
Select Case Me.txtCurrentStatus
    Case NEW_ORDER, UNPRINTED_ORDER:
        Me.txtCurrentStatus = PRINTED_ORDER
End Select
'Commit changes
If Me.Dirty = True Then Me.Dirty = False
@iDevelop actually prompted me to look into adding a timestamp to the table, so I take only partial credit ;)...
In short - when using linked tables in Microsoft Access, if the table does not contain a column of type timestamp, Access will compare every column in the table to see if the data has been changed since the record was retrieved. There are several data types that Access is unable to check reliably (see article below). Simply adding a column of type timestamp changes this behaviour and instead, Access only checks to see if the rowversion has changed... which makes this check more reliable and also improves performance.
Oddly, this isn't really a timestamp at all - it's a rowversion; but in SQL Server 2008, which I am using, rowversion isn't available via the GUI.
See https://technet.microsoft.com/en-us/library/bb188204%28v=sql.90%29.aspx.
Probably the leading cause of updatability problems in Office Access–linked tables is that Office Access is unable to verify whether data on the server matches what was last retrieved by the dynaset being updated. If Office Access cannot perform this verification, it assumes that the server row has been modified or deleted by another user and it aborts the update.
There are several types of data that Office Access is unable to check reliably for matching values. These include large object types, such as text, ntext, image, and the varchar(max), nvarchar(max), and varbinary(max) types introduced in SQL Server 2005. In addition, floating-point numeric types, such as real and float, are subject to rounding issues that can make comparisons imprecise, resulting in cancelled updates when the values haven't really changed. Office Access also has trouble updating tables containing bit columns that do not have a default value and that contain null values.
A quick and easy way to remedy these problems is to add a timestamp column to the table on SQL Server. The data in a timestamp column is completely unrelated to the date or time. Instead, it is a binary value that is guaranteed to be unique across the database and to increase automatically every time a new value is assigned to any column in the table. The ANSI standard term for this type of column is rowversion. This term is supported in SQL Server.
Office Access automatically detects when a table contains this type of column and uses it in the WHERE clause of all UPDATE and DELETE statements affecting that table. This is more efficient than verifying that all the other columns still have the same values they had when the dynaset was last refreshed.
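Since the GUI doesn't offer rowversion, the column can be added with plain T-SQL instead. A minimal sketch, assuming a hypothetical dbo.Orders table:

ALTER TABLE dbo.Orders ADD RowVer rowversion;   -- auto-maintained row version; the column name is arbitrary

After adding the column, refresh the linked table in Access so it picks up the new schema and starts using the rowversion in its WHERE clauses.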

SSIS missing data from SQL table using Fast Load

I have a bit of a problem. When I set up an SSIS package and fire it off, it shows me the number of rows going into the SQL table, but when I query the table there are almost 40,000 rows missing compared with the last count after the conditional split that I have in the package.
What causes this problem? Even if I target a normal table or view it still does the same thing. I have to use the fast-load option here because a lot of source files are being loaded. This is only testing before sending it to production, and I am stuck at the moment. Is there a way I can work around this problem and get all the data that is supposed to be pumped into the table? Please also take note that the conditional split removes any NULL values, as seen in the first picture.
Check the Error Output (under Connection Manager and Mappings) within the Destination Component. If the Error setting is set to Ignore Failure or Redirect Row, the component will succeed, but only the successful rows will be inserted.
What is the data source? Try checking your data and make sure you don't have any terminators stored in one of the rows.

Trying to put too much data into mysql TEXT data type

Let's say that I have an HTML form (actually an editor - TinyMCE) which, through PHP, inserts a bunch of text into a MySQL table.
I want to know the following:
If I have the TINYTEXT data type in a MySQL column - what happens if the user tries to put more than 255 bytes of text into the table?
Does the application save the first 255 bytes and "cut off" the rest? Or does nothing get saved into the table, and MySQL issues a warning? Or none of the above?
Actually, what I want and intend to do is the following:
Limit the size of user form input by setting the column data type in MySQL to TEXT, which can hold a maximum of 64 KB of text. I want to limit the amount of text that gets passed from the user to the database, so that the user can't send too much data to the server at once.
So, basically, I want to know what happens if the user puts more than 65,535 bytes through the TinyMCE editor, assuming a TEXT data type in the MySQL table.
MySQL, by default, truncates the data if it's too long, and sends a warning.
SHOW WARNINGS;
Data truncated for column 'foo' ..
Just to be clear: the data will be saved, but you will be missing the part that was too large.
The default MySQL configuration truncates the data if the value is longer than the maximum size in the column definition; this produces a non-blocking warning.
If you want a blocking error you have to set sql_mode to STRICT_ALL_TABLES:
http://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html#sqlmode_strict_all_tables
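A minimal sketch of the difference, assuming a hypothetical posts table with a TEXT body column:

SET SESSION sql_mode = '';
INSERT INTO posts (body) VALUES (REPEAT('x', 70000));  -- succeeds; value truncated to 65535 bytes with warning 1265

SET SESSION sql_mode = 'STRICT_ALL_TABLES';
INSERT INTO posts (body) VALUES (REPEAT('x', 70000));  -- fails with error 1406: Data too long for column 'body'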
IMHO the best way is to manage this error via application software.
Hope this helps
If you enter too much data into a TEXT field in MySQL, it will insert the row anyway, but with that field truncated to the maximum length, and issue a warning.
Even if MySQL did prevent the row from being added, it would not be a good way of limiting the length of data a user can enter. You should check the length of the POSTed string in PHP, and not run the query at all if it is too long - and perhaps tell the user why their data wasn't entered.
As well as this, you can prevent the user from entering too many characters on the client side (although you should always check on the server side as well, because someone could bypass the client-side limit). There appears to be no built-in way of doing this in TinyMCE, but it is possible by writing a callback: Limit the number of characters in tinyMCE