DbUnit: improper tinyint value inserted into database - JUnit

I'm using JUnit with DbUnit and a MySQL database.
I have a field that is defined as a tinyint in the database. My XML file has a '5' for the field, but when it gets inserted into the database it is changed into a '1'. Changing the column definition to int allows the 5 to be inserted.
This is only a testing problem, but it's something that could skew results under the covers.
Is there a way to redefine the DbUnit mapping? Apparently it assumes that any tinyint is a boolean.

Check which data type factory dbUnit is configured with: is it the MySQL one? If needed, extend that class and change how the specific field type is handled.
Also, upgrade to the latest dbUnit version if needed.
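A minimal sketch of wiring in dbUnit's MySQL data type factory, which handles TINYINT as a number rather than a boolean. The JDBC URL, credentials, and the tinyInt1isBit hint are assumptions to adapt to your setup:

import java.sql.Connection;
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConfig;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.ext.mysql.MySqlDataTypeFactory;

public class DbUnitMySqlSetup {
    // Returns a dbUnit connection configured with the MySQL data type
    // factory, so TINYINT columns are treated as numbers, not booleans.
    public static IDatabaseConnection openConnection() throws Exception {
        // tinyInt1isBit=false also stops Connector/J itself from reporting
        // TINYINT(1) columns as BIT/boolean.
        Connection jdbc = DriverManager.getConnection(
                "jdbc:mysql://localhost/testdb?tinyInt1isBit=false",
                "user", "password");
        IDatabaseConnection connection = new DatabaseConnection(jdbc);
        connection.getConfig().setProperty(
                DatabaseConfig.PROPERTY_DATATYPE_FACTORY,
                new MySqlDataTypeFactory());
        return connection;
    }
}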

Related

MySQL same version (5.6.27) behaving differently on 2 different servers

I have MySQL 5.6.27 installed on two servers.
The database has a table with a column of type bigint(20) unsigned NOT NULL.
When inserting a string value (like '1_2_3_4') into this column on one server, it stores the value 1 and shows a data truncation warning.
But executing the same query on the other server raises a data truncation error and does not let the value be inserted.
I'm just trying to understand why MySQL casts the value on one server but not on the other.
This sounds like a configuration difference, specifically the strict SQL mode settings; see https://www.davidpashley.com/2009/02/15/silently-truncated/
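You can compare the two servers directly. A sketch (the table name is hypothetical): a permissive server truncates '1_2_3_4' to 1 with a warning, while a strict server rejects the insert outright.

SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;

CREATE TABLE t (id BIGINT(20) UNSIGNED NOT NULL);
SET SESSION sql_mode = '';                  -- permissive: truncates to 1, warns
INSERT INTO t VALUES ('1_2_3_4');
SHOW WARNINGS;
SET SESSION sql_mode = 'STRICT_ALL_TABLES'; -- strict: the same insert now fails
INSERT INTO t VALUES ('1_2_3_4');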

Data cells "#Deleted" in Access - ODBC, MySQL and BIGINT unique ID

I have a problem with an MS Access 2007 table connected via ODBC to a MySQL server (not Microsoft SQL Server).
If the unique identifier in the MySQL table is a BIGINT, the content of every cell is displayed as "#Deleted".
I have found this article:
"#Deleted" errors with linked ODBC tables (at support.microsoft.com)
and it says:
The following are some strategies that you can use to avoid this
behavior:
Avoid entering records that are exactly the same except for the unique index.
Avoid an update that triggers updates of both the unique index and another field.
Do not use a Float field as a unique index or as part of a unique index because of the inherent rounding problems of this data type.
Do all the updates and inserts by using SQL pass-through queries so that you know exactly what is sent to the ODBC data source.
Retrieve records with an SQL pass-through query. An SQL pass-through query is not updateable, and therefore does not cause
"#Delete" errors.
Avoid storing Null values within any field making up the unique index of your linked ODBC table.
but I'm not doing any of these things "to avoid". My problem is the BIGINT. To confirm this, I created two tables, one with an INT id and one with a BIGINT id, and that was indeed the cause.
I can't change BIGINT to INT in my production database.
Is there any way to fix this?
I'm using: Access 2007, mysql-connector-odbc-3.51.30-winx64, MySQL server 5.1.73.
You can try basing the form on an Access query and converting the BIGINT using CInt() in the query; this happens before the form processes the data. Depending on your circumstances, you may need to convert to a string with CStr() in the query instead, and then validate that the user entered a number using IsNumeric(). The idea is to stop the form from trying to interpret the datatype, which seems to be your problem.
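As a sketch, with a hypothetical linked table Orders whose primary key id is the BIGINT (Jet SQL has no comment syntax, so the explanation stays up here): the query exposes the key as text, so Access never handles the BIGINT directly.

SELECT CStr(id) AS IdText, CustomerName, OrderDate
FROM Orders;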
Access 2016 now supports BigInt: https://blogs.office.com/2017/03/06/new-in-access-2016-large-number-bigint-support/
It's 2019 and with the latest ODBC driver from Oracle (v 8.0.17) and Access 365 (v 16.0.11904), the problem still occurs.
When the ODBC option "Treat BIGINT columns as INT columns" is ticked and BigInt support is enabled in Access's options, linked tables with BIGINT id columns (the primary key) show as #Deleted. Ruby creates these by default, so we are loath to fiddle with that.
If we disable the above two options, Access treats the BIGINT id column as a string and shows the data. But then the field type is no longer bigint or int.
This is quite pathetic, since this problem is almost 10 years old now.
The MySQL driver has an option to convert BIGINT values to INT. Would this solve the issue for you?

Facing a problem accessing a MySQL timestamp column in Entity Framework

I am using MySQL .NET Connector 6.3.6 and Visual Studio 2008 SP1.
One of the tables in the database has a timestamp column.
When I generate the entity mappings (.edmx file), the timestamp column gets mapped to the DateTimeOffset data type.
And when I run a LINQ query on this table, I always get a null value for this column (the column is nullable), even though the table contains valid non-null values.
If I try to update the mapping to the DateTime data type, Visual Studio throws an error.
I tried to Google for possible solutions, and in many places it was mentioned that a MySQL timestamp column should map to the .NET DateTime type by default.
I am not sure what the problem is.
Thanks.
I recommend trying dotConnect for MySQL. It generates DateTime properties for the corresponding timestamp columns.
A trial version is available for download; its only limitation is the 30-day trial period.
Update. You can try editing the .edmx file using an XML editor. Set the type of the CSDL property to DateTime, and if this causes any validation issues you can try setting the type of the SSDL property to "datetime" as well.
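Roughly what to look for in the .edmx; the property name here is hypothetical, and the two entries live in the CSDL and SSDL sections respectively:

<!-- CSDL (conceptual model): change the CLR-facing type -->
<Property Name="LastChanged" Type="DateTime" Nullable="true" />
<!-- SSDL (store model): if validation complains, align the store type -->
<Property Name="LastChanged" Type="datetime" Nullable="true" />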

LONGTEXT valid in migration for PGSQL and MySQL

I am developing a Ruby on Rails application that stores a lot of text in a LONGTEXT column. I noticed that when deployed to Heroku (which uses PostgreSQL), I am getting insert exceptions because the values are too large for two of the columns. Is there something special that must be done to get a LONGTEXT-style large text column type in PostgreSQL?
These were defined as "string" datatype in the Rails migration.
If you want the longtext datatype in PostgreSQL as well, just create it. A domain will do:
CREATE DOMAIN longtext AS text;
CREATE TABLE foo(bar longtext);
In PostgreSQL the required type is text. See the Character Types section of the docs.
A new migration that changes the model's column datatype to text should do the job; see the sketch below. Don't forget to restart the database. If you still have problems, take a look at your model with 'heroku console' and just enter the model name.
If the restart doesn't fix the problem, the only way I found was to reset the database with 'heroku pg:reset'. That's no fun if you already have important data in your database.
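A minimal migration sketch, assuming a hypothetical articles table with a body column originally created as string:

class ChangeBodyToText < ActiveRecord::Migration
  def up
    # text is unlimited in PostgreSQL, unlike string (varchar(255)).
    change_column :articles, :body, :text
  end

  def down
    change_column :articles, :body, :string
  end
end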

SqlDateTime overflow on INSERT when date is correct using a Linq to SQL DataContext

I get a SqlDateTime overflow error ("Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.") on an INSERT when I call SubmitChanges() on a LINQ DataContext connected to a SQL Server database.
In the debugger the date value is correct. Even if I temporarily change the code to set the date value to DateTime.Now, the insert still fails.
Has anybody found a workaround for this behaviour? Maybe there is a way to check what SQL the DataContext submits to the database.
Do you have the field set as autogenerated in the designer? If that's not the problem, I'd suggest setting up logging of the data context actions to the console and checking the actual SQL generated to make sure that it's inserting that column, then trace backward to find the problem.
context.Log = Console.Out; // echo the generated SQL to the console
FWIW, I often set my "CreatedTime" and "LastUpdatedTime" columns up as autogenerated (and readonly) in the designer and give them a suitable default or use a DB trigger to set the value on insert or update. When you set it up as autogenerated, it won't include it in the insert/update even if modified. If the column doesn't allow nulls, then you need to supply an alternate means of setting the value, thus the default constraint and/or trigger.
Are you sure you're looking at the right date column? This happened to me once, and the error turned out to be caused by another non-nullable date column that wasn't set before submitting.
I came across this recently.
The error might as well say "something's preventing the save!", because in my case the DateTime value was not the problem.
I thought I was passing a value in for the primary key, but what actually arrived was null. Being the key, it can't be null, so my problem was somewhere else entirely. Once I resolved the null, the problem disappeared.
We all hate misleading errors - and this is one of them.
Lastly, as a suggestion: if you find conversion of dates a problem, don't pass dates at all. .NET's DateTime class exposes a Ticks value, and you can instantiate a new DateTime(ticks) from one too. The only gotcha is that the JavaScript equivalent of ticks has a different starting point in history, so you need a conversion if you ever move DateTimes from C# to JavaScript.
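For what it's worth, a small sketch of round-tripping through Ticks and converting to JavaScript's millisecond epoch; the variable names are illustrative:

// .NET ticks are 100 ns intervals since 0001-01-01; JavaScript dates
// count milliseconds since 1970-01-01, so convert via the Unix epoch.
DateTime now = DateTime.UtcNow;
long ticks = now.Ticks;                           // store or transmit this
DateTime restored = new DateTime(ticks, DateTimeKind.Utc);

long epochTicks = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).Ticks;
long jsMilliseconds = (ticks - epochTicks) / TimeSpan.TicksPerMillisecond;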
I suggest changing your project's Target Framework; maybe your SQL Server is newer than your .NET Framework. I saw the same issue:
My project's Target Framework was 3.5.
SQL Server was 2012.
I then changed the Target Framework to 4.0, and the issue was solved.
Bottom line: watch the order of your calls to SubmitChanges() and ensure that all objects that would be submitted are actually ready. This often bites me when I'm in the middle of setting the attributes of a new LINQ object (e.g., the .FirstName of a new tblContact) and some conditional logic requires creating a separate, related record (e.g., a new tblAddress). The code creates the tblAddress and calls SubmitChanges() to save that record, but that SubmitChanges() also tries to insert the unfinished tblContact record, which may not yet have a required BirthDate value set. The exception thus appears to occur while inserting the tblAddress object/record, but actually refers to the missing BirthDate on the tblContact object/record.
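A sketch of that trap, using the hypothetical tables from the description above (db, needsAddress, and birthDate are illustrative):

// The contact is only half-built: BirthDate has not been set yet.
var contact = new tblContact { FirstName = "Ann" };
db.tblContacts.InsertOnSubmit(contact);

if (needsAddress)
{
    var address = new tblAddress { City = "Springfield" };
    db.tblAddresses.InsertOnSubmit(address);
    // This flushes the half-built contact as well, so the SqlDateTime
    // overflow points at tblContact.BirthDate, not at tblAddress.
    db.SubmitChanges();
}

contact.BirthDate = birthDate;  // too late: the insert has already run
db.SubmitChanges();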