TEXT field that is compatible with both MySQL and HSQLDB

I have an application that uses a MySQL database, but I would like to run the application's unit tests against an HSQLDB in-memory database. The problem is that some of my persistable model objects have fields annotated with columnDefinition = "TEXT" to force MySQL to cater for long string values, but HSQLDB doesn't know what TEXT means. If I change it to CLOB, then HSQLDB is fine but MySQL fails.
Is there a standard column definition that I can use for long strings that is compatible with mysql AND hsqldb?

What worked for me was to just enable MySQL compatibility mode by changing the connection URL to jdbc:hsqldb:mem:testdb;sql.syntax_mys=true
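For illustration, in a Hibernate-based test setup that URL would go into the usual connection properties, along these lines (property values such as the driver class and dialect are typical choices, adjust to your own stack):

```properties
# Hedged sketch of test-only Hibernate settings for an in-memory HSQLDB
# with MySQL compatibility mode enabled via the URL.
hibernate.connection.driver_class=org.hsqldb.jdbcDriver
hibernate.connection.url=jdbc:hsqldb:mem:testdb;sql.syntax_mys=true
hibernate.connection.username=sa
hibernate.connection.password=
hibernate.dialect=org.hibernate.dialect.HSQLDialect
```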

You can use the same solution as offered in this post for PostgreSQL TEXT columns and HSQLDB with Hibernate:
Hibernate postgresql/hsqldb TEXT column incompatibility problem
As HSQLDB allows you to define TEXT as a TYPE or DOMAIN, this may be a solution if you arrange to execute a statement such as the one below before each test run against HSQLDB via Hibernate.
CREATE TYPE TEXT AS VARCHAR(1000000)
Update for HSQLDB 2.1 and later: these versions support a MySQL compatibility mode. In this mode, the MySQL TEXT type is supported and translated to LONGVARCHAR. LONGVARCHAR is by default a long VARCHAR, but a property (sql.longvar_is_lob) allows it to be interpreted as CLOB instead. See:
http://hsqldb.org/doc/2.0/guide/dbproperties-chapt.html#dpc_sql_conformance
http://hsqldb.org/doc/2.0/guide/compatibility-chapt.html#coc_compatibility_mysql
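One way to run the CREATE TYPE statement before each test run, assuming Hibernate is exporting the schema (hbm2ddl.auto set to create or create-drop), is to put it in an import.sql file on the test classpath, which Hibernate executes after schema export:

```sql
-- import.sql (test classpath only): emulate MySQL's TEXT on HSQLDB
CREATE TYPE TEXT AS VARCHAR(1000000)
```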

Not really. MySQL has TEXT and BLOB, with various size prefixes to indicate their maximum size. HSQLDB only appears to have CLOB and various VARCHARs. Most likely you'd have to special-case your tests depending on which database you're talking to.
If your text strings are short enough, you could use VARCHARs, but those are limited to just under 64 KB in MySQL, and that is also the maximum size of a row, so the larger the VARCHAR, the less space is left for other fields.

You can also solve some issues at the JPA-vendor (Hibernate etc.) level.
With @Lob, for example, the long/large type is determined at runtime based on the vendor (LONGVARCHAR/LONGTEXT for MySQL vs. CLOB in H2).
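A sketch of such a mapping (assuming a JPA provider such as Hibernate; the import is javax.persistence or jakarta.persistence depending on your version, and the entity and field names here are invented):

```java
import javax.persistence.*;

@Entity
public class Article {
    @Id @GeneratedValue
    private Long id;

    // The provider picks a vendor-appropriate column type:
    // typically LONGTEXT on MySQL, CLOB on H2/HSQLDB.
    @Lob
    private String body;
}
```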

Sqoop compatibility with TINYTEXT, TEXT, MEDIUMTEXT, and LONGTEXT

For a project of mine, I would like to transfer multiple tables from a MySQL database into Hive using Sqoop. Because a few of my columns use the MEDIUMTEXT datatype, I'd like to check the compatibility with someone who has experience, to prevent sudden surprises down the road.
According to the latest Sqoop user guide (1.4.6), there is no support for BLOB, CLOB, or LONGVARBINARY columns in direct mode.
Given that there is no mention of incompatibilities with "TEXT" datatypes, will I be able to import them from MySQL without problems?
In MySQL, TEXT is essentially the same as CLOB. Whatever limitations the user guide mentions for CLOB apply to the TEXT types as well.
Unlike typical datatypes, CLOB and TEXT need not store their data inline with the record; instead, the contents can be stored separately with a pointer left in the record. That is why the direct path does not work for special types like CLOB, TEXT, and BLOB in most databases.
I finally got around to setting up my hadoop cluster for my project. I am using hadoop 2.6.3 with hive 1.2.1 and sqoop 1.4.6.
It turns out that there is no problem with importing TEXT datatypes from MySQL into Hive using Sqoop. You can even supply the '--direct' parameter that makes use of the mysqldump tool for quicker transfers. In my project I had to import multiple tables containing 2 MEDIUMTEXT columns each. The tables were only about 2 GB each, so not that massive.
I hope this helps someone who is in the same situation I was in.
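For reference, the kind of import described above might look something like this (host, database, table, and credentials are placeholders; all flags shown are standard Sqoop 1.4.x options):

```shell
# Hypothetical invocation; adjust connection details to your cluster
sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --username someuser -P \
  --table articles \
  --direct \
  --hive-import
```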

Adding a TEXT field to an Entity Framework model for MySQL

I'm trying to add a TEXT field to my EDMX file, which I have set to generate DDL for MySQL, but the only option I have is to add a string with the maximum length set to Max. This reports an error when executing the SQL statements against the database: a max length of 4000 is not supported.
I also tried it the other way around, updating the field in the database and then updating the EDMX file based on the database, but that sets the field back to a string with maximum length set to None.
Am I overlooking something? Have anyone used this field?
Right now I have a kind of workaround to have my text field in the database mapped to the string property in the EDMX model. I generate the database script from the EDMX file and manually update the type for the TEXT columns from nvarchar(1000) to TEXT, execute it against the database and after that validate the mappings in the EDMX file.
Hopefully someone will come up with a better solution, because this is definitely not a cool workaround.
Update
This bug is fixed in MySQL Connector/NET version 6.4.4
I'm not familiar with Entity Framework; however, a TEXT field in MySQL does not have a length as part of its definition. Only CHAR/VARCHAR does.
There is a maximum length of data that can be stored in TEXT, which is 64 KB.

MySQL to Oracle

I've googled this but can't get a straight answer. I have a MySQL database that I want to import into Oracle. Can I just use the MySQL dump?
Nope. You need to use some ETL (Extract, Transform, Load) tool.
Oracle SQL Developer has a built-in feature for migrating a MySQL DB to Oracle.
Try this link - http://forums.oracle.com/forums/thread.jspa?threadID=875987&tstart=0 This is for migrating MySQL to Oracle.
If the dump is a SQL script, you will need to do a lot of search and replace to make that script work on Oracle.
Things that come to mind:
remove the dreaded backticks
remove all ENGINE=.... options
remove all DEFAULT CHARSET=xxx options
remove all UNSIGNED options
convert all DATETIME types to DATE
replace BOOLEAN columns with e.g. integer or a CHAR(1) (Oracle does not support boolean)
convert all int(x), smallint, tinyint data types to simply integer
convert all mediumtext, longtext data types to CLOB
convert all VARCHAR columns that are defined with more than 4000 bytes to CLOB
remove all SET ... commands
remove all USE commands
remove all ON UPDATE options for columns
rewrite all triggers
rewrite all procedures
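A few of the purely textual rewrites in the list above can be sketched as a string-rewriting pass. This is an illustrative fragment only (the class and method names are invented, and a real migration needs far more care than a handful of regexes):

```java
// Illustrative sketch: a handful of the mechanical rewrites listed above.
class MySqlToOracle {
    static String rewrite(String ddl) {
        ddl = ddl.replace("`", "");                           // remove the dreaded backticks
        ddl = ddl.replaceAll("ENGINE=\\w+\\s*", "");          // drop ENGINE=... options
        ddl = ddl.replaceAll("DEFAULT CHARSET=\\w+\\s*", ""); // drop DEFAULT CHARSET=... options
        ddl = ddl.replaceAll("\\bUNSIGNED\\b\\s*", "");       // Oracle has no UNSIGNED
        // int(x)-style integer types -> plain INTEGER
        ddl = ddl.replaceAll("(?i)\\b(?:TINYINT|SMALLINT|MEDIUMINT|INT)\\(\\d+\\)", "INTEGER");
        ddl = ddl.replaceAll("(?i)\\b(?:MEDIUMTEXT|LONGTEXT)\\b", "CLOB"); // long text -> CLOB
        ddl = ddl.replaceAll("(?i)\\bDATETIME\\b", "DATE");   // DATETIME -> DATE
        return ddl;
    }
}
```

Even a pass like this only covers the mechanical cases; triggers, procedures, and the SET/USE commands still need to be handled by hand.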
The answer depends on which MySQL features you use. If you don't use stored procedures, triggers, views etc, chances are you will be able to use the MySQL export without major problems.
Take a look at:
mysqldump --compatible=oracle
If you do use these features, you might want to try an automatic converter (Google offers some).
In every case, some knowledge of both syntaxes is required to be able to debug problems (there almost certainly will be some). Also remember to test everything thoroughly.

LONGTEXT valid in migration for PGSQL and MySQL

I am developing a Ruby on Rails application that stores a lot of text in a LONGTEXT column. I noticed that when deployed to Heroku (which uses PostgreSQL) I get insert exceptions because two of the columns' values are too large. Is there something special that must be done to get a comparably large text column type in PostgreSQL?
These were defined as "string" datatype in the Rails migration.
If you want the longtext datatype in PostgreSQL as well, just create it. A domain will do:
CREATE DOMAIN longtext AS text;
CREATE TABLE foo(bar longtext);
In PostgreSQL the required type is text. See the Character Types section of the docs.
A new migration that updates the model's datatype to 'text' should do the job. Don't forget to restart the database. If you still have problems, take a look at your model with 'heroku console' and just enter the model name.
If the DB restart doesn't fix the problem, the only way I found was to reset the database with 'heroku pg:reset'. Not much fun if you already have important data in your database.

Compatible and recommended data types for MS Access frontend / MySQL backend

I need a list of recommended MySQL data types to use when using Microsoft Access as the front end. Can anyone point me to a succinct article on the net, or post a list here please?
Check out this: Using Connector/ODBC with Microsoft Applications
For all versions of Access, you should enable the Connector/ODBC Return matching rows option. For Access 2.0, you should additionally enable the Simulate ODBC 1.0 option.
You should have a TIMESTAMP column in all tables that you want to be able to update. For maximum portability, do not use a length specification in the column declaration (which is unsupported within MySQL in versions earlier than 4.1).
...
Access cannot always handle the MySQL DATE column properly. If you have a problem with these, change the columns to DATETIME.
...
Here's a comparison of MS Access, MySQL, and SQL Server datatypes.
There are a lot of tricky issues to watch for; in some cases, Access and MySQL give the same name to different data types, e.g.
TEXT in Access is 255 characters (similar to MySQL's TINYTEXT)
TEXT in MySQL is 65535 characters (similar to Access's MEMO)
So if you use a TEXT field in MySQL, you'll have to access it as a MEMO in Access.
Number types can be tricky, too. MySQL has both signed and unsigned versions of each type, but Access doesn't. For example,
BYTE in Access is equivalent to MySQL's TINYINT UNSIGNED
INTEGER in Access is equivalent to MySQL's SMALLINT (signed)
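Pulling the Connector/ODBC notes above together, a hypothetical MySQL table meant to be edited from Access might look like this (the table and column names are purely illustrative):

```sql
CREATE TABLE contact (
    id      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    notes   TEXT,       -- shows up as a MEMO field in Access
    created DATETIME,   -- DATETIME rather than DATE, per the note above
    ts      TIMESTAMP   -- no length specification; lets Access update rows reliably
);
```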