I'm having some trouble mapping a byte array to a MySQL database in Hibernate and was wondering if I'm missing something obvious. My class looks roughly like this:
public class Foo {
private byte[] bar;
// Getter and setter for 'bar'
}
The table is defined like this in MySQL 5.5:
CREATE TABLE foo (
bar BINARY(64) NOT NULL)
And the Hibernate 3.6.2 mapping looks similar to this:
<hibernate-mapping>
<class name="example.Foo" table="foo">
<property name="bar" column="bar" type="binary" />
</class>
</hibernate-mapping>
I am using hbm2ddl for validation only and it gives me this error when I deploy the application:
Wrong column type in foo for column bar. Found: binary, expected: tinyblob
If using type="binary" in the mapping wouldn't cause Hibernate to expect the column's type to be binary (instead of tinyblob,) I don't know what would. I spent some time Googling this but couldn't find the exact error. The solutions for similar errors were to...
Specify "length" on the <property>. That changes what type Hibernate expects but it's always some variety of blob instead of the "binary" type it's finding.
Instead of declaring a "type" on the property element, nest a column element and give it a sql-type attribute. That work but that would also make the binding specific to MySQL so I would like to avoid it if possible.
Does anything stand out about this setup that would cause this mismatch? If I specify type="binary" instead of "blob", why is Hibernate expecting a blob instead of a binary?
I believe the problem is type="binary".
That type is a generic Hibernate type; it does not map directly to DB-engine-specific types. Hibernate types are translated to different SQL types depending on the dialect/driver you are using, and apparently MySQL maps the Hibernate type "binary" to a tinyblob.
The full list of Hibernate types is available here.
You have two options. You can change your CREATE TABLE script to store that column as a tinyblob. Then your Hibernate validation would not fail and your application would work. This is the suggested solution.
The second option should be used only if you HAVE to use the BINARY data type in the DB. What you can do is specify a sql-type in the Hibernate mapping to force Hibernate to use the type you want. The mapping would look like this:
<property name="bar">
<column name="bar" sql-type="binary" />
</property>
The main downside is that you lose DB-engine independence, which is why most people use Hibernate in the first place. This mapping will only work on DB engines that have the BINARY data type.
What we ended up doing to solve a problem similar to this is write our own custom UserType.
UserTypes are relatively easy to implement. Just create a class that implements org.hibernate.usertype.UserType and provide implementations for its methods.
In your Hibernate mapping, using a user type is pretty easy:
<property name="data" type="com.yourpackage.hibernate.CustomBinaryStreamUserType" column="binary_data" />
Simply put, this class is used for reading and writing the data from and to the database; specifically, the methods nullSafeGet and nullSafeSet are called.
In our case, we used this to gzip-compress binary data before writing it to the database and to uncompress it as it's read out. This hides the compression entirely from the application using the data.
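For reference, here is a minimal sketch of what such a gzip UserType could look like against the Hibernate 3.x UserType interface (the class name matches the mapping above; the buffer handling is illustrative, not our exact production code):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

import org.hibernate.HibernateException;
import org.hibernate.usertype.UserType;

public class CustomBinaryStreamUserType implements UserType {

    public int[] sqlTypes() {
        return new int[] { Types.VARBINARY };
    }

    public Class returnedClass() {
        return byte[].class;
    }

    public boolean equals(Object x, Object y) throws HibernateException {
        return Arrays.equals((byte[]) x, (byte[]) y);
    }

    public int hashCode(Object x) throws HibernateException {
        return Arrays.hashCode((byte[]) x);
    }

    // Read the compressed column value and uncompress it for the application.
    public Object nullSafeGet(ResultSet rs, String[] names, Object owner)
            throws HibernateException, SQLException {
        byte[] compressed = rs.getBytes(names[0]);
        if (compressed == null) return null;
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            return out.toByteArray();
        } catch (IOException e) {
            throw new HibernateException("Could not uncompress column value", e);
        }
    }

    // Compress the application's bytes before writing them to the database.
    public void nullSafeSet(PreparedStatement st, Object value, int index)
            throws HibernateException, SQLException {
        if (value == null) {
            st.setNull(index, Types.VARBINARY);
            return;
        }
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write((byte[]) value);
            gz.finish(); // flush all compressed bytes before reading the buffer
            st.setBytes(index, bos.toByteArray());
        } catch (IOException e) {
            throw new HibernateException("Could not compress column value", e);
        }
    }

    public Object deepCopy(Object value) throws HibernateException {
        return value == null ? null : ((byte[]) value).clone();
    }

    public boolean isMutable() {
        return true;
    }

    public Serializable disassemble(Object value) throws HibernateException {
        return (Serializable) deepCopy(value);
    }

    public Object assemble(Serializable cached, Object owner) throws HibernateException {
        return deepCopy(cached);
    }

    public Object replace(Object original, Object target, Object owner) throws HibernateException {
        return deepCopy(original);
    }
}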
I think there is an easy solution for mapping BINARY columns in Hibernate.
BINARY columns can easily be mapped to java.util.UUID in Hibernate entity classes.
For example, the column definition would look like:
`tokenValue` BINARY(16) NOT NULL
The Hibernate entity then needs code like the following to support the BINARY column:
private UUID tokenValue;
@Column(columnDefinition = "BINARY(16)", length = 16)
public UUID getTokenValue() {
    return this.tokenValue;
}

public void setTokenValue(UUID tokenValue) {
    this.tokenValue = tokenValue;
}
I am trying to use javax.persistence.* to auto-create a table by using @Entity.
Here is the problem:
Is there any way to convert the JsonNode to a String by using an annotation?
Edit: the JPA provider is Spring Data JPA and the JsonNode is com.fasterxml.jackson's.
You cannot use a JsonNode for an entity column with Spring Data JPA. You must use a String, and in another class you can write methods that convert the String to JSON and back.
Annotate your JSON property with @Transient (see https://stackoverflow.com/a/1281957/66686). This will make JPA ignore it.
Have another String property. In the getter and setter transform between String and Json representation.
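A minimal sketch of that pattern, assuming a made-up entity with a single JSON attribute:

import java.io.IOException;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Transient;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

@Entity
public class Document {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Id
    private Long id;

    // What JPA actually persists: the raw JSON text.
    @Column(name = "payload")
    private String payloadJson;

    // Convenience view for the application; JPA ignores it.
    @Transient
    public JsonNode getPayload() {
        try {
            return payloadJson == null ? null : MAPPER.readTree(payloadJson);
        } catch (IOException e) {
            throw new IllegalStateException("Column does not contain valid JSON", e);
        }
    }

    public void setPayload(JsonNode payload) {
        this.payloadJson = payload == null ? null : payload.toString();
    }
}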
If you have many properties like this you might want to use an embeddable or if you are using Hibernate a user type (other JPA providers might offer something similar). See this article for an example: https://theodoreyoung.wordpress.com/2012/02/07/custom-user-types-with-jpa-and-spring/
Read this to annotate your column correctly.
It's possible to use a JSON column with Hibernate:
https://prateek-ashtikar512.medium.com/how-to-handle-json-in-postgresql-5e2745d5324
In Eclipse, "JPA Entities from Tables" converts a byte column from MySQL to byte in Java.
How can I change it to boolean (NetBeans can generate it correctly)?
Thanks in advance for any answer.
[Hmmmm. I don't see that the MySQL documentation says it has a data type of BYTE. Maybe you are mean BIT?]
Either way:
Dali (the part of Eclipse that generates JPA entities) uses DTP (another part of Eclipse) to determine the Java attribute type for a particular data type. These mappings are database platform-specific and are specified in .xmi files in various DTP plug-ins.
For example, for MySQL, the data type BIT (along with BOOL and BOOLEAN) is mapped to the Java type byte in the file
/runtime/vendors/MySql_5.1/MySql_5.1.xmi
in the plug-in jar
./plugins/org.eclipse.datatools.enablement.mysql.dbdefinition_1.0.4.v201109022331.
You can extract the appropriate .xmi file, edit it, and return it to its jar; this should alter how entities are generated.
When using PostgreSQL to store data in a field of a string-like validated type, like xml, json, jsonb, ltree, etc., the INSERT or UPDATE fails with an error like:
column "the_col" is of type json but expression is of type character varying
... or
column "the_col" is of type json but expression is of type text
Why? What can I do about it?
I'm using JDBC (PgJDBC).
This happens via Hibernate, JPA, and all sorts of other abstraction layers.
The "standard" advice from the PostgreSQL team is to use a CAST in the SQL. This is not useful for people using query generators or ORMs, especially if those systems don't have explicit support for database types like json, so they're mapped via String in the application.
Some ORMs permit the implementation of custom type handlers, but I don't really want to write a custom handler for each data type for each ORM, e.g. json on Hibernate, json on EclipseLink, json on OpenJPA, xml on Hibernate, ... etc. There's no JPA2 SPI for writing a generic custom type handler. I'm looking for a general solution.
Why it happens
The problem is that PostgreSQL is overly strict about casts between text and non-text data types. It will not allow an implicit cast (one without a CAST or :: in the SQL) from a text type like text or varchar (character varying) to a text-like non-text type like json, xml, etc.
The PgJDBC driver specifies the data type of varchar when you call setString to assign a parameter. If the database type of the column, function argument, etc, is not actually varchar or text, but instead another type, you get a type error. This is also true of quite a lot of other drivers and ORMs.
PgJDBC: stringtype=unspecified
The best option when using PgJDBC is generally to pass the parameter stringtype=unspecified. This overrides the default behaviour of passing setString values as varchar and instead leaves it up to the database to "guess" their data type. In almost all cases this does exactly what you want, passing the string to the input validator for the type you want to store.
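For example (connection details below are placeholders; the relevant part is the stringtype parameter in the JDBC URL):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// stringtype=unspecified lets the server infer parameter types.
Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost:5432/mydb?stringtype=unspecified",
        "myuser", "mypassword");

PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO some_table (the_col) VALUES (?)");
ps.setString(1, "{\"a\": 1}");   // now accepted by a json column
ps.executeUpdate();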
All: CREATE CAST ... WITH FUNCTION ...
You can instead CREATE CAST to define a data-type specific cast to permit this on a type-by-type basis, but this can have side effects elsewhere. If you do this, do not use WITHOUT FUNCTION casts, they will bypass type validation and result in errors. You must use the input/validation function for the data type. Using CREATE CAST is suitable for users of other database drivers that don't have any way to stop the driver specifying the type for string/text parameters.
e.g.
CREATE OR REPLACE FUNCTION json_intext(text) RETURNS json AS $$
SELECT json_in($1::cstring);
$$ LANGUAGE SQL IMMUTABLE;
CREATE CAST (text AS json)
WITH FUNCTION json_intext(text) AS IMPLICIT;
All: Custom type handler
If your ORM permits, you can implement a custom type handler for the data type and that specific ORM. This is mostly useful when you're using a native Java type that maps well to the PostgreSQL type, rather than using String, though it can also work if your ORM lets you specify type handlers using annotations etc.
Methods for implementing custom type handlers are driver-, language- and ORM-specific. Here's an example for Java and Hibernate for json.
PgJDBC: type handler using PGObject
If you're using a native Java type in Java, you can extend PGObject to provide a PgJDBC type mapping for your type. You will probably also need to implement an ORM-specific type handler to use your PGObject, since most ORMs will just call toString on types they don't recognise. This is the preferred way to map complex types between Java and PostgreSQL, but also the most complex.
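A bare-bones sketch of the idea; the subclass name is made up, and real implementations usually add validation and convenience accessors:

import java.sql.SQLException;

import org.postgresql.util.PGobject;

public class PGJson extends PGobject {

    public PGJson() {
        setType("json");   // tells the driver which server-side type to declare
    }

    public PGJson(String json) throws SQLException {
        this();
        setValue(json);
    }
}

You can then pass new PGJson(jsonString) to setObject, and the driver sends the value with the declared type rather than as varchar.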
PgJDBC: Type handler using setObject(int, Object)
If you're using String to hold the value in Java, rather than a more specific type, you can invoke the JDBC method setObject(int, Object) to store the string with no particular data type specified. The JDBC driver will send the string representation, and the database will infer the type from the destination column type or function argument type.
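A short sketch; note that depending on driver version you may need the three-argument overload with java.sql.Types.OTHER to reliably leave the parameter type unspecified:

// the_col is a json column; Types.OTHER sends the value with no specific
// type, so the server infers json from the destination column.
PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO some_table (the_col) VALUES (?)");
ps.setObject(1, "{\"a\": 1}", java.sql.Types.OTHER);
ps.executeUpdate();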
See also
Questions:
Mapping postgreSQL JSON column to Hibernate value type
Are JPA (EclipseLink) custom types possible?
External:
http://www.postgresql.org/message-id/54096082.1090009@2ndquadrant.com
https://github.com/pgjdbc/pgjdbc/issues/265
http://www.pateldenish.com/2013/05/inserting-json-data-into-postgres-using-jdbc-driver.html
I have two entities X and Y with the relation @ManyToMany. X has a list of Y's; let's call it yList. Both X and Y have other class members as well (they are not important).
I am using Hibernate as JPA provider, and jackson-databind / jackson-annotations for things like serialization and deserialization.
Now, the following json is received from the client. It has all the fields of X, but only a list of id's for Y. As a concrete example, X could be Person and Y could be Country. And the many-to-many relation captures which countries have been visited by whom.
{
  "name": "Bob Dylan",
  "age": "74",
  "visitedCountryIds": ["45", "23", "85"]
}
When deserializing this json, I want to populate all the fields of the entity X, including yList, such that the elements of yList are resolved by looking up these entities in the database.
My idea so far is to deserialize yList by writing a custom subclass of JsonDeserializer, and have it perform the lookup by id.
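Roughly along these lines (a sketch only; CountryRepository and its findById are stand-ins for whatever performs the database lookup, and wiring the repository into the deserializer is omitted):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;

public class VisitedCountriesDeserializer extends JsonDeserializer<List<Country>> {

    private final CountryRepository countryRepository;

    public VisitedCountriesDeserializer(CountryRepository countryRepository) {
        this.countryRepository = countryRepository;
    }

    @Override
    public List<Country> deserialize(JsonParser p, DeserializationContext ctxt)
            throws IOException {
        // The JSON value is an array of id strings: ["45", "23", "85"]
        String[] ids = p.readValueAs(String[].class);
        List<Country> resolved = new ArrayList<>();
        for (String id : ids) {
            resolved.add(countryRepository.findById(Long.valueOf(id)));
        }
        return resolved;
    }
}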
Is this a reasonable approach?
You could use @JsonCreator (as already suggested by Uri Shalit) or just a setter method for your property in which you would do the necessary lookups from the database.
However, if you have many entities (and associations) for which you want to do this, this could become a lot of repeated boilerplate code. Also, if implemented in the entity classes directly, it would pollute them with database lookup code (readability, SRP, etc.).
If you want some generic approach to this, then I think you are on a good way; custom deserializer is the place to implement it.
If I were to implement the generic approach, I would probably introduce a custom annotation which I would place on the association definition together with standard JPA annotations. For example:
@MyCustomJsonIds("visitedCountryIds")
#ManyToMany(...)
private List<Country> countries;
Then, in the deserializer, I would query for the presence of those annotations to dynamically determine what needs to be looked up from the database.
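The annotation itself could be as simple as this sketch (the name matches the example above):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// The deserializer reads value() to know which JSON property holds the ids.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface MyCustomJsonIds {
    String value();
}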
Another option is to create a constructor that accepts those parameters, annotate it with @JsonCreator, and have the constructor perform the lookup from the database; this way you don't need to write a specific deserializer.
LINQ to SQL allows table mappings to automatically convert back and forth to Enums by specifying the type for the column - this works for strings or integers.
Is there a way to make the conversion case-insensitive, or to add a custom mapping class or extension method into the mix, so that I can specify what the string should look like in more detail?
Reasons for doing so might be to supply a nicer naming convention inside some new funky C# code in a system where the data schema is already set (and is relied upon by some legacy apps), so the actual text in the database can't be changed.
You can always add a partial class with the same name as your LinqToSql class, and then define your own parameters and functions. These will then be accessible as object parameters and methods for this object, the same way as the auto-generated LinqToSql methods are accessible.
Example: You have a LinqToSql class named Car which maps to the Car table in the DB. You can then add a file to App_Code with the following code in it:
public partial class Car {
// Add properties and methods to extend the functionality of Car
}
I am not sure this totally meets your requirement of changing the way Enums are mapped to a column. However, you could add a property whose getter and setter map the enum values you need while keeping things case-insensitive.