How to simulate a sequence during DAO layer tests - JUnit

I have a Spring+Hibernate project, and I want to write unit test cases for the DAO layer.
Currently I am using HSQLDB's in-memory DB to test it. (I referred to this.)
In the project, IDs are provided by sequences. As I am using an in-memory DB, the sequences are not present during tests, so the tests were failing. As a workaround, I created a different set of hbm files without the sequences (and put them in the test resources folder). Is there a better way to handle this? Keeping duplicate hbm files does not look good to me. Any suggestion would be appreciated.

If you need a sequence in your test database, just create it.
Also, make sure you have the correct database dialect configured with Hibernate.
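For example, here is a minimal sketch of creating the sequence in a JUnit setup method, assuming a DataSource wired to the in-memory HSQLDB; the sequence name my_entity_seq is hypothetical and must match whatever your hbm files reference:

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;
import org.junit.Before;

public class MyDaoTest {

    private DataSource dataSource; // injected by your Spring test context

    @Before
    public void createSequence() throws Exception {
        try (Connection conn = dataSource.getConnection();
             Statement st = conn.createStatement()) {
            // Tolerate re-runs against the same in-memory database.
            st.execute("DROP SEQUENCE my_entity_seq IF EXISTS");
            // HSQLDB accepts the standard CREATE SEQUENCE syntax.
            st.execute("CREATE SEQUENCE my_entity_seq START WITH 1");
        }
    }
}

This way the production hbm files stay untouched, and the test database simply gains the sequence before each test runs.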
Consult the following related questions for details:
Syntax issue with HSQL sequence: `NEXTVAL` instead of `NEXT VALUE`
SequenceGenerator problem with unit testing in Hsqldb/H2
"correct" way to select next sequence value in HSQLDB 2.0.0-rc8

Holding complete copies of the HBM files doesn't sound like a good idea (one of the principles I strongly believe in is the "DRY" principle).
The solution I'd suggest (unless there's a better solution on the Hibernate side) is to edit the HBM file in the @Before method, in order to change just the differing bits.
I'm more of a .NET guy, and I know that in .NET there's a library called Fluent NHibernate that allows you to generate (and, I assume, also edit existing) hbm mappings at runtime. I'm not sure if there's something similar in Java, but you can also fall back to manipulating the hbm as an XML file.
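As a rough illustration of that fallback in Java, here is a sketch that rewrites the generator strategy in memory before building the SessionFactory; the file path, the generator-element targeting, and the switch to the increment generator are all assumptions for the example (addXML is the Hibernate 3.x-era API):

import java.io.File;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.hibernate.cfg.Configuration;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class TestMappings {

    public static Configuration withoutSequences(String hbmPath) throws Exception {
        // Load the production hbm file as plain XML.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File(hbmPath));

        // Swap every sequence generator for one the in-memory DB supports.
        NodeList generators = doc.getElementsByTagName("generator");
        for (int i = 0; i < generators.getLength(); i++) {
            Element gen = (Element) generators.item(i);
            if ("sequence".equals(gen.getAttribute("class"))) {
                gen.setAttribute("class", "increment");
            }
        }

        // Serialize the modified document and hand it to Hibernate.
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return new Configuration().addXML(out.toString());
    }
}

This keeps a single set of hbm files on disk and confines the test-only difference to one place.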

Related

Is it possible to create a SQL database using a Core Data schema

I have a Core Data schema file with relationships between the entities.
I need to create a SQL database and would like to know if it can be created automatically (MySQL or MS-SQL) using only this file.
Looking at the SQLite DB I see that the relationships are not mapped in any logical way.
First, your assessment that the relationships are "not mapped in any logical way" is not correct. If you look carefully at the Core Data generated database, you will discover that the relationships are mapped exactly as in any other relational database schema, i.e. with foreign keys referring to rows in other tables.
Also, the naming conventions in these SQLite databases are very transparent (e.g., entity and attribute names start with Z, etc.).
That being said, I would strongly discourage you from hacking the Core Data generated database file, or even from using it to inform another database schema, the reason being that these are undocumented features that could change at any time without notice and thus break any code you write based on them.
IMO, the most practical thing to do is to rewrite the model quickly in the usual MySQL schema format, and to update it manually whenever you change the managed object model.
If you would like to automate the process, there is a rich set of APIs provided for interpreting and parsing NSManagedObjectModel, including classes like NSEntityDescription, NSAttributeDescription, etc. You could write a framework that iterates through your entities and attributes and generates a text file that is a readable schema for MySQL, complete with information about indexing, versions, etc.
If you go down that route, please make sure to notify us and do post your framework on Github for the benefit of others.
If you use Core Data you can create a SQL-based database using a schema file, but its structure is entirely controlled by the Core Data framework. Apple specifically tell us as developers to leave it alone and not to edit it using libsqlite or any other method. If you do, then Core Data won't have anything to do with you!
In terms of making your own DB using one of Apple's schema files, I'm sure it is possible, but you'd have to know the inner workings of the Core Data framework to even attempt it.
In terms of making your own SQLite DB then you have to sort out all the relationships and mapping yourself.
I think that mixing and matching Core Data resources and custom built SQLite databases is probably a headache waiting to happen. I have used both methods and find that Core Data is brilliant (especially with iCloud) as long as you're OK with your App being limited to Apple only.

How do I manage database evolutions when I'm using JPA?

I learned Play by following the tutorial on their website for building a little blogging engine.
It uses JPA, and in its bootstrap it calls Fixtures.deleteModels() (or something like that).
It basically nukes all the tables every time it runs and I lose all the data.
I've deployed a production system like this (sans the nuke statement).
Now I need to deploy a large update to the production system. Lots of classes have changed, been added, and been removed. In my local testing, without nuking the tables on every run, I ran into sync issues: when I would try to write to or read from the tables, Play would throw errors. I opened up MySQL and, sure enough, the tables had only been partially modified, and modified incorrectly in some cases. Even with the DDL mode set to "create" in my config, JPA can't seem to "figure out" how to reconcile the changes and modify my schema accordingly.
So I have to put back in the bootstrap statement that nukes all my tables.
So I started looking into database evolutions within Play and read an article on the play framework website about database evolutions. The article talked about version scripts, but it said, "If you work with JPA, Hibernate can handle database evolutions for you automatically. Evolutions are useful if you don’t use JPA".
So if JPA is supposed to be taking care of this for me, how do I deploy large updates to a large Play app? So far JPA has not been able to make the schema changes correctly, and the app will throw errors. I'm not interested in losing all my data, so the dev fix, Fixtures.deleteModels(), can't really be used in prod.
Thanks in advance,
Josh
No, JPA should not take care of it for you. It's not a magic tool. If you decide to rename the table "client" to "customer", the column "street" to "line1", and to switch the values of the customer-type column from 1, 2, 3 to "bronze", "silver", "gold", there is no way for JPA to read your mind and figure out all the changes to make automagically.
To migrate from one schema to another, you use the same tools as if you weren't using JPA: SQL scripts, more advanced schema and data migration tools, or even custom migration JDBC code.
Have a look at Flyway. You may trigger database migrations from your code or from Maven.
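A minimal sketch using the classic Flyway API (newer versions bootstrap via Flyway.configure() instead); the JDBC URL and credentials are placeholders, and the SQL scripts live under db/migration by convention:

import org.flywaydb.core.Flyway;

public class Migrate {
    public static void main(String[] args) {
        Flyway flyway = new Flyway();
        // Placeholder connection details.
        flyway.setDataSource("jdbc:mysql://localhost/mydb", "user", "password");
        // Applies pending scripts such as V2__rename_client_to_customer.sql
        // before the application (and its JPA container) starts.
        flyway.migrate();
    }
}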
There is a property called hbm2ddl.auto=update which will update your schema.
I would STRONGLY suggest not using this setting in production, as it introduces a whole new level of problems if something goes wrong.
It's perfectly fine for development though.
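For reference, a sketch of where the property lives with classic programmatic Hibernate bootstrapping (a persistence.xml or Spring property entry works just as well):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class DevSessionFactory {
    public static SessionFactory build() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        // Development only: let Hibernate alter the schema to match the mappings.
        // In production, "validate" is the safer choice.
        cfg.setProperty("hibernate.hbm2ddl.auto", "update");
        return cfg.buildSessionFactory();
    }
}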
When a JPA container starts (say, EclipseLink or any other), it expects to find a database which matches the @Entity classes you've defined in your code. If the database has been migrated already, everything will work smoothly; otherwise, it will probably fail.
So, long story short, you need to perform database migrations (or evolutions, if you prefer) before the JPA container starts. Apparently, Play performs migrations for you before it kicks off the database manager you configured. So, in theory, regardless of the ORM you are using, Play decides when it's time for the ORM to start its work; conceptually it should work.
For a good presentation about this subject, please have a look at the second video at: http://flywaydb.org/documentation/videos.html

Configuration Information in DB?

I have lots of stuff in an app.config, and when changes are necessary, an app restart is required. That's bad for my 24x7 web server system (it really is 24x7, not even 23x7). I would like a good strategy for keeping the config information in a DB table and querying/using it as needed. I googled around a bit and am coming up dry. Does anyone have any suggestions before I reinvent the wheel?
Thanks.
I needed exactly this for my recent application, and couldn't use any application-server-specific techniques, as I needed some console apps run from cron jobs to access the configuration too.
I basically made a couple of small tables to create a registry-style configuration database. I have a table of keys (which all have parent keys, so they can be arranged in a tree structure) and a table of values which are attached to keys. All keys and values are named, so my access functions look like this:
openKey("/my_app");
createKey("basic_settings");
openKey("basic_settings");
createValue("log_directory","c:\logs");
getValue("/my_app/basic_settings","log_directory");
The tree structure allows you to logically separate similar data (e.g. you can have a "log_directory" value under several different keys) and avoids having the overly verbose names you find in properties files.
All the values are just strings (varchar2 in the db), so there's some overhead in converting booleans and numbers: but it's only config data, so who cares?
I also create a "settings_changed" value that has a datetime string in it, so any app can quickly tell if it needs to refresh its configuration (you obviously need to remember to set it when you change anything, though).
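To make the shape of this concrete, here is a sketch of what getValue could look like over such a design; the table and column names (config_keys with a precomputed path column, config_values) are my own illustration, not the exact schema described above:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ConfigRegistry {

    public static String getValue(Connection conn, String keyPath, String valueName)
            throws SQLException {
        String sql = "SELECT v.value FROM config_values v "
                   + "JOIN config_keys k ON v.key_id = k.id "
                   + "WHERE k.path = ? AND v.name = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, keyPath);
            ps.setString(2, valueName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null; // everything is a string
            }
        }
    }
}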
There may be tools out there to do this kind of thing already, but this was only a day's worth of coding and it works a treat. I added command-line tools to edit and upload/download parts or all of the tree, then made a quick graphical editor in Java Swing.

Hadoop JUnit testing: writing/reading to/from HDFS

I have written some classes that write to and read from HDFS. Under certain conditions that occur when these classes are instantiated, they create a specific path and file and write to it (or they go to a previously created path and file and read from it). I have tested this by running a few Hadoop jobs, and it appears to be functioning correctly.
However, I would like to be able to test this in the JUnit framework, but I have not found a good solution for testing reads and writes to HDFS in JUnit. I would appreciate any helpful advice on the matter. Thanks.
I haven't tried this myself yet, but I believe what you are looking for is org.apache.hadoop.hdfs.MiniDFSCluster.
It is in hadoop-test-*.jar, NOT hadoop-core-*.jar. I guess the Hadoop project uses it internally for testing.
Here it is:
http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/MiniDFSCluster.java?revision=1127823&view=markup&pathrev=1130381
I think there are plenty of uses of it in that same directory, but here is one:
http://svn.apache.org/viewvc/hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestWriteRead.java?revision=1130381&view=markup&pathrev=1130381
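I haven't verified this against every Hadoop release, but a JUnit test against the in-process cluster could look roughly like this (the constructor signature follows the 0.20/1.x-era API those links use; newer releases moved to a Builder):

import static org.junit.Assert.assertEquals;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class HdfsReadWriteTest {

    private MiniDFSCluster cluster;
    private FileSystem fs;

    @Before
    public void setUp() throws Exception {
        Configuration conf = new Configuration();
        cluster = new MiniDFSCluster(conf, 1, true, null); // 1 datanode, format the FS
        fs = cluster.getFileSystem();
    }

    @After
    public void tearDown() throws Exception {
        if (cluster != null) cluster.shutdown();
    }

    @Test
    public void writeThenRead() throws Exception {
        Path p = new Path("/test/data.txt");
        try (FSDataOutputStream out = fs.create(p)) {
            out.writeUTF("hello hdfs");
        }
        try (FSDataInputStream in = fs.open(p)) {
            assertEquals("hello hdfs", in.readUTF());
        }
    }
}

Point your classes at cluster.getFileSystem() instead of a real cluster and they can be exercised entirely in-process.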

Pattern for compile time discovery of unsafe/wrong queries

My team is developing a large Java application which extensively queries a MySQL database (from different classes and modules).
I'd like to know if there is a pattern that allows me to be notified at compile time if there are queries that refer to a wrong table structure (for instance, if I remove or add a field on a table and a query string refers to it), in order to prevent runtime errors.
This should also work for JOIN queries.
Querydsl is similar to LiquidForm and supports both JPA / Hibernate and SQL based backends.
For the SQL based version we currently support MySQL (5.? tested), Oracle (10g tested) and HSQLDB.
In a nutshell a query like this
select count(*) from test where name is null
would become
long count = query.from(test).where(test.name.isNull()).count();
Querydsl SQL uses code generation to reflect SQL schemas into Java classes.
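To make the compile-time aspect concrete, here is a sketch against the Querydsl 2.x-era SQL API, where QTest is the class its code generator produced from the test table (names assumed):

import java.sql.Connection;
import com.mysema.query.sql.MySQLTemplates;
import com.mysema.query.sql.SQLQuery;

public class CompileSafeCount {
    public static long countNullNames(Connection conn) {
        QTest test = QTest.test; // generated metamodel class (assumed name)
        SQLQuery query = new SQLQuery(conn, new MySQLTemplates());
        // If the "name" column is dropped and the classes are regenerated,
        // test.name no longer exists, so this line fails to compile.
        return query.from(test).where(test.name.isNull()).count();
    }
}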
There's an open-source tool called DODS (Data Object Design Studio) that could do what you want. The DODS tool was originally part of the Enhydra Java application server project, and since the company backing that project went kablooey in 2002, DODS has been hosted and maintained at ObjectWeb. Anyway, it's open-source (LGPL).
http://forge.objectweb.org/projects/dods
The concept is that you describe your schema in an XML file, and DODS generates Java POJO classes with which you can query and manipulate the database tables. Of course every time you change your schema, you need to run DODS again to re-generate the ORM classes, and recompile your app against them.
But the result is that if a table or column disappears and your app still queries it, you do get a compile-time error, because your code is now calling a generated class or method that no longer exists.
I would say that the simple answer is "no". The more complete answer is "yes, to some degree", depending on your willingness to jump through hoops.
Unless you have a Java representation of your database schema, you will never get compile-time notification that your queries are wrong (such classes can be generated). Also, you must use these classes to build your queries, so the method you use today (query strings) must be abandoned. Building queries from those Java classes requires some tricks of its own: LiquidForm uses the required tricks to build JPA queries, but I have not seen a similar library for constructing plain SQL queries (LiquidForm is new and quite brilliant), so you would have to build such a library yourself. So, as you see, getting compile-time warnings when constructing SQL is hard, but not impossible (only nearly impossible). And even if you build what I suggest, your Java representation of the schema must be updated immediately after every schema change, so the generation of the Java classes would have to be built into your IDE or build tool.
I would rather suggest having good unit tests that will notice when your queries become illegal as a result of a schema change. This is the most common way to achieve what you want. Also, should you decide to "upgrade" to JPA, you could use LiquidForm.
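In that spirit, a minimal sketch of such a test, assuming a test database whose schema is kept in step with production (the names and the query are illustrative):

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;
import org.junit.Test;

public class QuerySchemaTest {

    private DataSource testDataSource; // wired to the schema-accurate test DB

    @Test
    public void queryStillMatchesSchema() throws Exception {
        try (Connection conn = testDataSource.getConnection();
             Statement st = conn.createStatement()) {
            // A dropped or renamed column makes this throw, failing the build.
            st.executeQuery("SELECT name FROM test WHERE name IS NOT NULL");
        }
    }
}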