I have a webapp with user/group functions and existing user/group data.
I want to use the Activiti process engine; however, it seems Activiti manages user/group info itself.
Should I:
Refactor my existing webapp to reuse the user/group data from Activiti, or
Write some adapter code to make Activiti reuse the user/group data in my existing database? Maybe another implementation of RepositoryService, IdentityService, etc., and recompile? It seems RepositoryServiceImpl is hard-coded in the Activiti sources, and there isn't a setRepositoryService() method on ProcessEngine.
I can't rename the existing DB tables, because some other apps are using them.
I have read the user guide, but I didn't find any information on how to integrate Activiti with existing apps.
I don't know what version you are currently using, but I used your second option successfully with version 5.5, overriding some Activiti classes:
Extend GroupManager and UserManager (from the package org.activiti.engine.impl.persistence.entity), and implement the methods you need, using the required DAOs/EntityManager/whatever pointing to your database. Code here: GroupManager / UserManager.
Implement org.activiti.engine.impl.interceptor.SessionFactory, for groups and users. Check out the code here: ActivitiGroupManagerFactory / ActivitiUserManagerFactory.
Finally, in your Activiti config you have to set your new SessionFactory classes. I was using Spring, so here is my activiti-config bean code: activiti-config.xml (check line 14)
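To make that concrete, here is a minimal sketch of the user factory (the CustomUserManager class is an assumption, standing in for your own UserManager subclass backed by your existing tables):

import org.activiti.engine.impl.interceptor.Session;
import org.activiti.engine.impl.interceptor.SessionFactory;
import org.activiti.engine.impl.persistence.entity.UserManager;

public class ActivitiUserManagerFactory implements SessionFactory {

    // Activiti looks sessions up by this type whenever it needs user data
    public Class<?> getSessionType() {
        return UserManager.class;
    }

    // Hand out our own manager, which queries the existing user tables
    public Session openSession() {
        return new CustomUserManager();
    }
}

The group factory is analogous, and both get registered as custom session factories on the process engine configuration (which is what the activiti-config.xml linked above does).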
Hope this helps in some way :)
You can check out Lim Chee Kin's code for integrating Activiti with Spring Security at https://github.com/limcheekin/activiti-spring-security - maybe you can reuse your user/group data with Spring Security and reuse his code that way.
I am continuing the development of an Android app that works against a server built with Maven and the Spring Framework; the server database is MySQL.
I need two very specific things:
I want to create a table
I want to create a trigger
But I want to create them from the Spring Framework and Maven, and I could not find a way to do it.
Since it is a project among several people, I want it to be as automatic as possible to keep things simple.
I want the server, when it starts, to create the table and the trigger if they do not already exist.
Can this be done?
I would like a simple example, or a site I can visit that gives me at least the concept of how to do it.
Maven is a tool which builds your application. It's not available when the application runs. Therefore, it's often not the best choice to create database tables, etc.
A better approach is to locate the place where Spring's ApplicationContext is created and add code there that examines the database, finds out which tables already exist, and creates those which don't.
Later, you can extend the code to migrate data when your data model changes.
To execute SQL, check the Spring JDBC documentation and especially JdbcTemplate.
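A minimal sketch of that idea, assuming a JdbcTemplate bean is already configured (the table and trigger names, audit_log and audit_log_trg, are made up):

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class SchemaInitializer {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @PostConstruct
    public void createSchemaIfMissing() {
        // MySQL supports IF NOT EXISTS for tables
        jdbcTemplate.execute("CREATE TABLE IF NOT EXISTS audit_log ("
            + "id BIGINT AUTO_INCREMENT PRIMARY KEY, "
            + "message VARCHAR(255) NOT NULL)");

        // Older MySQL has no IF NOT EXISTS for triggers, so check the catalog first
        Integer triggers = jdbcTemplate.queryForObject(
            "SELECT COUNT(*) FROM information_schema.triggers "
            + "WHERE trigger_schema = DATABASE() AND trigger_name = 'audit_log_trg'",
            Integer.class);

        if (triggers != null && triggers == 0) {
            jdbcTemplate.execute("CREATE TRIGGER audit_log_trg "
                + "BEFORE INSERT ON audit_log FOR EACH ROW "
                + "SET NEW.message = TRIM(NEW.message)");
        }
    }
}

Because this runs inside the application at startup, it behaves the same for every developer and every deployment, which keeps things automatic for the whole team.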
What options exist for managing database scripts and doing new development against a database?
For example: the database is used by a number of applications, and a number of developers are working on it. What are the best options for keeping the database up to date with the latest changes, and what should the process for deploying changes to production look like?
I see two options:
Microsoft Visual Studio has a database project, so all database scripts could be added to the project and the database can be rebuilt from Visual Studio
Restore the database from a backup and apply only new scripts to the database
What other options exist? How can I manage database development, and what are the best practices? What would be the advantages and disadvantages of the options above? How should new SQL scripts be maintained?
I understand that a source control system should be used, but with DB scripts it's not as easy as with application code.
I believe there is no universal solution, but at least I am interested in DB developers' opinions on how it's implemented in your company.
Liquibase is IMHO the best tool. It's brutally simple in its approach, which is one of the reasons it works so well.
You can read up on the site how it works, but basically it creates and manages a simple table that stores a hash of each script, to determine whether it has run a given script yet or not. There's pre- and post-SQL too, and you can bypass on conditions... it does pretty much everything you'd want or need. It also has Maven integration, so it can seamlessly become part of your build.
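For a flavour of it, here is a sketch of a Liquibase SQL-formatted changelog (author, changeset id, and table are made up):

--liquibase formatted sql

--changeset alice:create-customer-table
CREATE TABLE customer (
    id INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
--rollback DROP TABLE customer;

Once this has run, Liquibase records the changeset's hash in its tracking table; on later runs the changeset is skipped (or flagged if the file has since been modified).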
I used it very successfully on a large (8 developers) project and now I wouldn't use anything else.
And it's free!
Currently we use SVN and have an "UpgradeScripts" folder where all developers commit their scripts to.
Each script has a generated prefix in the format upg_yyyymmddhhmmss-ScriptName.sql - so when they are deployed they run in a pre-defined (lexicographic) order, keeping the database consistent.
This is generated through the SQL below and enforced through a pre-commit hook:
select 'upg_' + convert(varchar, SYSUTCDATETIME(), 112)          -- date part: yyyymmdd
     + replace(convert(varchar, SYSUTCDATETIME(), 8), ':', '')   -- time part: hhmmss (colons stripped)
     + '-'
     + 'MeaningfulScriptName'
Another handy technique we use is making the difference between static and non-static data clear: our database has the standard "dbo" schema, which holds non-static data that may change between environments, and a "static" schema. All tables in the static schema have fixed IDs, so developers know they can map them to enums and reference the IDs in scripts.
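For example, a sketch of what such a static table might look like (the table and its values are made up):

create schema static;
go

create table static.OrderStatus
(
    Id   int         not null primary key, -- fixed ID, safe to hard-code in enums and scripts
    Name varchar(50) not null
);

insert into static.OrderStatus (Id, Name)
values (1, 'Pending'),
       (2, 'Shipped'),
       (3, 'Cancelled');

An application enum can then mirror those IDs safely, because they are guaranteed not to differ between environments.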
If you are looking for something more formal, Red Gate have a utility called SQL Source Control.
Or you could look into using the Data Tier Application framework.
We use DBGhost to version control the database. The scripts to create the current database are stored in TFS (along with the source code) and then DBGhost is used to generate a delta script to upgrade an environment to the current version. DBGhost can also create delta scripts for any static/reference/code data.
It requires a mind shift from the traditional method, but it is a fantastic solution which I cannot recommend enough. Whilst it is a third-party product, it fits seamlessly into our automated build and deployment process.
The task is to create several JSPs with which the user can interact by inputting information; the information is saved on a database server so it can be called up later.
I'm not sure if this question is constructive enough or not, but I have no idea how to even start. I know what each one of the components means, but that's about it. I have no idea how the whole process works and I don't know what's it called, so I can't even search for it properly.
Could anyone briefly describe the process from start to finish how this system would work and what should be my first concern? I'm more interested in the JSP hosting (would Tomcat be a better choice, or is Geronimo much better in my case) and the connection of JSP to the database.
You need several components and layers for an application like that, so the first thing to do is select your technology stack, so you don't reinvent the wheel and you adopt the best practices your frameworks include. My choice is the Spring Framework.
Your JSPs represent the view layer of your app. You can use JavaScript/AJAX to embellish your forms and send data to your server.
The data the user enters in the form is received and processed by the controller layer. Spring MVC has a neat collection of controllers for you to use. Once the data is ready, you can pass it to the service layer to execute business logic.
The service layer contains the business logic rules. The Spring Framework lets this layer be simple POJOs, and can apply transactional logic if you wish. It's highly probable that the service layer will need to persist some data in the database, so it invokes the DAO layer.
The classes in the DAO layer have the responsibility of storing data in the database. You can use several frameworks for this, and Spring supports many of them. Spring also includes its own JDBC support, with templates included.
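To make the layering concrete, here is a minimal sketch using Spring annotations (the class names, the notes table, and the /notes URL are all made up; each class would normally live in its own file):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
class NoteController {
    @Autowired
    private NoteService noteService;

    // Receives the form POST from the JSP and delegates to the service layer
    @RequestMapping(value = "/notes", method = RequestMethod.POST)
    public String save(@RequestParam("text") String text) {
        noteService.save(text);
        return "redirect:/notes"; // re-render the JSP view
    }
}

@Service
class NoteService {
    @Autowired
    private NoteDao noteDao;

    // Business rules go here; @Transactional wraps the DAO call in a transaction
    @Transactional
    public void save(String text) {
        noteDao.insert(text.trim());
    }
}

@Repository
class NoteDao {
    @Autowired
    private JdbcTemplate jdbcTemplate;

    // Plain JDBC via Spring's template; no ORM required
    public void insert(String text) {
        jdbcTemplate.update("insert into notes(text) values (?)", text);
    }
}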
With that you can start your project. It should run with no problems in Tomcat, Geronimo, or any Java EE container.
In my application, the entity database schema is created after application deployment, based on inputs captured from the end user using a tool. I cannot use Entity Framework in this situation, since modeling is not possible without a development environment (Visual Studio). The 'Code First' approach is also ruled out, since it would require code generation, which could lead to needless complexity.
Anyhow, I need a data access layer. I am therefore planning to introduce the Data Access Application Block (DAAB) into my solution. Using SQL Server Management Objects (SMO) I can carry out the DDL, and for data access I will use DAAB.
Now here is my confusion. Can I use LINQ to SQL on top of DAAB? I want DAAB to abstract away all the data-access-related complexities and then use LINQ to query. I also have a situation where I need to expose entity data through a RESTful interface (read: OData). Would I be able to expose my data using WCF Data Services via DAAB?
LINQ is not supported in DAAB. DAAB is based on the good old DataSet and DataReader approach. This post has a much more detailed answer with respect to the role of DAAB:
LINQ support in Enterprise Library Data Access Application Block
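To illustrate the style difference: with DAAB you work at the level of commands and readers rather than LINQ queries. A minimal sketch (Enterprise Library 5.x style; the connection name and table are made up):

using System;
using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

class Example
{
    static void Main()
    {
        // "MyConnection" names a connection string in app.config/web.config
        Database db = DatabaseFactory.CreateDatabase("MyConnection");

        DbCommand cmd = db.GetSqlStringCommand("SELECT Id, Name FROM Customers");
        using (IDataReader reader = db.ExecuteReader(cmd))
        {
            while (reader.Read())
            {
                Console.WriteLine(reader["Name"]);
            }
        }
    }
}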
I don't yet fully understand your scenario. If your database schema is created after deployment, then how is your front-end application being developed against it (as there won't be any schema at development time, if I get your question right)?
If the schema is created after deployment, what functionality is in your deployed application? Are you creating user interfaces on the fly from the schema the end user modelled?
Please do correct my understanding; it would also be good if you could give some more info about your scenario.
We’ll be releasing shortly a companion Rails application to our existing Rails app. We will be running the companion app alongside our existing app on the same servers.
My question concerns the databases. My hosting provider would generally configure a second, distinct database for the new application - secondappname_production. However, there is a series of shared tables between the applications. These shared tables are also maintained by a series of cron jobs. I would love to avoid duplicating these tables (and thus the cron jobs) if at all possible.
Is there a way that I can put these shared tables in perhaps a shared database that both Rails apps can leverage? Any suggestions as to how to configure that or documentation pointers?
Thanks so much!
EDIT: To clarify why I don't want to run both apps out of the same DB: both apps have models of the same name (yet with different attributes, etc.), so I would prefer not to run both out of the same DB.
You can have some models in one database (the ones that you want to share), and others in the new app's own database (so they don't collide with the existing app).
To specify a different database for a particular model, try something like this:
class SharedModelBase < ActiveRecord::Base
  # Abstract: no table of its own, exists only to hold the shared connection
  self.abstract_class = true
  # Points at a "shared_db_connection_<env>" entry you add to database.yml
  establish_connection(ActiveRecord::Base.configurations["shared_db_connection_#{RAILS_ENV}"])
end
Now, use this as a base class for your shared models, and you should be good to go.
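For instance (the model here is an assumption), a shared model then just inherits from that base class and automatically uses the shared connection:

# Maps to the shared_users table in the shared database by Rails convention
class SharedUser < SharedModelBase
end

together with a matching shared_db_connection_development (and _production, etc.) entry in database.yml pointing at the shared database.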
Part of your question is about best practices, so here are a couple of other options.
One option is to not even try to access the DB directly, but instead build an integration between the apps using ActiveResource. Have the original app provide a RESTful interface to these tables, and consume it in the new app, without sharing the DB at all. I like this option, but it may not be practical in your situation.
Another option is to refactor these shared tables into their own database, and have both Rails apps access that DB. You could even end up writing services (e.g. a RESTful interface) over this shared data to be used by both apps, and then you are nicely decoupled.
Consider the complexities of what happens when this shared DB structure changes. If you are sharing the tables directly, both Rails apps could have to be changed simultaneously to accommodate the change - you have now linked your release schedules, and the apps are coupled. If you wrap the access to the DB in services, this provides abstraction: you can serve both the old structure and the new structure simultaneously by deploying the new updated service alongside the old service interface. It all depends on your app whether such complexity is worth it.
I think what you want is to share the models, not only the database tables; in Rails, tables are accessed through models.
Create the main Rails app --> rails g model User name:string --> rake db:migrate
Create the shared Rails app
--> rake sync:copy
--> (do NOT generate the same models in the shared app, and do not run db:migrate there)
--> configure/generate the shared app's controllers and router.rb file (depending on your requirements)
sync.rake (in appshared/lib/tasks/):
namespace :sync do
  desc 'Copy common models and tests from Master'
  task :copy do
    # Adjust these paths to your own checkout locations
    source_path = '/Users/ok/github/appDataTester/appmain'
    dest_path = '/Users/ok/github/appDataTester/appshared'

    # Copy all models & tests
    %x{cp #{source_path}/app/models/*.rb #{dest_path}/app/models/}
    %x{cp #{source_path}/test/models/*_test.rb #{dest_path}/test/models/}

    # Fixtures
    %x{cp #{source_path}/test/fixtures/*.yml #{dest_path}/test/fixtures/}

    # Database YML, so both apps point at the same database
    %x{cp #{source_path}/config/database.yml #{dest_path}/config/database.yml}
  end
end