We’ll shortly be releasing a companion Rails application to our existing Rails app, and we will be running the two side by side on the same servers.
My question concerns the databases. My hosting provider would generally configure a second, distinct database for the new application - secondappname_production. However, the applications share a number of tables, which are also maintained by a set of cron jobs. I would love to avoid duplicating these tables (and thus the cron jobs) if at all possible.
Is there a way that I can put these shared tables in perhaps a shared database that both Rails apps can leverage? Any suggestions as to how to configure that or documentation pointers?
Thanks so much!
EDIT: To clarify why I don't want to run both apps out of the same DB: both apps have models with the same names (but different attributes, etc.), so I would prefer not to run both out of the same DB.
You can have some models in one database (the ones that you want to share), and others in the new app's own database (so they don't collide with the existing app).
To specify a different database for a particular model, try something like this:
class SharedModelBase < ActiveRecord::Base
  self.abstract_class = true
  # Rails.env (RAILS_ENV in very old Rails versions)
  establish_connection(ActiveRecord::Base.configurations["shared_db_connection_#{Rails.env}"])
end
Now, use this as a base class for your shared models, and you should be good to go.
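For that configurations lookup to resolve, database.yml needs a matching entry per environment - a sketch, with hypothetical adapter and credential values:

```yaml
# config/database.yml in both apps; points at the one shared database.
shared_db_connection_production:
  adapter: mysql2
  database: shared_production
  username: shared_user
  password: secret
  host: localhost
```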
Since part of your question is about best practices, here are a couple of other options.
One option is to not access the db directly at all, but instead build an integration between the apps using ActiveResource. Have the original app provide a RESTful interface to these tables, and consume it in the new app, without sharing the db. I like this option, though it may not be practical for your situation.
Another option is to refactor these shared tables into their own database, and have both Rails apps access that db. You could even end up writing services (e.g. a RESTful interface) over this shared data for both apps to use, and then you are nicely decoupled.
Consider the complexity of changing this shared db structure. If you share the tables directly, both Rails apps may have to change simultaneously to accommodate the change - you have now linked your release schedules; the apps are coupled. If you wrap access to the db in services, you get a layer of abstraction: you can serve both the old structure and the new one simultaneously by deploying the updated service alongside the old service interface. Whether that complexity is worth it depends on your app.
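To make the service idea concrete, the wrapper can serve both the old and the new shape of a shared record during a transition. A minimal plain-Ruby sketch (the field and version names are hypothetical):

```ruby
# One service function, two representations: each app asks for the
# version it understands, so the two apps can migrate independently.
def settings_payload(row, version)
  case version
  when 1 then { "name" => row["full_name"] }      # legacy shape
  when 2 then { "full_name" => row["full_name"] } # new shape
  else raise ArgumentError, "unknown version #{version}"
  end
end

row = { "full_name" => "Ada Lovelace" } # stand-in for a shared-table row
legacy  = settings_payload(row, 1)
current = settings_payload(row, 2)
```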
I think what you want is to share the models, not just the database tables - in Rails, tables are accessed through models.
Create the main Rails app --> rails g model User name:string --> rake db:migrate
Create the shared Rails app
--> rake sync:copy
--> (do NOT generate the same models in the shared app, and do not run db:migrate there)
--> generate the shared app's controllers and routes.rb file (depending on your requirements)
sync.rake (appshared/lib/tasks/):
namespace :sync do
  desc 'Copy common models and tests from Master'
  task :copy do
    source_path = '/Users/ok/github/appDataTester/appmain'
    dest_path = '/Users/ok/github/appDataTester/appshared'

    # Copy all models & tests
    %x{cp #{source_path}/app/models/*.rb #{dest_path}/app/models/}
    %x{cp #{source_path}/test/models/*_test.rb #{dest_path}/test/models/}

    # Fixtures
    %x{cp #{source_path}/test/fixtures/*.yml #{dest_path}/test/fixtures/}

    # Database YML
    %x{cp #{source_path}/config/database.yml #{dest_path}/config/database.yml}
  end
end
I have just created a Project in Google Cloud, and attached a Cloud SQL Database instance to that project. I was able to deploy a Django app that is connected to that DB just fine.
However, I would like to create a separate Django app/Project that is attached to the same Cloud SQL Database that my first Django app is attached to.
Is this possible?
One Django app is responsible for web scraping and supplying a constant stream of data to the database, while my second Django app (the one I have already deployed) analyzes that data and returns JSON. It would be advantageous to separate the two apps because if I ever needed to revise my web-scraping algorithm, the whole app would not have to come down.
You can use the Cloud SQL Proxy for both apps. As long as you authorize both of your applications with a service account that has access to the Cloud SQL instance, it should be fine.
You can use any database you want, in your current project or in an external one. If you use the Cloud SQL Proxy, the connection is made with your app's service account:
either the default App Engine service account when on App Engine,
or the Compute Engine default service account,
unless you have defined a specific service account on your component (Cloud Run, Compute Engine).
I strongly recommend using a specific service account on the component where possible (it isn't possible with App Engine). The role to grant to the service account is: roles/cloudsql.client.
However, I recommend you think carefully about your design. The current trend is to dedicate one database to one service (or microservice).
Think about schema updates: a synchronized deployment of the two services will be required when you update the schema, or one app or the other will fail.
The same goes for rollbacks: both apps need to be rolled back together.
If there are two teams, one per app, their release planning must stay in sync, and you will lose velocity and agility.
Maybe this fits your requirements, or you could duplicate the data inside the database (in another schema). As you wish.
Context
I'm building a SaaS where users can create their own websites (like Wix or SquareSpace).
That's what happens behind the scenes:
My app has its main database, which stores users
When a user creates their website, an external database is created to store its data
An SQL file runs against this external database to set default settings
Other users may create their websites simultaneously
Approach
To create a new database and establish a connection, I do the following:
ActiveRecord::Base.connection.execute("CREATE DATABASE #{name}")
ActiveRecord::Base.establish_connection(<dynamic db data>)
Then I execute the SQL code in the db by doing:
sql = File.read("sql_file.sql")
statements = sql.split(/;$/)
statements.pop

ActiveRecord::Base.transaction do
  statements.each do |statement|
    connection.execute(statement)
  end
end
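As an aside, the splitting step above can be made more robust against trailing whitespace and blank fragments - a sketch, assuming simple one-statement-per-semicolon SQL with no stored procedures:

```ruby
sql = <<~SQL
  CREATE TABLE settings (id INT);
  INSERT INTO settings VALUES (1);
SQL

# Split on end-of-line semicolons, then drop empty fragments, so no
# manual pop of the trailing piece is needed.
statements = sql.split(/;\s*$/).map(&:strip).reject(&:empty?)
```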
Then I reestablish connection with main db:
ActiveRecord::Base.establish_connection :production
Problem
Establishing a connection to the dynamic database makes the application's main database inaccessible for a while:
User A is creating a website (establishes dynamic database connection)
User B tries to access his user area (which requires application's main db data)
The application throws an error because it tries to retrieve data from the app's main db (whose connection is not established at that moment)
How can I handle many users creating their websites simultaneously without database conflicts?
In other words, how can I establish_connection with more than one database in parallel?
NOTE:
It is not the same as connecting to multiple databases through database.yml. The goal here is to connect to and disconnect from dynamically created databases for multiple users simultaneously.
This gem may help. However, you may need to rename some of your models to use the external database namespace instead of ApplicationRecord:
https://github.com/ankane/multiverse
I admit that this doesn't answer the core of your initial question but IMO this probably needs to be done via a separate operation, say a pure SQL script triggered somehow via a queue.
You could have your rails app drop a "create message" onto a queue and have a separate service that monitors the queue that does the create operations, and then pass a message with info back to the queue. The rails application monitors the queue for these and then does something with the information.
The larger issue is decoupling your operations. This will help you down the road with things like maintenance, scaling, etc.
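The shape of that queue-based flow can be sketched with Ruby's in-process Queue standing in for a real broker (RabbitMQ, SQS, etc.); the message fields here are illustrative:

```ruby
jobs    = Queue.new # messages from the Rails app to the worker service
results = Queue.new # status messages back to the Rails app

# Worker service: pops "create database" messages and reports back.
worker = Thread.new do
  while (msg = jobs.pop)
    # A real worker would run the CREATE DATABASE / setup SQL here.
    results << { db: msg[:db], status: "created" }
  end
end

jobs << { db: "customer_site_db" } # the Rails app drops a create message
result = results.pop               # ...and later picks up the outcome

jobs << nil # sentinel: tell the worker to shut down
worker.join
```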
FWIW here is a really cool website I found recently describing a lot of popular queuing services.
Probably not the best approach, but it can be achieved by calling an external script that creates the database, in a separate Ruby file:
Create a create_database.rb file in the lib folder
Put the db creation script inside this file
ActiveRecord::Base.connection.execute("CREATE DATABASE #{name}")
ActiveRecord::Base.establish_connection(<dynamic db data>)
Execute with Rails Runner
rails runner lib/create_database.rb
or with system, if you want to call it from controller
system("rails runner lib/create_database.rb")
This way you can create and access multiple databases without stopping your main database.
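If you shell out from a controller, it's worth checking the exit status: system returns true on success, false on a non-zero exit, and nil if the command couldn't be started at all. A sketch with stand-in commands in place of the runner call:

```ruby
# Stand-ins for `system("rails runner lib/create_database.rb")`.
ok     = system("ruby", "-e", "exit 0") # exits 0 -> true
failed = system("ruby", "-e", "exit 1") # exits 1 -> false
```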
Passing arguments
You can pass arguments to your script with ARGV:
rails runner lib/create_database.rb db_name
And catch it inside the script with ARGV[0]:
name = ARGV[0]
puts name
> db_name
My team is building a website in Django.
We are using MySQL, and the database we created for the project is called 'vote'.
We always share the code, but the problem is that whatever my project team has added to the database has to be added by me again, manually, in order to use it.
Is there any way in which we can copy the whole database created by my team to my system?
Thanks
There are 3 approaches off the top of my head:
Export and import the entire MySQL database (using mysqldump or similar).
Use Django's fixtures system. This allows you to dump the contents of the DB to json/xml files which can be loaded again later by other members of the team via python manage.py loaddata .... These can be quite temperamental in practice, and I generally find them more hassle than they are worth to implement.
Use South's data migrations. South is primarily concerned with managing schema migrations, i.e. gracefully handling the addition and deletion of fields on your models so that you don't end up with an inconsistent DB. You can also use it to write data migrations, which allow you to programmatically add data to the DB that you can then share with your teammates.
I have a webapp which has user/group functions, and existing user/group data.
I want to use Activiti, the process engine; however, it seems Activiti manages user/group info itself.
Should I:
Refactor my existing webapp, to reuse the user/group data from Activiti, or
Write some adapter code, to make Activiti reuse the user/group data in my existing database? Maybe another implementation of RepositoryService, IdentityService, etc., and recompile? It seems RepositoryServiceImpl is hard-coded in the Activiti sources, and there isn't a setRepositoryService() method on ProcessEngine.
I can't rename the existing db tables, because there are some other apps using them.
I have read the user guide, but I didn't find any information on how to integrate Activiti with existing apps.
I don't know what version you are currently using, but I used your second option successfully with version 5.5, overriding some Activiti classes:
Extend GroupManager and UserManager (from package org.activiti.engine.impl.persistence.entity), and implement the methods you need, using the required DAOs/EntityManager/whatever pointing to your database. Code here: GroupManager / UserManager.
Implement org.activiti.engine.impl.interceptor.SessionFactory for groups and users. Check out the code here: ActivitiGroupManagerFactory / ActivitiUserManagerFactory.
Finally, in your Activiti config you have to set your new SessionFactory classes. I was using Spring, so here is my activiti-config bean code: activiti-config.xml (check line 14)
Hope this helps in some way :)
You can check out Lim Chee Kin's code integrating Activiti with Spring Security: https://github.com/limcheekin/activiti-spring-security. If you manage your user/group data with Spring Security, you may be able to reuse his code.
I am working on an application which acts as a setup box for other child applications. I want to set up child applications from one central parent application. Set-up includes database setup (db:create and db:migrate), subdomain setup, etc., for the child apps.
This is how it is going to work: a subscriber will subscribe to many applications. On subscription, each application will be configured to work on the subscriber's subdomain (on my site). Every instance of a subscribed application will have its own database, so I need to set up a database for each subscriber, and a domain name too.
Currently I am creating the database based on the child application's subdomain, using ActiveRecord::Base.connection.execute.
After creating the database, I want to load the child app's schema into it. For this I posted a question here:
schema.sql not creating even after setting schema_format = :sql
Is there any good efficient method/approach that will help me?
Also, I am a bit confused about subdomaining - how is it going to work?
Any help/thought appreciated...
Thanks,
Pravin
Since there is no real need for a separate database for each user and each 'app', you may want to check out a term called multi-tenancy.
Also, subdomains can be handled in Rails 3, and you can use something called Devise for user authentication. GitHub has a Rails 3 subdomain Devise authentication fork to get you started.
Until you really see a need for all these databases, keep it simple: one database per application, and connect the applications via ActiveResource.
Be warned: what you are undertaking can confuse even a hardened app builder, so I hope your experience is up to the task.
All the best.