This site has been great for a Symfony newbie such as myself, and hopefully this will be the same experience. I have searched a lot for this question, so maybe I am not using the right terminology. I have read about using services, but none of the examples show what I need: using multiple databases with different tables. So here goes. First off, I am at the mercy of the current database design; I can't merge the databases or recreate them, I have to use them as is. Here is the MySQL query I want to use:
select name, title, rank
from db1.tbl1, db2.tbl1, db2.tbl2
where db2.tbl1.id = db2.tbl2.id
and db1.tbl1.person_id = db2.tbl2.person_id;
I have created connections to the databases in parameters.yml and config.yml. I was thinking about creating a repository for one of the entities and then having it inner join the other tables from the same database, but I couldn't find any examples. I want to do this using best practice. I am all ears.
I should also mention all the databases are managed by the same server.
You can't use multiple databases in a single Doctrine query, because for multiple databases to work you need an entity manager for each.
I can't think of a solution using arrays or objects that is not resource-intensive, because you would need to load at least one entire table.
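Since all the databases are managed by the same server, one workaround is to keep a single connection and hide the cross-database join behind a view. A minimal sketch, assuming the connected MySQL user can read both db1 and db2; the view name person_report is hypothetical, and which table owns each selected column is a guess, since the question doesn't say:

-- create the view in db1 so one Doctrine connection can see it
create view db1.person_report as
select db1.tbl1.name, db2.tbl1.title, db2.tbl2.rank
from db1.tbl1
join db2.tbl2 on db1.tbl1.person_id = db2.tbl2.person_id
join db2.tbl1 on db2.tbl1.id = db2.tbl2.id;

A read-only Doctrine entity mapped to person_report can then be queried through a single entity manager like any ordinary table.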
Related
We have data stored for our customers in MySQL (Web App) and other data stored in SQL Server (billing data) and now we have a need to report on this data inside our customer-facing application.
Does anyone have experience merging these two data sources? Is there an effective way to do this?
Are there existing solutions, preferably OSS, that can aggregate the data sources and allow them to be queried as though they were one (this would be ideal)?
Otherwise, without asking for the "best" solution, what is optimal in this situation? Should we merge the separate sources into one database nightly? This is the only thing I can think of off the bat, and am wondering (hoping) whether other, more elegant or robust solutions exist.
Ideally we'd be able to query the data in real-time, rather than working off of a daily upload or whatever.
If you want to write queries across the two databases, you could link the MySQL server to SQL Server as a linked server - something like this:
http://coresystems.ch/en/about-us/newsroom/category/blog/how-add-linked-server-connection-mysql-mssql/
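A minimal sketch of what that looks like on the SQL Server side, assuming a MySQL ODBC DSN named MySQLWebApp has already been configured; the linked server, table, and column names here are hypothetical:

-- register the MySQL side as a linked server via the OLE DB provider for ODBC
-- (credentials are normally added afterwards with sp_addlinkedsrvlogin)
EXEC sp_addlinkedserver
    @server = 'MYSQL_WEBAPP',
    @srvproduct = 'MySQL',
    @provider = 'MSDASQL',
    @datasrc = 'MySQLWebApp';

-- join local billing rows to customer rows fetched from MySQL
SELECT b.invoice_id, b.amount, c.customer_name
FROM dbo.billing AS b
JOIN OPENQUERY(MYSQL_WEBAPP,
     'SELECT id, customer_name FROM customers') AS c
  ON b.customer_id = c.id;

OPENQUERY pushes the inner SELECT down to MySQL, so you get real-time data rather than a nightly copy, though cross-server joins like this can be slow on large tables.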
If you don't mind using a third-party reporting engine, you can give DBxtra a spin. It lets you combine different databases in one query to produce a report, and it even lets you do so graphically, so you don't have to write the query yourself.
One of my sites is a social networking site running on MySQL. I use postal code and country information to geolocate users using a webservice. This webservice also allows you to download all their many tables of information so that you can access it locally. My site has gotten big enough that I wish to do this now.
My question is, should I create a new database on my site for all of this postal code and country information and all its tables, or should I incorporate those tables into my existing database for my social networking site?
What are the pros/cons either way?
When you're talking about scaling and want to know about other databases like NOSQL, you might find this article interesting: http://highscalability.com/blog/2010/12/6/what-the-heck-are-you-actually-using-nosql-for.html
I'd vote in favor of a separate database if you planned to use the data as read-only and put a web service in front of it to access it. Users would search it based on a small handful of parameters (e.g. address info to get lat/lon data).
I'd say put it in the existing database if you planned to JOIN it with other information in your current schema.
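For example, a minimal sketch of the kind of join that argues for keeping it all in one database; the users and zip_codes tables and their columns are hypothetical:

-- look up lat/lon for members in a single query
SELECT u.username, z.latitude, z.longitude
FROM users AS u
JOIN zip_codes AS z
  ON z.postal_code = u.postal_code
 AND z.country_code = u.country_code;

With the geodata in a separate database behind a web service, each lookup like this becomes an extra round trip instead of one query.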
It will probably live on the same disk, so disk space is not an issue.
If you query the tables in a completely separate manner, there is no impact on the existing site.
If you query things together, it is easier when everything is in one database.
Overall administration of one database vs. two is easier.
I think it's a no-brainer... they go in one DB.
I have 2 different servers running Django (both using Postgres).
Both have the same user table.
I want to synchronize the user tables, so that if I update or delete a user on one server, the other database gets updated as well.
I guess replication is not a solution in my case.
Can anyone point me in the right direction? Any link or reference would be helpful.
Both servers are running different Django code.
Thanks,
I don't know how to do this in PostgreSQL, but in MySQL you could create a database VIEW pointing to a table in another database. This way you could reuse the existing auth_user table in the other server.
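A minimal sketch of that MySQL approach; note it only works when both schemas live on the same MySQL server, and the app_a/app_b database names are hypothetical:

-- expose app_a's user table inside app_b under the name Django expects
CREATE VIEW app_b.auth_user AS
SELECT * FROM app_a.auth_user;

Simple single-table views like this are even updatable in MySQL, so writes from the second app can pass through to the original table.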
I would take a look at pgpool-II. I haven't used it myself, but it's been recommended to me for similar purposes and after a bit of research I came to the conclusion that it's one of the better projects out there.
I'm in the process of setting up a new WordPress 3.0 multisite instance and would like to use Sphinx on the database server to power search for the primary website. Ideally, this primary site would offer the ability to search against its content (posts, pages, comments, member profiles, activity updates, etc.) as well as all of the other sites that are a part of the network. Because we'll be adding new sites to the network on a regular basis, I'd like to be able to dynamically add those newly generated tables to the Sphinx .conf file (instead of editing the file and reindexing every time we add a new site).
Unfortunately, MySQL doesn't seem to support wildcards when specifying the table(s) in a query string. The best solution I've come across for grabbing a dynamic set of tables is grepping, but I'm pretty certain I don't know how to do that within the .conf file (unless it's possible through magical sorcery).
Is it possible to dynamically specify tables to add to the Sphinx index? Or is this going to cause such performance issues that I'm using the wrong tool?
You could try to dynamically modify the .conf file instead.
You could query from a MySQL view that aggregates the many tables. You'd have to recreate the view with each change to the list of blogs, but I believe that all the hooks exist to support that and it should be easy enough to construct the view query.
The bigger problem may be in trying to find a suitable unique record ID for the posts in Sphinx. It has to be a straight INT, but the post IDs from the different blogs will collide with each other.
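A minimal sketch of such a view, assuming the standard WordPress multisite table names (wp_posts, wp_2_posts, ...) and that no blog ever exceeds a million posts; the sphinx_posts name and the ID scheme are hypothetical:

-- one branch per blog; regenerate the view whenever a blog is added
CREATE OR REPLACE VIEW sphinx_posts AS
SELECT (1 * 1000000 + ID) AS doc_id, 1 AS blog_id, post_title, post_content
FROM wp_posts
UNION ALL
SELECT (2 * 1000000 + ID) AS doc_id, 2 AS blog_id, post_title, post_content
FROM wp_2_posts;

The (blog_id * 1000000 + ID) expression also addresses the colliding post IDs, since it gives Sphinx a unique integer document ID across all blogs.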
I think you can create triggers (INSERT/UPDATE/DELETE) in MySQL on the tables of interest (e.g. posts, comments, etc.) and migrate the data to centralized global tables that Sphinx indexes in real time.
The question is how to create those triggers automatically. Either run a cron job to scan for new tables in MySQL, or, I believe, write a simple WordPress plugin that hooks the event fired when a blog is activated.
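A minimal sketch of one such INSERT trigger, assuming a centralized global_posts table; the names are hypothetical, and a matching trigger would be generated for each blog's tables (plus UPDATE/DELETE variants):

-- copy each new post from blog 2 into the table Sphinx indexes
CREATE TRIGGER wp_2_posts_after_insert
AFTER INSERT ON wp_2_posts
FOR EACH ROW
INSERT INTO global_posts (blog_id, post_id, post_title, post_content)
VALUES (2, NEW.ID, NEW.post_title, NEW.post_content);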
I'm trying to split a database into two pieces -- a backend that updates automatically, and a front-end that allows searching and adding/editing comments. The data in the source database is pulled together from multiple tables into a pair of queries, and I want to use these queries as the source of the current database.
Access 2007 supports splitting a database into multiple pieces, but not in the way I'm looking for. It keeps the tables in the source database and puts all the forms, queries, reports, and macros into the new database. The tables and queries are already in the back-end, and this new database should just provide a good GUI to the end-user.
Access 2007 also supports linked tables, but these can only use a table as a source, not a query object.
I was thinking that the best way to do this would be to do a SQL query along the lines of
SELECT * FROM SourceQuery IN "C:\Path\To\ExternalDB.accdb";
Is what I'm working towards even possible, and would this be the best way to do it?
Since its still relatively early in the project, rearchitecting the database isn't out of the question, but is something I'd prefer to avoid.
You described the usual Access BE-FE division correctly: only tables in the back-end. I'm aware not all DB programs do it that way, but this is Access and my approach would be to honor the usual division. (And you hardly have a choice in that you can't "link to a query" in Access.)
Reviewing your comment ('There is a specific reason ...'), I think this would possibly mean
adding a few more tables to the back-end, essentially buckets (import-data in ready form; export 1; export 2) that allow all users to get to consistent processed data;
making a small admin FE that sits next to the BE and stores your modules, queries for export, and export routines; and
having some redundant queries on the user FE. This is vexing in my own work. I just try to design sturdy stable "building block" queries in those roles, and keep their number to a minimum.
Hope I'm understanding you correctly, but the most sensible solution would be to link the tables in the backend DB and copy the queries to the UI database. Those queries would still be able to access the underlying tables (via the linked tables) without issues and would be accessible through normal means to your forms and VBA code.
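For instance, a minimal sketch of a copied query running against linked tables; tblOrders and tblComments are hypothetical names. Once the back-end tables are linked, the query reads exactly as if they were local:

SELECT o.OrderID, o.OrderDate, c.CommentText
FROM tblOrders AS o
LEFT JOIN tblComments AS c ON c.OrderID = o.OrderID;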
Is there a particular reason you don't want the queries in the UI database?