Tables from two different databases in a DBML? - linq-to-sql

After dragging two tables in from one database, I switch to another database and drag a table in. Now I get a message asking whether I want to replace the connection string with the new one. I want tables from multiple databases in one DBML. Is this possible?

It is entirely possible to reference multiple databases within the same DBML, PROVIDED those databases reside on the same SQL Server.
In Visual Studio, right-click on the DBML, click "Open With...", and select "XML (Text) Editor with Encoding".
You will see that the first table you dragged in looks like this:
<Table Name="dbo.MyTable1fromMyDatabase1" Member="MyTable1fromMyDatabase1">
For your tables from other databases you wish to add, enter them like this:
<Table Name="MyDatabase2.dbo.MyTable1fromMyDatabase2" Member="MyTable1fromMyDatabase2">
This will work assuming the same login works for both databases, and your LINQ expressions can now query across both databases!

I don't believe that what you're looking for is possible, since the DataContext would then not have any easy way of resolving results from two separate databases.
If you're looking to create domain objects from two separate databases, then your best bet would be to have two separate DBML's, then use a bridge (GOF) or some other related design pattern to instantiate your domain objects.

Another option is to create a server link on one database that points to the other, and make aliases to the remote tables from the "local" DB. I believe you'd then be able to reference them as if they were all in the same database.
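On SQL Server that looks roughly like the following; a minimal sketch, where RemoteServer, the data source, and the table names are hypothetical:

-- Register the remote server once (names here are made up)
EXEC sp_addlinkedserver
    @server = N'RemoteServer',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'remote-host';

-- Alias a remote table so the "local" DB can reference it by a short name
CREATE SYNONYM dbo.MyTable1fromMyDatabase2
    FOR RemoteServer.MyDatabase2.dbo.MyTable1fromMyDatabase2;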

You can also create a view that queries the table in the other database. You can select from, insert into, and update this view, and the changes go through to the table in the other database as well.
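For example, a sketch reusing the hypothetical names from the first answer above:

-- A view over a single table in the other database
CREATE VIEW dbo.MyTable1fromMyDatabase2_View
AS
SELECT *
FROM MyDatabase2.dbo.MyTable1fromMyDatabase2;

Because the view covers a single base table, SQL Server treats it as updatable, so INSERT and UPDATE statements against the view modify the other database's table.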

Related

Concept of schema in MySQL

Is there a suggested way to use "schemas" in MySQL? For example, if I have one database called events and then I want to have two environments, dev and prod, what might be a way to do that? Currently I add a table prefix, but it seems a bit hack-ish.
You create a separate database for that, because MySQL does not have the concept of a schema the way e.g. PostgreSQL does.
You create one database for production, e.g. prod_database, and one for dev, e.g. dev_database, each with the same table names event and event_type, since you always want the same table names across environments.
You could (and should) even use the same database name if you host the databases on different servers, which would also make sense for production and development/staging, e.g. to test server version updates on one setup without affecting production.
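In MySQL terms the layout boils down to something like this; a sketch using the database names from the answer, with made-up columns:

CREATE DATABASE IF NOT EXISTS dev_database;
CREATE DATABASE IF NOT EXISTS prod_database;

-- The table definitions are identical in every environment
CREATE TABLE dev_database.event_type (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
CREATE TABLE dev_database.event (
    id INT AUTO_INCREMENT PRIMARY KEY,
    event_type_id INT NOT NULL,
    occurred_at DATETIME NOT NULL,
    FOREIGN KEY (event_type_id) REFERENCES dev_database.event_type (id)
);
-- ...then repeat the same two CREATE TABLE statements for prod_database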

How to compare two databases' altered tables

I have inserted new columns in one database, and I now want to add the same type of columns to a second database. I need to know which columns are in the first database that are not in the second. I have many tables in each database that need their columns compared. I searched the web and can only find ways to see the difference in the contents of columns in two tables. I don't need to compare the contents, just the differing columns across all the tables in each database. Each database has the same tables.
Thanks!
I found that you can do a database dump from phpMyAdmin that contains just the structure.
Are you doing this manually? You could just use SHOW CREATE TABLE to see the structure of the tables, and then something like the diff command on Linux to compare them.
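If you'd rather let MySQL do the comparison itself, information_schema can list the columns that exist in the first database but not in the second; a sketch, assuming the two databases are named db_one and db_two (hypothetical names):

SELECT TABLE_NAME, COLUMN_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'db_one'
  AND (TABLE_NAME, COLUMN_NAME) NOT IN (
      SELECT TABLE_NAME, COLUMN_NAME
      FROM information_schema.COLUMNS
      WHERE TABLE_SCHEMA = 'db_two'
  );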
For a commercial product answer: I use Red Gate's SQL Compare which works great. It can compare the entire schema of two databases. It can also update your target database to match your source database.
Use Red Gate SQL Compare to compare the schemas of the two databases.
sql-dbdiff works well too. It's open source.

Setting up a master database to control the structure of other databases

I got a case where I have several databases running on the same server. There's one database for each client (company1, company2 etc). The structure of each of these databases should be identical with the same tables etc, but the data contained in each db will be different.
What I want to do is keep a master db that will contain no data, but manage the structure of all the other databases, meaning if I add, remove or alter any tables in the master db the changes will also be mirrored out to the other databases.
Example: If a table named Table1 is created in the master DB, the other databases (company1, company2 etc.) will also get a Table1.
Currently it is done by a script that monitors the database logs for changes made to the master database and runs the same queries on each of the other databases. I looked into database replication, but from what I understand it would also bring along the data from the master database, which is not an option in this case.
Can I use some kind of logic against database schemas to do it?
So basically what I'm asking here is:
How do I make this sync happen in the best possible way? Should I use a script monitoring the logs for changes or some other method?
How do I avoid existing data getting corrupted if a table is altered? (data getting removed if a table is dropped is okay)
Is syncing from a master database considered a good way to do what I wish (having an easily maintainable structure across several databases)?
How will making updates like this affect the performance of the databases?
Hope my question was clear and that this is not a duplicate of some other thread. If more information and/or a better explanation of my problem is needed, let me know. :)
You can get the list of tables for a given schema using:
select TABLE_NAME from information_schema.tables where TABLE_SCHEMA='<master database name>';
Use this list in a script or stored procedure along these lines:
create database if not exists <company_db>;
use <company_db>;
-- for each table_name in the list:
create table if not exists <company_db>.<table_name> like <master_db>.<table_name>;
Now that I'm thinking about it, a trigger on information_schema.tables would be the natural hook, but MySQL doesn't allow triggers on information_schema tables, so in practice you'd call the create/maintain script from whatever process applies changes to the master database.
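Spelled out as a runnable MySQL stored procedure; a sketch assuming the master database is literally named master_db and each target database already exists (both names hypothetical):

DELIMITER //
CREATE PROCEDURE sync_structure(IN target_db VARCHAR(64))
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE tbl VARCHAR(64);
    DECLARE cur CURSOR FOR
        SELECT TABLE_NAME FROM information_schema.tables
        WHERE TABLE_SCHEMA = 'master_db';
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO tbl;
        IF done THEN
            LEAVE read_loop;
        END IF;
        -- CREATE TABLE ... LIKE needs dynamic SQL because the names vary
        SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS `', target_db,
                          '`.`', tbl, '` LIKE `master_db`.`', tbl, '`');
        PREPARE stmt FROM @ddl;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END LOOP;
    CLOSE cur;
END //
DELIMITER ;

CALL sync_structure('company1'); would then create any tables company1 is missing. Note this only adds missing tables; propagating ALTERs to existing tables would need a separate diff step.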

What is the MySQL equivalent of a PostgreSQL 'schema'?

I have a PostgreSQL database whose tables are divided amongst a number of schemas. Each schema has a different set of access controls; for example, one schema might be read-only to regular users, while they are allowed to create tables on another. Schemas also act as namespaces, so users don't have to worry about duplicating existing tables when they create new ones.
I want to create a similar setup using MySQL. Does it have an equivalent concept? If not, how can I most closely simulate it? I would prefer not to use multiple databases.
A database is the closest equivalent.
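To approximate the per-schema access controls described in the question, you can grant privileges per database; a minimal sketch with made-up database and user names:

-- One "schema" regular users may only read, one they may create tables in
CREATE DATABASE readonly_area;
CREATE DATABASE sandbox_area;
CREATE USER 'regular_user'@'%' IDENTIFIED BY 'secret';
GRANT SELECT ON readonly_area.* TO 'regular_user'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON sandbox_area.* TO 'regular_user'@'%';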
Prefixing table names is what's done with most MySQL-driven apps.

MySQL to SQL Server transferring data

I need to convert data that already exists in a MySQL database, to a SQL Server database.
The caveat here is that the old database was poorly designed, but the new one is in proper 3NF. Does anyone have any tips on how to go about doing this? I have SSMS 2005.
Can I use this to connect to the MySQL DB and create a DTS? Or do I need to use SSIS?
Do I need to script out the MySQL DB and alter every statement to "insert" into the SQL Server DB?
Has anyone gone through this before? Please HELP!!!
See this link. The idea is to add your MySQL database as a linked server in SQL Server via the MySQL ODBC driver. Then you can perform any operations you like on the MySQL database via SSMS, including copying data into SQL Server.
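A sketch of what that setup can look like, assuming the MySQL ODBC driver is installed and a system DSN named MySQL_DSN points at the MySQL server (hypothetical names):

-- Register MySQL as a linked server through the ODBC provider
EXEC sp_addlinkedserver
    @server = N'MYSQL',
    @srvproduct = N'MySQL',
    @provider = N'MSDASQL',
    @datasrc = N'MySQL_DSN';

-- Pull a MySQL table straight into a new SQL Server table
SELECT *
INTO dbo.stg_customers
FROM OPENQUERY(MYSQL, 'SELECT * FROM old_db.customers');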
Congrats on moving up in the RDBMS world!
SSIS is designed to do this kind of thing. The first step is to manually map out where each piece of data will go in the new structure. Say your old table had four fields: in your new structure, fields 1 and 2 go to table A, and fields 3 and 4 go to table B, but you also need the autogenerated id from table A. Make notes of where data types have changed and you may need to make adjustments, or where you have required fields where the data was not required before, etc.
What I usually do is create staging tables. Put the data in denormalized form in one staging table, then move it to normalized staging tables, do the cleanup there, and add the new ids to the staging tables as soon as you have them. One thing you will need to do when moving from a denormalized database to a normalized one is eliminate the duplicates from the parent tables before inserting them into the actual production tables. You may also need to do data cleanup, as there may be required fields in the new structure that were not required in the old, or data conversion issues because of moving to better data types (for instance, if you stored dates in varchar fields in the old database but properly move to datetime in the new db, you may have some records that don't have valid dates).
Another issue you need to think about is how you will convert the old record ids to the new ones.
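Both steps can be done in plain T-SQL once the raw data is staged; a sketch with made-up staging and target table names:

-- Deduplicate parent rows out of the denormalized staging table
INSERT INTO dbo.Customer (Name, Email)
SELECT DISTINCT CustomerName, CustomerEmail
FROM dbo.stg_raw;

-- Write the new surrogate keys back onto the staging rows,
-- mapping the old denormalized rows to the new ids
UPDATE s
SET s.NewCustomerId = c.CustomerId
FROM dbo.stg_raw AS s
JOIN dbo.Customer AS c
    ON c.Name = s.CustomerName
   AND c.Email = s.CustomerEmail;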
This is not an easy task, but it is doable if you take your time and work methodically. Now is not the time to try shortcuts.
What you need is an ETL (extract, transform, load) tool.
http://en.wikipedia.org/wiki/Extract,_transform,_load#Tools
I don't really know how far an ETL tool will get you, depending on the original and new database designs. In my career I've had to do more than a few data migrations, and we almost always had to write a special utility that would update a fresh database with records from the old database, and yes, we coded it complete with all the update/insert statements that transform the data.
I don't know how many tables your database has, but if there are not too many then you could consider going the grunt route. That's one technique that's guaranteed to work, after all.
If you go to your database in SSMS and right-click it, under Tasks there should be an option for "Import Data". You can try to use that. It's basically just a wizard that creates an SSIS package for you, which it can then either run automatically or save so you can alter it as needed.
The big issue is how you need to transform the data. This goes into a lot of specifics which you don't include (and which are probably too numerous for you to include here anyway).
I'm certain that SSIS can handle whatever transformations you need to change it from the old format to the new. An alternative, though, would be to import the tables into MS SQL as-is into staging tables, then use SQL code to transform the data into the 3NF tables. It's all a matter of what you're most comfortable with. If you go the second route, then the import process that I mentioned above in SSMS could be used. It will even create the destination tables for you. Just be sure that you give them unique names, maybe prefixing them with "STG_" or something.
Davud mentioned linked servers. That's definitely another way that you can go (and got my upvote). Personally, I prefer to copy the tables over into MS SQL first since linked servers can sometimes have weirdness, especially when it comes to data types not mapping between different providers. Having the tables all in MS SQL will also probably be a bit faster and saves time if you have to rerun or correct portions of the data. As I said though, the linked server method would probably be fine too.
I have done this going the other direction and SSIS works fine, although I might have needed to use a script task to deal with slight data type weirdness. SSIS does ETL.