Create a simplified version of a database on another server - mysql

I am required to create a test environment for some of our .NET applications, and some of these applications use only a small portion of some rather large databases. My idea is to create a 'small' database which would hold only the tables, stored procedures, views, etc. that are actually used by the application.
This will hopefully speed up refresh time on these 'small' databases; however, I can't see a simple way of doing this. Is there an option to do this easily within SQL Server, or via a T-SQL script?
Currently the best method I have is to generate a script from the database, selecting only the tables I require with the 'data only' option, then run these scripts against the 'small' database to bring the data up to date. However, as you can imagine, this is a lengthy process, and I would prefer something a bit more automated.
Any suggestions you can provide are very much appreciated.
Thanks,
Michael Tempest

Replication can be a solution to this problem: publish only the articles (tables and other objects) you want on your test database, and you can pause and restart replication when needed. (Database mirroring won't work here, since it copies the whole database and doesn't let you pick individual objects.)
Script As in SSMS: another easy way would be to right-click the objects you want to copy to the test database in SSMS and choose Script As > CREATE. Do this for each item you want to move, save the scripts in the right order (tables first, then dependent objects) in one file, and run it against the target database.
Since only you know which items need to move over to the test database, I think it will be difficult to find a ready-made script that suits your needs.
Some useful tips for using the Script As option.
To generate the SQL script for the objects:
SQL Server Management Studio > Databases > Database1 > Tasks > Generate Scripts...
The SQL Server Scripts Wizard will start, and you can choose the objects and settings to export into scripts.
By default, indexes and triggers are not included in the scripting, so make sure to turn these on (and any others that you are interested in).
To export the data from the tables:
SQL Server Management Studio > Databases > Database1 > Tasks > Export Data...
Choose the source and destination databases.
Select the tables to export. Make sure to check the Enable Identity Insert checkbox for each table so that new identity values are not generated.
Then create the new database, run the scripts to create all of the objects, and then import the data.
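For reference, the wizard's Enable Identity Insert option corresponds to wrapping the inserts in IDENTITY_INSERT statements. A minimal T-SQL sketch, assuming a dbo.Customer table and a reachable source database (all names here are placeholders):

    -- Manual equivalent of the wizard's "Enable Identity Insert" option;
    -- table, column, and database names are placeholders.
    SET IDENTITY_INSERT dbo.Customer ON;

    INSERT INTO dbo.Customer (CustomerId, Name)
    SELECT CustomerId, Name
    FROM SourceDb.dbo.Customer;   -- assumes the source is reachable, e.g. via a linked server

    SET IDENTITY_INSERT dbo.Customer OFF;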

For the dev database we just keep a structural copy of the production one with some data. Periodically we compare the databases with a tool that compares and syncs database structure (there are plenty of such tools now; we use Red Gate's).
For the prod_copy database we just do a backup-restore of the prod database, then truncate the biggest tables and shrink the database if needed.
If you want to automate the procedure completely, you can script both SQL Compare and SQL Data Compare. I am not sure whether other SQL tool vendors offer such an option.
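If you'd rather script the prod_copy refresh yourself, it is only a few statements. A minimal sketch, with placeholder database, file, and table names:

    -- Refresh prod_copy from the latest prod backup; all paths and
    -- names below are placeholders for your environment.
    RESTORE DATABASE prod_copy
        FROM DISK = N'D:\Backups\prod.bak'
        WITH MOVE N'prod_data' TO N'D:\Data\prod_copy.mdf',
             MOVE N'prod_log'  TO N'D:\Data\prod_copy.ldf',
             REPLACE;

    USE prod_copy;
    TRUNCATE TABLE dbo.BigAuditTable;   -- repeat per oversized table (must not be referenced by foreign keys)

    DBCC SHRINKDATABASE (prod_copy);    -- optional: reclaim the freed space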

How to Save an Append or Delete Query in MySQL

So I'm moving from MS Access to MySQL:
In MS Access you can store certain INSERT, DELETE, and UPDATE queries as objects alongside your tables, so people who don't understand computers that well can click on those objects and run the queries that alter the master table for various business functions.
In MySQL, where and how do you store these queries? I seem to be able to create only tables. When I write a piece of code in the SQL editor, I can only save it to a file (such as on my desktop) and not into the MySQL database itself, where it would be accessible to my coworkers.
If you can't save it onto the server, how would I write a piece of code and execute it within the database in a way that is easily usable by others?
Thanks
The answer to this question is going to depend on your environment, your users, and your bandwidth to support any given solution. You are gaining a lot by making the switch from Access to MySQL, but you are losing some of the WYSIWYG features (e.g., Access forms that can bind directly to your data source).
There are many approaches:
If your users are more advanced, simply having access to the database using MySQL Workbench may suffice. From there they can run views and stored procedures or create their own custom queries; stored procedures in particular are the closest server-side equivalent to Access's saved queries (see the sketch after this list).
Another option would be to script your objects using Python and provide a simple GUI using Tkinter. Python is generally thought of as an easy-to-use language, MySQL connectors for it are readily available, and Tkinter is its de facto default GUI toolkit.
Using the LAMP architecture is another largely popular paradigm using MySQL as the backend database.
There is also nothing stopping you from keeping Access as the front end and linking it to your MySQL database as an external data source.
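To illustrate the stored-procedure route, here is a minimal sketch of saving an "append and delete" style query on the MySQL server itself; the table and procedure names are hypothetical:

    -- Stored server-side, so coworkers can run it by name.
    DELIMITER //

    CREATE PROCEDURE archive_closed_orders()
    BEGIN
        -- Append closed orders to the archive table...
        INSERT INTO orders_archive
        SELECT * FROM orders WHERE status = 'closed';

        -- ...then delete them from the master table.
        DELETE FROM orders WHERE status = 'closed';
    END //

    DELIMITER ;

Anyone with EXECUTE privileges can then run CALL archive_closed_orders(); from Workbench or any other client.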
I hope this provides enough info to help you begin whittling down your options.

Talend Data warehousing tool

Question: I have two databases: one is the client's (live) database and the other is mine; both are MySQL. I should not access the client's database directly, so I created my own. Using the Talend data warehousing tool I created a job for each table, and by executing all the jobs I can pull all updated data from the client's live database into mine. At the moment I need to execute these jobs manually to get updated data into my database. My question is: is there any process that will automatically notify me when the client inserts or updates data in their database, so that I can run the jobs and pick up the changes? Or, better, can the associated job run automatically whenever the client updates a table? Please help me with this.
You would need to set up a database trigger that somehow notifies the Talend job and runs it. To do this you'd typically expose the job as a web service and call it from a stored procedure or user-defined function. This link shows a typical way a web service may be called from an update trigger, for example.
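As a rough illustration only: MySQL cannot call a web service natively, so this usually relies on an external-command UDF such as sys_exec() from lib_mysqludf_sys, which must be installed separately. Assuming that UDF is available and the Talend job is exposed at a placeholder URL:

    -- Hedged sketch: sys_exec() comes from lib_mysqludf_sys; the table
    -- name and the web-service URL are placeholders.
    CREATE TRIGGER orders_after_insert
    AFTER INSERT ON orders
    FOR EACH ROW
        SET @rc = sys_exec('curl -s http://etl-host:8080/services/refresh_job');

Be aware that shelling out from a trigger blocks the transaction and is generally discouraged on a busy table; treat this as a last resort.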
If your source data tables are large, then rather than extracting all of the data from the table and, I guess, dropping and recreating your table, you could use a tMysqlCDC component to capture only the changes. The built-in tutorial for the component pretty much covers a useful example of this in practice. If you are seeing regular changes in the source database, this could make your job much more performant.
If you have absolutely no access to your client's database, then you could alternatively just run the job on a scheduler. The Enterprise versions of Talend come with the Talend Administration Console, which allows you to set CRON triggers for a job; it could easily be set to run every minute or at any other interval (but not seconds). Alternatively, you could use your operating system's scheduler to run the job at the desired interval.
If you can't modify your client's database (i.e. add triggers), and there is no other way to identify changed records (i.e. some kind of audit table), then you're out of luck.
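If you can get the client to add such an audit table, a minimal sketch of that approach might look like this (all table and column names are hypothetical); the Talend job then only has to read change_log since its last run:

    -- Changelog populated by triggers on the source tables.
    CREATE TABLE change_log (
        id         INT AUTO_INCREMENT PRIMARY KEY,
        table_name VARCHAR(64) NOT NULL,
        row_id     INT NOT NULL,
        changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );

    CREATE TRIGGER orders_after_update
    AFTER UPDATE ON orders
    FOR EACH ROW
        INSERT INTO change_log (table_name, row_id)
        VALUES ('orders', NEW.id);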

Import table structure, but no data, from one database to another

I have a database with multiple tables in Microsoft SQL Server, with the tables under the schema "xyz".
I am able to copy these tables along with their data from one SQL Server to another using the SQL Server Import and Export Wizard.
I want to find a way to:
1. Copy only the tables, with no data.
2. Is it possible to convert the current database design to a script and then run it on another server, so that it creates all these tables empty?
Thanks in advance.
Best regards
Yes, you could do that with Management Studio. Right click your database and then select Tasks -> Generate Scripts.
There are some settings there you should tweak, such as whether it should generate scripts for indexes and statistics. They are all in plain sight.
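The wizard's output is just an ordinary schema-only script, something along these lines (the table is a placeholder using the question's "xyz" schema):

    -- Example of the schema-only output; no INSERT statements,
    -- so the tables are created empty on the other server.
    CREATE TABLE xyz.Customer (
        CustomerId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        Name       NVARCHAR(100) NOT NULL,
        CreatedAt  DATETIME NOT NULL DEFAULT GETDATE()
    );

Running the generated file against the other server answers point 2 directly.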
An alternative is SQL Server Data Tools. It's relatively new (formerly "Data Dude"). It's not as straightforward, but it is better in the long term for database versioning and for creating migration scripts.

Importing .sql into MS Access using ODBC

I currently have a database in MySQL, which I'd like to import in MS Access.
Is it possible to do this while keeping all relationships intact (i.e. without exporting to .csv, or by using ODBC)?
I'm a noob in this area so any help is greatly appreciated.
Thanks.
You need to solve two different problems:
Creating an empty MS Access database with a structure that matches the MySQL database structure.
Extracting the data from MySQL and loading it into MS Access.
This is not easy, because different SQL databases offer different structural features, different datatypes, and so on. The more complex your use of MySQL, the more likely you'll run into some show-stopper during the conversion (for instance, Access doesn't support triggers at all). Conversely, if you're using MySQL as a simple data store, you may find the conversion fairly easy.
To get an MS Access database with the same structure as your MySQL database, your best bet is to find a database definition / diagramming tool that offers reverse engineering and supports both MySQL and MS Access. Use it to reverse engineer your MySQL database into a database diagram, then change the underlying database to MS Access and use the tool to generate a database.
Check out Dezign For Databases which (on paper, anyway) offers the features you would need to do this.
To pump the data across, there are any number of tools. This kind of operation is generically referred to as ETL (Extract, Transform, Load).
Do you mean SQL Server? A good starting point might be to check out SQL Server Integration Services (SSIS), which can be used for transferring data around like that.
Google will also be helpful; check out the first result:
http://support.microsoft.com/kb/237980
By the way, you said ".sql" in your question: a .sql file is a script file, which could do anything from creating a database, inserting data, dropping tables, or deleting data to, given the right permissions, calling system procedures to reboot a machine, format a drive, or send an email. Just for reference, .sql files aren't the storage format used by SQL Server.
While you can script your database's schema into script files via something like SQLyog, you will find that the syntax varies enough from database to database (MySQL to Access, in your case) that you can't directly apply the scripts.
With much effort, a conversion script could be created by editing the generated script (perhaps automated with a program, depending on the script's size). I think you would be better served using ODBC to copy the tables (and data) and then extracting and re-applying the relationships from the generated script by hand. Time-consuming, but also a one-time operation, I would hope.
When both systems are the same database, there are tools that can do the comparison and script generation (TOAD for MySQL and Red Gate's Compare for Microsoft SQL), but they don't do cross-database work (at least not the ones I am aware of).
If you create an ODBC DSN, you can use TransferDatabase to import from your MySQL database. You can also do it manually with the Get External Data command (or whatever it is called in Access 2007/2010) and see how well it works. It won't get all the data types exactly right, but with some massaging you can likely get it close to what works best.
Is there some reason you can't just link to the MySQL tables and use them directly? That is, why do you need to import into Access at all?
Access: run the query. Just make sure to adapt the SQL code, since every RDBMS has its own syntax (despite SQL being an ANSI standard).

Migration strategies for SQL 2000 to SQL 2008

I've perused the threads here on migration from SQL 2000 to SQL 2008 but haven't really run into my question, so here we go with another one.
I'm building a strategy to move specific SQL 2000 databases to a new SQL 2008 R2 instance. My question concerns the best method for transferring the schema and data. One way I know of is the quick 'n' dirty detach - copy - attach method, which should work as long as I've done my homework with regard to compatibility, code, and such.
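For reference, that route is only a couple of statements per database; a minimal sketch with placeholder names and paths:

    -- On the old server:
    EXEC sp_detach_db @dbname = N'LegacyDb';
    -- ...copy LegacyDb.mdf / LegacyDb_log.ldf to the new server...

    -- On the SQL 2008 R2 instance:
    CREATE DATABASE LegacyDb
        ON (FILENAME = N'D:\Data\LegacyDb.mdf'),
           (FILENAME = N'D:\Data\LegacyDb_log.ldf')
        FOR ATTACH;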
What if, though, I wrote the schema and logins via script and then copied the data via SSIS? I'm thinking of trying that so I can more easily integrate some of my test cases into the package (error handling and whatnot). What would I be setting myself up for if I did this?
Since you are moving the data between servers or instances, I would recommend moving the data via data flows. If you don't expect to run the code more than once, you can let the wizard generate the package for you. However, when I did this a couple of years ago, the wizard-generated package combined many CREATE TABLE commands into a single Execute SQL task and created a few data flows with multiple sources and destinations each. That was good for getting up and running, but it was inadequate when I wanted to refresh the tables again after modifying the schema of the new target tables. If you expect to run the refresh more than once, you may want to take the time to create the target schema first and then build the data flows manually.
Once you have moved the data, you can enable full-text search on the new server. I don't believe you need it enabled for the initial load.
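If you do need it afterwards, re-creating it on 2008 R2 is straightforward; a hedged sketch with placeholder catalog, table, and index names:

    -- Recreate full-text search on the new instance after the load.
    CREATE FULLTEXT CATALOG ftMigrated AS DEFAULT;

    CREATE FULLTEXT INDEX ON dbo.Documents (Body)
        KEY INDEX PK_Documents      -- a unique key index on the table
        ON ftMigrated;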
One reason I recommend against the detach-attach method for migration is that you bring all the dirty laundry from the 2000 database into the 2008 R2 database. If security on the 2000 server was too lax, or there are many ancient users that shouldn't exist, it can be easier to clean things up by starting from scratch. If you use the detach-attach method, you have to worry about those users.
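If you do end up attaching the old databases, orphaned users can at least be detected and remapped afterwards; for example (the database and user names are placeholders):

    USE LegacyDb;   -- placeholder name
    EXEC sp_change_users_login 'Report';              -- list orphaned users
    EXEC sp_change_users_login 'Auto_Fix', 'appuser'; -- remap one user to a matching login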