Ques: I have two databases, both MySQL: one is the client's live database and the other is my own. I am not allowed to access the client's database directly, so I created my own. Using the Talend data-warehousing tool I created a job for each table, and by executing all the jobs I can pull the updated data from the client's live database into mine. At the moment I have to run these jobs manually to get the updated data into my database. My question is: is there any process that will automatically notify me when the client inserts or updates data in their database, so that I can run the jobs and pick up the changes? Or, better, can the associated job run automatically whenever the client updates one of their tables? Please help me with this.
You would need to set up a database trigger that somehow notifies the Talend job and runs it. The typical way to do this is to expose the job as a web service and call that service from a stored procedure or user-defined function fired by an insert/update trigger.
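Stock MySQL cannot call a web service directly from a trigger, so a common workaround is to have the trigger write to a change-log table that an external poller (or the Talend job itself) checks. A minimal sketch, assuming a hypothetical orders table (the table and trigger names are illustrative):

    -- Change-log table that an external process polls.
    CREATE TABLE change_log (
        id         INT AUTO_INCREMENT PRIMARY KEY,
        table_name VARCHAR(64) NOT NULL,
        changed_at TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
    );

    DELIMITER //
    CREATE TRIGGER orders_after_update
    AFTER UPDATE ON orders
    FOR EACH ROW
    BEGIN
        -- Record that something changed; a scheduled poller can then
        -- decide whether the Talend job needs to run.
        INSERT INTO change_log (table_name) VALUES ('orders');
    END//
    DELIMITER ;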
If your source tables are large, rather than extracting all of the data and then dropping and recreating your copy of the table, you could use a tMysqlCDC component to capture only the changes. The built-in tutorial for the component pretty much covers a useful example of this in practice. If you are seeing regular changes in the source database, this could make your job much more performant.
If you have absolutely no access to your client's database then you could alternatively just run the job on a scheduler. The Enterprise versions of Talend come with the Talend Administration Console, which allows you to set CRON triggers for a job and could easily be set to run every minute or at any other interval (but not seconds). Alternatively you could use your operating system's scheduler to run the job at your desired intervals.
If you can't modify your client's database (e.g. add triggers), and there is no other way to identify changed records (e.g. some kind of audit table), then you're out of luck.
What’s the best practice for integrating SQL Server with Active Directory (AD)?
NB. I’m using SQL Server 2016
Crux of the issue: I'm using SSRS 2016 and have several reports that need to be filtered based on the user accessing the reports. Originally I created a table of users that would need to access the reports. Then in the report builder I passed the UserID as a parameter within the query so that the resulting dataset would be limited to the data the user needed to see.
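For context, the dataset query looked roughly like this (table and column names are illustrative; @UserID comes in as a report parameter):

    -- SSRS dataset query filtered per user; @UserID is supplied by the
    -- report (e.g. mapped from the built-in User!UserID field).
    SELECT s.Region, s.Amount
    FROM dbo.Sales AS s
    JOIN dbo.ReportUsers AS u
        ON u.Region = s.Region
    WHERE u.UserID = @UserID;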
The problem this created is that the User table would have to be maintained, and Active Directories are dynamic. Now that I have some time to develop a better option, I’d like to link the LDAP data with SQL Server.
I’m wondering what the best practice for doing this is.
One way I pursued this was through an SSIS package with an ADO.NET connection: pull the AD data, convert it, load it into a table, then schedule a job to run the package however often I needed. This was problematic because, for whatever reason, I couldn't get the data conversion step to work.
The second way I've been approaching this is to create a linked server instance for the AD. My research indicated that I'd need to create a function to overcome the string limitation of the xp_sprintf function, and then use temp tables and loop through the LDAP data to get around AD's 1000-record limit per query. I've been able to accomplish all of this.
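For reference, the linked-server query pattern looks roughly like this (ADSI is the conventional linked server name; the LDAP path below is hypothetical):

    -- Query AD through a linked server; this returns at most 1000 rows
    -- per call, hence the paging/looping mentioned above.
    SELECT sAMAccountName, displayName
    FROM OPENQUERY(ADSI,
        'SELECT sAMAccountName, displayName
         FROM ''LDAP://DC=corp,DC=example,DC=com''
         WHERE objectCategory = ''person'' AND objectClass = ''user''');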
At this point, though, there appear to be some other issues.
This ultimately increases the amount of code in the views behind my reports, which may make them harder for other database users to maintain if and when the time comes, to the point that I'd need to abandon the views and create stored procedures for the reports to pull from.
It also adds an LDAP round trip on top of the SQL Server work every time a user accesses a report.
To resolve that, I could wrap the LDAP query in a stored procedure that materializes the results into a table, and then create a job to run that stored procedure every so often.
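A minimal sketch of that caching approach, with hypothetical table and procedure names (a SQL Server Agent job would call the procedure on a schedule):

    -- Cache table for AD users, refreshed periodically.
    CREATE TABLE dbo.AdUsers (
        sAMAccountName NVARCHAR(256) NOT NULL PRIMARY KEY,
        displayName    NVARCHAR(256) NULL,
        loadedAt       DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO
    CREATE PROCEDURE dbo.RefreshAdUsers
    AS
    BEGIN
        SET NOCOUNT ON;
        TRUNCATE TABLE dbo.AdUsers;
        INSERT INTO dbo.AdUsers (sAMAccountName, displayName)
        SELECT sAMAccountName, displayName
        FROM OPENQUERY(ADSI,
            'SELECT sAMAccountName, displayName
             FROM ''LDAP://DC=corp,DC=example,DC=com''
             WHERE objectCategory = ''person'' AND objectClass = ''user''');
    END;
    GO

The report views would then read from dbo.AdUsers instead of hitting LDAP directly.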
Either option solves the problem of maintaining the users table, which is good, but neither is perfect because AD changes can take place at any time.
Which option is better here?
If the SSIS package is the better route, I'm curious why. I'm not opposed to going back and figuring out what I'm missing to make the SSIS package work.
Are there additional options I should consider if I want to get the most up-to-date Active Directory listing?
Thanks.
I'm somewhat new to this kind of problem. I'm developing a web app and keep changing the DB design, trying to improve it and add new tables.
Until a few days ago the app wasn't published, so I would simply dump all the tables on the server and import my local version. But now we've passed version 1 and users are starting to use it, so I can't just overwrite the server. I still need to update the server DB's design when I publish a new version. What are the best practices here?
I'd like to know how I can manage the differences between the local and server databases in MySQL. I need to preserve the data on the server and only change the design; the data in the local DB is only for testing.
Before this, all my apps were small and I would just change a single table or column by hand, but I can't keep track of all the changes now, since I might revert many of them later, and coordinating every team member on this manually is impossible.
Assuming you are not using a framework that provides a database migration tool, you need to keep track of the changes manually:
Create a folder sql_upgrades (or whatever name you like) in your code repository.
Whenever a team member updates the SQL schema, they create a file in this folder with the corresponding ALTER statements, and possibly UPDATE, CREATE TABLE, etc. So basically the file contains all the statements used to update the dev database.
Name the files so that they're easy to manage and statements for the same feature are grouped together. I suggest something like YYYYMMDD-description.sql, e.g. 20150825-queries-for-feature-foobar.sql (a sample is sketched after these steps).
When you push to production, execute the files to upgrade your SQL schema in production. Only execute the files created since your last deployment, and execute them in the order they were created.
Should you need to rollback a file, check the queries it contains, and write queries to undo what was done (drop added columns, re-create dropped columns, etc.). Note that this is "non-trivial", as many changes cannot be rolled back fully (e.g. you can recreate a dropped column, but you will have lost the data inside).
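For illustration, a hypothetical 20150825-queries-for-feature-foobar.sql might look like this (all table and column names are made up):

    -- 20150825-queries-for-feature-foobar.sql
    -- Everything needed to bring the schema up to date for feature foobar.
    ALTER TABLE users ADD COLUMN last_login DATETIME NULL;

    CREATE TABLE user_preferences (
        user_id    INT          NOT NULL,
        pref_key   VARCHAR(64)  NOT NULL,
        pref_value VARCHAR(255) NULL,
        PRIMARY KEY (user_id, pref_key)
    );

    -- Backfill existing rows so the new column has a sensible value.
    UPDATE users SET last_login = created_at WHERE last_login IS NULL;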
Many web frameworks (such as Ruby on Rails) have tools that will do exactly this process for you. They usually work together with the ORM provided by the framework. Keeping track of the changes manually in SQL works just as well.
I have a database on my local system and have also put a database on the web server, so when I update the data in my local database I then have to replace or add the data in the web database.
The problem is that I frequently change some specific records in the database for testing purposes.
So I want some mechanism to export specific records to a SQL file as INSERT statements.
Suppose I have made changes in table tbl1 and added 10 records to it.
Right now I am manually adding to or replacing the whole table in the web database.
Is there any mechanism in MySQL, or in Workbench, that I can use to export specific records?
Any help is appreciated.
The only automatic solution is to use replication, but that is probably not a good solution for your scenario. So what remains is some manual process. Here are some ideas:
Write a script that writes the specific records into a dump file, then use a different script to load this dump file into your target server.
If you frequently change the same records, you could create a script with INSERT statements that you edit for each new value and run against both your local and your remote (web) server; a sketch follows below.
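As a rough sketch of that idea in plain SQL (tbl1 and its columns are just placeholders), you can generate the INSERT statements with a query and save the output as a .sql file; mysqldump's --where option can achieve something similar from the command line:

    -- Emit INSERT statements for just the rows you care about; save the
    -- output as a .sql file and run it against the web server.
    SELECT CONCAT(
        'INSERT INTO tbl1 (id, name) VALUES (',
        id, ', ', QUOTE(name), ');'
    ) AS stmt
    FROM tbl1
    WHERE id > 100;  -- pick out only the changed/new records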
I am required to create a test environment for some of our .Net applications, and some of these applications use only a small portion of some rather large databases. My idea is to create a 'small' database, which would only hold the tables, stored procedures, views, etc... that are being used by the application.
This will hopefully speed up refresh time on these 'small' databases; however, I can't see a simple way of doing it. Is there an option to do this easily within SQL Server, or via a T-SQL script?
Currently the best method I have is to generate a script from the database, selecting only the tables I require with the 'data only' option, and then run these scripts on the 'small' database to bring the data up to date. However, as you can imagine, this is a lengthy process, and I would prefer something a bit more automated.
Any suggestions you can provide are very much appreciated.
Thanks,
Michael Tempest
Replication can be a solution for this problem: only publish the items/articles you want on your test database, and you can pause and restart replication when needed.
SSMS Script As: Another easy way would be to go into SSMS, right-click the objects you want to copy to the test database, and choose Script As > Create. Do this for all the items you want to move, save the scripts in the right order (tables first, then the related objects) in one file, and run it on the target database.
Since only you know which items to move over to the test DB, I think it will be difficult to find a ready-made script that suits your needs.
Some useful tips for using the Script As option:
To generate the SQL script for the objects:
SQL Server Management Studio > Databases > Database1 > Tasks > Generate Scripts...
The SQL Server Scripts Wizard will start, and you can choose the objects and settings to export into scripts.
By default, indexes and triggers are not included in the scripting, so make sure to turn these on (along with any others you are interested in).
To export the data from the tables:
SQL Server Management Studio > Databases > Database1 > Tasks > Export Data...
Choose the source and destination databases.
Select the tables to export.
Make sure to check the Identity Insert checkbox for each table so that new identities are not created.
Then create the new database, run the scripts to create all of the objects, and then import the data.
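If you later want to refresh a single table with T-SQL instead of rerunning the wizard, the pattern looks roughly like this (database, table, and column names are hypothetical):

    -- Copy one table's rows from the full database into the small test
    -- database while preserving the original identity values.
    SET IDENTITY_INSERT SmallDb.dbo.Customers ON;

    INSERT INTO SmallDb.dbo.Customers (CustomerId, Name, CreatedAt)
    SELECT CustomerId, Name, CreatedAt
    FROM BigDb.dbo.Customers;

    SET IDENTITY_INSERT SmallDb.dbo.Customers OFF;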
For the dev database we just keep a structural copy of the production one with some data. Periodically we compare the databases with a tool that compares and syncs database structure (there are plenty of such tools now; we use Redgate's).
For the prod_copy database we just do a backup-restore of the prod DB, then truncate the biggest tables and shrink the database if needed.
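A minimal sketch of that backup-restore-truncate flow (file paths, logical file names, and table names are placeholders):

    -- Restore a copy of production under a new name...
    RESTORE DATABASE prod_copy
    FROM DISK = N'D:\backups\prod.bak'
    WITH MOVE 'prod_data' TO N'D:\data\prod_copy.mdf',
         MOVE 'prod_log'  TO N'D:\data\prod_copy.ldf',
         REPLACE;

    -- ...truncate the bulk data you don't need...
    TRUNCATE TABLE prod_copy.dbo.AuditLog;
    TRUNCATE TABLE prod_copy.dbo.EventHistory;

    -- ...and reclaim the space if necessary.
    DBCC SHRINKDATABASE (prod_copy);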
If you want to completely automate the procedure, you can script both SQL Compare and SQL Data Compare. I am not sure whether other SQL tool vendors offer such an option.
I've perused the threads here on migration from SQL 2000 to SQL 2008 but haven't really run into my question, so here we go with another one.
I'm building a strategy to move specific SQL 2000 databases to a new SQL 2008 R2 instance. My question comes with regards to the best method for transferring the schema and data. One way I know of is to do the quick 'n' dirty detach - copy - attach method, which should work so long as I've done my homework wrt compatibility and code and such.
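For reference, the detach-copy-attach flow is roughly the following (the database name and file paths are illustrative):

    -- On the SQL 2000 instance:
    EXEC sp_detach_db 'MyAppDb';

    -- Copy the .mdf/.ldf files to the new server, then on SQL 2008 R2:
    CREATE DATABASE MyAppDb
    ON (FILENAME = N'D:\data\MyAppDb.mdf'),
       (FILENAME = N'D:\data\MyAppDb_log.ldf')
    FOR ATTACH;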
What if, though, I wrote the schema and logins via script and then copied the data via SSIS? I'm thinking of trying that so I can more easily integrate some of my test cases into the package (error handling and whatnot). What would I be setting myself up for if I did this?
Since you are moving the data between servers or instances, I would recommend moving it via data flows. If you don't expect to run the code more than once, you can let the wizard generate the package for you. However, when I did this a couple of years ago, the wizard-generated package lumped many CREATE TABLE commands into a single Execute SQL task and created a few data flow tasks, each with multiple sources and destinations, to insert the data at the destination. This was good for getting up and running, but it was inadequate when I wanted to refresh the tables once more after I had modified the schema of the new target tables. If you expect to run the refresh more than once, you may want to take the time to create the target schema first and then build the data flows manually.
Once you have moved the data, then you can enable full-text search on the new server. I don't believe you will need to have this enabled on your first load.
One reason I recommend against the detach-attach method for migration is that you bring all the dirty laundry from the 2000 database over to the 2008 R2 database. If security was too lax on the 2000 server, or there are many ancient users that shouldn't exist, it can be easier to clean this up by starting from scratch. If you use the detach-attach method, you have to deal with those users afterwards.
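If you do go the detach-attach (or backup-restore) route, part of that cleanup is fixing orphaned users. A minimal sketch (the user and login names are hypothetical):

    -- List database users whose SIDs no longer match a server login.
    EXEC sp_change_users_login 'Report';

    -- Re-map an orphaned user to an existing login...
    ALTER USER app_user WITH LOGIN = app_user;

    -- ...or drop users that should not have come across.
    DROP USER legacy_user;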