My company is migrating from MySQL to Greenplum Database.
We have many jobs running through Talend, and I have to manually change each component from MySQL to Greenplum.
Is there an easier way to go about it, through which all components are converted at once instead of having to convert each component individually?
With Talend Studio I am not sure there is a way to automatically convert a component, but you can store connection credentials in the repository, which makes re-configuring each component a fairly easy process. You can even create generic schemas that can be used by any DB component.
https://help.talend.com/r/en-US/7.3/repository-manager-user-guide/how-to-add-repository-connection
https://help.talend.com/r/en-US/7.3/studio-user-guide-data-fabric/setting-up-generic-schema-from-scratch
Also, to add a bit of extra value here, there is some free training available on Talend Academy (https://academy.talend.com/learn/register); all you have to do is sign up for an account.
Theoretically one could create a Migration Task similar to existing ones:
https://github.com/Talend/tdi-studio-se/tree/maintenance/7.3/main/plugins/org.talend.repository/
There are many existing ones to get ideas from, for example:
https://github.com/Talend/tdi-studio-se/blob/maintenance/7.3/main/plugins/org.talend.repository/src/main/java/org/talend/repository/model/migration/RenametDBInputToPostgresqlMigrationTask.java
I myself have never tried this, but I think it should be the most flexible way to do it. Use system properties to enable/disable the task. You'd need to compile the org.talend.repository plugin, then replace the one shipped with your Studio.
Don't forget to remove the configuration/org.eclipse.osgi folder, as it caches the plugins and your changes wouldn't be picked up otherwise!
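As a very rough sketch of what such a task could look like, modeled on the linked RenametDBInputToPostgresqlMigrationTask: the class, helper and enum names below (AbstractJobMigrationTask, ExecutionResult, ModifyComponentsAction, NameComponentFilter, IComponentConversion) are taken from that file and its neighbours and may differ between Studio versions, and the system-property flag is my own assumption, so treat this as a starting point rather than working code.

```java
// Imports omitted; they mirror those of the linked RenametDBInputToPostgresqlMigrationTask.
public class RenameMysqlToGreenplumMigrationTask extends AbstractJobMigrationTask {

    @Override
    public ExecutionResult execute(Item item) {
        // Only run when explicitly enabled, e.g. with -Dmigrate.mysql.to.greenplum=true
        if (!Boolean.getBoolean("migrate.mysql.to.greenplum")) {
            return ExecutionResult.NOTHING_TO_DO;
        }
        ProcessType processType = getProcessType(item);
        if (processType == null) {
            return ExecutionResult.NOTHING_TO_DO;
        }

        // Map each MySQL component to its Greenplum counterpart; extend as needed
        // (tMysqlRow, tMysqlCommit, tMysqlClose, ...).
        Map<String, String> renames = new HashMap<String, String>();
        renames.put("tMysqlInput", "tGreenplumInput");
        renames.put("tMysqlOutput", "tGreenplumOutput");
        renames.put("tMysqlConnection", "tGreenplumConnection");

        try {
            for (Map.Entry<String, String> entry : renames.entrySet()) {
                IComponentFilter filter = new NameComponentFilter(entry.getKey());
                final String newName = entry.getValue();
                IComponentConversion rename = new IComponentConversion() {
                    public void transform(NodeType node) {
                        node.setComponentName(newName);
                    }
                };
                ModifyComponentsAction.searchAndModify(item, processType, filter,
                        Arrays.<IComponentConversion> asList(rename));
            }
            return ExecutionResult.SUCCESS_NO_ALERT;
        } catch (Exception e) {
            // Persistence problems end up here; the task reports a failure to the Studio.
            return ExecutionResult.FAILURE;
        }
    }

    @Override
    public Date getOrder() {
        // Migration tasks are ordered by date; use the day you introduce the task.
        return new GregorianCalendar(2023, Calendar.JANUARY, 1).getTime();
    }
}
```

The task would also have to be declared wherever the existing tasks are registered in the plugin, and the rename map extended until all the tMysql* components your jobs use are covered.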
If you're stuck you can also try https://community.talend.com/
Related
I'm creating a JavaFX app that uses a WAMP server for its MySQL database. How can I make automatic backups to a specific location, even if the database becomes large? Or what is best practice in this case?
What can I do in Java for this issue?
I would suggest creating a cron job for this.
You can either invoke the mysqldump tool directly or create a script in any language to export the data you want; the latter requires more work, but is more flexible.
Alternatively, you can search the Internet for some ready-made tool (like this, which I just found but didn't check whether it works). I'm pretty sure you would find some, since it's a pretty common thing to do.
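Since the question mentions Java specifically: here is a minimal sketch of doing the same thing from inside the app, using ProcessBuilder to call mysqldump on a schedule. The mysqldump path, backup directory, database name and credentials below are placeholders you'd adjust to your WAMP installation.

```java
import java.io.File;
import java.io.IOException;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MysqlBackupScheduler {

    // Placeholder paths and credentials; adjust to your environment.
    private static final String MYSQLDUMP = "C:/wamp64/bin/mysql/mysql5.7.36/bin/mysqldump.exe";
    private static final String BACKUP_DIR = "C:/backups";
    private static final String DB_NAME = "mydb";

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run once a day; a cron job or Windows scheduled task calling mysqldump works just as well.
        scheduler.scheduleAtFixedRate(MysqlBackupScheduler::backup, 0, 24, TimeUnit.HOURS);
    }

    private static void backup() {
        String stamp = LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyyMMdd_HHmmss"));
        File outFile = new File(BACKUP_DIR, DB_NAME + "_" + stamp + ".sql");
        ProcessBuilder pb = new ProcessBuilder(
                MYSQLDUMP, "-u", "backup_user", "-psecret", "--single-transaction", DB_NAME);
        pb.redirectOutput(outFile);                          // write the dump straight to a timestamped file
        pb.redirectError(ProcessBuilder.Redirect.INHERIT);   // surface mysqldump errors on the console
        try {
            Process p = pb.start();
            if (p.waitFor() != 0) {
                System.err.println("mysqldump exited with code " + p.exitValue());
            }
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Timestamped dumps keep working as the database grows, since each run just streams a fresh dump to disk; you would only need to prune or compress old files eventually.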
I'm trying to make some reports using Meteor and Raphael JS. I have to report data from an existing MySQL database. I do not wish to write to that database. I need only the "R" from CRUD.
I have thought of various manual approaches: exporting .csv files from the MySQL db via the application itself (LimeSurvey), using mongoimport to populate a MongoDB collection, and then doing my CollectionName.find() etc. in Meteor;
or perhaps exposing RESTful endpoints only to consume data, and using the http package for Meteor.
Is there a good clean solution for using existing SQL data in a Meteor JS application?
How can one use pre-existing SQL data?
(I've no problem with duplication in MongoDB, mind you. However it has to be...)
Thank You
You can do it without any duplication completely from inside Meteor, but you will have to jump through a couple of hoops.
Firstly, use the mysql npm package to query the SQL database. Though Meteor provides Npm to require node packages, I find that using meteor-npm is easier. Then, to do the "R"eading from MySQL, create a Meteor.method on your server which queries MySQL directly.
The second problem is that the mysql package is completely asynchronous. The SQL query returns its value in a callback, and by that point your Meteor.method call would already have returned, leaving the client with undefined. To fix that issue, we can use a Future.
There are a couple of ways of smoothing over this step:
Using `meteor-sync-methods`
Spinning out your own version from the advice in the issue about allowing this natively
Using this easy-to-implement one-time pattern: "fence has already activated -- too late to add writes"
Hope that helps.
I'm working on a project which uses MySQL as the database. The application is hosted for many clients and we often do upgrades to the current live systems.
There are some instances where the client has changed the database structure (adding new tables), causing some unexpected DB crashes.
I need to log all the structural changes made to that database, so we can find the correct root cause. We can't do it 100% correctly with a diff tool because it will not show the intermediate changes.
I found the http://www.liquibase.org/ tool, but it seems a little bit complex.
Is there any well-known technique or tool to track database structural changes only?
Well, from MySQL Studio you can generate every object's schema definition and compare it with your standard schema definition; this way you can compare two database schemas...
Generating scripts of both databases (one the client's database and one the master-copy database) and then comparing them with a file compare tool would be the best practice, according to me, because this way you can track which column was added, which column was deleted, which index was added, and so on, without downloading any tool.
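As a rough sketch of that dump-and-compare idea (the user, password and database names below are placeholders, and the comparison is deliberately naive; a proper file compare tool will give you a nicer side-by-side view): dump only the structure of both databases with mysqldump --no-data and report the lines that differ.

```java
import java.io.File;
import java.nio.file.Files;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SchemaCompare {

    public static void main(String[] args) throws Exception {
        // Structure-only dumps (--no-data) of the master copy and the client database.
        File masterDump = dumpSchema("master_db", "master_schema.sql");
        File clientDump = dumpSchema("client_db", "client_schema.sql");

        List<String> master = Files.readAllLines(masterDump.toPath());
        List<String> client = Files.readAllLines(clientDump.toPath());

        // Naive comparison: report lines present in one dump but not in the other,
        // which is enough to spot added/dropped columns, indexes and tables.
        Set<String> masterSet = new HashSet<>(master);
        Set<String> clientSet = new HashSet<>(client);
        client.stream().filter(line -> !masterSet.contains(line))
                .forEach(line -> System.out.println("Only in client: " + line.trim()));
        master.stream().filter(line -> !clientSet.contains(line))
                .forEach(line -> System.out.println("Only in master: " + line.trim()));
    }

    private static File dumpSchema(String database, String fileName) throws Exception {
        File target = new File(fileName);
        ProcessBuilder pb = new ProcessBuilder(
                "mysqldump", "-u", "user", "-psecret", "--no-data", "--skip-comments", database);
        pb.redirectOutput(target);
        if (pb.start().waitFor() != 0) {
            throw new IllegalStateException("mysqldump failed for " + database);
        }
        return target;
    }
}
```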
Possible duplicate of Compare two MySQL databases?
Hope this helps.
If you have an application for your clients to manage these schema changes, you can use a mechanism at the application level. If you have a Python and Django-based solution, you could probably use South, which provides schema change tracking and rollbacks.
In our rails app we sometimes have db entries created by users that we'd like to make part of our dev environment, without exporting the whole table. So, we'd like to be able to have a special 'dev and testing' dump.
Any recommended best practices? mysqldump seems pretty cumbersome, and we'd like to pull in rails associations as well, so maybe a rake task would make more sense.
Ideas?
You could use an ETL tool like Pentaho Kettle. Once you have the initial transformation set up the way you want, you could easily run it with different parameters in the future. This way you could also keep all your associations. I wrote a little blurb about Pentaho for another question here.
If you provide a rough schema I could probably help you get started on what your transformation would look like.
I had a similar need and I ended up creating a plugin for that. It was developed for Rails 2.x and worked fine for me, but I haven't had much use for it lately.
The documentation is lacking, but it's pretty simple. You basically install the plugin and then have a to_sql method available on all your models. Options are explained in the README.
You can try it out and let me know if you have any issues; I'll try to help.
I'd go after it using a Rails runner script. That will allow your code to access the same things your Rails app would, including the database initializations. ActiveRecord will be able to take advantage of the model relationships you've defined.
Create some "transfer" tables in your production database and copy the desired data into those using the "runner" script. From there you could serialize the data, or use a dump tool, since you'll be dealing with a reduced amount of records. Reverse the process in the development environment to move the data into the database.
I had a need to populate the database in one of my apps from remote web logs and wrote a runner script that fired off periodically via cron, ftps the data from my site and inserts the data.
I have a small amount of experience using SVN on my development projects, and I have just as little experience with relational databases. I know the basic concepts like tables, and SQL statements, but I'm far from being an expert.
What I'd like to know is if there are any generic version control type systems like SVN, but that work with a database rather than files. I would like the same kind of features you get with SVN, like the ability to create branches, create tags, and merge branches together. Rather than a revision number being associated with a version of a file repository, it would be associated with a version of the database.
Are there any generic solutions available that can add this kind of functionality independent of the actual database schema? I'd be interested in solutions that work with MySQL or MS SQL Server.
I should also clarify that I'm trying to version control the data, not the schema. I would expect the schema to remain constant. So really it seems like I want a way to create a log of all the INSERT, UPDATE, and DELETE requests sent to the database between each version of the data. That way any version could be recreated by resending all the SQL statements that have been saved up to the desired version.
You can script all your DDL, stored procedures and such to regular text files.
Then you can simply use SVN for database versioning.
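A minimal sketch of that idea, assuming a MySQL database reachable over JDBC (the connection URL and credentials are placeholders, and the MySQL Connector/J driver has to be on the classpath): write one CREATE TABLE script per table into a directory, then commit that directory to SVN after each run.

```java
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SchemaScripter {

    public static void main(String[] args) throws Exception {
        // Placeholder connection settings; adjust to your environment.
        String url = "jdbc:mysql://localhost:3306/mydb";
        Path outDir = Paths.get("db-schema");
        Files.createDirectories(outDir);

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement tables = conn.createStatement();
             ResultSet rs = tables.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                String table = rs.getString(1);
                try (Statement ddl = conn.createStatement();
                     ResultSet create = ddl.executeQuery("SHOW CREATE TABLE `" + table + "`");
                     PrintWriter out = new PrintWriter(Files.newBufferedWriter(outDir.resolve(table + ".sql")))) {
                    if (create.next()) {
                        // Column 2 of SHOW CREATE TABLE holds the full CREATE statement.
                        out.println(create.getString(2) + ";");
                    }
                }
            }
        }
        // svn add / svn commit the db-schema directory afterwards (or let a CI job do it).
    }
}
```

Stored procedures and views can be scripted the same way with SHOW CREATE PROCEDURE and SHOW CREATE VIEW.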
I've never found a solution that works as well as Subversion, but here's a few things I've done that have helped:
Make scripts that will create the schema and populate any initial data. Then make an update script for each change after that. It's a fairly manual process, but it works. There are extra things that help, like storing the current version number in a table in the db and making sure that the scripts are idempotent (see the sketch after these suggestions).
Store the full development db in Subversion. This doesn't usually work out too well for me if there is a lot of data or it is frequently changed, but in some projects it could work.
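For the first suggestion, here is a minimal sketch of such an update runner, assuming numbered script files (001.sql, 002.sql, ...) and a schema_version table; the directory layout, table name and connection settings are my own placeholders, and tools like Flyway or Liquibase do the same job in a far more robust way.

```java
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SchemaUpdater {

    public static void main(String[] args) throws Exception {
        // Placeholder settings; adjust to your environment.
        String url = "jdbc:mysql://localhost:3306/mydb";
        Path scriptDir = Paths.get("db/updates");   // holds 001.sql, 002.sql, ... (zero-padded keeps the sort order)

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement()) {
            // The version table records which scripts have already been applied.
            st.execute("CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL)");
            int current = 0;
            try (ResultSet rs = st.executeQuery("SELECT MAX(version) FROM schema_version")) {
                if (rs.next()) {
                    current = rs.getInt(1);   // 0 when no script has been applied yet
                }
            }

            // Apply every numbered script above the current version, in order.
            List<Path> scripts = new ArrayList<>();
            try (DirectoryStream<Path> dir = Files.newDirectoryStream(scriptDir, "*.sql")) {
                dir.forEach(scripts::add);
            }
            Collections.sort(scripts);
            for (Path script : scripts) {
                int version = Integer.parseInt(script.getFileName().toString().replace(".sql", ""));
                if (version <= current) {
                    continue;   // already applied
                }
                // Naive statement splitting on ';' -- fine for plain DDL, not for stored procedures.
                for (String sql : new String(Files.readAllBytes(script)).split(";")) {
                    if (!sql.trim().isEmpty()) {
                        st.execute(sql);
                    }
                }
                st.executeUpdate("INSERT INTO schema_version (version) VALUES (" + version + ")");
                System.out.println("Applied " + script.getFileName());
            }
        }
    }
}
```

Because it skips anything at or below the recorded version, the runner itself stays idempotent even when the individual scripts are not.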
I keep and maintain create scripts in my version control system.
There are two things I can think of:
http://www.liquibase.org/ - provides a way of generally managing database changes. Creates files that get committed into source control, and it helps manage changes across different development databases, etc.
http://www.viget.com/extend/backup-your-database-in-git/ - this describes a strategy for backing up a database into source control, but the same strategy can be used just on the schema. In this scheme, the database would be in a separate area from your main code. (This can be used with other source control systems too.)