Exporting a large amount of data from Oracle to MySQL

I want to export a large amount of data from Oracle, something like 7 million records, and then import it into MySQL. But while exporting the data from Oracle (using Oracle SQL Developer), SQL Developer froze. Any ideas on how to export a large amount of data from Oracle and then import it into MySQL?

To do that you need an ETL tool (http://en.wikipedia.org/wiki/Extract,_transform,_load)
There is a good open-source product called Pentaho Data Integration (Kettle).
Website: http://community.pentaho.com/
Download: http://forums.pentaho.com/showthread.php?133266-Kettle-4-4-0-stable-has-been-posted-to-SourceForge-net
How to use it: http://pldwh.blogspot.co.uk/2013/01/pentaho-data-integration-how-to-use.html
NOTE: You do not have to create any repository (file/db). You can just create:
a new transformation -> add a Table Input and a Table Output step -> define the connections
and you are done.

SQL Developer doesn't handle large data sets very well, but SQL*Plus is very capable in that respect and has been used for this for decades. Spooling out a file of that size is no problem, and it's a free tool of course.
You have to pay attention to various settings to override the defaults for headings, linesize, pagesize, feedback etc.
Run tests on a select of about 1,000 records to establish that you have the correct settings. Off the top of my head ...
set heading off
set feedback off
set echo off
set termout off
set linesize 2000
set define off
set trimspool on
There may be a few I've forgotten -- if you see undesirable formatting issues in the output, post a follow-up if you can't resolve them yourself.
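For reference, a minimal spool script along these lines might look like the sketch below. The table name, columns and the pipe delimiter are invented for illustration, and pagesize 0 is added to suppress page breaks:

set heading off
set feedback off
set echo off
set termout off
set pagesize 0
set linesize 2000
set define off
set trimspool on

spool /tmp/big_table.dat

-- concatenate the columns with an explicit delimiter for the target loader
select col1 || '|' || col2 || '|' || to_char(created_dt, 'YYYY-MM-DD HH24:MI:SS')
from   big_table;

spool off
exit

On the MySQL side, something like LOAD DATA LOCAL INFILE '/tmp/big_table.dat' INTO TABLE big_table FIELDS TERMINATED BY '|'; should then pick the file up, assuming the target table matches the columns you spooled.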

Related

No columns returned SSIS

I am implementing a SSIS package and currently trying to do the following.
Truncate the destination table
Fetch the data by executing the stored procedure and insert it into the destination table.
I have created an Execute SQL task to address step 1 and a data flow with an OLE DB source and OLE DB destination to address the second point. It has been working successfully so far, but it isn't working for one of my stored procedures that uses temp tables.
When I edit the OLE DB source and click the preview button, I get the error "no columns returned".
I know that SSIS has an issue with generating columns when executing stored procedures that depend on temp tables. I have converted the stored proc to use table variables, and it is now able to return columns in SSIS when I do a preview. The only downside is that the stored procedure is taking a longer time to execute. It's taking 1 hour 15 mins, compared to 15 mins when using temp tables.
I did see a suggestion to use SET FMTONLY before executing the stored procedure as an alternative to changing to table variables, but that didn't seem to work, as I am getting a syntax or permission-denied error.
Could somebody tell me a solution to my problem that does not compromise on performance?
Sounds like you've already read all the approaches to using Temp tables in SSIS, including the IF 1=0... trick? If you haven't seen that one yet, google it.
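For the record, that trick is just a dummy SELECT that never executes but gives the metadata parser something to describe. A rough sketch (procedure, table and column names are invented, not from your actual proc):

CREATE PROCEDURE dbo.GetOrders
AS
BEGIN
    IF 1 = 0
    BEGIN
        -- never runs, but exposes the shape of the real result set to SSIS
        SELECT CAST(NULL AS INT) AS OrderID, CAST(NULL AS VARCHAR(50)) AS Customer;
    END;

    CREATE TABLE #Orders (OrderID INT, Customer VARCHAR(50));
    -- ... heavy processing that fills #Orders ...
    SELECT OrderID, Customer FROM #Orders;
END;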
You say that using Table Variables causes your stored procedure to take about 5 times longer than using Temp Tables. The most likely reason for that is that you are indexing your temp tables but not your table variables. If you didn't know that table variables can be indexed, they can. You might try that.
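If you weren't indexing them, a sketch of what an indexed table variable looks like is below. Column names are invented, and the inline INDEX syntax needs SQL Server 2014 or later; on older versions you are limited to the indexes created by PRIMARY KEY/UNIQUE constraints:

DECLARE @Orders TABLE
(
    OrderID  INT         NOT NULL PRIMARY KEY,     -- clustered index via the PK
    Customer VARCHAR(50) NOT NULL,
    INDEX IX_Customer NONCLUSTERED (Customer)      -- inline index, SQL Server 2014+
);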
Finally, a solution that you haven't mentioned is that you can replace your temporary table with a real table that gets truncated when you're done using it.
Short comment:
Try EXEC WITH RESULT SETS and specify the metadata yourself for a proc with temp tables; or use the Script Component as a source and specify the Output columns yourself.
Long comment:
Technically speaking, it is the driver/database you are using in SSIS that decides the behavior when working with temp tables.
Metadata is an important factor when using SSIS's pipeline components. By metadata, I mean the names of the columns, their data types etc that a pipeline component uses. When designing a data flow, someone/something should provide this metadata to the components that require it.
In most cases, SSIS automatically retrieves the metadata. Components that do not connect to an external data source, like Conditional Split etc., get their metadata from the other components they are connected to. For the pipeline components that connect to an external data source (like OLE DB source, OLE DB destination, Lookup etc.), SSIS provides a mechanism to get this metadata without human involvement. This mechanism involves the driver connecting to the database and retrieving the metadata of the output. If the driver/database is capable of returning the metadata, then that metadata is used. If the driver/database is incapable, then you get the errors you are seeing. The rest of my comments are based on the assumption that you are using a SQL Server database in your question.
When working with a SQL Server database in SSIS, we typically use the native client drivers provided by Microsoft. When trying to get the metadata, these drivers try to do so without actually executing the SQL statement (actual execution can have side effects, and might take more than a few seconds/minutes/hours; you don't want side effects and long wait times at package design time). To get the metadata, the driver relies on the metadata of the actual objects used in the SQL command. If the command uses a physical table or view, SQL Server already has the metadata available and can supply it to the driver. If it is a temp table, SQL Server does not have the metadata until it can create the temp table. With the FMTONLY option, you can arrange for the temp tables to be created while heavy processing and side effects are avoided, and thus retrieve metadata without penalties. The native client drivers shipped from 2012 onwards rely on newer functionality to retrieve metadata than the pre-2012 drivers: in 2012 and later, the driver uses the sp_describe_first_result_set procedure. So whether you can get metadata or not is determined by the capabilities of sp_describe_first_result_set.
So while SSIS can automatically get the metadata (because of the driver/database), it does not automatically get the metadata in some cases (again because of the driver/database). In cases involving the second scenario, some other process (typically a human) can help the driver infer metadata or provide the metadata to the component directly.
To help the driver, in the case of SQL Server 2012 and later, you can use the WITH RESULT SETS clause to specify the output metadata. When this clause is present, the driver uses it and doesn't try to query the metadata from system objects, and thus avoids the error you would otherwise get. If you are using the drivers that came with SQL Server 2008, you can use SET FMTONLY. This option is at the driver/database level.
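As an illustration of the 2012-and-later route, the SQL command text of the OLE DB source could look like the sketch below. The procedure and column names are invented; the second statement just shows how to check by hand what the design-time metadata call sees:

-- SQL command for the OLE DB source: spell out the result shape yourself
EXEC dbo.GetOrders
WITH RESULT SETS
((
    OrderID  INT         NOT NULL,
    Customer VARCHAR(50) NOT NULL
));

-- roughly what the driver asks SQL Server at design time (2012+)
EXEC sp_describe_first_result_set @tsql = N'EXEC dbo.GetOrders';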
Another option could be to use a Script Component as the Source and in the Output columns, you can specify the columns/metadata. SSIS would not try to retrieve metadata from the datasource in this case, but would rely on the definitions you provided in the Output section of the Script Component.
As you can see, both options involve a human (or some other process) specifying the metadata instead of SSIS trying to retrieve the metadata in an automated fashion. I would prefer the first option when working with SQL Server and the second option when working with databases like MySQL.

Database Version / Change Control for Data, not Schema?

After reading a few articles here and around, I have realised that database version control in a development team is actually of high importance.
Until now I have been doing a simple dump of the whole database each time there is an update; if only one table was altered, we can sometimes get away with just dumping that single table and then reimporting it. Not the best, but it works quite well for additive changes, and we haven't had any hiccups yet.
Now, I save a .mwb (Mysql Workbench diagram) file in the git repository of the project I'm working on.
Then I also use dbv for schema management, along with git, with each branch being named based on the project and it's working quite well. This allows me to version schematic changes with the ability to revert or rollback.
However, what about the data contained in the tables? How can this be maintained? Maybe I'm better off just sticking with the old method. I understand that on projects with the same DB structure but different data this is fine, but what about sites with specific database data that needs to be versioned and managed?
Also, what about the base of already-deployed sites that need database changes? How can this be seamless? Some have suggested the use of update/alter scripts, and that works fine with default values and such. But what if I have made a change on a website platform that requires every website's database to be changed, while keeping the data intact?
I've worked mostly in business application development and configuration management. Your question is representative of the challenges in such an environment; when you upgrade Microsoft Word, for instance, you don't need to change all documents right away from doc to docx. And those documents even have a simpler structure than a full relational database.
Not so for business applications; users skip releases, make unauthorized changes to the data model and the system needs to keep running and providing the correct numbers...
For our own applications (the largest one has about 600 tables) we use a self-developed CASE tool which includes branching/merging, but the approach can also be followed manually.
Versioning the Data Model
The data model can be written down in a structured way. For instance, as table contents (CSV to be loaded into a table with metadata) or as code that detects the version in use and adds columns and tables when missing, including non-trivial migrations.
This even allows multiple users at the same time to change the data model.
When you use auto-detection (for instance, we use a call named "verify_column" instead of "add_column"), this even allows smooth migration independent of the release number the customer is starting the upgrade from. Such a procedure analyzes the table to be changed and issues the correct DDL such as alter table t1 add col1 number not null when a column is missing or alter table t1 modify col1 not null when the column was already present but nullable.
For Oracle and SQL Server I can provide you with a few sample procedures. In MySQL I would code this using a client-side language, preferably OS-independent, to allow installations to run on Windows and Linux. Maybe using Apache Ant if you have experience with that.
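A rough Oracle PL/SQL sketch of such a verify_column procedure is below. It is simplified (a real one would also handle data type changes, defaults and NOT NULL migrations on populated tables), and the parameter and object names are invented:

CREATE OR REPLACE PROCEDURE verify_column (
    p_table  IN VARCHAR2,
    p_column IN VARCHAR2,
    p_spec   IN VARCHAR2   -- e.g. 'NUMBER NOT NULL'
) AS
    l_count INTEGER;
BEGIN
    -- does the column already exist?
    SELECT COUNT(*) INTO l_count
      FROM user_tab_columns
     WHERE table_name  = UPPER(p_table)
       AND column_name = UPPER(p_column);

    IF l_count = 0 THEN
        EXECUTE IMMEDIATE 'ALTER TABLE ' || p_table ||
                          ' ADD ' || p_column || ' ' || p_spec;
    END IF;
END verify_column;
/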
Versioning Data
We split the tables in four categories:
R: referential data; data the application site must provide before it can actually use the system. For instance, general ledger account codes. Referential data seldom changes after go-live and does not continuously grow in size. The contents reflect the business model of the site where the application is used.
T: transaction data; data the site registers, changes and removes during use of the application. For instance, general ledger entries. Transaction data starts at 0 and grows continuously. When the company doubles in revenue, transaction data also doubles.
S: seeded data; data NOT maintained by the user at the site but provided and maintained by the developing party. Essentially this is code turned into data. For example, 'F' stands for 'Female'. Errors in seeded data can lead to system errors.
O: the rest (ideally not needed, since they are technical, but some systems require a temporary table A or a scratch table B).
The contents of tables in category 'S' (seeded data) are placed under version control. We normally register these as metadata in our CASE tool, named 'data sets' there, but you can also use, for instance, Microsoft Excel or even code.
For example, in Excel you would have a list of rows of seeded data. In column A you might enter an Excel function like =B..&"|"&C..& "|" & ... which concatenates everything and makes it suitable for loading by a loader tool.
For example in code, you might have a call like:
verifySeed('TABLE_A', 'CODE', 'VALUE')
The Excel approach is a little hard to bring under version control while allowing multiple users to change the contents at the same time. The approach with code is very simple.
Please remember to also add features to remove obsoleted seeded data. For instance, by explicitly listing obsoleted seeded data or by automatically removing all seeded data present in the tables but not touched by the last installation.
You would need to keep a journal of transactions on your data model that is synchronised with your code versions. For each update that adds information (i.e. a new field) you can simply enter statements like 'ALTER TABLE x ADD COLUMN y ...' and provide a DEFAULT value (perhaps with a function) in an update script, and an 'ALTER TABLE x DROP COLUMN y ...' in the downgrade script. You would need to export your data before you truncate information in a table. You can convert the dumped table data to SQL for the inverse transaction so that you can add the missing information back from it.
You can use a 'journal' table within your data model to keep track of these transactions, using simple ordinals that denote the applied scripts. Whenever the software is installed, it can compare these numbers to build the list of transactions to play to move the database from state N to state X, backwards or forwards, without losing any data!
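A bare-bones sketch of that journal idea in generic SQL (all names are invented, and types/defaults would need adjusting per DBMS):

CREATE TABLE schema_journal (
    script_no   INT          NOT NULL PRIMARY KEY,   -- ordinal of the applied script
    applied_at  TIMESTAMP    NOT NULL,
    description VARCHAR(200)
);

-- each upgrade script records itself once its ALTER/UPDATE statements succeed
INSERT INTO schema_journal (script_no, applied_at, description)
VALUES (42, CURRENT_TIMESTAMP, 'Add column y to table x');

-- the installer compares the highest applied ordinal with the scripts shipped
-- with the code to decide which forward (or backward) scripts still have to run
SELECT MAX(script_no) FROM schema_journal;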

How to load Excel data into a SQL Server 2008 table?

DBMS - SQL Server 2008 R2 with Management Studio (GUI)
Excel file - columns exactly like the columns in the destination table
How do I load the data from the Excel file into the SQL table? Are there any potential problems with this way of doing things? For example, a column does not allow NULL, but the Excel file has a NULL.
The simplest way to import XLS data directly into SQL Server is to use the Import Wizard.
You can do this either by having the Import Wizard create the table for you when the wizard runs, or you can create the table beforehand and then use the wizard with that table as your target table.
I prefer the latter method as I like to control my table creation with my desired datatypes.
To find the Import Wizard, simply right-click on the Database name, Tasks -> Import Data
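If you'd rather script it than click through the wizard, one alternative (not part of the wizard itself) is OPENROWSET with the ACE OLE DB provider. It needs that provider installed and 'Ad Hoc Distributed Queries' enabled, and the file path, sheet name, table and column names below are invented:

INSERT INTO dbo.MyTable (Col1, Col2)
SELECT Col1, Col2
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\data\input.xlsx;HDR=YES',
                'SELECT Col1, Col2 FROM [Sheet1$]');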
You can also use an Excel Data Source in SQL Server Integration Services.
Some caveats though:
If you are running on a 64-bit OS, make sure your package properties are set to disable the 64-bit runtime, or the import from Excel 2003-2007 will fail. You will get a connection error.
Take the time to use the advanced editor to shrink your columns to the size you need, or every field will come out as nvarchar(255) by default, which is a waste of space. If you are mapping to an existing table, you may need to convert the data types as well.
The advantage of this approach is that you can re-use it more easily, though of course you can save the package the wizard creates as well.

Perl: How to copy/mirror remote MySQL table(s) to another database? Possibly different structure too?

I am very new to this and a good friend is in a bind. I am at my wits' end. I have used GUIs like Navicat and SQLyog to do this, but only manually.
His band info data (schedules and whatnot) is in a MySQL database on a server (the admin server).
I am putting together a basic site for him written in Perl that grabs data from a database that resides on my server (public server) and displays schedule info, previous gig newsletters and some fan interaction.
He uses an administrative interface, which he likes and desires to keep, to manage the data on the admin server.
The admin server db has a bunch of tables and even table data the public db does not need.
So, I created tables on the public side that only contain relevant data.
I basically used a GUI to export the data, then inserted it into the public side whenever he made updates to the admin DB (copy and paste).
(FYI I am using DBI module to access the data in/via my public db perl script.)
I could access the admin server directly to grab only the data I need but, the whole purpose of this is to "mirror" the data not access the admin server on every query. Also, some tables are THOUSANDS of rows and parsing every row in a loop seemed too "bulky" to me. There is however a "time" column which could be utilized to compare to.
I cannot "sync" due to the fact that the structures are different, I only need the relevant table data from only three tables.
SO...... I desire to automate!
I read "copy" was a fast way but, my findings in how to implement were too advanced for my level.
I do not have the luxury of placing a script on the admin server to notify when there was an update.
1- I would like to set up a script to check a table to see if a row was updated or added in the admin server's DB.
I would then update or insert the new or changed data into the public server's DB.
This "check" could be set up in a cron job, I guess, or triggered when a specific page loads on the public side (the same subroutine called by the cron, I would assume).
This data does not need to be "real time", but if he updates something it would be nice to have it appear as quickly as possible.
I have done much reading, module research and experimenting but, here I am again at stackoverflow where I always get great advice and examples.
Much of the terminology is still quite over my head so verbose examples with explanations really help me learn quicker.
Thanks in advance.
The two terms you are looking for are either "replication" or "ETL".
First, the replication approach.
Let's assume your admin server has tables T1, T2, T3 and your public server has tables TP1, TP2.
So, what you want to do (since you have different table structures, as you said) is:
Take the tables from the public server, and create exact copies of those tables on the admin server (TP1 and TP2).
Create triggers on the admin server's original tables to populate the data from T1/T2/T3 into the admin server's copies of TP1/TP2 (a sketch follows after this list).
You will also need to do an initial data population from T1/T2/T3 into the admin server's copies of TP1/TP2.
Set up the "replication" from the admin server's TP1/TP2 to the public server's TP1/TP2.
A different approach is to write a program (such programs are called ETL, Extract-Transform-Load) which will extract the data from T1/T2/T3 on the admin server (the "E" part of ETL), massage the data into a format suitable for loading into the TP1/TP2 tables (the "T" part), and transfer (via ftp/scp/whatnot) those files to the public server, where the second half of the program (the "L" part) will load the files into the TP1/TP2 tables on the public server. Both halves of the program would be launched by cron or your scheduler of choice.
There's an article with a very good example of how to start building Perl/MySQL ETL: http://oreilly.com/pub/a/databases/2007/04/12/building-a-data-warehouse-with-mysql-and-perl.html?page=2
If you prefer not to build your own, here's a list of open source ETL systems, never used any of them so no opinions on their usability/quality: http://www.manageability.org/blog/stuff/open-source-etl
I think you've misunderstood ETL as a problem domain, which is complicated, versus ETL as a one-off solution, which is often not much harder than writing a report. Unless I've totally misunderstood your problem, you don't need a general ETL solution, you need a one-off solution that works on a handful of tables and a few thousand rows. ETL and Schema mapping sound scarier than they are for a single job. (The generalization, scaling, change-management, and OLTP-to-OLAP support of ETL are where it gets especially difficult.) If you can use Perl to write a report out of a SQL database, you probably know enough to handle the ETL involved here.
1- I would like to set up a script to check a table to see if a row was updated or added in the admin server's DB. I would then update or insert the new or changed data into the public server's DB.
If every table you need to pull from has an update timestamp column, then your cron job includes some SELECT statements with WHERE clauses based on the last time the cron job ran to get only the updates. Tables without an update timestamp will probably need a full dump.
I'd use a one-to-one table mapping unless normalization is required... just simpler, in my opinion. Why complicate it with "big" schema changes if you don't have to?
some tables are THOUSANDS of rows and parsing every row in a loop seemed too "bulky" to me.
Limit your queries to only the columns you need, and (if there are no BLOBs or exceptionally big columns in what you need) a few thousand rows should not be a problem via DBI with a fetchall method. Loop all you want locally; just make as few trips to the remote database as possible.
If a row has a newer date, update it. I will also have to check for new rows to insert.
Each table needs one SELECT ... WHERE updated_timestamp_columnname > last_cron_run_timestamp. That result set will contain all rows with newer timestamps, which contains newly inserted rows (if the timestamp column behaves like I'd expect). For updating your local database, check out MySQL's ON DUPLICATE KEY UPDATE syntax... this will let you do it in one step.
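To make that concrete, here is a sketch of both halves in MySQL terms. The table and column names (gigs, updated_at and so on) are invented, as is the literal timestamp standing in for the last cron run:

-- on the admin server: only rows changed since the last cron run
SELECT gig_id, venue, gig_date, updated_at
FROM   gigs
WHERE  updated_at > '2013-01-01 00:00:00';   -- last_cron_run_timestamp

-- on the public server: insert or update in one statement per fetched row
INSERT INTO gigs (gig_id, venue, gig_date, updated_at)
VALUES (123, 'The Corner Pub', '2013-02-01', '2013-01-15 10:00:00')
ON DUPLICATE KEY UPDATE
    venue      = VALUES(venue),
    gig_date   = VALUES(gig_date),
    updated_at = VALUES(updated_at);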
... how to implement were too advanced for my level ...
Yes, I have actually done this already but, I have to manually update...
Some questions to help us understand your level... Are you hitting the database from the mysql client command-line or from a GUI? Have you gotten to the point where you've wrapped your SQL queries in Perl and DBI, yet?
If the two databases have different schemas, you'll need an ETL solution to map from one schema to the other.
If the schemas are the same, all you have to do is replicate the data from one to the other.
Why not just create a structure on the 'slave' server identical to the master's? Then create a small table that keeps track of the last timestamp or ID for the updated tables.
Then select from the master all rows changed since that last timestamp, or with an ID greater than the stored one, and insert them into the matching table on the slave server.
You will need to be careful with updated rows. If a row on the master is updated but the timestamp doesn't change, then how will you tell which rows to fetch? If that's not an issue, the process is quite simple.
If it is an issue, then you need to be more sophisticated, but without knowing the data structure and the update mechanism it's a goose chase to give pointers on it.
The script could be called by cron every so often to update the changes.
If the database structures must be different on the two servers, then a simple translation step may need to be added, but most of the time that can be done within the SQL SELECT statement, with maybe a join or two.
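A sketch of that small tracking table and how the cron script might use it (MySQL; all names are invented):

CREATE TABLE sync_state (
    table_name  VARCHAR(64) NOT NULL PRIMARY KEY,
    last_synced DATETIME    NOT NULL
);

-- fetch only the rows on the master that changed since the stored watermark
SELECT g.*
FROM   gigs AS g
JOIN   sync_state AS s ON s.table_name = 'gigs'
WHERE  g.updated_at > s.last_synced;

-- after a successful copy, advance the watermark
UPDATE sync_state
SET    last_synced = NOW()
WHERE  table_name = 'gigs';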

Transferring data from MySQL to SQL Server

I need to convert data that already exists in a MySQL database, to a SQL Server database.
The caveat here is that the old database was poorly designed, but the new one is in a proper 3N form. Does any one have any tips on how to go about doing this? I have SSMS 2005.
Can I use this to connect to the MySQL DB and create a DTS? Or do I need to use SSIS?
Do I need to script out the MySQL DB and alter every statement to "insert" into the SQL Server DB?
Has anyone gone through this before? Please HELP!!!
See this link. The idea is to add your MySQL database as a linked server in SQL Server via the MySQL ODBC driver. Then you can perform any operations you like on the MySQL database via SSMS, including copying data into SQL Server.
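Once the linked server exists (called MYSQL_LINK below purely for illustration), copying a table over can be as simple as this; the table and column names are invented:

SELECT id, name, email
INTO   dbo.STG_customers          -- staging table created on the fly in SQL Server
FROM   OPENQUERY(MYSQL_LINK, 'SELECT id, name, email FROM old_db.customers');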
Congrats on moving up in the RDBMS world!
SSIS is designed to do this kind of thing. The first step is to map out manually where each piece of data will go in the new structure. So your old table had four fields; in your new structure, fields 1 and 2 go to table A and fields 3 and 4 go to table B, but you also need the auto-generated ID from table A. Make notes on where data types have changed and where you may need to make adjustments, or where you now have required fields where the data was not required before, etc.
What I usually do is create staging tables. Put the data in denormalized form into one staging table, then move it to normalized staging tables and do the cleanup there, adding the new IDs to the staging tables as soon as you have them. One thing you will need to do when moving from a denormalized database to a normalized one is eliminate the duplicates from the parent tables before inserting them into the actual production tables. You may also need to do data cleanup, as there may be required fields in the new structure that were not required in the old one, or data conversion issues because of moving to better data types (for instance, if you stored dates in the old database in varchar fields but properly move to datetime in the new DB, you may have some records which don't have valid dates).
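As a T-SQL sketch of that staging flow (all table and column names, and the date cleanup, are invented examples rather than your actual schema):

-- 1. Land the MySQL data as-is in a denormalized staging table.
CREATE TABLE dbo.STG_orders_raw (
    customer_name VARCHAR(100),
    order_date    VARCHAR(50),    -- still a string, as in the old database
    product_code  VARCHAR(50)
);

-- 2. De-duplicate parent rows before loading the normalized parent table.
INSERT INTO dbo.Customers (CustomerName)
SELECT DISTINCT customer_name
FROM   dbo.STG_orders_raw
WHERE  customer_name IS NOT NULL;

-- 3. Load children, converting types and picking up the new surrogate keys.
INSERT INTO dbo.Orders (CustomerID, OrderDate, ProductCode)
SELECT c.CustomerID,
       CASE WHEN ISDATE(s.order_date) = 1
            THEN CONVERT(DATETIME, s.order_date) END,   -- invalid dates become NULL
       s.product_code
FROM   dbo.STG_orders_raw s
JOIN   dbo.Customers c ON c.CustomerName = s.customer_name;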
Another issue you need to think about is how you will convert from the old record IDs to the new ones.
This is not an easy task, but it is doable if you take your time and work methodically. Now is not the time to try shortcuts.
What you need is an ETL (extract, transform, load) tool.
http://en.wikipedia.org/wiki/Extract,_transform,_load#Tools
I don't really know how far an ETL tool will get you, depending on the original and new database designs. In my career I've had to do more than a few data migrations, and we usually had to design a special utility which would update a fresh database with records from the old database, and yes, we coded it complete with all the update/insert statements that would transform the data.
I don't know how many tables your database has, but if there are not too many then you could consider going the grunt route. That's one technique that's guaranteed to work, after all.
If you go to your database in SSMS and right-click, under tasks should be an option for "Import Data". You can try to use that. It's basically just a wizard that creates an SSIS package for you, which it can then either run for you automatically or which you can save and then alter as needed.
The big issue is how you need to transform the data. This goes into a lot of specifics which you don't include (and which are probably too numerous for you to include here anyway).
I'm certain that SSIS can handle whatever transformations you need to do to change the data from the old format to the new. An alternative, though, would be to import the tables into MS SQL as-is into staging tables, then use SQL code to transform the data into the 3NF tables. It's all a matter of what you're most comfortable with. If you go the second route, then the import process that I mentioned above in SSMS can be used. It will even create the destination tables for you. Just be sure that you give them unique names, maybe prefixing them with "STG_" or something.
Davud mentioned linked servers. That's definitely another way that you can go (and got my upvote). Personally, I prefer to copy the tables over into MS SQL first since linked servers can sometimes have weirdness, especially when it comes to data types not mapping between different providers. Having the tables all in MS SQL will also probably be a bit faster and saves time if you have to rerun or correct portions of the data. As I said though, the linked server method would probably be fine too.
I have done this going the other direction and SSIS works fine, although I might have needed to use a script task to deal with slight data type weirdness. SSIS does ETL.