Our team has created an SSIS package that imports data from an Oracle source into a SQL Server database. The package uses the Oracle Provider for OLE DB to connect to the Oracle source system (SOR).
The major data type difference between the source and destination databases is that the source stores its string columns as Unicode, while the destination DB uses a non-Unicode format.
We added Data Conversion components and ran the package. While it works on the development server (which has Oracle 11g components installed), it does not work on the test server (which has Oracle 8 installed).
We also tried adding CAST statements to the source query; however, the external and output columns do not seem to pick up the converted format.
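To illustrate, the casts in the source query looked roughly like the following sketch (table and column names here are placeholders, not our real schema):

-- Rough sketch of the Oracle source query with explicit casts; names are placeholders.
-- CAST(... AS VARCHAR2(n)) makes Oracle return the columns as non-Unicode VARCHAR2,
-- yet the SSIS external and output columns still keep the types reported by the provider.
SELECT
    CAST(customer_name AS VARCHAR2(100)) AS customer_name,
    CAST(city AS VARCHAR2(50)) AS city
FROM customers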
So far we have tried Derived Columns and Data Conversion transformations.
We badly need ideas.
I got the package to work by setting the ValidateExternalMetadata property on the source component. Also, before starting development with SSIS and Oracle, make sure you have the Oracle Provider for .NET (ODTwithODAC112030) package installed.
There is a bug in one of the older versions of the Oracle components: to integrate with Visual Studio correctly (and still run in a 64-bit environment after deployment) you need to use the 32-bit ODAC112040 package. Note that the older .30 version still had the bug.
Maybe somebody can provide assistance with the following question:
I have an SSIS package with target SQL Server version 2014. I am not 100% sure which version my target server is running, but it is at least 2014; I assume it is 2016.
I developed the package with SQL Server Data Tools 2015. It consists of:
Flat File source + Connection Manager
OLEDB Destination
Conversion Step
The source file is encoded in UTF-8; the target database uses ANSI 1252. The file is located on a network drive (essentially the same Windows server that the SQL Server with the target database runs on).
The file contains a decimal field with precision/scale 18,2. The actual data in this field is always 0.00.
I have specified the input field in the connection manager as decimal.
Now my question: when I execute the package directly in Visual Studio (Data Tools), it works flawlessly and all rows are imported into the target table.
When I call the dtsx package from a SQL Server Agent job (it is the only step), it fails with a conversion error on the decimal field:
Source: Data Flow Task Flat File Source [88] Description: Data conversion failed. The data conversion for column "xxx" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
The error also arises when I switch between target server versions 2014 / 2016.
A list of things I tried without success:
removed the column headers from the input file
tested it with only one data row... no success
created a new solution with a new dtsx package and a slightly different input path
deleted the SQL Server agent job and created a new one
changed the encoding of the input file to UTF-8
I didn't find a solution for this exact error anywhere on the net.
Can anyone provide help?
Many thanks in advance!
Got it. The reason lies in the different decimal formats expected when calling the package from the SQL Server Agent job versus executing it directly in Visual Studio / SQL Server Data Tools.
The LocaleID of the SSIS / dtsx package was set by default to "German" (the default language of my OS), whereas the language in effect on the target SQL Server is "English (US)". The input file contains the decimal values in the US format (dot as decimal separator).
As soon as I changed the LocaleID of the package to "English (United States)", it worked. (It would also be possible to change the decimal separator from a dot to a comma in the input file and leave the SSIS LocaleID on German.)
Apparently the execution via Visual Studio ignores the language setting on the server (otherwise it would crash in that mode too).
This seems to be a rather similar question (it led me to the correct solution):
https://dba.stackexchange.com/questions/88895/difference-in-number-format-comma-and-dot-on-ssms-and-ssis
I'm getting this error when running an SSIS project that moves all of our files from FoxPro into SQL Server databases. The corresponding DBF file opens fine in FoxPro, and the memo field (the FPT file), to my knowledge, works fine too, so I don't know what the solution is. I tried making small changes to the memos to see if that would update the file, but that didn't work. I tried re-indexing the DBF, but no luck there either. How can I generate a new FPT file so I can run this project? How could it be invalid? There is no support online for this.
You are using a third-party .NET tool (RatSql.DbfReader). Judging from the exception message, this tool was written in a .NET language. It is not a Microsoft product and not part of Visual FoxPro.
DBF files are a family of formats that are similar but have subtle differences. Many tools that claim to support DBF files only support DBF files created by certain products, such as VFP 6, FP 2.6, dBase, or Clipper.
The options I see for you:
Get in contact with the vendor and make sure that Visual FoxPro tables are supported by their product. They might have an update or can suggest alternative solutions.
If this is an in-house developed product, you need to get in touch with the original developers.
Since you have Visual FoxPro, you can convert the file to an older version using the following commands:
USE table
COPY TO newTable TYPE FOX2X
Replace table with the path and name of the DBF file, and newTable with the path and name of a new file. Then use the new file for your import. There's no guarantee this works, but the old FoxPro 2.x format is more likely to be supported by third-party libraries.
If this is a repeated process, you can compile these commands into an EXE file and incorporate calling the EXE into your import process.
Replace the .NET component with the Visual FoxPro ODBC driver or OLE DB provider. The ODBC driver does not support features introduced after VFP 6 but should work in most cases. Both are only available as 32-bit drivers, which might require you to use 32-bit tools to connect to the DBF file.
It might actually be a problem with one or more records where the pointer to the memo field is wrong. You can find this with Visual FoxPro: open the DBF file in Visual FoxPro and then open a Browse window. You can do this interactively from the menu or by executing the USE and BROWSE commands.
Then double-click the column saying "memo" or "Memo". If there are multiple memo columns, you need to repeat the following step for every column. Now click on the Browse window's title bar and hold down the down-arrow key. Visual FoxPro should begin scrolling through the table, displaying every record. If there is an error in a memo field, you will get an error message.
Alternatively, if there are too many rows in the table to do this interactively, you can write a FoxPro program that uses a SCAN...ENDSCAN loop and accesses every memo field with =memofieldname.
I have a project moving data from a MySQL database to a DB in SQL Server 2012. Right now there's nothing fancy, just a straight push of data. I'm accessing the MySQL instance via ODBC and the SQL DB is an OLE DB connection. When the packages are generated there is a metadata mismatch between the ODBC source and the OLE DB destination (that's all there is to the data flow so far). The message states "Column "" cannot convert between unicode and non-unicode string types."
Examining the metadata in the path editor between the source and destination shows that the problem source columns are being read as DT_STR with a length of 255 and code page 1252. In MySQL, however, they are collated as utf8_general_ci, which is Unicode.
The corresponding columns in the target SQL database tables are varchar of the same length.
If I open the OLE DB destination and click OK, the metadata refreshes and it works fine after saving the package. That defeats the whole purpose of using BIML to create the packages, and it takes forever to open 50+ packages just to refresh the metadata and save.
I have tried several things with no changes in behavior:
Changing between the Unicode and ANSI MySQL ODBC drivers.
Making the destination columns nvarchar, but I had to put in Data Conversion transforms to make that work.
Surrounding the schema/table names in [] in the ExternalTableOutput element of the OLEDBDestination.
Changing SQL Native Client versions.
Putting a 'COLLATE latin1_bin' clause at the end of the source queries pulling from MySQL (a sketch of this kind of source-query change follows this list).
Originally working in VS2015 with BimlExpress, then trying it in SSDT 2012 with BimlExpress. (BTW, I had to install SSDT for 2014 in order to get BIML to compile in SSDT 2012 because of a missing Microsoft.DataWarehouse.Interfaces DLL.)
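To illustrate the kind of source-query adjustment I mean, here is a sketch (table and column names are placeholders; CONVERT(... USING latin1) is simply the character-set variant of the COLLATE idea, and MySQL's latin1 corresponds to code page 1252):

-- Sketch of a MySQL source query that converts the text columns to latin1 (cp1252)
-- before they reach the ODBC source component. Table and column names are placeholders.
SELECT
    CONVERT(customer_name USING latin1) AS customer_name,
    CONVERT(city USING latin1) AS city
FROM customers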
Any ideas would be welcome; I'm pretty much at the end of my imagination on this one.
Thanks!
I struggled with a similar problem. In the end I had to open the generated package in Notepad and replace the word bytes with wstr throughout the file. I saved and reopened it, and it all works.
I have also encountered a problem creating a data source in VS2012 with the MySQL connector (many thanks for the connector updates from Oracle).
Then I installed MySQL Connector version 6.6.5. It seems to work: nothing is shown in the "Data Source" sub-window, but the model was created after I created the data source.
When I configure the insert process for the data using LINQ, VS2012 tells me there is no such method "AddTo".
Therefore, based on what happened during my programming:
Is it common that the data source is not shown in that sub-window after I have already added the data entities?
Why is there no such method "AddTo" (actually no methods at all in the definition) when I use LINQ with VS2012?
Many thanks in advance,
Harry
I'm trying to export data from a table in MS SQL Server 2008 R2 to an RDB database.
But I'm having problems exporting Hebrew strings to RDB, because my SQL Server is Unicode and my RDB is non-Unicode.
Here are the details:
I'm using Oracle RDB Data Provider for .NET
I have a non-Unicode text field in the target table, and even if I convert the data to a code page 28598 string, the data still arrives in RDB in Unicode format (and it becomes unreadable).
Here are the results:
I've added a grid data viewer to check whether the data is going out in the right format, and it looks like it is.
I went even further and manually configured the ADO.NET External Columns property for this specific column to match the data type (automatically it is recognized as Unicode, which it is not).
For this I had to set the external metadata validation to False; otherwise SSIS would not even start to run.
If anyone has experience writing non-Unicode data to RDB from SSIS, please advise me. It can be Hebrew or any other language that uses non-Unicode characters.
I found a solution to the problem. I replaced the connection provider, from .NET Providers\Oracle RDB Data Provider to ODBC.
For this I had to download the ODBC driver for RDB databases from the Oracle web site (32-bit version) and configure a System DSN in the Windows ODBC Data Source Administrator.
In my opinion there's a bug in the Oracle RDB Data Provider for .NET that only allows Unicode data.
If Oracle updates this driver, I would prefer to use the .NET provider instead of ODBC.