Mapping between Time and TimeSpan - linq-to-sql

In my project I am mapping the DbType Time to System.TimeSpan. It should be fine, but I am still getting error DBML1005. I have noticed that this is a frequently asked topic on the internet, but all the other topics were about mapping custom data types.
Error 1 DBML1005: Mapping between DbType 'Time NOT NULL' and Type 'System.TimeSpan' in Column 'RelayOnTime' of Type 'Terminal' is not supported.
<Column Name="RelayOnTime" Type="System.TimeSpan" DbType="Time NOT NULL"
CanBeNull="false" />

The problem is in Visual Studio. I had the Visual Studio 2008 RTM version, which still has some bugs. The solution is to install Service Pack 1.
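For reference, once SP1 is installed the DBML above maps cleanly to a TimeSpan property. A minimal sketch of the equivalent attribute-based mapping (class and property names taken from the DBML snippet; the rest of the entity is omitted) would look roughly like this:

using System;
using System.Data.Linq.Mapping;

[Table(Name = "Terminal")]
public class Terminal
{
    // SQL Server's time type maps to System.TimeSpan;
    // this mapping requires Visual Studio 2008 SP1 / .NET 3.5 SP1.
    [Column(Name = "RelayOnTime", DbType = "Time NOT NULL", CanBeNull = false)]
    public TimeSpan RelayOnTime { get; set; }
}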

Related

BIML: Issues with datatype handling on ODBC Source columns with varchar > 255

I'm just getting into BIML and have written some scripts to create a few DTSX packages. In general, most things are working, but one thing is driving me crazy.
I have an ODBC Source (PostgreSQL). From there I'm getting data out of a table using an ODBC Source. The table has a text column (the name of the column is "description"). I cast this column to varchar(4000) in the query in the ODBC Source (I know there will be truncation, but that's OK). If I do this manually in Visual Studio, the Advanced Editor of the ODBC Source shows "Unicode string [DT_WSTR]" with a length of 4000 for both the External and the Output column. So there everything is fine. But if I do the same thing with BIML and generate the SSIS package, the External column still says "Unicode string [DT_WSTR]" with a length of 4000, but the Output column says "Unicode text stream [DT_NTEXT]". So the mapping done by BIML differs from the mapping done manually in SSIS. This causes two warnings:
A warning that the metadata has changed and should be synced
A warning that the source uses LOB columns and is set to row-by-row fetching
Neither warning is nice, but the second one also causes a drastic degradation in performance! If I set the cast to varchar(255), the mapping is fine (the External and Output columns are then "Unicode string [DT_WSTR]" with a length of 255). But as soon as I go higher, like varchar(256), it's again treated as [DT_NTEXT] in the Output.
Is there anything I can do about this? I have invested days in evaluating BIML and find that many things improve quality of life, but this issue is killing it. It defeats the purpose of BIML if I have to correct its errors manually after every build.
Does anyone know how I can solve this issue? A correct automatic mapping between External and Output columns would be great, but at least the option to define the mapping myself would be OK.
Any help is appreciated!
Greetings
Marco
Edit: As requested, a minimal example for better understanding:
The column in the ODBC Source (Postgres) has the type "text" (column name: description).
I select it in an ODBC Source with this query (DirectInput):
SELECT description::varchar(4000) from mySourceTable
The ODBC Source in BIML looks like this:
<OdbcSource Name="mySource" Connection="mySourceConnection">
    <DirectInput>SELECT description::varchar(4000) from mySourceTable</DirectInput>
</OdbcSource>
If I now generate the dtsx package, the ODBC Source throws the warnings mentioned above, with the data types mentioned above for the External and Output columns.
As mentioned in the comment before, I got an answer from another direction:
You have to use DataflowOverrides in the ODBC Source in BIML. For my example, you have to do something like this:
<OdbcSource Name="mySource" Connection="mySourceConnection">
    <DirectInput>SELECT description::varchar(4000) from mySourceTable</DirectInput>
    <DataflowOverrides>
        <OutputPath OutputPathName="Output">
            <Columns>
                <Column ColumnName="description" SsisDataTypeOverride="DT_WSTR" DataType="String" Length="4000" />
            </Columns>
        </OutputPath>
        <OutputPath OutputPathName="Error">
            <Columns>
                <Column ColumnName="description" SsisDataTypeOverride="DT_WSTR" DataType="String" Length="4000" />
            </Columns>
        </OutputPath>
    </DataflowOverrides>
</OdbcSource>
You won't have to do the overrides for all columns, only for the ones you have mapping issues with.
Hope this solution can help anyone who passes by.
Cheers

How to fix conversion errors after importing an SSIS project

I'm importing a perfectly working SSIS project from TFS.
I actually have a problem with all the packages that contain a data flow that imports dates.
I get dozens of this error:
Validation error. DFT Get Date ODBC Source CodeDate2 [63]: The OLE DB provider used by the OLE DB adapter cannot convert between types "DT_BYTES" and "DT_DBDATE" for "Date".
When I click on the ODBC source editor, I get the following message:
The metadata of the following output columns does not match the metadata of the external columns with which the output columns are associated:
Output "ODBC Source Output": "Date"
Do you want to replace the metadata of the output columns with the metadata of the external columns?
The fact is that it works everywhere but on my computer.
Is there an OLE DB provider component I'm lacking, or something like that?
Downgrading will work, but if that's not possible for you, then rewriting your queries may also solve your problem.
In my case I had a Postgres query returning columns of type date. I just converted them all to timestamptz using ::timestamptz. At that point the columns changed from DT_BYTES to DT_DBTIMESTAMP, which was just fine for my purposes.
It might be related to the version of Visual Studio or SSDT.
Try installing SSDT 15.8.0 (SSDT previous releases) and run the package in it.
I once saw similar posts on MSDN after the release of Visual Studio 15.9.2:
Import from Teradata using ODBC gives VS_NEEDSNEWMETADATA error
ODBC Progress datatype problems after updating to VS 2017 15.9
Same here. I forced the type by casting it in the select and it works:
SELECT
[...]
cast(release_date as datetime) as release_date,
[...]
FROM cm_wo

Entity Framework converts StartsWith to MySQL's Locate, MySQL's Locate doesn't use index

I'm using Entity Framework with MySQL, and my Linq Query:
db.Persons.Where(x => x.Surname.StartsWith("Zyw")).ToList();
..is producing the SQL:
SELECT PersonId, Forename, Surname
FROM Person
WHERE (LOCATE('Zyw', Surname)) = 1
...and it would seem that this doesn't make use of the index on Surname.
If LOCATE is replaced with the equivalent LIKE, the query speedily returns the required results. As it is it takes all afternoon.
Why are Entity Framework and its connector opting for this weird LOCATE function? How can I make it use LIKE instead? Why is MySQL making a poor index decision for the LOCATE function, and how can I make it better?
Update:
I'm afraid I was guilty of oversimplifying my code for this post; the LINQ producing the error is in fact:
var target = "Zyw";
db.Persons.Where(x => x.Surname.StartsWith(target)).ToList();
If the target term is hard-coded, the generated SQL does indeed use LIKE, but with a variable term the SQL changes to use LOCATE.
This is all using the latest generally available MySQL for Windows as delivered by MySQL Installer 5.6.15.
Update:
A couple more notes to go with the bounty; I am using:
Visual Studio 2010
EntityFramework 6.0.2
MySQL Installer 5.6.15,
which in turn gives:
MySql.Data 6.7.4
MySql.Data.Entities 6.7.4
The Entity Framework code is generated database first style.
I've also tried it with the latest connector from Nuget (MySql.Data 6.8.3) and the problem is still there.
It's likely your problem is caused by one of the following:
You are using an older connector with the bug.
You have a special case (using a variable to hold the .Contains search) described as a bug here.
Does your case fall into any of those?
This looks like a regression of MySQL bug #64935 to me.
I can confirm that, using the same builds of EF6 and MySQL Connector, I'm getting the same SQL generated too:
context.stoppoints.Where(sp => sp.derivedName.StartsWith(stopName));
...logs as:
SELECT
`Extent1`.`primaryCode`,
...
`Extent1`.`stop_timezone`
FROM `stoppoints` AS `Extent1`
WHERE (LOCATE(@p__linq__0, `Extent1`.`derivedName`)) = 1
Entity Framework: 6.0.2
MySQL Connector.Net: 6.8.3
I have reported this as a MySQL bug regression.
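In the meantime, a possible workaround (only a sketch: it assumes an EF6 DbContext named db and a Person entity mapped to the Person table from the question) is to drop down to a raw parameterized query so the generated SQL uses LIKE and the index on Surname can be used:

using System.Linq;
using MySql.Data.MySqlClient;

var target = "Zyw";
var people = db.Database.SqlQuery<Person>(
        // A prefix pattern with a trailing wildcard keeps the comparison index-friendly.
        "SELECT PersonId, Forename, Surname FROM Person WHERE Surname LIKE @pattern",
        new MySqlParameter("@pattern", target + "%"))
    .ToList();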

How to specify null value in MS Access through the JDBC-ODBC bridge?

I am not able to call setNull on a PreparedStatement using MS Access (sun.jdbc.odbc.JdbcOdbcDriver):
preparedStatement.setNull(index, sqltype).
Is there a workaround for this? For the LONGBINARY data type, I tried the following calls; neither worked:
setNull(index, java.sql.Types.VARBINARY)
setNull(index, java.sql.Types.BINARY)
java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver]Invalid SQL data type
at sun.jdbc.odbc.JdbcOdbc.createSQLException(JdbcOdbc.java:6957)
at sun.jdbc.odbc.JdbcOdbc.standardError(JdbcOdbc.java:7114)
at sun.jdbc.odbc.JdbcOdbc.SQLBindInParameterNull(JdbcOdbc.java:986)
at sun.jdbc.odbc.JdbcOdbcPreparedStatement.setNull(JdbcOdbcPreparedStatement.java:363)
The answer that I have observed to work "quite well" for binding null to most data types with JDBC 4.1, Java 7, MS Access 2013 and the JDBC-ODBC bridge is this one, which I've built into jOOQ:
// The Access ODBC driver rejects setNull() with binary SQL types,
// so bind nulls for binary columns as VARCHAR instead.
switch (sqlType) {
    case Types.BINARY:
    case Types.VARBINARY:
    case Types.LONGVARBINARY:
    case Types.BLOB:
        stmt.setNull(nextIndex(), Types.VARCHAR);
        break;
    default:
        stmt.setString(nextIndex(), null);
        break;
}
I just tested this and for an OLE Object (LONGBINARY) field in an Access 2010 database I found that all five of these variations allowed me to specify a null value as the parameter to a PreparedStatement using vanilla JDBC/ODBC Driver={Microsoft Access Driver (*.mdb, *.accdb)}:
s.setNull(4, java.sql.Types.LONGNVARCHAR);
s.setNull(4, java.sql.Types.LONGVARCHAR);
s.setNull(4, java.sql.Types.NCHAR);
s.setNull(4, java.sql.Types.NVARCHAR);
s.setNull(4, java.sql.Types.VARCHAR);
It is particularly interesting that
s.setNull(4, java.sql.Types.LONGVARBINARY);
does not work, considering that when we retrieve an OLE Object from an Access database what we get is a java.sql.Types.LONGVARBINARY according to a ResultSetMetaData object:
String SQL;
SQL = "SELECT Photo FROM City WHERE City_ID = 12";
s = conn.createStatement();
s.executeQuery(SQL);
ResultSet rs = s.getResultSet();
ResultSetMetaData rsmd = rs.getMetaData();
String accessTypeName = rsmd.getColumnTypeName(1);
int javaType = rsmd.getColumnType(1);
String javaTypeName = (
        javaType == java.sql.Types.LONGVARBINARY
            ? "java.sql.Types.LONGVARBINARY"
            : "some other Type"
);
System.out.println(String.format("The database-specific type name for this column is '%s'", accessTypeName));
System.out.println(String.format("The SQL type for this column is: %d (%s)", javaType, javaTypeName));
That returns:
The database-specific type name for this column is 'LONGBINARY'
The SQL type for this column is: -4 (java.sql.Types.LONGVARBINARY)
The Wikipedia article on ODBC includes a history suggesting that after an earlier effort ("SQL/CLI") became part of the ISO SQL standard, Microsoft essentially forked their own version and eventually came up with ODBC. If that is the case, then early efforts to conform to an "ODBC 'standard'" may have faced the same difficulties as those trying to conform to Microsoft's RTF document "standard": the "standard" was whatever Microsoft implemented and was subject to change at Microsoft's sole discretion.
However, Microsoft's 1995 ODBC White Paper, available via the download link here, consistently refers to the "OLE Object" datatype as mapping to "*BINARY" or "raw" types (or, in the case of SQL Server, to the now-deprecated IMAGE datatype). So, the CHAR/BINARY discrepancy doesn't appear to be a case of some early ODBC quirk that just got perpetuated.
Certainly this mystery is not new. A forum thread here from ~11 years ago suggests that this issue arose when something changed after JDK 1.4 was released.
And finally, Oracle has stated that the JDBC-ODBC Bridge "will be removed in JDK 8" (ref: here). So, if there hasn't been an "official" explanation (or a fix, for that matter), it is becoming increasingly unlikely that any will be forthcoming.
I saw a similar error once when I was sending a SQL query with two conditions in the WHERE clause. One of the conditions needed to be quoted. It was a number in varchar format. The MSSQL server required that the condition be quoted, or else I saw the error you got in your question.

SQL Server 2008 table-valued parameters and LINQ to SQL

Has anybody experimented with these? Is this supported?
Table-valued parameters aren't supported yet in LINQ to SQL. There are a few posts about this on Microsoft's MSDN forums:
DbType.Structured not available
SQL Server 2k8.User Defined Table Type. Mapping between DbType 'Structured' and Type 'System.Object'
The second link refers to Entity Framework, but the underlying issue is the same.
I actually had to try this myself. :0
It does not seem to be supported. I get this error when adding a stored procedure that uses a TVP parameter to the DBML file:
DBML1005: Mapping between DbType 'Structured' and Type 'System.Object' in Parameter 'TVP' of Function 'dbo.spTestTableTypeParm' is not supported
Sad but true; I thought this could have been a killer feature of SQL Server 2008 and LINQ to SQL.
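As a workaround you can bypass LINQ to SQL for that one call and use plain ADO.NET, which does support table-valued parameters against SQL Server 2008. A rough sketch (the table type dbo.MyTableType, its Id column, and the connection string are placeholders; the procedure and parameter names come from the error above):

using System.Data;
using System.Data.SqlClient;

string connectionString = "..."; // your connection string

// Shape the DataTable to match the columns of the user-defined table type.
var table = new DataTable();
table.Columns.Add("Id", typeof(int));
table.Rows.Add(1);
table.Rows.Add(2);

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.spTestTableTypeParm", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    // Bind the DataTable as a table-valued parameter.
    SqlParameter tvp = cmd.Parameters.AddWithValue("@TVP", table);
    tvp.SqlDbType = SqlDbType.Structured;
    tvp.TypeName = "dbo.MyTableType";

    conn.Open();
    cmd.ExecuteNonQuery();
}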