Linq 2 SQL on shared host - linq-to-sql

I recently ran into an issue with linq on a shared host.
The host is Shared Intellect and they support v3.5 of the framework. However, I am uncertain as to whether they have SP1 installed; my suspicion is that they do not.
I have a simple News table that has the following structure:
NewsID uniqueidentifier
Title nvarchar(250)
Introduction nvarchar(1000)
Article ntext
DateEntered datetime (default getdate())
IsPublic bit (default true)
My goal is to display the 3 most recent records from this table. I initially went the drag-and-drop route (I know, I know) and created a LINQ data source, but was unable to find a way to limit the results the way I wanted, so I removed it and wrote the following:
var dc = new NewsDataContext();
var news = from a in dc.News
           where a.IsPublic == true
           orderby a.DateEntered descending
           select new { a.NewsID, a.Introduction };
lstNews.DataSource = news.Take(3);
lstNews.DataBind();
This worked perfectly on my local machine.
However, when I uploaded everything to the shared host, I received the following error:
.Read_<>f__AnonymousType0`2
(System.Data.Linq.SqlClient.Implementation.ObjectMaterializer`1<System.Data.SqlClient.SqlDataReader>)
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.MethodAccessException:
.Read_<>f__AnonymousType0`2
(System.Data.Linq.SqlClient.Implementation.ObjectMaterializer`1<System.Data.SqlClient.SqlDataReader>)
I tried searching for the error on Google, but met with no success. I then tried modifying my query in every way I could imagine, removing various combinations of the where/orderby clauses, limiting the query to a single column, and even removing the Take call.
My question therefore comes in three parts:
Has anyone else encountered this and, if so, is there a "quick" fix?
Is there a way to use the data source to limit the rows?
Is there some way to determine what version of the framework the shared host is running, short of emailing them directly (which I have done and am awaiting an answer)?

System.MethodAccessException is thrown by the framework when it is missing an assembly, or when one of the references is of the wrong version.
The first thing I would do is upload the LINQ assemblies to your BIN folder and reference them there, instead of relying on the shared hosting provider's GAC.
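If the provider runs sites under medium trust on pre-SP1 3.5, the anonymous-type projection itself can be what trips the materializer. A minimal sketch of a workaround, assuming that is the cause, is to project into a named public class instead (NewsSummary is an illustrative name, not something from your code):
public class NewsSummary
{
    public Guid NewsID { get; set; }
    public string Introduction { get; set; }
}

// Same query as before, but materialized into a named type
// rather than a compiler-generated anonymous type.
var dc = new NewsDataContext();
var news = (from a in dc.News
            where a.IsPublic == true
            orderby a.DateEntered descending
            select new NewsSummary { NewsID = a.NewsID, Introduction = a.Introduction })
           .Take(3);
lstNews.DataSource = news;
lstNews.DataBind();
If the exception disappears with the named type, that points at the trust level / missing SP1 rather than the query itself.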

Related

RTweet new lookup_users continued excess error

I was using the lookup_users function from R's rtweet package to successfully pull data on 6 IDs:
test <- c("ID1", "ID2", "ID3", "ID4", "ID5", "ID6")
detail_followers <- lookup_users(users = test,
                                 parse = TRUE,
                                 token = [bearer_token here],
                                 retryonratelimit = TRUE,
                                 verbose = TRUE)
Update: I created a loop so I could get more visibility on the run, which made it profoundly slow. That would be tolerable, except for two new problems. First, some users are missing the "profile_banner_url" variable. I built a function to check for this and insert a NULL value for the users missing it, further slowing the run.
Finally, many users in this set appear to be bots, as they are deleted within 24 hours of my having just pulled them from the API. The error returned by lookup_users is "Error: $ operator is invalid for atomic vectors". An is.atomic check is not useful, because the failure occurs inside the lookup_users call itself. There are far too many of these fake/suspended/deleted users for manual prompts.
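One way to keep the run going past deleted/suspended accounts would be to wrap each lookup in tryCatch, so a failing ID is logged and skipped instead of stopping the loop. A rough sketch, assuming the standard rtweet interface (safe_lookup is a made-up name and bearer stands in for the token above):
library(rtweet)

safe_lookup <- function(id, token) {
  tryCatch(
    lookup_users(users = id, parse = TRUE, token = token),
    error = function(e) {
      message("Skipping ", id, ": ", conditionMessage(e))
      NULL  # return nothing for IDs that have been deleted or suspended
    }
  )
}

results <- lapply(test, safe_lookup, token = bearer)
detail_followers <- do.call(rbind, results)  # rbind quietly drops the NULL entries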
These may be issues with the new RTweet from GitHub.

SSIS Errors for simple CSV Data Flow

Sorry to darken your day with my troubles, but SSIS has broken me! I am new to SSIS and I just seem to be misunderstanding it.
For background: I have a few versions of a basic package that includes a Foreach Loop container and a Data Flow with a few Derived Columns that imports CSV files into a SQL Server Staging table. It is very straightforward and does include an Execute SQL task and a File Move but those work fine. The issues are with the Foreach loop and the Data Flow.
I have one version of this package (let’s call it “A”) that seemed to be working fine. It would process multiple files in a folder, insert records into the staging table, properly execute the SQL Statements, and move the files to Archive. Everything seemed fine until I carefully QA’d the process. Turns out it was duplicating the data from one file, and never importing the data from a second Source File! Yet, the second/dupe round of data included the Source Filename (via a derived column) of the second file (but the data from the first). So it looked like I had successfully processed BOTH files until I looked at the actual data and saw that none of the values from the second source file were ever written to the Staging table.
Once I discovered this, I figured that the problem was in the Foreach loop and how I set up the different file path & name variables. So, I decided to make a new version of the package. I started by copying package A to create package B. In B, I deleted the Source Connection Manager and created a new one, along with all new file & path variables. I then tried to clean up, fix, and replace various elements in my Data Flow and Foreach loop. In the process, I discovered that the Advanced mappings from A – which DID work – were virtually all set up as String (even the Currency and Date columns). That did not seem right, so I changed each source money column to data type Currency, and changed each date-related column to data type Date.
What followed has been dozens and dozens of errors, and I cannot get package B to run. I have even changed all of the B data types back to String (mirroring the setup in package A, which DID work), but still no joy.
This leads me to ask a few questions to those of you smarter than I:
1) Why can’t SSIS interpret Source CSV data using the proper data type? I.e. why do I need to set every Input column as a STRING when some columns are clearly & completely Numeric, Currency or Dates? (Yes, the Source CSV files are VERY clean – most don’t even have NULLS)
a. When I do change the Advanced mapping for a date-related Source column to Date, I get the ever present error message: [Flat File Source [30]] Error: Data conversion failed. The data conversion for column "Settle Date" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
2) When I reset the data types back to String in package B, I still get errors – usually Truncation errors (and Yes – I have adjusted the length to 250 in one of these columns).
a. Error Message: "The value could not be converted because of a potential loss of data.".
b. When I reset the Mappings to ignore the column (as a test), it throws a similar error at the next column.
3) Any ideas why Package A would dupe a file’s data and not process the second file, yet throw no errors and move both to Archive?
4) Why does the Data Viewer appear to have parsing errors (it shows data in the wrong columns) but when you use the Copy data feature in the data viewer and paste it into Excel, all of the data lines up perfectly?
5) Are there any tips & tricks that a rookie SSIS user needs to understand and which might not be apparent through the documentation and searching web articles as well as this site?
I can provide further details if they will help, but these packages are really very simple and should not be causing me this much frustration.
THANKS for any insights.
DGP
Wow, seems like you have a lot of SSIS issues... I think the reason the same file keeps being extracted is the way your 'Variable Mappings' are defined.
Have you had a look and followed this guide:
https://www.simple-talk.com/sql/ssis/ssis-basics-introducing-the-foreach-loop-container/
Hope this helps.
Shaheen
Thanks Tab & Shaheen,
To all SSIS rookies - please learn from my mistakes!
It appears that my issue was actually in how I specified the TEXT QUALIFIER in the Connection Manager. I had entered "" (two double quotes) and that was causing problems with how my columns were being parsed. The parsing issues caused unexpected values to appear in some of the columns, and that was causing the errors in the package.
When I changed the Text Qualifier to only ONE double quote – " – the whole thing worked!
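To illustrate with made-up values: with the qualifier set to the two-character string "", a line such as
"Smith, John",1500.00,2019-01-15
is not treated as qualified, so the comma inside the first field spills into an extra column and shifts everything after it. With a single " as the qualifier, the same line parses as the three fields you expect.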
As I mentioned – and as Shaheen suspected – my initial issue with the duplicate processing was probably due to how I set up the Foreach loop. I had already fixed that, but I was still getting errors until I fixed the Text Qualifier.
I have only tested it a few times but it looks like that was the issue.
Thanks for the contributions.
DGP

PowerBuilder Find function throws an error "expression is not valid"

What is wrong with this code? I am checking whether a record already exists in the database before inserting a new serial number. Whenever I enter a record, whether it already exists or not, it throws an error message: "Expression is not valid". (PowerBuilder Classic 12.5 and SQL Server 2008)
If This.GetColumnName() = "serial_No" Then
    long ll_serial
    ll_serial = dw_newrecord.Find(data, 1, dw_newrecord.RowCount())
    If ll_serial > 0 Then
        MessageBox("validation error", "The record already exists")
        Return 1
    End If
End If
It is likely that the expression you pass to Find() – here, whatever is in data – has a syntax error. It can be malformed code, like missing quotes, or the column name may be incorrect.
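In the code above, the likely culprit is that data (the raw column value) is passed straight to Find(), which expects a boolean expression rather than a value. A rough sketch of building the expression around the value, assuming serial_No is a string column (drop the quotes for a numeric column):
long   ll_serial
string ls_expr

// build a boolean expression around the entered value, quoting it for a string column
ls_expr = "serial_No = '" + data + "'"
ll_serial = dw_newrecord.Find(ls_expr, 1, dw_newrecord.RowCount())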
To help with tuning a filter or find expression, you can test it in the datawindow design screen via the Rows / Filter menu.
A better long-term solution would be to integrate the Datawindow Debug Machine (made by a colleague of mine) into your project. It is a valuable tool for prototyping datawindow expressions for find and filter, as well as for dynamic object creation/modification in a datawindow. When correctly interfaced with a datawindow ancestor of your project, it can help with filter and find expression errors like this one.
EDIT: As RealHowTo noticed, the tool has been updated. Here is the current latest version (but there is no updated demo screencast though).

Classic ASP/MySQL - Parameter Issue

I have been trying to insert a huge text-editor string into my database. The application I'm developing allows my client to create and edit their website terms and conditions from the admin part of their website, so as you can imagine, the text is incredibly long. I have reached 18,000+ characters and have now received an error when trying to add another chunk of text.
The error I am receiving is this:
ADODB.Command error '800a0d5d'
Application uses a value of the wrong type for the current operation
Which points to this part of my application, specifically the Set newParameter line:
Const adVarChar = 200
Const adParamInput = 1
Set newParameter = cmdConn.CreateParameter("#policyBody", adVarChar, adParamInput, Len(policyBody), policyBody)
cmdConn.Parameters.Append newParameter
Now, this policy I am creating, currently 18,000+ characters long, is only half complete, if that. It could jump to 50-60,000! I tried using the adLongVarChar (201) ADO type, but that still didn't fix it.
Am I doing the right thing for such a large entry? If I am doing the right thing, how can I fix this issue? ...or if I'm doing the wrong thing, what is the right one?
Try to avoid putting documents in your database if you can. Sometimes it's a reasonable compromise – serialised objects, markup snippets and such.
If you don't need to query the document with SQL, the only benefit is the all-in-one-place thing: back up your DB and you back up your documents as well, and you can use your DB connectivity exclusively. That said, nothing is free – carting all that stuff about in your database costs you.
If you can, have a documents table: a user-facing name for the file, an internal name that is unique in your documents directory, and a path description if there could be more than one directory. Then just upload and download the selected document as a file on a get or set of the related database entity.
You'll need to deal with deployment issues – the document directory has to exist, and the account you run the MySQL daemon as has to be able to see it – but most of the time the issues you have keeping documents separate from the DB are much easier to deal with than the head-scratchers you are running into now.
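A minimal sketch of that approach in classic ASP, assuming a documents directory under the site root and a table that only stores names (the table layout and docId are illustrative, not from your code):
' hypothetical table: documents (doc_id INT, display_name VARCHAR(255), file_name VARCHAR(255))
Const ForWriting = 2
Dim fso, ts, fileName
fileName = "terms_" & docId & ".html"   ' docId supplied by your own insert/update logic

' write the editor content to a file instead of binding it as a huge parameter
Set fso = Server.CreateObject("Scripting.FileSystemObject")
Set ts = fso.OpenTextFile(Server.MapPath("/documents/" & fileName), ForWriting, True)
ts.Write policyBody
ts.Close

' then store just fileName in the documents table with a small adVarChar parameter, as before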

Problem with the row count transform

I recently deployed an SSIS package (developed in the 2005 version on my local server) to a pre-production environment for testing. I have used the Row Count transform to get a count of good/bad records. It works fine on my local system. However, when I deploy it to the pre-prod server, the Row Count does not work: it does not recognize the variables I have assigned to the transform – there is no drop-down available in the variable attribute – and deleting and adding a new transform did not help.
Strangely, this also fails for any of the other packages deployed on the same server (I tried this by dropping a Row Count transform onto an existing package – same problem).
Any suggestions?
Thanks a tonne
If you are having problems with the Row Count transform, another alternative that we use here at my company is a Script Component that counts the rows into a rowcnt variable. The performance is just as good – count in ProcessInputRow and write the total in PostExecute, since ReadWrite variables can only be assigned there:
Private rowTotal As Integer = 0
Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    rowTotal += 1   ' count each row as it passes through
End Sub
Public Overrides Sub PostExecute()
    MyBase.PostExecute()
    Me.Variables.rowcnt = rowTotal   ' ReadWrite variables can only be set in PostExecute
End Sub
This certainly seems odd. Are you saying that when you are in the Advanced Editor for Row Count, under Custom Properties, the drop-down beside VariableName has no options? You should at least see all of the System:: variables.
If the User:: variables are not listed, my first suspicion is that they do not have the correct scope to be visible in the Data Flow Task where you have your RowCount.
When you go up to the Control Flow, and get the Variables list, do you see your user variable there? What is the scope of it?
Note that I recognize that none of this fits with "it works locally but doesn't work when copied to the server", but it is at least where I would start...