Meteor throws throwIfSelectorIsNotId exception

When running some code, Meteor throws a throwIfSelectorIsNotId exception. I have two clients running the same code, and the exception is thrown when the second client runs the same piece of code.
I can't figure out what this exception means or why it is thrown. Hopefully someone will be able to explain it.

This applies to certain client-side operations (since version 0.5.7, I think it was). When doing an update operation, e.g.
MyCollection.update({name:"John Doe"},{$set:{age:50}});
you need to split it into two parts on the client (and only on the client):
var doc_id = MyCollection.findOne({name:"John Doe"})._id;
MyCollection.update({_id: doc_id}, {$set: {age: 50}});
You need to find the document by its _id first, then update that document; on the client, the selector can only be an _id for update & remove operations.
This is down to a security concern in Meteor's design: if clients could send arbitrary selectors, they could effectively pull information about other documents from the server while it was deciding whether to allow the update or not. The restriction was introduced in Meteor 0.5.7.

Related

FindAndModify using Couchbase

I have a collection of documents that need to be processed by multiple client nodes.
Basically, each document should be processed by only 1 client node.
So what I'm thinking of is creating a unique clientId for each node and setting that clientId on the document being processed, to tell other clients that this document is being handled.
I already implemented this approach with MongoDB a couple of years ago using the findAndModify operator, which guarantees the atomicity of both setting the clientId on the document and returning it.
Now I'm looking for a similar approach in Couchbase but couldn't find one.
Any idea on how to do it?
I think what you are searching for is the method called getAndLock. It will guarantee that only one server is reading this document.
Updating the document with an attribute might be a bad idea if the server fails during this process, as no other server will take over the documents that have already been assigned to the faulty one.
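For illustration, here is a minimal sketch of the getAndLock idea, assuming the Couchbase .NET SDK 2.x API (GetAndLock/Unlock on IBucket); the bucket name and document key are made up:
using System;
using Couchbase;

var cluster = new Cluster();                         // connects to localhost by default
var bucket = cluster.OpenBucket("tasks");            // hypothetical bucket holding the work items
// Lock the document for up to 30 seconds so no other node can claim it.
var result = bucket.GetAndLock<dynamic>("task::42", TimeSpan.FromSeconds(30));
if (result.Success)
{
    // ... process result.Value here ...
    bucket.Unlock("task::42", result.Cas);           // release the lock when finished
}
// If result.Success is false, another node most likely holds the lock - skip this document.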
We have handled a similar scenario in our project. What we do is create a single tracking document and record each document that is being processed in it. If a document is not yet listed there, a client adds it, and that prevents it from being picked up and updated by another client.

PetaPoco Should I use MultipleActiveResultSets=True?

From time to time we receive the following database connection error from PetaPoco in an ASP.NET MVC 4 app:
There is already an open DataReader associated with this Command which must be closed first.;
System.Data; at System.Data.SqlClient.SqlInternalConnectionTds.ValidateConnectionForExecute(SqlCommand command)...
It seems like this happens as we get more load to the system.
Some suggestions we found as we researched were:
Do a PetaPoco Fetch instead of a Query
Add MultipleActiveResultSets=True to our connection string
Can someone with PetaPoco experience verify that these suggestions would help?
Any other suggestions to avoid the Exception would be appreciated.
Update 06/10/2013: We changed the Query to a Fetch and we have seen some improvement; however, we still sometimes see the error.
Does anyone know what drawbacks changing the connection string to MultipleActiveResultSets=True might have?
Be sure that you are creating the PetaPoco DB per request (not a static).
See: how to create a DAL using petapoco
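A minimal sketch of what "per request" can look like (the ProductsController, Product class and "MyConnection" connection string name are made up for illustration):
using System.Web.Mvc;

// Sketch only: create the PetaPoco Database inside the request instead of holding a static instance.
public class Product { public int Id { get; set; } public string Name { get; set; } }

public class ProductsController : Controller
{
    public ActionResult Index()
    {
        // "MyConnection" is a hypothetical connection string name from web.config.
        using (var db = new PetaPoco.Database("MyConnection"))
        {
            var products = db.Fetch<Product>("SELECT Id, Name FROM Products");
            return View(products);
        }
    }
}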
Update 06/10/2013: All the Fetch methods call the Query method (see the source),
so changing one for the other has no effect on the error.
The drawbacks are listed on MSDN and include warnings about:
Statement Interleaving
Session Cache
Thread Safety
Connection Pooling
Parallel Execution
I have tried it personally and didn't see any drawbacks (it depends on your app), but it didn't get rid of the errors either.
The only thing you can do to remove the error is to follow your request code, find where in the code the statement is called twice, and then use another DB connection in that function.
Also, you can catch the error, then create a new DB connection and try again with that new one.
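Roughly like this (a naive sketch; the Order class, the query and the "MyConnection" connection string name are placeholders):
using System;
using System.Collections.Generic;

// Sketch: db is the request's existing PetaPoco.Database; Order and "MyConnection" are placeholders.
public List<Order> FetchOrdersWithRetry(PetaPoco.Database db, int customerId)
{
    try
    {
        return db.Fetch<Order>("SELECT * FROM Orders WHERE CustomerId = @0", customerId);
    }
    catch (InvalidOperationException)
    {
        // "There is already an open DataReader..." - retry on a brand new connection.
        using (var freshDb = new PetaPoco.Database("MyConnection"))
        {
            return freshDb.Fetch<Order>("SELECT * FROM Orders WHERE CustomerId = @0", customerId);
        }
    }
}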
Sorry, but there's no magic bullet here.

Intermittent "Out of present range" from Classic ASP after migration from SQL Server 2000 to 2008 & IIS6-IIS7

Background: I have just completed a move of approximately 50 classic ASP sites from an IIS6/Server 2003 and SQL Server 2000 environment to a new virtual environment of 2 machines behind an nginx load balancer. Each MS machine is running IIS7.5 and SQL Server 2008 R2. They currently each have 6GB & 2 vCPUs. The databases are set up in a mirroring configuration (currently without a witness).
During testing all sites appeared to function correctly.
Once live traffic started to hit the sites it became apparent quite quickly that the initial resource allocation (2GB & 1 vCPU) was way too low, and it was quickly increased. The main problem has come from an intermittent ASP error occurring on approximately 10 (probably including the busiest) sites on the servers. They will produce a 500 response from an ASP error of
Provider error '8002000a' Out of present range.
All research has pointed to causes such as numbers too large to fit into an integer variable, and some people have mentioned a correlation with the newer implementation of RAND and NEWID() in SQL Server 2008 compared to 2000. The stored procedures that appear to cause the error are relatively simple, some as simple as accepting a single VARCHAR parameter (well within the limits) and doing a single-column select on a table. Most do not even involve INTs at all, and when they do, the values are well within range.
The error can appear on one machine for a period of time while the other server does not necessarily show it during that time (though it sometimes does). After a while the error stops occurring, and this doesn't seem to correlate with excessively overloaded system resources either.
ASP talks to the database via a DSN using the SQL Server Native Client 10 drivers. The code uses the ADODB connection and command objects. This code has been working happily for 6+ years on the previous servers. The databases are set to compatibility mode 80 (SQL Server 2000).
Can anyone shed any light on where I should be looking to try and solve this please? If there is any other information I can share, specific code snippets etc please just let me know.
Update:
I thought the UPDATEUSAGE answer below had got it, but unfortunately it reared up again a little later. After some thinking I've had the following thoughts... There are two instances of IIS, independent of each other; they both talk to a single database, whether it is local at the time or not, and they both execute identical, synced code that has been working with the same syntax and valid variables for a long time. As the ASP execution through IIS is the only layer in this equation that is not a single point, as it were, this is where I've headed. When the problem reoccurred, I restarted IIS on the machine that was showing the error at that point (the situation is often that it is only occurring on one of the two servers). The restart of IIS appeared to cure the problem. It then happened on the other server with a different site; again, restarting IIS appeared to sort the issue.
Further reading has now led me to the "Managed pipeline" modes of the app pools. They are currently set to "Integrated". I've done some reading and I'm wondering if they should be set to Classic to emulate IIS6. Does anyone have any more thoughts on this?
Many thanks
Eric
Did you:
(1) Update usage counters: In earlier versions of SQL Server, the values for the table and index row counts and page counts can become incorrect. To correct any invalid row or page counts, run DBCC UPDATEUSAGE on all databases following the upgrade.
(2) Rebuild all Indexes
Upgrading from SQL Server 2000 to 2008
I had the same problem and tracked it down to a field definition in my database that I had defined as a long integer. The value I had in there was something like 53435534126262. I immediately changed it to a text field and the problem disappeared.
Try that?
I thought it might be useful to post my findings and solution to this problem, as I found nowhere on the web that mentions the same situation I had.
I went through a number of steps that each seemed to reduce the frequency of the errors but not eliminate them. Firstly I changed the database authentication method to SQL instead of Windows based. At first I changed all the sites to use the same login but later on I changed them to all use a unique login.
I updated the SQL Server with service pack 2 and cumulative update pack 3.
As mentioned, the above steps reduced the frequency of the errors but didn't stop them. I started looking through the class that all the sites use to manage their database connections and their use of stored procedures. I came across the line adocommand.Parameters.Refresh. I read up on what this actually does: when called, it makes a call to the database to retrieve the parameters of a given stored procedure so that they can be accessed as an object in ASP, rather than the parameters having to be given in a particular order and have their types assigned manually. On the Microsoft page that details this method there is a little footnote that says
Parameters.refresh will fail in some situations or return information that is not entirely correct. Parameters.refresh is particularly vulnerable when used on ASP pages.
This was all it gave, and I couldn't find any other details about it. I increased the logging on my sites to output, on error, what Parameters.Refresh had returned. I caught it in one instance returning the two parameters from the stored procedure with the correct names, but not with the correct types. They should have been a VARCHAR and an INT, but they both came back as CURRENCY. Obviously this then errors when you try to assign a string to a CURRENCY. I only managed to catch this one instance of the error before I fixed the problem.
The only way I found that seemed to fix the problem was to change from using an ODBC-based driver, whether DSN or DSN-less, to the SQL Native Client OLE DB driver with the "Provider" keyword. This had the added benefit of appearing to enable connection pooling when it previously didn't appear to have been working.
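For reference, the change was essentially from an ODBC-style connection string to an OLE DB one along these lines (server, database and credentials below are placeholders, not the real values):
Before (ODBC, DSN-less): Driver={SQL Server Native Client 10.0};Server=MYSERVER;Database=MySiteDB;Uid=webuser;Pwd=secret;
After (OLE DB): Provider=SQLNCLI10;Server=MYSERVER;Database=MySiteDB;Uid=webuser;Pwd=secret;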
One side effect of changing to this driver is that the stored procedures and ASP became susceptible to intermediate results being returned from a stored procedure if there were multiple statements within it and it didn't have SET NOCOUNT ON explicitly set at the top. Rather than trying to update 1000+ stored procedures, I found that the NOCOUNT flag can be set at the database instance level for all databases, which solved this problem.
I hope this helps someone, as it was an incredibly frustrating 3 weeks that I spent tracking down this problem. Feel free to ask any further questions and I'll help if I can.
Thanks
Eric

Business logic exception example

I have this scenario in my 3-tier app with a Service Layer that serves an MVC presentation layer:
I have an operation that creates, for example, an Employee whose email must be unique in the set of Employees. This operation is executed in the MVC presentation layer through a service.
How do I manage the attempt to create an Employee whose email is already registered in the database for another Employee?
I am thinking of 2 options:
1) Have another operation that queries whether there's an Employee with the same email as the one given for the new Employee.
2) Throw an exception in the CreateEmployee service for the duplicate email.
I think it is a matter of what is best or most suitable for the problem.
I propose option 1) because I think this is a matter of validation.
But option 2) only needs 1 call to the service, so it's arguably more efficient.
What do you think?
Thanks!
I would definitely go with the second option:
as you mentioned, it avoids an extra call to the service
it keeps your service interface clean, with just one method for the employee creation
it is consistent from a transactional point of view (an exception meaning "transaction failed"). Keep in mind that this validation is only one of the many reasons that can make the transaction fail.
imagine your validation constraints evolve (e.g. other employee attributes...); you won't want to make your whole implementation evolve just for this.
Something to keep in mind: make sure your exception is verbose enough to clearly identify the cause of the failure.
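As a rough illustration of option 2 (the names DuplicateEmailException, EmployeeService and IEmployeeRepository are invented for this sketch, not taken from your code):
using System;

// Sketch: the service does the uniqueness check and throws; the MVC layer translates the exception.
public class Employee { public string Email { get; set; } /* other attributes */ }

public interface IEmployeeRepository
{
    bool ExistsByEmail(string email);
    void Save(Employee employee);
}

public class DuplicateEmailException : Exception
{
    public DuplicateEmailException(string email)
        : base("An employee with the email '" + email + "' already exists.") { }
}

public class EmployeeService
{
    private readonly IEmployeeRepository repository;
    public EmployeeService(IEmployeeRepository repository) { this.repository = repository; }

    public void CreateEmployee(Employee employee)
    {
        if (repository.ExistsByEmail(employee.Email))
            throw new DuplicateEmailException(employee.Email);
        repository.Save(employee);
    }
}

// In the MVC controller, the exception becomes a presentation concern:
// try { employeeService.CreateEmployee(employee); }
// catch (DuplicateEmailException ex) { ModelState.AddModelError("Email", ex.Message); return View(employee); }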
If by 'Presentation' layer you really mean presentation, you should not be creating a new employee in that layer. You should only be preparing any data to be cleanly presented in the HTTP response object.
Generally a good way to think of this sort of problem is to consider what your service objects should do if called by a command-line program:
> createEmployee allison.brie@awesome.com
Error! 'allison.brie@awesome.com' is already registered.
In this there is some terminal management layer that calls the service. The service does something to realize there is another user with the same email, and throws an appropriate exception (e.g. DuplicateUserException). Then the terminal management layer interprets that exception and prints out "Error! " + exception.getMessage();
That said, note that your two options are actually the same option. Your service must still query the database for the duplicate. While this is 'validation', it is not input validation. Input validation means checking to see if it's a valid email address. (Is there an '@' symbol and a valid TLD?)

Use single Elmah.axd for multiple applications with single DB log

We have a single SQL Log for storing errors from multiple applications. We have disabled the elmah.axd page for each one of our applications and would like to have a new application that specifically displays errors from all of the apps that report errors to the common SQL log.
As of now, even though the all-errors application is using the common SQL log, it only displays errors from the current application. Has anyone done this before? What within the ELMAH code might need to be tweaked?
I assume by "SQL Log" you mean MSSQL Server... If so, probably the easiest way of accomplishing what you want would be to edit the stored procedures created in the SQL Server database that holds your errors.
To get the error list, the ELMAH dll calls the ELMAH_GetErrorsXML proc with the application name as a parameter, then the proc filters the return with a WHERE [Application] = @Application clause.
Just remove the WHERE clause from the ELMAH_GetErrorsXML proc, and all errors should be returned regardless of application.
To get a single error record properly, you'll have to do the same with the ELMAH_GetErrorXML proc, as it also filters by application.
This, of course, will affect any application retrieving errors out of this particular database, but I assume in your case you'll only ever have the one, so this should be good.
CAVEAT: I have not tried this, so I can't guarantee the results...
It's not a problem to override the default Elmah handler factory so that it will filter Elmah logs by applications. I wrote a sample app that shows how to do it with MySql: http://diagnettoolkit.codeplex.com/releases/view/103931. You may as well check a post on my blog where I explain how it works.
Yes, it easily works. However, you can't see the app name in Elmah/Default.aspx. I haven't found whether it is configurable to just display one more column.