Different results depending on which client communicates with SQL Server - sql-server-2014

I had a very weird scenario occur at work today in our production system. Wondering if anyone has seen anything like this and has a good explanation for me.
We have a stored procedure in SQL Server 2014, and it was not returning any data when our .NET system called it.
We captured the call using SQL Profiler and replayed it in SQL Server Management Studio using the same SQL Authentication credentials, and it returned results as expected.
No matter how many times we tried each interchangeably, they were consistent: when the client was the .NET client it gave no results, and when it was SSMS it worked fine. Keep in mind, it's the exact same SP, parameters, etc.
We were able to resolve the issue by doing an SP recompile, but that feels like a temporary solution, and not knowing the original cause means it can recur without warning. Furthermore, I was under the impression that an SP recompile only affects performance issues, not differing results.
Has anyone seen this before? Can you explain why an SP recompile fixed it?
Many Thanks!

Usually what you see is that the query executes just fine in SSMS, but with the .NET client it times out. That might be what you're describing here.
There's some debate about the exact cause of this issue.
On one side, the argument is that SSMS and the .NET client have different session defaults. The most common offender is ARITHABORT, which SSMS sets to ON but most SQL Server providers leave at the default (OFF):
WARNING
The default ARITHABORT setting for SQL Server Management Studio is ON.
Client applications setting ARITHABORT to OFF can receive different
query plans making it difficult to troubleshoot poorly performing
queries. That is, the same query can execute fast in management studio
but slow in the application. When troubleshooting queries with
Management Studio always match the client ARITHABORT setting.
This results in a cached query plan (a fairly complex topic) that works well in SSMS, but not so well with the .NET client.
On the other side, the argument is that the problem is just parameter sniffing, meaning your stored procedure has a bad plan cached. This side holds that the ARITHABORT setting merely causes the server to select a different plan, skipping the bad one, but the core problem is the parameter sniffing, and the ARITHABORT setting is really a workaround.
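Either way, you can check for yourself whether the two clients ended up with different cached plans. Here is a minimal sketch (dbo.MyProc is a placeholder for your procedure name); two rows with different set_options values means SSMS and the .NET client each compiled their own plan:

    -- Minimal sketch: list the cached plans for one procedure together with
    -- the SET options each plan was compiled under. Replace dbo.MyProc with
    -- your procedure name.
    SELECT  cp.plan_handle,
            cp.usecounts,
            pa.value AS set_options      -- bitmask of the session SET options
    FROM    sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
    WHERE   st.dbid      = DB_ID()
      AND   st.objectid  = OBJECT_ID('dbo.MyProc')
      AND   pa.attribute = 'set_options';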
This SO question covers a lot of the possible solutions (setting ARITHABORT ON, using OPTION RECOMPILE, using OPTIMIZE FOR UNKNOWN, etc.). That question also links to the seminal work of Erland Sommarskog, Slow in the Application, Fast in SSMS?: Understanding Performance Mysteries, which is probably more than you'll ever, ever want to know.
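For completeness, here is a rough sketch of what those workarounds look like inside a procedure (made-up procedure, table and parameter names; pick one approach rather than stacking them blindly):

    -- Hypothetical procedure sketching the common parameter-sniffing fixes.
    CREATE PROCEDURE dbo.SearchOrders
        @CustomerId INT
    AS
    BEGIN
        -- Option 1: recompile this statement on every execution, so the plan
        -- is always built for the actual parameter value passed in.
        SELECT OrderId, OrderDate
        FROM   dbo.Orders
        WHERE  CustomerId = @CustomerId
        OPTION (RECOMPILE);

        -- Option 2 (alternative): build one plan for a "typical" unknown
        -- value instead of the first sniffed one.
        -- SELECT OrderId, OrderDate
        -- FROM   dbo.Orders
        -- WHERE  CustomerId = @CustomerId
        -- OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));
    END;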

Related

Classic ASP + MSAccess extremely slow on IIS7.5

I migrated my classic ASP site from IIS6 to a much more powerful server with Windows Server 2008 R2 and IIS7.5, but it actually runs even slower.
Every simple call to the MSAccess database is taking forever. Many times the request is dropped because of a session timeout (120 seconds).
Any idea what can cause the problem and how to solve it?
Thank You.
Before blaming Access and moving to SQL Server Express or another database, you need to make sure you know where the slowdown occurs.
From what you are mentioning, it looks like at least some of the queries don't even complete (IIS times out after 120s).
Access databases, especially if they are accessed locally by a handful of concurrent users, are fast.
Moving to another database may or may not solve the problem, but it will probably be a lot more work than solving the issue with your current Access database.
That said, if your website needs to serve lots of concurrent users (say more than 50 at a time) you may need to look into moving to a full database server like MySQL, SQL Server Express or PostgreSQL, for instance.
A few things to make sure you double check:
Corrupted database. Make sure you run Compact and Repair regularly as a maintenance measure (make a backup first).
Incorrect filesystem rights.
Make sure your IIS process has read/write rights to the folder where the database is located so that it is able to create the lock file (.ldb or .laccdb, depending on whether you are using the .mdb or the newer .accdb database format).
A related issue is that the IIS process must be able to create temporary files in the temporary folder, for instance %SystemDrive%\Windows\ServiceProfiles\NetworkService\AppData\Local\Temp.
Bad queries. Open the database with Access and run the queries to check how long they really take and if they return any errors.
If there are data integrity issues, it could be that the query returns unexpected results that have strange side effects on the code in your ASP page.
Check your IIS logs for errors. Also check the OS Event Log.
Make sure there are no other errors that could incorrectly cause the behaviour.
Make sure you profile your asp code to find out exactly which queries and parts of your code are slow and which are fine.
Once you have solved your issues, improve performance by keeping the database connection open to avoid the lock file being created/deleted all the time (this can have a huge impact on performance).
A good reference with more detailed information on some of the topics above: Using Classic ASP with Microsoft Access Databases on IIS

MySQL stored procedures mature? A good way to go or not for my scenario

I'm porting a reporting application from .net / MSSQL to php / MySQL and could use some advice from the MySQL experts out there.
I haven't done anything with MySQL for a few years, and when I was last using it stored procedures were brand new and I was advised to stay away from them because of that newness.
So, now that it's 2011, I was wondering if there's anything inherently "bad" about using them, as they worked so well for this app in MSSQL. I know it will depend on the needs of my app, so here are the high level points: (This will run on Linux if that matters)
The app generates a very complex report, however it is NOT a high concurrency app, typically 1-2 users at a time, 5 concurrent would shock me. I can even throttle it to prevent more than 2 or so users from using it simultaneously, so a lot of concurrent users is not going to be a concern.
Virtually 100% of the heavy lifting in this app is in the MSSQL stored procedure. The data is uploaded via the web front end; the stored procedure then takes it from there and eventually spits out a CSV / Excel file for the user a few minutes later.
This works great using an MSSQL stored procedure. However, it's a good 2000 lines of SQL code and I'm hesitant to submit the SQL statements one at a time via PHP as opposed to using a stored procedure. Most importantly, it works fine with the current architecture; I'm not looking to change it unless I have to in order to accommodate MySQL / PHP.
Any gotchas in using a MySQL stored procedure? Are they buggier than submitting SQL statements directly, or anything odd like that?
Thanks in advance for everyone's thoughts on this.
Stored procedures in MySQL are quite verbose in syntax, and are hard to debug or profile. Personally I think they are very useful in some cases, but I would be very hesitant to try to maintain a 2000+ line stored procedure in MySQL.
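To give a feel for that verbosity, here's a minimal sketch of what a MySQL stored procedure looks like (hypothetical table and names):

    -- Minimal MySQL stored procedure (hypothetical names), mostly to show
    -- the DELIMITER / BEGIN ... END / DECLARE ceremony involved.
    DELIMITER //

    CREATE PROCEDURE report_totals(IN p_year INT)
    BEGIN
        DECLARE v_total INT DEFAULT 0;

        SELECT COUNT(*) INTO v_total
        FROM   sales
        WHERE  YEAR(sale_date) = p_year;

        SELECT p_year AS report_year, v_total AS total_rows;
    END //

    DELIMITER ;

    CALL report_totals(2011);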

SQL Server query optimisation

I am optimising a large SP in SQL Server 2008 which uses a lot of dynamic SQL. It is a query which searches the database with a number of optional parameters and, short of coding for every possible combination of parameters, dynamic SQL has proven to be the most efficient method of executing this. The SQL string is built, including the parameters, and then passed to sp_executesql with the parameter list. When running this in SSMS with any combination of parameters it runs very quickly (<1s) and returns results. When running from a Windows Forms application, however, it sometimes takes considerably longer.
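For reference, the pattern is roughly this (a simplified sketch with made-up columns, and @CustomerId / @FromDate standing in for the procedure's optional parameters):

    -- Simplified sketch of the dynamic search (hypothetical columns and
    -- parameters). Only the supplied parameters end up in the WHERE clause,
    -- but the full parameter list is always passed to sp_executesql.
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT OrderId, OrderDate, CustomerId
          FROM dbo.Orders
          WHERE 1 = 1';

    IF @CustomerId IS NOT NULL
        SET @sql += N' AND CustomerId = @CustomerId';
    IF @FromDate IS NOT NULL
        SET @sql += N' AND OrderDate >= @FromDate';

    EXEC sp_executesql
         @sql,
         N'@CustomerId INT, @FromDate DATETIME',
         @CustomerId = @CustomerId,
         @FromDate   = @FromDate;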
I have read that the difference in the ARITHABORT option can cause this (ON by default in SSMS and OFF in ADO), however I am unsure whether turning it on fixes the issue or merely masks it. Does the difference in settings make a difference to the query itself, or does it just mean that SQL Server will use different cached execution plans? If so, should clearing the cache and statistics reset the playing field?
I have also read differing points of view on the OPTION (RECOMPILE) setting. My understanding is that when sp_executesql is used with a parameter list, each combination of parameters will produce an execution plan; however, as the possible combinations of parameters are finite, this will result in optimised queries. Other sources say it should be set at the start of any SP that uses dynamic SQL.
I realise that different situations require different settings; however, I am looking to understand these further before trying them arbitrarily on my very busy 24x7 production server. Apologies for the ramblings, I guess my question boils down to:
What causes SQL to run differently in SSMS and Windows Forms?
If it is ARITHABORT then is this an issue related to execution plans or should I turn it on as a server default?
What is the optimal way to run queries with dynamic sql?
Run a trace in SQL Profiler to see what's actually being submitted to the server. Of course, you need to be aware of the impact of traces on production servers. In my experience, very short traces that are limited to a small set of events are not a big problem for servers that don't have a very high transactions-per-second load. Also, you can run a trace server-side, which reduces its impact, so that's an option for you (see the sketch at the end of this answer).
Once you see what's actually being submitted to the database this may help you understand the problem. For example, sometimes DB libraries prepare statements (getting a handle to a sort of temporary stored proc) but this can be costly if it is done for each issuance of the query, plus it's not needed with sp_executesql. Anyway, there's no way of knowing for sure whether it will be helpful until you try it.
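If you do go the server-side route, the scripted version looks roughly like this. The event and column IDs below (RPC:Completed = 10, SQL:BatchCompleted = 12, TextData = 1, ApplicationName = 10, Duration = 13) are from memory of the sp_trace_setevent documentation, so verify them before running anything on production, and point the file path somewhere the SQL Server service account can write:

    -- Rough sketch of a lightweight server-side trace; verify event/column
    -- IDs against the sp_trace_setevent documentation first.
    DECLARE @TraceID INT, @on BIT = 1, @maxsize BIGINT = 50;

    -- 2 = TRACE_FILE_ROLLOVER; .trc is appended to the file name automatically
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\dyn_sql', @maxsize;

    EXEC sp_trace_setevent @TraceID, 10, 1,  @on;  -- RPC:Completed, TextData
    EXEC sp_trace_setevent @TraceID, 10, 10, @on;  -- RPC:Completed, ApplicationName
    EXEC sp_trace_setevent @TraceID, 10, 13, @on;  -- RPC:Completed, Duration
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on;  -- SQL:BatchCompleted, TextData
    EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- SQL:BatchCompleted, Duration

    EXEC sp_trace_setstatus @TraceID, 1;           -- start the trace

    -- When done: stop and close it, then open the .trc file in Profiler
    -- EXEC sp_trace_setstatus @TraceID, 0;
    -- EXEC sp_trace_setstatus @TraceID, 2;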

MS SQL - MySQL Migration in a legacy webapp

I wish to migrate the database of a legacy web app from SQL Server to MySQL. What are the limitations of MySQL that I must look out for? And what items would be part of a comprehensive checklist before jumping in and actually modifying the code?
First thing I would check is the data types: the exact definition of data types varies from database to database. I would create a mapping list that tells me what to map each of the data types to. That will help in building the new tables. I would also check for tables or columns that are not being used now. No point in migrating them. Do the same with functions, jobs, SPs, etc. Now is the time to clean out the junk.
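As a rough illustration of the kind of mapping list I mean (one possible mapping only, with made-up table and column names; check each type against both vendors' documentation before relying on it):

    -- Original SQL Server definition
    CREATE TABLE Customers (
        CustomerId INT IDENTITY(1,1) PRIMARY KEY,
        Name       NVARCHAR(100) NOT NULL,
        IsActive   BIT           NOT NULL DEFAULT 1,
        CreatedAt  DATETIME      NOT NULL DEFAULT GETDATE()
    );

    -- One possible MySQL equivalent (verify ranges and semantics)
    CREATE TABLE Customers (
        CustomerId INT AUTO_INCREMENT PRIMARY KEY,  -- IDENTITY -> AUTO_INCREMENT
        Name       VARCHAR(100) NOT NULL,           -- NVARCHAR -> VARCHAR plus a Unicode charset on the table/column
        IsActive   TINYINT(1) NOT NULL DEFAULT 1,   -- BIT -> TINYINT(1)
        CreatedAt  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP  -- DATETIME + GETDATE() -> TIMESTAMP default
    ) ENGINE=InnoDB;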
How are you accessing the data: through SPs or dynamic queries from the database? Check each query by running it against a new dev database and make sure they still work. Again, there are differences between how the two flavors of SQL work. I've not used MySQL, so I'm not sure what some of the common failure points are. While you are at it, you might want to time the new queries and see if they can be optimized. Optimization also varies from database to database, and there are probably some poorly performing queries right now that you can fix as part of the migration.
User defined functions will need to be looked at as well. Don't forget these if you are doing this.
Don't forget scheduled jobs; these will need to be checked and recreated in MySQL as well.
Are you importing any data on a regular schedule? All imports will have to be rewritten.
Key to everything is to use a test database and test, test, test. Test everything especially quarterly or annual reports or jobs that you might forget.
Another thing you want to do is do everything through scripts that are version controlled. Do not move to production until you can run all the scripts in order on dev with no failures.
One thing I forgot: make sure the dev database you are running the migration from (the SQL Server database) is updated from production immediately before each test run. Hate to have something fail on prod because you were testing against outdated records.
Your client code is almost certain to be the most complex part to modify. Unless your application has a very high quality test suite, you will end up having to do a lot of testing. You can't rely on anything working the same, even things which you might expect to.
Yes, things in the database itself will need to change, but the client code is where the main action is, it will need heaps of work and rigorous testing.
Forget migrating the data, that is the last thing which should be on your mind; the database schema can probably be converted without too much difficulty; other database objects (SPs, views etc) could cause issues, but the client code is where the focus of the problems will be.
Almost every routine which executes a database query will need to be changed, but absolutely all of them will need to be tested. This will be nontrivial.
I am currently looking at migrating our application's main database from MySQL 4.1 to 5, that is much less of a difference, but it will still be a very, very large task.

We're using JDBC+XMLRPC+Tomcat+MySQL to execute potentially large MySQL queries. What is a better way?

I'm working on a Java-based project that has a client program which needs to connect to a MySQL database on a remote server. This was implemented as follows:
We use JDBC to write the SQL queries to be executed; these are then hosted as a servlet using Apache Tomcat and made accessible via XML-RPC. The client code uses XML-RPC to remotely execute these JDBC-based functions. This allows us to keep our MySQL database non-public, restricts use to the pre-defined functions, and allows Tomcat to manage the database transactions (which I've been told is better than letting MySQL do it alone, but I really don't understand why). However, this approach requires a lot of boilerplate code, and Tomcat is a huge memory hog on our server.
I'm looking for a better way to do this. One way I'm considering is to make the MySQL database publicly accessible, re-write the JDBC-based code as stored procedures, and restrict public use to these procedures only. The problem I see with this is that translating all the JDBC code to stored procedures will be difficult and time consuming. I'm also not too familiar with MySQL's permissions. Can one grant access to a stored procedure which performs select statements on a table, but also deny arbitrary select statements on that same table?
Any other ideas are welcome, as are thoughts and/or suggestions on the stored procedure solution.
Thank you!
You can probably get the RAM upgraded in your server for less than the cost of even a few days' development time, so don't write any code if that's all you're getting from the exercise. Also, just because the memory is used inside of Tomcat, it doesn't mean that Tomcat itself is using it. The memory could be used up by data or by technical flaws in your code.
If you've tried additional RAM and it is still being eaten up, then that smells like a coding issue, so I'd suggest using a profiler or logging data to work out what the root cause is before changing anything. If the cause is large data sets, then using the database directly will only delay the inevitable; instead, you'd need to look at things like paging, summarisation, client-side caching, or redesigning clients to reduce the use of expensive queries. Using a profiler, or simply reviewing the code base, will also tell you if something is creating too many objects (especially strings or XML nodes) or leaking memory.
Boilerplate code can be avoided by refactoring creatively, and it's good that you want to avoid repetition. It's unclear how much structure you already have, but with a little work it's easy to centralise boilerplate JDBC calls. There is no fundamental reason JDBC code should be repeated; perhaps you could tell us what code is being repeated?
Finally, I'll venture that there are many good reasons to put a web tier over your database. Flexibility (of deployment), compatibility, control (over the SQL) and security are all good reasons to keep the web tier.
MySQL 5.0.3+ does have an execute privilege that you can set (without setting select privileges) that should allow you to get the functionality you seek.
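A minimal sketch of what that looks like (hypothetical database, table and account names); because the procedure runs with SQL SECURITY DEFINER (the default), the application account only needs EXECUTE and never gets SELECT on the underlying table:

    -- Hypothetical names; MySQL 5.0.3+
    CREATE PROCEDURE app_db.get_customer(IN p_id INT)
        SQL SECURITY DEFINER
        SELECT id, name FROM app_db.customers WHERE id = p_id;

    GRANT EXECUTE ON PROCEDURE app_db.get_customer TO 'app_user'@'%';
    -- Note: no GRANT SELECT ON app_db.customers is issued for app_user.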
However, note this MySQL bug report with JDBC (and a lot of other drivers, for that matter).
When calling the [procedure] with JDBC, I get "java.sql.SQLException: Driver requires
declaration of procedure to either contain a '\nbegin' or '\n' to follow argument
declaration, or SELECT privilege on mysql.proc to parse column types."
The workaround is:
See "noAccessToProcedureBodies" in /J 5.0.3 for a somewhat hackish, non-JDBC compliant
workaround.
I am sure you could implement your solution without much boilerplate, especially using something like Spring's remoting. Also, how much memory is Tomcat eating? I frankly believe that if it's just doing what you are describing, it could work in less than 128 MB (conservative guess).
Your alternative is the "correct by the book" way of solving the problem. I say build a prototype and see how it works. The major problems you could have are:
MySQL having some important gotcha in this regard
MySQL's Stored Procedure support being too primitive and forcing you to do a lot of work
Some other strange hiccup
I'm probably one of those MySQL haters, so the situation might be better than I think.