I am getting some transaction blocking in ReportServerTempDB.
Below is the message I am getting:
"The transactions span multiple databases on the same instance"
You cannot have two datasets from different databases showing in one report at the same time.
I am trying to optimize a report for SSRS 2012 and using SQL Profiler I can see that the datasets are being processed one at a time instead of in parallel.
The checkbox to request one transaction ("Use single transaction when processing the queries") is NOT checked.
I can't find any other setting on parallel execution.
The data source is an embedded data source.
Every item I find on the internet about parallel execution quotes a Microsoft blog from about a decade ago stating that SSRS 2008 defaulted to parallel execution unless that single-transaction box is checked, and the assumption is that nothing ever changes, so this is still the default behavior.
It would appear that the box has a different purpose: running in one transaction allows a temp table created in one dataset to be referenced by a later dataset. The datasets are not only serialized but processed in their listed order (top to bottom). So the box is about persistence of objects and data, not parallel vs. serialized execution.
Without the box checked it appears they are called in the order the fields are processed, but profiler results indicate that only one dataset is retrieved at a time.
So, is there a verified way to fetch multiple datasets simultaneously?
No, there aren't any other settings to control this behavior besides the one you described. Of course there are always other ways around this if efficiency is an issue for you. For example, you could look into caching the results before the report runs.
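If you go the caching route at the database level, one common shape is a pre-aggregated table that a scheduled job refreshes, so each dataset becomes a cheap SELECT. A minimal sketch, with entirely made-up table and column names:

```sql
-- Illustrative only: dbo.ReportCache_Sales and dbo.Orders are assumed names.
-- A SQL Agent job (or a step before the report runs) rebuilds the cache:
TRUNCATE TABLE dbo.ReportCache_Sales;

INSERT INTO dbo.ReportCache_Sales (OrderDate, Region, TotalAmount)
SELECT OrderDate, Region, SUM(Amount)
FROM dbo.Orders
GROUP BY OrderDate, Region;
```

SSRS's own report caching and snapshot features achieve a similar effect without schema changes, if that fits your setup better.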
We have three REST applications within a cluster.
Each application server can receive requests from "outside".
We also have timed events that analyse the database, add/remove rows, send emails, etc.
The problem is that each application server starts these timed events, so it can happen that two application servers start the same analysis job at the same time.
We have a SQL table in the back end.
Our idea was to lock a table in the SQL database when starting the job. If the table is already locked, we exit the job, because another application has just started the analysis.
What's a good practice for implementing some kind of semaphore?
Any ideas?
Don't use semaphores; you are overcomplicating things. Just use message queueing, where you queue your tasks and have them executed in order.
Make ONLY one separate node/process/child_process consume from the queue and get your tasks done.
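To make that concrete, here is a minimal sketch of the idea using a plain SQL table as the queue (all names assumed), since you already have a SQL database; a real broker such as RabbitMQ plays the same role:

```sql
-- Producers (any application server) enqueue tasks.
CREATE TABLE task_queue (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    task_type  VARCHAR(64) NOT NULL,
    created_at TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP,
    done_at    TIMESTAMP   NULL
);

INSERT INTO task_queue (task_type) VALUES ('analyse_db');

-- The one consumer process drains the queue in order; because it is the
-- only consumer, no locking or semaphore is needed.
SELECT id, task_type FROM task_queue WHERE done_at IS NULL ORDER BY id LIMIT 1;
UPDATE task_queue SET done_at = NOW() WHERE id = 1;  -- the id just fetched
```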
We (at a previous employer) used a database-based semaphore. Each of several (for redundancy and load sharing) servers had the same set of cron jobs. The first thing in each was a custom library call that did:
Connect to the database and check for (or insert) "I'm working on X".
If the flag was already set, then the cron job silently exited.
When finished, the flag was cleared.
The table included a timestamp and a host name -- for debugging and recovering from cron jobs that fail to finish gracefully.
I forget how the "test and set" was done. Possibly an optimistic INSERT, then check for "duplicate key".
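A sketch of how that test-and-set could look, with assumed table and column names:

```sql
-- One row per job; the PRIMARY KEY is what makes the INSERT atomic.
CREATE TABLE job_locks (
    job_name  VARCHAR(64) NOT NULL PRIMARY KEY,
    locked_at TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP,
    host_name VARCHAR(64) NOT NULL  -- who holds the lock, for debugging
);

-- Every server runs this at the top of the cron job. Exactly one INSERT
-- succeeds; the others fail with a duplicate-key error and exit silently.
INSERT INTO job_locks (job_name, host_name) VALUES ('analyse_db', 'app-server-1');

-- The winner clears the flag when it finishes gracefully.
DELETE FROM job_locks WHERE job_name = 'analyse_db';
```

The timestamp lets you spot and clean up locks left behind by jobs that died mid-run.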
What is the effect of multiple TADOConnections?
Here is what I did:
I put a TADOConnection on almost every form in my application.
Each TADOConnection connects to the database (MySQL) every time I create an instance of a form.
In an average session, about 15 forms will be used (15 TADOConnections connected to the database). So far my application has been running smoothly. But yesterday, a user complained of an error: "MySQL has gone away".
I've encountered that error in the past, and it was because the data was too large, or there was a hardware problem. But today, the data is not big and the hardware is in excellent condition. By the way, the connection is local. Could the multiple TADOConnections have produced the error?
The effect of multiple ADOConnections is that you open multiple independent sessions in the database. I wouldn't recommend your solution, in consideration of transaction management and table locking.
Server has gone away: http://dev.mysql.com/doc/refman/5.1/en/gone-away.html
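If you want to rule out the usual suspects from that page, check the server's timeout and packet limits:

```sql
-- An idle connection past wait_timeout is dropped by the server, and the
-- next query on it fails with "MySQL server has gone away".
SHOW VARIABLES LIKE 'wait_timeout';

-- A query or result packet larger than this limit triggers the same error.
SHOW VARIABLES LIKE 'max_allowed_packet';
```

Fifteen mostly idle connections make it more likely that one of them sits past wait_timeout before its form next uses it.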
Is there any way you can stop a DDS (data-driven subscription)? I have a process that adds a .pdf to 400 different folders. I was wondering if there is any way to stop it, because it sometimes interferes with other things on the server I'm working on and slows it down significantly.
There's no supported way to do this. There is a hack you can try, but it involves manipulating the tables in the ReportServer database, which is unpredictable at best. Here are the steps:
Stop the SSRS server. This will make it quit sending new subscriptions, but it doesn't actually stop the DDS.
Find the relevant row in the ActiveSubscriptions table.
Delete all of the rows in the Notifications table with an ActivationID that corresponds to the ActiveID in ActiveSubscriptions.
Delete the row in ActiveSubscriptions.
Restart the SSRS server.
Any subscriptions that SSRS had already queued up with the SQL Server Agent will still be processed, but it should stop sending new ones. As I said, this is a hack, and it's difficult to say what else you might break by doing this.
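For reference, the table surgery in steps 2-4 would look roughly like this; the zero GUID is a placeholder for the ActiveID you find, and you should verify the schema against your own ReportServer version first:

```sql
USE ReportServer;

-- Step 2: find the running subscription and note its ActiveID.
SELECT ActiveID, SubscriptionID FROM dbo.ActiveSubscriptions;

-- Step 3: remove its queued notifications.
DELETE FROM dbo.Notifications
WHERE ActivationID = '00000000-0000-0000-0000-000000000000';

-- Step 4: remove the active subscription row itself.
DELETE FROM dbo.ActiveSubscriptions
WHERE ActiveID = '00000000-0000-0000-0000-000000000000';
```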
My problem is I have a website that customers place orders on. That information goes into orders, ordersProducts, etc. tables. I have a reporting database on a DIFFERENT server where my staff will be processing the orders from. The tables on this server will need the order information AND additional columns so staff can add extra information and update current information.
What is the best way to get information from the one server (order website) to the other (reporting website) efficiently without the risk of data loss? Also I do not want the reporting database to be connecting to the website to get information. I would like to implement a solution on the order website to PUSH data.
THOUGHTS
MySQL Replication - Problem: replicated tables are strictly for reporting, not manipulation. Example: what if a customer's address changes, or products need to be added to an order? That would mess up the replicated table.
Double Inserts - Insert into local tables and then insert into the reporting database. Problem: if the reporting database goes down for whatever reason, there is a chance I lose data because the MySQL connection won't be able to push it. Implement some sort of query log? (A sketch of that idea follows below.)
Both servers use MySQL and PHP.
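Since both servers run MySQL, the query-log idea from the Double Inserts option is essentially an outbox table; a minimal sketch with assumed names:

```sql
-- Every write bound for the reporting server is recorded locally first.
CREATE TABLE reporting_outbox (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    payload    TEXT      NOT NULL,   -- e.g. the order row serialized as JSON
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    sent_at    TIMESTAMP NULL        -- stays NULL until delivery succeeds
);

-- A retrying worker pushes unsent rows to the reporting server, so an
-- outage there delays delivery instead of losing data.
SELECT id, payload FROM reporting_outbox WHERE sent_at IS NULL ORDER BY id;
UPDATE reporting_outbox SET sent_at = NOW() WHERE id = 42;  -- after a successful push
```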
MySQL replication sounds exactly like what you are looking for; I'm not sure I understand the disadvantage you've listed.
The solution to me sounds like a master with a read-only slave, where the slave is the reporting database. If your concern is that changes to the master will put the slave out of sync, this shouldn't be much of an issue: all changes will be synced over. If connectivity is lost, the slave tracks how many seconds it is behind the master and replays the changes until the two are back in sync.
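For reference, the moving parts of that setup are small; the host, user, password, and log coordinates below are placeholders:

```sql
-- One-time setup on the reporting (slave) server.
CHANGE MASTER TO
    MASTER_HOST='order-db.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='replica-password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;

-- Afterwards, Seconds_Behind_Master in the output shows the lag; after a
-- loss of connectivity it climbs, then falls back to 0 as the slave
-- replays the master's binlog.
SHOW SLAVE STATUS\G
```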