SSRS is running slow with timeout set - reporting-services

I have two reports with the timeout (inside the datasets) set to 240 s. Those reports fail with a timeout even on small data. When I set the timeout to 0 s, the reports run in a few seconds. Do you know why?

Related

Golang sql.DB WaitCount greater than 0 even when there are enough idle connections in the pool

I am looking at the DBStats of a Go web application. The metrics are exported to Prometheus every 10 s by sqlstats.
In the application, MaxOpenConns is set to 100 and MaxIdleConns to 50. When I look at the metrics, I notice the number of open connections is stable around 50. This is expected, and means we are keeping 50 idle connections. However, the number of InUse connections hovers between 0 and 5 and is 0 most of the time. This is strange to me, because there is a constant inflow of traffic, and I don't expect the number of InUse connections to be 0.
I also notice that WaitCount and MaxIdleClosed are pretty large. WaitCount means there are no idle connections left and sql.DB cannot open more connections due to the MaxOpenConns limit. But from the stats above, there seems to be more than enough headroom for sql.DB to create more connections (OpenConnections is way below MaxOpenConnections). The large MaxIdleClosed also suggests sql.DB is making additional connections even when there are enough idle connections.
At the same time I am observing some driver: bad connection errors in the app; we are using MySQL.
Why does the app try to open more connections when there are enough idle connections around, and how should I tune the DB parameters to reduce the issue?
However, the number of InUse connections hovers between 0 and 5 and is 0 most of the time. This is strange to me, because there is a constant inflow of traffic, and I don't expect the number of InUse connections to be 0.
It is not strange. The number of InUse connections moves in short spikes. Since you sample the stats only every 10 s, you simply never catch a spike.
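To see this for yourself, you can poll sql.DB.Stats() far more often than the 10 s Prometheus scrape interval. A minimal sketch, assuming the go-sql-driver/mysql driver and a placeholder DSN:

package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN; replace with your own.
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/app")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Sampling every 100 ms instead of every 10 s makes the short
	// InUse bursts visible between otherwise-idle readings.
	for range time.Tick(100 * time.Millisecond) {
		s := db.Stats()
		fmt.Printf("open=%d inUse=%d idle=%d waitCount=%d maxIdleClosed=%d\n",
			s.OpenConnections, s.InUse, s.Idle, s.WaitCount, s.MaxIdleClosed)
	}
}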
Why does the app try to open more connections when there are enough idle connections around,
See https://github.com/go-sql-driver/mysql#important-settings
"db.SetMaxIdleConns() is recommended to be set same to (or greater than) db.SetMaxOpenConns(). When it is smaller than SetMaxOpenConns(), connections can be opened and closed very frequently than you expect."
and how should I tune the db param to reduce the issue?
Follow the recommendation of the go-sql-driver/mysql README.
Use db.SetConnMaxLifetime(), and set db.SetMaxIdleConns() to the same value as db.SetMaxOpenConns():
db.SetMaxOpenConns(100)
db.SetMaxIdleConns(100)                // same as MaxOpenConns, so returned connections are kept, not closed
db.SetConnMaxLifetime(time.Minute * 3) // recycle connections before the server or a middlebox drops them
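For context, here is a self-contained sketch of where these calls go, again assuming go-sql-driver/mysql and a placeholder DSN:

package main

import (
	"database/sql"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func openDB() (*sql.DB, error) {
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/app?parseTime=true")
	if err != nil {
		return nil, err
	}
	// Idle == open avoids the open/close churn reflected in MaxIdleClosed,
	// and a short ConnMaxLifetime retires connections before MySQL's
	// wait_timeout (or a proxy) can kill them server-side.
	db.SetMaxOpenConns(100)
	db.SetMaxIdleConns(100)
	db.SetConnMaxLifetime(3 * time.Minute)
	return db, nil
}

func main() {
	db, err := openDB()
	if err != nil {
		panic(err)
	}
	defer db.Close()
	// use db.Query / db.Exec as usual
}

A ConnMaxLifetime that is too long (or unset) is also a common source of the driver: bad connection errors you are seeing: the server silently closes an idle connection, and the pool only notices when it next tries to use it.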

MySQL - client timed out

I am using the C3P0 (0.9.5.2) connection pool to connect to a MySQL DB. I have set a default statement timeout of 1 second. I see that during high load, some connection requests time out (checkoutTimeout is 1 second) even though the max pool capacity was not reached. Analyzing the thread stack, I saw 'MySQL Cancellation timer' threads in a runnable state. Probably a burst of statement timeouts is making the DB unresponsive, so new connections are not created within 1 second.
Is there a way to minimize the cancellation-timer impact and to ensure the client does not time out while the max pool capacity has not been reached?
Even if the pool has not reached maxPoolSize, checkout attempts will time out if checkoutTimeout is set and new connections cannot be acquired within the timeout. checkoutTimeout is just that, a timeout, and it will enforce its time limit regardless of the cause of the delay.
If you want to prevent timeouts, you have to ensure connections can be made available within the time allotted. If something is making the database nonresponsive to connection requests, the most straightforward solution is obviously to resolve that. Other approaches might include setting a larger acquireIncrement (so that connections are more likely to be prefetched) or a larger minPoolSize (same effect).
Alternatively, you can choose a longer timeout (or set no timeout at all).
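C3P0 is a Java pool, but the same semantics are easy to see in Go's database/sql, which this page's other examples use: an acquisition deadline fires whenever a connection cannot be produced in time, pool headroom or not. A hypothetical sketch:

package main

import (
	"context"
	"database/sql"
	"fmt"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN; the pool below is nowhere near exhausted.
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/app")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	db.SetMaxOpenConns(20)

	// The 1 s deadline plays the role of C3P0's checkoutTimeout: if the
	// server is slow to hand out a connection, acquisition fails after
	// 1 s even though the pool has plenty of capacity left.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	conn, err := db.Conn(ctx)
	if err != nil {
		fmt.Println("checkout timed out:", err)
		return
	}
	defer conn.Close()
}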

MySQL connection loss during query

I am working on converting Unix time to readable time.
I need to insert a 6 GB .txt file into my database
(XAMPP v3.2.2, MySQL Workbench 5.2.34).
I have written the SQL query to convert the Unix time, but whenever I run it, MySQL Workbench crashes
(error: 2013. Lost connection to MySQL server during query). Why?
My SQL query: UPDATE database.database SET readable_time = FROM_UNIXTIME(unix_time);
Increasing net_read_timeout solves this problem.
From the MySQL documentation:
Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries. If you know that this is happening, you should try increasing net_read_timeout from its default of 30 seconds to 60 seconds or longer, sufficient for the data transfer to complete.
See the MySQL manual section on this error for more information.
Please check this post - Error Code: 2013. Lost connection to MySQL server during query
As you are talking about an insert, understand that Workbench loses the connection, but the query continues to execute on the server. That is, Workbench can no longer show you status changes for that query execution, but the query keeps running behind the scenes.
You might want to run SHOW PROCESSLIST to see whether the insert is still running or not.
However, while fetching data from the database, you might have to update your timeout settings.
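If you end up driving the same long-running UPDATE from code instead of Workbench, the client-side timeouts are the settings to raise. A minimal sketch, assuming go-sql-driver/mysql, whose readTimeout/writeTimeout DSN parameters control how long the client waits on the socket (DSN and table name are placeholders):

package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Generous client-side I/O timeouts so a statement that runs for many
	// minutes is not dropped by the client, the way Workbench drops it
	// when the server-side net_read_timeout is exceeded.
	dsn := "user:pass@tcp(127.0.0.1:3306)/mydb?readTimeout=10m&writeTimeout=10m"
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec("UPDATE mydb.mytable SET readable_time = FROM_UNIXTIME(unix_time)"); err != nil {
		log.Fatal(err)
	}
}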

MySQL lock with parallel updates on the same row (with Sidekiq)

I have a concurrency problem with access to my MySQL DB when using parallel workers with Sidekiq:
In my Sidekiq console output there is a freeze of about 50 seconds on average (after this, the job finishes: "INFO: done: 50.954 sec", and the others can begin). I have a locking problem, and I don't understand why, or how to resolve it.
Error in output: "WARN: Mysql::Error: Lock wait timeout exceeded; try restarting transaction: UPDATE actors SET x = ? WHERE actor_id = 1"
There are simultaneous writes to the same row in my DB, but why doesn't the locking system handle them by applying the updates one by one, in order of arrival? Why wait 50 seconds?
How can I resolve this problem?
Thanks.
For people who voted +1 on the question:
I solved my problem when I spotted the stupid mistake: I scheduled a job with a variable but never put the right value into it, so I scheduled the job with a 0-second delay, and each job then re-enqueued another job scheduled in 0 s, and so on.
That infinite loop inevitably caused a conflict on DB access. (The 50 seconds, incidentally, is MySQL's default innodb_lock_wait_timeout: each blocked UPDATE waited that long for the row lock before the error was raised.)
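The stall itself is easy to reproduce outside Sidekiq. A hypothetical Go sketch (DSN is a placeholder; the actors table is taken from the error message above) that holds a row lock in one transaction while a second attempts the same UPDATE:

package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/app")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Transaction 1 takes the row lock and deliberately holds it open.
	tx1, _ := db.Begin()
	tx1.Exec("UPDATE actors SET x = 1 WHERE actor_id = 1")

	// Transaction 2 blocks on the same row; after innodb_lock_wait_timeout
	// (50 s by default) it fails with "Lock wait timeout exceeded".
	tx2, _ := db.Begin()
	start := time.Now()
	_, err = tx2.Exec("UPDATE actors SET x = 2 WHERE actor_id = 1")
	fmt.Println(time.Since(start), err)

	tx2.Rollback()
	tx1.Rollback()
}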

Is it possible to specify time out value on an Execute SQL Task?

I have a stored procedure that was scheduled as a job, and a 'Timeout expired' error occurs when the scheduled job is executed. I am going to move this stored procedure into an SSIS package, either by calling it from an Execute SQL Task or by putting its script into a Script Task. What should I do to avoid the timeout-expired issue in the package? Is there a property related to the timeout?
Yes, there is a TimeOut property on the Execute SQL Task; it is right in the first section, named General (see screenshot #1). Ideally, though, you should work on fine-tuning the stored procedure.
Microsoft TechNet documentation states the following definition for the TimeOut property:
Specify the maximum number of seconds the task will run before timing out.
A value of 0 indicates an infinite time. The default is 0.
Note: Stored procedures do not time out if they emulate sleep functionality by
providing time for connections to be made and transactions to complete that is
greater than the number of seconds specified by TimeOut. However, stored
procedures that execute queries are always subject to the time restriction
specified by TimeOut.
Hope that helps.
Screenshot #1: Execute SQL Task TimeOut property
I was getting error 0xC020801C when running the SSIS job as a SQL Server Agent job pulling data from an Oracle DB into a SQL Server DB. Setting the timeout to 300 seconds did the trick. I also had to set Run64BitRuntime = False, but did not have to tick the 32-bit runtime option under the advanced options of the SQL Server Agent job step (after setting up the step to run the Integration Services package, under connection manager -> advanced).