My ReportServer execution logs are deleted after 60 days. I want to deactivate that.
With this command:
UPDATE ConfigurationInfo SET Value = '90'
WHERE Name = 'ExecutionLogDaysKept'
My question is: is this enough, or do I really need to execute the following command as well? If yes, why?
EXEC ReportServer.dbo.ExpireExecutionLogEntries
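For reference, a quick way to check the current retention setting before and after the change (this assumes the default ReportServer database name used above):
SELECT Name, Value
FROM ReportServer.dbo.ConfigurationInfo
WHERE Name = 'ExecutionLogDaysKept';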
I have a very long SQL script with roughly 250,000 updates.
I run it by piping it to the mysql client on Linux. Three times now I have hit errors in the SQL caused by the way the script was generated.
cat updates.sql | mysql -h kkwea1sprageyxq.poopopakte0b2.pu-west-1.rds.amazonaws.com \
-u mama_ --disable-reconnect --max-allowed-packet=536M \
--default-character-set=cp1250 --skip-column-names -D mydb > update.log
The mysql client seems to hang when there is an error, often for hours, before erroring out itself.
I could set it to ignore the errors, but I would like to know whether I can add a flag to the mysql client command line to make it fail faster after an SQL error.
I can see several potentially interesting command-line flags for mysql, but I find their descriptions ambiguous.
My script runs with auto-commit mode OFF and a COMMIT statement after every 25 UPDATE commands. I have a hunch that it errored much more quickly when auto-commit mode was ON, which is obviously desirable if there's an error.
Here's an example of the update statement in case it helps:
UPDATE data SET value = 0.9234 WHERE fId = 47616 AND modDate = '2018-09-24' AND valueDate = '2007-09-01' AND last_updated < '2018-10-01';
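For illustration, a minimal sketch of the batching pattern described above, with auto-commit off and a COMMIT after every 25 updates (the UPDATE itself is just the example statement repeated):
-- batching pattern used by the generated script (sketch)
SET autocommit = 0;
UPDATE data SET value = 0.9234 WHERE fId = 47616 AND modDate = '2018-09-24' AND valueDate = '2007-09-01' AND last_updated < '2018-10-01';
-- ... 24 more UPDATE statements ...
COMMIT;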
We have 8 Phusion Passenger instances with 20 connections each, which should be 160 connections at most. Today our MySQL connections crossed 300 and our server stopped responding.
What would happen if a thread dies unnaturally? How do the DB connections associated with it get cleaned up?
How do I debug this type of scenario?
What would happen if a thread dies unnaturally?
If your statements run inside a transaction, then all statements in an unsuccessful transaction will be rolled back. If you execute your statements individually, whatever has already executed is saved in the database, the unsuccessful rest is not, and the data can end up inconsistent.
How do the DB connections associated with it get cleaned up?
My assumption is that your application is opening more connections than it needs and not closing them properly. If you check the connections in MySQL Administrator you will see many connections in Sleep state; you can kill those sleeping connections to clean them up.
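As a sketch, sleeping connections can be listed and killed like this (235798 is just an example ID, reused from the steps below):
-- list connections sitting idle in Sleep state, longest-idle first
SELECT id, user, host, db, time
FROM information_schema.PROCESSLIST
WHERE command = 'Sleep'
ORDER BY time DESC;
-- then kill a specific one by its ID
KILL 235798;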
How do I debug this type of scenario?
Step 1: Enable the general log on your server.
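A sketch of how to turn it on at runtime (assuming MySQL 5.1 or later; the log file path is just an example matching the one used below):
SET GLOBAL general_log_file = '/var/log/mysqldquery.log';
SET GLOBAL general_log = 'ON';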
Step 2: Execute the command below in any GUI tool such as SQLyog:
show full processlist;
Step 3: Take the first sleeping process from the output above.
Step 4: Find/grep that process ID (suppose it is 235798) in your general log. You can use the command below.
$ cat /var/log/mysqldquery.log | grep 235798
Note: the file name and path of your general log file may differ.
The command above will show you a few lines. Check whether the connection is closed at the end; the last line should show a "Quit" entry. Check a few of the sleeping processes this way and you can judge which kind of statements, and which module, are opening extra connections without closing them, and take action accordingly.
I tried to clean up my expired Django sessions using ./manage.py cleanup, and after hitting Enter it seems to do something for a few seconds and then all it returns is 'killed'.
I also tried running the mysql shell, going to that table, and doing 'select * from django_sessions;', and I get kicked out of the shell back to bash with the same message: 'killed'.
What is wrong here? How can I debug that?
It seems like something kills long-running commands. This is a common situation on shared hosting. If you are not the owner/administrator of this server, the answer should come from the actual owner/administrator.
Maybe these answers will help you: Who "Killed" my process and why?
I have a problem with a hang using xp_cmdshell.
The executable is called, performs its work, and exits. It is not hanging because of a UI prompt in the exe; the exe is not hanging at all. The exe disappears from the process list in Task Manager, and internal logging from the exe confirms that it executed the very last line of its main function.
However, the call to xp_cmdshell does NOT return control in SQL. It hangs on that line (it is the last line of the batch). Killing the process is ineffective; it actually requires a restart of SQL Server to get rid of the hung process (ugh).
The hang only happens the first time it is run. Subsequent calls to the procedure with identical parameters work and exit correctly while the first one remains hung. Once SQL Server is restarted, the first call will hang again.
If it makes any difference, I am trying to receive the return value from the exe; my SQL procedure ends with:
exec @i = xp_cmdshell @cmd;
return @i;
Activity Monitor reports the process as stuck on a wait type of PREEMPTIVE_OS_PROCESSOPS (what the other developer saw) or PREEMPTIVE_OS_PIPEOPS (what I'm seeing in my current testing).
Any ideas?
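For context, here is a self-contained sketch of the pattern in question; the procedure and parameter names are hypothetical, and only the last two lines mirror the actual code above:
-- hypothetical wrapper showing where the hang occurs
CREATE PROCEDURE dbo.RunExternalTool
    @cmd nvarchar(4000)
AS
BEGIN
    DECLARE @i int;
    EXEC @i = xp_cmdshell @cmd;   -- hangs here on the first call after a restart
    RETURN @i;
END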
I just came across this situation myself, where I had run an invalid command via xp_cmdshell.
I managed to kill it without restarting SQL Server; what I did was identify the process that ran the command and kill it from Task Manager.
Assuming your SQL Server is running on Windows Server 2008 or later:
In Task Manager, on the Processes tab, I enabled the column that shows the Command Line of each process (View -> Select Columns...).
If you are unsure what command you ran via xp_cmdshell, DBCC INPUTBUFFER(SPID) should give you a clue.
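For example, a sketch of how one might find the stuck session and then inspect its last batch (53 is a placeholder SPID):
-- find sessions stuck on the wait types mentioned in the question
SELECT session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE wait_type IN ('PREEMPTIVE_OS_PROCESSOPS', 'PREEMPTIVE_OS_PIPEOPS');
-- then look at the last batch that session submitted (53 is a placeholder SPID)
DBCC INPUTBUFFER(53);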
We had the same issue with SQL Server 2008, also with calls involving xp_cmdshell and BCP. Killing the SQL process ID didn't help; it would just stay stuck in "KILLED/ROLLBACK" status.
The only way to kill it was to kill the bcp.exe process in Windows Task Manager.
In the end we traced the issue to incorrect SQL in the stored procedure that was calling xp_cmdshell. It was mistakenly opening multiple transactions in a loop and not closing them. After the BEGIN/COMMIT TRAN issues were fixed, PREEMPTIVE_OS_PROCESSOPS never came back.
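A minimal sketch of the kind of bug described, with made-up names: a BEGIN TRAN inside a loop with no matching COMMIT leaves open transactions piling up.
-- buggy pattern: opens a transaction on every iteration but never commits
DECLARE @n int = 0;
WHILE @n < 5
BEGIN
    BEGIN TRAN;                      -- @@TRANCOUNT climbs on every iteration
    EXEC xp_cmdshell 'whoami';       -- placeholder for the real BCP call
    SET @n = @n + 1;
END
SELECT @@TRANCOUNT AS open_transactions;   -- 5 here; it should be 0
-- fix: pair each BEGIN TRAN with a COMMIT TRAN inside the loop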
We actually did eventually figure out the problem here. The app being called was used to automatically dump some documents to a printer when certain conditions happened.
It turns out that a particular print driver popped up a weird little window in the notification tray on each print job. So it was hanging because of a UI window popping up after all, but our app was exiting properly because it wasn't our window; it was a window triggered by the print driver.
That driver included an option to turn off that display window. Our problem went away when that option was set.
Recently I have noticed that MySQL connections are timing out; increasing wait_timeout has helped, but it still happens.
We have also enabled mysqli.reconnect in an attempt to catch the issue and allow the script to continue running. However, I can't find anywhere whether the SQL that ran and failed due to a timeout would automatically be re-run on reconnection, as I would hope. Any ideas?
Reading the documentation, it seems that you need to use mysqli::ping() to automatically reconnect. If you run ping() before any query, the reconnect will happen at that time.
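As a side note on the wait_timeout mentioned in the question, here is a quick sketch of inspecting and raising the server-side idle timeouts (28800 seconds is just an example value):
-- check the current idle-connection timeouts
SHOW VARIABLES LIKE '%timeout%';
-- raise them for new connections (example: 8 hours)
SET GLOBAL wait_timeout = 28800;
SET GLOBAL interactive_timeout = 28800;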