I have SQL Server 2008 R2 configured for log shipping. The backup, copy, and restore jobs all complete successfully, but the alert on both the primary and the monitor instance reports the databases as out of sync (error 14421).
All three jobs are scheduled every 15 minutes, and the alert threshold is set to three intervals (45 minutes).
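For reference, the thresholds I set and the last restore time can be double-checked on the monitor instance with the standard msdb monitor tables (a minimal sketch):
USE msdb;
GO
-- Compare the configured restore alert threshold (minutes) with the last restore
SELECT secondary_server,
       secondary_database,
       restore_threshold,
       last_restored_date,
       last_restored_latency
FROM dbo.log_shipping_monitor_secondary;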
I still get the error, though.
Please suggest what to do.
I would recommend carefully looking through the history for each of the log shipping jobs to see if you can find any errors. I have seen errors buried within the history messages even though the overall job shows as having completed successfully. Hopefully, you'll find some indication as to why it's getting out of sync.
For more information, you can also right-click your SQL Server instance name -> Reports -> Standard Reports -> Transaction Log Shipping Status
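If you'd rather query it, the monitor tables in msdb hold the same information that report shows. A minimal sketch (run it on the monitor instance, or on the primary/secondary if you are not using a separate monitor):
USE msdb;
GO
-- Overall backup/copy/restore status for each log-shipped database
-- (the same data the 14421 alert evaluates)
EXEC sp_help_log_shipping_monitor;
GO
-- Most recent errors logged by the backup, copy, and restore agents
SELECT TOP (50) log_time, database_name, source, [message]
FROM dbo.log_shipping_monitor_error_detail
ORDER BY log_time DESC;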
I am currently discovering the new SQL Server 2022 features and found this message in the error log
cleanup of stale db entries skipped because master db is not memory optimized
At first I thought this message was due to tempdb not being memory-optimized, but after enabling memory-optimized tempdb metadata (and a restart) I still get this message.
My friend Google doesn't provide me with an answer. Is this a future extension?
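For completeness, this is roughly what I ran to enable it and to verify it took effect after the restart:
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
GO
-- after the service restart this returns 1, so the feature itself is on
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS tempdb_metadata_memory_optimized;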
I am receiving the following error in SSIS "An OLE DB error has occurred. Error code: 0x80004005" for each of my data flow tasks.
When I set 'delay validation' to 'True' for all data flow tasks and execute my packages, the integration works okay.
However, the SQL Agent job doesn't run.
As far as I can tell, the reason for this is due to the 'to_update' temporary tables I have set up to act as a middle man. The Microsoft article below seems to back this up.
https://support.microsoft.com/en-us/topic/error-message-when-an-ssis-package-runs-that-is-scheduled-to-run-as-a-sql-server-agent-job-an-ole-db-error-has-occurred-error-code-0x80004005-6a687a1f-917a-d3ae-4d3a-44e7dae82988
As the article says my next step would be to 'change the permissions for the Temp directory of the SQL Server Agent Service startup account. Grant the Read permission and the Write permission to the SQL Server 2005 Agent proxy account for this directory.' however I honestly have no idea where I would do this (I'm new to the world of SSIS!)
If someone could point me in the right direction that would be much appreciated.
There are a few things to take into account. The delay validation might be a red herring and have nothing to do with the error, or the Agent might be calling an older version of the package that doesn't delay validation. So first of all, make sure the package with delay validation enabled is deployed correctly and is the one being run by SQL Agent.
Then, if the problem continues, it could be a permissions issue or the Temp directory running out of space; both are worth checking.
Finally, if it does come down to permissions, then whoever is in charge of maintaining the DB (I'm assuming there is a DBA) should check what account is running the package from SQL Agent and make sure it has the right permissions.
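A quick way to see what each step actually runs as is to query the Agent metadata directly. A rough sketch against the standard msdb tables (adjust the subsystem filter if your steps are not SSIS package steps):
USE msdb;
GO
-- Which account does each SSIS job step run under?
-- A NULL proxy_name means the step runs as the SQL Server Agent service account.
SELECT j.name AS job_name,
       s.step_id,
       s.step_name,
       s.subsystem,
       p.name AS proxy_name
FROM dbo.sysjobs AS j
JOIN dbo.sysjobsteps AS s ON s.job_id = j.job_id
LEFT JOIN dbo.sysproxies AS p ON p.proxy_id = s.proxy_id
WHERE s.subsystem = N'SSIS';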
Another thing to check:
Make sure the "Run as" setting on the job step is correct, and if it's not running in 32-bit mode, try that (or vice versa).
After spending some more time on it, the issue actually looked resource-related (disk space and memory on the server): memory was running at 100%, and the SQL Agent jobs were running but very slowly and never actually completing successfully. I ended several processes and staggered when each of the SQL jobs runs so they do not all trigger at once, and these are running successfully again.
For the past week, multiple SSIS packages running on SQL Server Agent that load data into Snowflake have started returning the following message randomly.
"Failed to acquire connection "snowflake". Connection may not be configured correctly or you may not have the right permissions on this connection."
We are seeing this message across multiple jobs. Each job loads multiple tables, and it's not happening on every call to Snowflake within the projects, just on one or two tasks in jobs that have hundreds.
We are using the 2.20.2 drivers from Snowflake
We ran the jobs while Wireshark was capturing network traffic, and the captures were reviewed by the network team. They didn't have much luck because the ACK messages were not being shown.
We also ran Process Monitor while the jobs ran and did not find anything that pointed to any issues.
We also dug through the logs from the Snowflake driver and found the calls right before and right after, but no messages for the task that failed. Since those logs bounce around between files, it's a bit hard to track sequential actions when multiple tasks in a job are running together.
We also installed SnowCD and ran it and it returned a full success message.
The user that runs the jobs on SQL Server Agent is an Admin on the server and has sysadmin rights on the SQL Server instance.
The warehouse the drivers connect to is size Large with a max of 3 clusters (it was at 1 when the issue started, but we upped it to 3 to see if that helped)
Jobs are running on Windows Server 2016 DataCenter in Azure
SQL Server instance is SQL Server 2016 (13.0.4604.0)
We cannot figure out why we are suddenly and randomly losing connections to Snowflake.
Some ideas to help get these packages working:
Add a retry to the tasks that are failing, so the package moves on to the next step only upon success:
https://www.mssqltips.com/sqlservertip/5625/how-to-retry-sql-server-integration-services-ssis-control-flow-tasks/
You can also combine the truncate and insert into one step using the INSERT OVERWRITE INTO command, which will let your package run quicker and leave one less task to fail:
https://docs.snowflake.net/manuals/sql-reference/sql/insert.html#insert-using-overwrite
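A rough sketch of what that looks like (the table and column names here are placeholders for your own objects):
-- Truncate-and-load in a single statement instead of two tasks
INSERT OVERWRITE INTO target_table (col1, col2, col3)
SELECT col1, col2, col3
FROM staging_table;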
Once the SSIS packages are consistently completing, you can analyze the logs at the point of failure to see if there is any pattern to help you identify the root cause.
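If the packages are deployed to the SSIS catalog, its built-in operational log is one place to start. A minimal sketch, assuming the default SSISDB catalog:
USE SSISDB;
GO
-- Most recent error messages raised by catalog executions
SELECT TOP (100)
       em.operation_id,
       em.message_time,
       em.package_name,
       em.message_source_name,
       em.message
FROM catalog.event_messages AS em
WHERE em.message_type = 120   -- 120 = error
ORDER BY em.message_time DESC;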
I realize that SQL Server 2016 RC1 and all of its RC1-refreshed accessories may be half-baked.
That said, let me ask this: In SSRS, what may be causing flaky connectivity and timeouts?
No default-installation timeout settings (wherever I found them...SSMS server connection, SSRS query/report setting, etc.) have been lowered. On the contrary, to troubleshoot this issue, they have been doubled or trebled. SQL-related services are all running, and firewall ports are open wide:
SQL Server - Firewall Settings - Inbound
TCP/IP
Port Description
==== ======================================================
80 SQL Server Reporting Services Web Services
443 SQL Server Reporting Services Mobile Reports via HTTPS
135 SQL Server Transact-SQL Debugger/RPC
1433 SQL Server Default Instance
1434 SQL Server Admin Connection
2382 SQL Server Browser
2383 SQL Server Analysis Services
4022 SQL Server Service Broker
UDP
Port Description
==== ======================================================
1434 SQL Server Browser Multicast Response
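For what it's worth, the timeout values mentioned above can also be inspected directly in the report server catalog. A minimal read-only sketch, assuming the catalog database kept its default name of ReportServer:
USE ReportServer;
GO
-- Timeout-related values stored by Reporting Services
SELECT Name, Value
FROM dbo.ConfigurationInfo
WHERE Name LIKE N'%Timeout%';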
Symptoms:
(1) Often, I cannot save a simple report. Even though the Status Bar shows a "Current report server http://XXX/ReportServer" connection, the error on save states that "http://XXX:80/ReportServer" (notice that it adds the ":80") is unknown. Restarting the SQL Server-related services seems to make a difference, but only for a short time.
(2) A simple, boring SELECT — that flies in SSMS — times out when used as the dataset for a report run via the SSRS Web portal or Report Builder. Even the prompt for ONE parameter (against a tiny lookup dataset) takes a couple of minutes to appear, while that same lookup logic (another boring, trivial SELECT) flies in SSMS. (Once the parameter is chosen, the aforementioned timeout is the result.)
Microsoft's useless, cryptic timeout messages appear as follows:
For Save or Save As:
"The wait operation timed out at
Microsoft.ReportingServices.Library.ReportingService2010Impl.CreateReport..."
Oh, good, some ASPy error trash, with no clues on how to fix the problem.
For Save or Save As:
"The report 'http://XXX:80/ReportServer/Test' contains a reference to a
data source that is not valid. Verify that the shared data sources and
models that are required for this report are deployed to the report server."
I went back into the respective property pages for the datasets and re-selected/refreshed them from the same report server where they have lain undisturbed and remain fully functional (tested via the Preview menu choice on the report server).
For Run:
"The operation has timed out. Details: The operation has timed out."
Gee, thanks, Captain Obvious.
For Run:
"Failed to preview report. An error occurred within the report server
database. This may be due to a connection failure, timeout, or low disk
condition within the database. Details: ...The wait operation timed out at
Microsoft.ReportingServices.Library.ReportingService2005Impl.SetReportDefinition..."
Well, there is plenty of disk space.
For Run:
"The report execution md0spy55neszejbw04zqwq45 has expired or cannot be found.
(rsExecutionNotFound)"
What are the best ways to keep SSRS's connection persistent so that I may save my reports without issue and run them without watching them crawl and then time out?
Last night we migrated to a brand new server and moved all our data from SQL 2000 to SQL 2008 R2.
Everything worked out fine except any jobs that have steps where a linked server is referenced are failing and we can't figure out why.
If I log in as 'sa' and browse the linked servers I can expand and browse the objects, etc. I can also run the jobs' code in a query window logged in as 'sa' with no problems.
However when I run the job I get the following:
Execute of job 'ARUpdate' failed. See the history log for details.
History log:
The job failed. The Job was invoked by User sa. The last step to run was step 1 (Step 1).
Executed as user NT AUTHORITY\SYSTEM. Login failed for user 'ANSAC_NT\ANSAC-SQL$'
[SQLSTATE 28000] (Error 18456). The step failed
Sql Severity: 14
Sql Message ID: 18456
Any ideas as to why this is happening? I assume it's something with the sa account?
This happens even if I recreate the job from scratch while logged in as sa, or with a newly created sysadmin account.
Note: we ported the logins over from 2000 to 2008 as well, if that helps.
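In case it helps with the diagnosis, the login mappings for the linked servers can be checked against the standard catalog views (a generic sketch, nothing specific to our setup):
USE master;
GO
-- How each linked server maps local logins to remote logins.
-- A NULL local_login row is the default mapping used by any login
-- without an explicit mapping (including the Agent service account).
SELECT s.name AS linked_server,
       lp.name AS local_login,
       ll.uses_self_credential,
       ll.remote_name
FROM sys.servers AS s
JOIN sys.linked_logins AS ll ON ll.server_id = s.server_id
LEFT JOIN sys.server_principals AS lp ON lp.principal_id = ll.local_principal_id
WHERE s.is_linked = 1;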
Thanks!