I have a requirement to run a job continuously which includes a stored procedure. This stored procedure does a critical task: it processes a huge load of data as it comes in. As I understand it, SQL Server itself does not allow two or more instances of a job to run at the same time. So, my questions are:
Is there a way to run a SQL Server job continuously?
Do continuously running jobs hurt the server's performance?
There are continuous replication jobs; however, those are continuous because of an inline switch used in the command line and not due to the job being scheduled as continuous.
The only way to emulate a continuous job is to simply have it run often. There is an option under scheduling to run the job down to every second 24/7/365. With that said, you will need to be careful that the job isn't overrunning itself and that it is efficient enough to not cause issues with your server.
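If you go that route, the run-it-often schedule can also be created in T-SQL rather than through the GUI. A minimal sketch, assuming an existing Agent job with the placeholder name 'MyContinuousJob':

```sql
-- Minimal sketch: attach an "every 10 seconds" schedule to an existing
-- Agent job. 'MyContinuousJob' is a placeholder name.
USE msdb;
GO
EXEC dbo.sp_add_jobschedule
    @job_name             = N'MyContinuousJob',
    @name                 = N'Every10Seconds',
    @freq_type            = 4,       -- daily
    @freq_interval        = 1,       -- every day
    @freq_subday_type     = 2,       -- units of seconds
    @freq_subday_interval = 10,      -- every 10 seconds
    @active_start_time    = 000000;  -- from midnight
GO
```

Agent will not start a second instance of a job that is still running, so a tick that fires while the previous run is in progress is simply skipped; that is what makes the run-it-often approach workable.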
Whether it will affect performance depends on what the job does. If it only selects the current date/time (not a very useful thing to do, but an example), I would not expect an issue; however, if it runs complicated algorithms then it is almost certainly going to cause issues.
I would recommend running this on a test server before putting it into production.
Related
Scenario - you have hundreds of reports running on a slave machine. These reports are either scheduled by MySQL's event scheduler or are called via a Python/R or shell script. Apart from that, there are fifty-odd users connecting to the MySQL slave and running random queries. These people don't really know how to write good queries, and that's fair - they are not supposed to. So, every now and then (read: every day), you see some queries which are stuck because of read/write locks. How do you fix that?
What you do is not kill whatever is being written. Instead, you kill all the read queries. Now, that is also tricky, because if you kill all the read queries you will also kill OUTFILE queries, which are actually write queries (they just don't write to MySQL; they write to disk).
Why killing is necessary (I'm only speaking for MySQL, do not take this out of context)
I have got two words for you - Slave lag. We don't want that to happen, because if that happens, all users, reports, consumers suffer.
I have written the following to kill processes in MySQL based on three questions:
how long has the query been running?
who is running the query?
do you want to kill write/modify queries too?
What I have intentionally not done yet is maintain a history of the processes that have been killed. One should do that in order to analyse the kills and find out who is running all the bad queries. But there are other ways to find that out.
I have created a procedure for this. I haven't spent much time on it, so please suggest whether or not this is a good way to do it.
GitHub Gist
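The full procedure is in the gist; the core selection logic is roughly along these lines (a sketch for illustration, not the gist's actual code - @max_seconds, @target_user and @kill_writes are placeholder parameters matching the three questions above):

```sql
-- Rough sketch: build KILL statements for candidate threads from
-- information_schema.PROCESSLIST.
SELECT CONCAT('KILL ', id, ';') AS kill_stmt,
       user, time, info
FROM information_schema.PROCESSLIST
WHERE command = 'Query'
  AND id <> CONNECTION_ID()                     -- never kill ourselves
  AND time > @max_seconds                       -- how long has it been running?
  AND user = @target_user                       -- who is running it?
  AND (@kill_writes = 1                         -- kill write/modify queries too?
       OR (info LIKE 'SELECT%'
           AND info NOT LIKE '%INTO OUTFILE%')  -- OUTFILE "reads" actually write to disk
      );
```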
Switch to MariaDB. Versions 10.0 and 10.1 implement several limits and timeouts: https://mariadb.com/kb/en/library/query-limits-and-timeouts/
Then write an API between what the users write and actually hitting the database. In this layer, add the appropriate limitations.
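For the MariaDB side of that, the timeout piece can be as small as this (a sketch, assuming MariaDB 10.1 or later; the table name is a placeholder):

```sql
-- Cap how long statements may run. 0 means no limit (the default).
-- Server-wide cap of 30 seconds:
SET GLOBAL max_statement_time = 30;

-- Or cap a single ad-hoc query without touching the global setting:
SET STATEMENT max_statement_time = 30 FOR
    SELECT * FROM some_big_table;  -- placeholder table name
```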
I need to regularly run a long backend job with long MySQL queries, which will take several hours to complete. I set up the Delayed Job gem to schedule this job.
When this process is running:
Will this job slow down my Rails front-end server (i.e., will it take much longer to respond to a simple user request)?
Where does the heavy computation happen: in my Rails server, or in the MySQL server?
Will the MySQL server be occupied by my scheduled job so that no one else can access MySQL at the same time?
Thank you.
The answer to your question is: It depends
If your task is processor-intensive, it could slow down the Rails server. If you are concerned about the DJ workers impacting the front-end box, move them to another box with access to a shared DB. Your worker box needs the project set up, but it does not need to be the same box you are serving pages from.
This is completely dependent on how you wrote your task. Typically a Rails app does simple select / insert / update / delete, and the actual computation is done in Rails. But you can write select statements that involve complex joins or take advantage of functions in the DB, which offloads the computation of complex fields to the DB.
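For example, something like the following pushes an aggregation into MySQL instead of looping over rows in Ruby (a sketch; table and column names are placeholders):

```sql
-- Computed by MySQL, not in Rails: one row per customer with totals.
SELECT o.customer_id,
       SUM(o.total) AS lifetime_value,
       COUNT(*)     AS order_count
FROM orders AS o
JOIN customers AS c ON c.id = o.customer_id
GROUP BY o.customer_id;
```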
This depends on the number of connections your DB is configured to accept. Typically, on a production-level server, you wouldn't see an issue here from the size of your query, but you should take into account how many active connections there are and how many are permitted. Each Rails instance counts as a connection, as does each DJ worker.
In each case the actual performance is going to depend on several factors: how many connections you are creating, how much data you are transmitting between the worker and the DB, and where you are doing the work.
If the Rails server is on the same machine as the MySQL server, then there will be some impact, but your OS and MySQL together are pretty capable of minimizing the effects without much other intervention from you. Depending on how you're deployed, you can always use the 'nice' command to lower the priority of the delayed job, minimizing its impact on your site's responsiveness.
For all its greatness, it is very annoying that MySQL Workbench 5.2 freezes each time it submits a query, instead of allowing the query to run asynchronously.
It is not even possible to launch a second instance to do other tasks in the meantime.
Do you know if there is a setting somewhere to adjust this behaviour, or is it a "feature"?
Pretty sure it's a feature. You can run more than one query in a script. There are a lot of cases where you would want/need queries to run sequentially. I don't know of any query editor tools that allow for what you want.
If you're using PHP, you could fire off several AJAX requests to pages that each run one of the queries you need, but unless you are doing something like this often, it wouldn't be worth the time to set up.
I have an exe configured under Windows Scheduler to perform timely operations on a set of data.
The exe calls stored procs to retrieve data, performs some calculations, and updates the data back to a different database.
I would like to know, what are the pros and cons of using SSIS package over scheduled exe.
Do you mean the pros and cons of using SQL Server Agent Jobs for scheduling SSIS packages and command-shell executions? I don't really know the pros of Windows Scheduler, so I'll stick to listing the pros of SQL Server Agent Jobs.
If you are already using SQL Server Agent Jobs on your server, then running SSIS packages from the agent consolidates the places that you need to monitor to one location.
SQL Server Agent Jobs have built in logging and notification features. I don't know how Windows Scheduler performs in this area.
SQL Server Agent Jobs can run more than just SSIS packages. So you may want to run a T-SQL command as step 1, retry if it fails, eventually move to step 2 if step 1 succeeds, or stop the job and send an error if the step 1 condition is never met. This is really useful for ETL processes where you are trying to monitor another server for some condition before running your ETL.
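As a sketch of that pattern, a "wait for the precondition" step might be added like this (job, step, and table names are placeholders; @retry_interval is in minutes):

```sql
-- Sketch: an Agent job step that checks a precondition, retries a few
-- times, and only falls through to the next (ETL) step once it succeeds.
USE msdb;
GO
EXEC dbo.sp_add_jobstep
    @job_name          = N'Nightly ETL',
    @step_name         = N'Wait for source data',
    @subsystem         = N'TSQL',
    @command           = N'IF NOT EXISTS (SELECT 1 FROM SourceDB.dbo.LoadReady
                                           WHERE LoadDate = CAST(GETDATE() AS date))
                               RAISERROR(''Source data not ready'', 16, 1);',
    @retry_attempts    = 10,   -- try again on failure...
    @retry_interval    = 5,    -- ...every 5 minutes
    @on_success_action = 3,    -- go to the next step
    @on_fail_action    = 2;    -- quit the job reporting failure
GO
```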
SQL Server Agent Jobs are easy to report on since their data is stored in the msdb database. We have regularly scheduled subscriptions for SSRS reports that provide us with data about our jobs. This means I can get an email each morning before I come into the office that tells me if everything is going well or if there are any problems that need to be tackled ASAP.
SQL Server Agent Jobs are used by SSRS subscriptions for scheduling purposes. I commonly need to start SSRS reports by calling their job schedules, so I already have to work with SQL Server Agent Jobs.
SQL Server Agent Jobs can be chained together. A common scenario for my ETL is to have several jobs run on a schedule in the morning. Once all the jobs succeed, another job is called that triggers several SQL Server Agent Jobs. Some jobs run in parallel and some run serially.
SQL Server Agent Jobs are easy to script out and load into our source control system. This allows us to roll back to earlier versions of jobs if necessary. We've done this on a few occasions, particularly when someone deleted a job by accident.
On one occasion we found a situation where Windows Scheduler was able to do something we couldn't do with a SQL Server Agent Job. During the early days after a SAN migration we had some scripts for snapshotting and cloning drives that didn't work in a SQL Server Agent Job. So we used a Windows Scheduler task to run the code for a while. After about a month, we figured out what we were missing and were able to move the step back to the SQL Server Agent Job.
Regarding SSIS versus an exe making stored procedure calls:
If all you are doing is running stored procedures, then SSIS may not add much for you. Both approaches work, so it really comes down to the differences between what you get from a .exe approach and SSIS, as well as how many stored procedures are being called.
I prefer SSIS because we do so much on my team where we have to download data from other servers, import/export files, or do some crazy HTTPS posts. If we only had to run one set of processes and they were all stored procedure calls, then SSIS may have been overkill. For my environment, SSIS is the best tool for moving data because we move all kinds of types of data to and from the server. If you ever expect to move beyond running stored procedures, then it may make sense to adopt SSIS now.
If you are just running a few stored procedures, then you could get away with doing this from the SQL Server Agent Job without SSIS. You can even parallelize jobs by making a master job start several jobs via msdb.dbo.sp_start_job 'Job Name'.
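A sketch of that fan-out, with placeholder job names - sp_start_job returns as soon as the child job is requested, so these effectively run in parallel:

```sql
-- Sketch: a "master" job step that fans out to several child Agent jobs.
EXEC msdb.dbo.sp_start_job @job_name = N'Load Customers';
EXEC msdb.dbo.sp_start_job @job_name = N'Load Orders';
EXEC msdb.dbo.sp_start_job @job_name = N'Load Inventory';
```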
If you want to parallelize a lot of stored procedure calls, then SSIS will probably beat out chaining SQL Server Agent Job calls. Although chaining is possible in code, there's no visual surface and it is harder to understand complex chaining scenarios that are easy to implement in SSIS with sequence containers and precedence constraints.
From a code maintainability perspective, SSIS beats out any exe solution for my team since everyone on my team can understand SSIS and few of us can actually code outside of SSIS. If you are planning to transfer this to someone down the line, then you need to determine what is more maintainable for your environment. If you are building in an environment where your future replacement will be a .NET programmer and not a SQL DBA or Business Intelligence specialist, then SSIS may not be the appropriate code-base to pass on to a future programmer.
SSIS gives you out of the box logging. Although you can certainly implement logging in code, you probably need to wrap everything in try-catch blocks and figure out some strategy for centralizing logging between executables. With SSIS, you can centralize logging to a SQL Server table, log files in some centralized folder, or use another log provider. Personally, I always log to the database and I have SSRS reports setup to help make sense of the data. We usually troubleshoot individual job failures based on the SQL Server Agent Job history step details. Logging from SSIS is more about understanding long-term failure patterns or monitoring warnings that don't result in failures like removing data flow columns that are unused (early indicator for us of changes in the underlying source data structure) or performance metrics (although stored procedures also have a separate form of logging in our systems).
SSIS gives you a visual design surface. I mentioned this briefly before, but it is a point worth expanding on its own. BIDS is a decent design surface for understanding what's running in what order. You won't get this from writing do-while loops in code. Maybe you have some form of visualizer that I've never used, but my experience with coding stored procedure calls has always happened in a text editor, not in a visual design layer. SSIS makes it relatively easy to understand precedence and order of operations in the control flow, which is where you would be working if you are using Execute SQL Tasks.
The deployment story for SSIS is pretty decent. We use BIDS Helper (a free add-in for BIDS), so deploying changes to packages is a right click away on the Solution Explorer. We only have to deploy one package at a time. If you are writing a master executable that runs all the ETL, then you probably have to compile the code and deploy it when none of the ETL is running. SSIS packages are modular code containers, so if you have 50 packages on your server and you make a change in one package, then you only have to deploy the one changed package. If you setup your executable to run code from configuration files and don't have to recompile the whole application, then this may not be a major win.
Testing changes to an individual package is probably generally easier than testing changes in an application. Meaning, if you change one ETL process in one part of your code, you may have to regression test (or unit test) your entire application. If you change one SSIS package, you can generally test it by running it in BIDS and then deploying it when you are comfortable with the changes.
If you have to deploy all your changes through a release process and there are pre-release testing processes that you must pass, then an executable approach may be easier. I've never found an effective way to automatically unit test a SSIS package. I know there are frameworks and test harnesses for doing this, but I don't have any experience with them so I can't speak for the efficacy or ease of use. In all of my work with SSIS, I've always pushed the changes to our production server within minutes or seconds of writing the changes.
Let me know if you need me to elaborate on any points. Good luck!
If you have a dependency on Windows features - like logging, eventing, or access to Windows resources - go the Windows Scheduler/Windows Services route. If it is just DB-to-DB movement, or if you need some kind of heavy DB function usage, go the SSIS route.
I have an SSIS package that works fine. The package runs every night and takes about 4 hours to complete. I am a newbie to SSIS, so I want to see what my options are. I am not finding anything on the web about these two issues, so any advice is greatly appreciated.
1. What to do when I have an external issue such as a power failure/accidental restart? Is there a way to alert someone, or to have the package begin again on restart?
2. A couple of weeks ago there was a process that got hung and locked a table, which prevented the process from executing. What is the best way to ensure I have the proper access before starting, and to get that access if I don't? I am OK with killing the processes, etc.
Looking for best practice info. Thanks
For #1 - there is no inherent "restart" mechanism in SSIS, since to start with, there is no inherent "start" mechanism. You'll have to look at the process that you've got managing the scheduled execution of your packages, which I assume could be SQL Agent.
Given that, your options for determining if a SQL Agent job failed, and/or restarting that job are the same whether the contents of the job are SSIS packages or not. There are quite a few stored procedures for monitoring and querying job execution and results. You could also implement your own mechanism for recording job/package status.
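For example, msdb has procedures such as sp_help_jobhistory for this, or you can query the history tables directly. A sketch, with a placeholder job name:

```sql
-- Recent outcomes for a given job from msdb.
-- run_status: 0 = failed, 1 = succeeded, 2 = retry, 3 = canceled.
SELECT TOP (20)
       j.name,
       h.run_date,
       h.run_time,
       h.run_status,
       h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE j.name = N'Nightly ETL'
  AND h.step_id = 0            -- step_id 0 is the job-outcome row
ORDER BY h.instance_id DESC;
```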
SSIS does offer "checkpoints" to help you restart packages from certain points, but the general consensus on that feature is that it is limited in its applicability - your mileage may vary.
Personally, I always include a failure route in my job to email someone on failure of the job, and configure my jobs and packages to be idempotent - that is, they can be re-run without fear of improperly conducting the same operations twice. They either "reset" the environment (delete and reload), or they can detect exactly where they left off.
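A sketch of the "reset" flavour of idempotency - the load deletes exactly what it is about to reload, so a re-run after a failure cannot double-insert (table and column names are placeholders):

```sql
-- Delete-and-reload pattern: safe to re-run for the same load date.
BEGIN TRANSACTION;

DELETE FROM dbo.FactSales
WHERE LoadDate = CAST(GETDATE() AS date);

INSERT INTO dbo.FactSales (SaleId, Amount, LoadDate)
SELECT SaleId, Amount, CAST(GETDATE() AS date)
FROM staging.Sales;

COMMIT;
```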
Item #2 is a difficult question and depends greatly on your environment and scenario. You can use simple tasks like an Execute SQL Task to run "test" commands that are designed to fail if you lack sufficient privileges or if blocking locks exist. Or you may be able to inquire directly through SPs or other mechanisms to determine whether you need to take remedial action before you attempt to run the meat of your package.
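As a sketch of such a "preflight" command for an Execute SQL Task (the table name is a placeholder; querying sys.dm_tran_locks requires VIEW SERVER STATE):

```sql
-- Fail fast if the account lacks rights on the target table or if
-- something is currently holding locks on it.
IF HAS_PERMS_BY_NAME(N'dbo.StagingTable', N'OBJECT', N'INSERT') = 0
    RAISERROR('Missing INSERT permission on dbo.StagingTable', 16, 1);

IF EXISTS (
    SELECT 1
    FROM sys.dm_tran_locks
    WHERE resource_type = 'OBJECT'
      AND resource_database_id = DB_ID()
      AND resource_associated_entity_id = OBJECT_ID(N'dbo.StagingTable')
)
    RAISERROR('dbo.StagingTable is currently locked', 16, 1);
```

Either RAISERROR causes the Execute SQL Task to fail, and your "on failure" precedence constraints or event handlers can then take the remedial path.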
Using Precedence Constraints "on failure" can assist with that kind of logic. So can Event Handlers.