I have a SQL Server script and I want to run it automatically on the database every week.
Any help?
I tried a SQL Server Agent job, but I have a lot of databases on my server, so I would have to create a separate step for every database, and they would all run on the same day at the same time.
You were on the right track. In Management Studio:
(1) Find SQL Server Agent, right-click it, and select New Job.
(2) Give your job a name and description, then choose the Steps option on the left.
(3) In the Step options you can name your step, insert your script code, AND select which database you want the job to run against.
(4) Next, go to the Schedules option on the left and designate how often you want the job to run (pretty self-explanatory).
(5) The Alerts, Notifications, etc. options should also be used if you want to be informed of failure/success/etc.
That's pretty much it.
If you want one job with one step that performs the same action on every database (or perhaps a subset of databases), consider using the system procedure sp_MSforeachdb. I spelled out one way to use this in this prior answer.
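If it helps, here is a minimal sketch of that approach (sp_MSforeachdb is undocumented, and sp_updatestats is just a stand-in for whatever your weekly script does, so treat it as an illustration rather than a recipe). The ? placeholder is replaced with each database name in turn:

EXEC sp_MSforeachdb
    'IF ''?'' NOT IN (''master'', ''model'', ''msdb'', ''tempdb'')
     BEGIN
         USE [?];
         EXEC sp_updatestats;  -- your weekly script goes here
     END';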
I'm a Java dev who uses MySQL Workbench as a database client and IntelliJ IDEA as an IDE. Every day I run SQL queries against the database, from 5 up to 50 times a day.
Is there a convenient way to save and re-run frequently used queries in MySQL Workbench/IntelliJ IDEA so that I can:
avoid re-typing a full query that has already been used
smoothly access a list of queries I've already used (e.g. by auto-completion)
If there is no way to do this using MySQL Workbench / IDEA, could you please advise any good tools providing this functionality?
Thanks!
Create Stored Procedures, one per query (or sequence of queries). Give them short names (to avoid needing auto-completion).
For example, to find out how many rows are in table foo (SELECT COUNT(*) FROM foo;):
One-time setup:
DELIMITER //
CREATE PROCEDURE foo_ct()
BEGIN
SELECT COUNT(*) FROM foo;
END //
DELIMITER ;
Usage:
CALL foo_ct();
You can pass arguments in to make minor variations. Passing in a table name is somewhat complex, but numbers, dates, etc. are practical and probably easy.
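For instance, a hypothetical variant taking a date parameter (assuming foo has a created_at column) could look like this:

DELIMITER //
CREATE PROCEDURE foo_ct_since(IN since_date DATE)
BEGIN
    -- count only rows created on or after the given date
    SELECT COUNT(*) FROM foo WHERE created_at >= since_date;
END //
DELIMITER ;

CALL foo_ct_since('2020-01-01');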
If you have installed SQLyog for your MySQL, you can use the Favorites menu option, where you can save your query; with one click it automatically writes the saved query into the Query Editor.
The previous answers are correct - depending on the version of the Query Browser they are called either Favorites or Snippets - the problem being you can't create sub-folders to group them. And keeping tabs open is an option - but sometimes the browser 'dies' - and you're back to square one. So the obvious solution I came up with was to create a database table! I have a few 'metadata' fields for descriptions - the project a query is associated with, the problem the query solves, and the actual query.
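In case it's useful, here is a rough sketch of what such a table could look like (all names are made up):

CREATE TABLE query_library (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    project     VARCHAR(100),
    description VARCHAR(255),
    query_text  TEXT NOT NULL
);

-- find a saved query by keyword, then copy it into the editor
SELECT query_text FROM query_library WHERE description LIKE '%monthly totals%';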
You could keep your query library in a SQL file and load that when WB opens (it is automatically reopened when you restart WB if the file was open on last close). When you want to run a specific query, place the caret in its text and press Ctrl+Enter (Cmd+Enter on Mac) to run only that query. The organization of that SQL file is totally up to you. You have more freedom than any "favorites" solution can give you. You can even have more than one file, with grouped statements.
Additionally, MySQL Workbench has a query history (see the Output tab), which is saved to disk, so you can return to a query even months after you wrote it.
I'm really new to SSIS and I would like to automate some of my workflow. However, most of my tasks require that other jobs/packages in SQL Server Agent have already been run for the day/week by other team members. My question is: how can I direct my workflow based on the status of these other packages? The Control Flow doesn't have a Conditional Split tool, and I can't kick off a separate job/package from the Data Flow. I can easily write a SQL statement to determine whether any or all of the related packages have been run, and even have it return a boolean value if needed, but I'm at a loss for how to make this direct the workflow to either proceed to the next step or fail the entire package if any of the others haven't been run (the desired result).
I really appreciate any help in this. Thank you!
In SSIS:
You can set conditions in the Control Flow by double-clicking on the arrow that links one task to another. If you set up an Execute SQL Task to run a query to check whether a package has run and then return a result into a variable, you can then set up a condition to check that variable, and only continue to the next task if it is (or isn't) a certain value.
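As a rough illustration, the Execute SQL Task might run something along these lines against msdb (the job name here is hypothetical), returning a value you can store in an SSIS variable via the Result Set:

SELECT CASE WHEN EXISTS (
           SELECT 1
           FROM   msdb.dbo.sysjobs j
           JOIN   msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
           WHERE  j.name = 'Nightly Load Package'   -- hypothetical job name
             AND  h.step_id = 0                     -- the job outcome row
             AND  h.run_status = 1                  -- 1 = succeeded
             AND  h.run_date = CONVERT(INT, CONVERT(CHAR(8), GETDATE(), 112))  -- today
       ) THEN 1 ELSE 0 END AS HasRunToday;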
For example, you might set up a precedence constraint so that the next step only runs if the previous one was successful and a particular variable does not have a value of C.
In SQL Server Agent:
You could also - if you want to avoid running an entire package - set something up in the SQL Server Agent job instead. You could add an initial step which checks on the status of a package and then quits the job, reporting success. This does mean that if the step fails for another reason (say the database it's trying to run the SELECT statement on is down), it will still quit and report success, so be careful - you might want to set up some other specific steps first which check for such situations.
Here's an article I used a while back when I wanted to understand how to set up a SQL Server Agent job in this way:
http://sqlactions.com/2012/08/05/how-to-create-custom-schedule-for-sql-server-agent-job/
So my question is: what runs a subscribed report in SSRS? I mean, when I subscribe to a report and give it a desired time when it should run and send me the file, something does this, right? So I want to know what runs it. Is it a procedure or a SQL function? The reason I want to know this is that I want to run a SQL update each time before this scheduled report starts.
I could just create a procedure that does the update I want before the scheduled time, but it would still be more practical to integrate it within the job itself.
Short answer: these subscriptions are run as database jobs through the SQL Server Agent.
They are created with GUID-type names.
The one job step will have a command like:
exec [ReportServer].dbo.AddEvent @EventType='SharedSchedule', @EventData='8df4ff30-97d3-41f7-b3ef-9ce48bfdfbfa'
You can trace these jobs/GUIDs back to the subscription and report through the ReportServer database, using the Subscriptions table and its MatchData column (which matches the job GUID) and the Catalog table, which includes the report data (i.e. linked through the Subscriptions.Report_OID column).
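A hedged sketch of that lookup (the ReportServer schema varies a little between versions; the GUID is the one from the job step above):

SELECT c.Name        AS ReportName,
       s.Description AS SubscriptionDescription
FROM   ReportServer.dbo.Subscriptions s
JOIN   ReportServer.dbo.[Catalog] c ON c.ItemID = s.Report_OID
WHERE  s.MatchData LIKE '%8df4ff30-97d3-41f7-b3ef-9ce48bfdfbfa%';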
You can use this information to check what's scheduled and, based on this, schedule your update appropriately.
I haven't tried it myself, but one option could even be to hook into the existing database jobs. I would approach this with caution, though; I can't see any issues, but it's probably best not to update system-created jobs like these.
I would like to ask a question for connoisseurs of SQL (MySQL, to be specific).
I have a table of reservation schedules. When a customer makes a reservation, there is a time limit for the client to use my service. Once that time is up, the reservation he made has to leave the reservations table.
Once the time limit of use is reached, is there some method (a trigger, I believe) that automatically erases the record of this reservation from the table?
If so, can someone give me some idea of how to start my search for it? Some help in the form of more advanced lines of code is also totally welcome.
There is also the possibility that this can only be implemented server-side (PHP, ASP, ...), which I don't believe is true, because SQL is a very complete language (to my knowledge).
Edit 1: The problem is that I believe this is a task for the DBMS, so I wanted to leave this responsibility to MySQL. The problem is: how?
A trigger fires either before or after an insert, update or delete event (at least in MySQL, according to the docs).
What you want is some sort of scheduled job, either through your application (be it PHP, ASP.NET, etc.) or a cron job that calls some sort of SQL script.
So to answer: it can't be done purely with triggers.
You can use SQL jobs, but if the removal logic is too complex to manage with queries, I suggest you use a PHP script that does all that work for you.
Just write down the data check/remove logic in PHP and set up a simple cron operation for it.
The advantage of this solution is that you can access your scripts/classes/DB providers, save time, and log all the operations separately (instead of logging to the MySQL logs), no matter what scripting language you are relying on.
If you have full control of your server, the scheduled operation will look like this (if you want to check your DB entries every day at 00:01):
cat /etc/cron.d/php5
1 0 * * * root php /path/to/your/script.php >> /path/to/your_script.log
...otherwise you will have to check the control panel of your hosting account and figure out how to manage scheduled tasks there.
You can add one more column to your table to hold the expiration date. Then, on your SQL server, you can create a job that erases all records whose expiration date is earlier than the current date.
CREATE EVENT db_name
ON SCHEDULE EVERY 10 SECOND
DO
DELETE FROM myschema.mytable WHERE expiration_date < NOW();
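Two details this relies on (the table and column names above are just placeholders): the expiration column has to exist, and MySQL's event scheduler has to be switched on.

ALTER TABLE myschema.mytable ADD COLUMN expiration_date DATETIME;
SET GLOBAL event_scheduler = ON;  -- events never fire while the scheduler is off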
I hope that will help.
I am very new to this and a good friend is in a bind. I am at my wits' end. I have used GUIs like Navicat and SQLyog to do this, but only manually.
His band info data (schedules and whatnot) is in a MySQL database on a server (admin server).
I am putting together a basic site for him written in Perl that grabs data from a database that resides on my server (public server) and displays schedule info, previous gig newsletters and some fan interaction.
He uses an administrative interface, which he likes and desires to keep, to manage the data on the admin server.
The admin server db has a bunch of tables and even table data the public db does not need.
So, I created tables on the public side that only contain relevant data.
I basically used a GUI to export the data, then insert it into the public side whenever he made updates to the admin DB (copy and paste).
(FYI, I am using the DBI module to access the data in/via my public-server Perl script.)
I could access the admin server directly to grab only the data I need, but the whole purpose of this is to "mirror" the data, not access the admin server on every query. Also, some tables are THOUSANDS of rows, and parsing every row in a loop seemed too "bulky" to me. There is, however, a "time" column which could be used for comparison.
I cannot "sync" due to the fact that the structures are different; I only need the relevant table data from just three tables.
SO...... I desire to automate!
I read that "copy" was a fast way, but my findings on how to implement it were too advanced for my level.
I do not have the luxury of placing a script on the admin server to notify me when there is an update.
1- I would like to set up a script to check a table to see if a row was updated or added in the admin server's db.
I would then update or insert the new or changed data into the public server's db.
This "check" could be set up in a cron job, I guess, or triggered when a specific page loads on the public side (the same subroutine called by the cron, I would assume).
This data does not need to be "real time", but if he updates something it would be nice to have it appear as quickly as possible.
I have done much reading, module research and experimenting, but here I am again at Stack Overflow, where I always get great advice and examples.
Much of the terminology is still quite over my head so verbose examples with explanations really help me learn quicker.
Thanks in advance.
The two terms you are looking for are either "replication" or "ETL".
First, the replication approach.
Let's assume your admin server has tables T1, T2, T3 and your public server has tables TP1, TP2.
So, what you want to do (since you have different table structures, as you said) is:
Take the tables from the public server and create exact copies of those tables on the admin server (TP1 and TP2).
Create a trigger on the admin server's original tables to populate the data from T1/T2/T3 into the admin server's copies of TP1/TP2 (a sketch follows this list).
You will also need to do an initial data population from T1/T2/T3 into the admin server's copies of TP1/TP2. Duh.
Set up the "replication" from the admin server's TP1/TP2 to the public server's TP1/TP2.
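Here is a minimal sketch of what one such trigger could look like for step 2, assuming T1 feeds TP1 and using made-up column names (you would need similar triggers for UPDATE and DELETE, and for the other tables):

DELIMITER //
CREATE TRIGGER t1_to_tp1_ai
AFTER INSERT ON T1
FOR EACH ROW
BEGIN
    -- copy only the columns the public site needs
    INSERT INTO TP1 (id, gig_date, venue)
    VALUES (NEW.id, NEW.gig_date, NEW.venue);
END //
DELIMITER ;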
A different approach is to write a program (such programs are called ETL - Extract-Transform-Load) which will extract the data from T1/T2/T3 on the admin server (the "E" part of "ETL"), massage the data into a format suitable for loading into the TP1/TP2 tables (the "T" part of "ETL"), and transfer those files (via ftp/scp/whatnot) to the public server, where the second half of the program (the "L" part) will load the files into the TP1/TP2 tables. Both halves of the program would be launched by cron or your scheduler of choice.
There's an article with a very good example of how to start building Perl/MySQL ETL: http://oreilly.com/pub/a/databases/2007/04/12/building-a-data-warehouse-with-mysql-and-perl.html?page=2
If you prefer not to build your own, here's a list of open source ETL systems, never used any of them so no opinions on their usability/quality: http://www.manageability.org/blog/stuff/open-source-etl
I think you've misunderstood ETL as a problem domain, which is complicated, versus ETL as a one-off solution, which is often not much harder than writing a report. Unless I've totally misunderstood your problem, you don't need a general ETL solution, you need a one-off solution that works on a handful of tables and a few thousand rows. ETL and Schema mapping sound scarier than they are for a single job. (The generalization, scaling, change-management, and OLTP-to-OLAP support of ETL are where it gets especially difficult.) If you can use Perl to write a report out of a SQL database, you probably know enough to handle the ETL involved here.
1- I would like to set up a script to check a table to see if a row was updated or added on the admin servers db. I would then desire to update or insert the new or changed data to the public servers db.
If every table you need to pull from has an update timestamp column, then your cron job includes some SELECT statements with WHERE clauses based on the last time the cron job ran to get only the updates. Tables without an update timestamp will probably need a full dump.
I'd use a one-to-one table mapping unless normalization was required... just simpler, in my opinion. Why complicate it with "big" schema changes if you don't have to?
some tables are THOUSANDS of rows and parsing every row in a loop seemed too "bulky" to me.
Limit your queries to only the columns you need, and (as long as there are no BLOBs or exceptionally big columns in what you need) a few thousand rows should not be a problem via DBI with one of the fetchall methods (e.g. fetchall_arrayref). Loop all you want locally; just make as few trips to the remote database as possible.
If a row has a newer date, update it. I will also have to check for new rows for insertion.
Each table needs one SELECT ... WHERE updated_timestamp_columnname > last_cron_run_timestamp. That result set will contain all rows with newer timestamps, which includes newly inserted rows (if the timestamp column behaves as I'd expect). For updating your local database, check out MySQL's ON DUPLICATE KEY UPDATE syntax... this will let you do it in one step.
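To make that concrete, here is a hedged sketch with made-up table and column names: the cron script runs the first statement against the admin server to pull only changed rows, then replays each fetched row against the public server with the second statement, which inserts new rows and updates existing ones in one step:

-- on the admin server: grab everything changed since the last run
SELECT gig_id, gig_date, venue, updated_at
FROM   gigs
WHERE  updated_at > '2014-06-01 00:00:00';   -- last_cron_run_timestamp

-- on the public server: upsert one fetched row
-- (assumes gig_id is the primary key of public_gigs)
INSERT INTO public_gigs (gig_id, gig_date, venue, updated_at)
VALUES (42, '2014-07-04', 'The Corner Bar', '2014-06-15 20:00:00')
ON DUPLICATE KEY UPDATE
    gig_date   = VALUES(gig_date),
    venue      = VALUES(venue),
    updated_at = VALUES(updated_at);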
... how to implement were too advanced for my level ...
Yes, I have actually done this already but, I have to manually update...
Some questions to help us understand your level... Are you hitting the database from the mysql command-line client or from a GUI? Have you gotten to the point where you've wrapped your SQL queries in Perl and DBI yet?
If the two databases have different schemas, you'll need an ETL solution to map from one schema to the other.
If the schemas are the same, all you have to do is replicate the data from one to the other.
Why not just create a structure on the 'slave' server identical to that on the master server? Then create a small table that keeps track of the last timestamp or ID for the updated tables (a sketch of such a table follows below).
Then select from the master all rows changed since the last timestamp (or with a greater ID), and insert them into the matching table on the slave server.
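A possible shape for that small tracking table (names are illustrative):

CREATE TABLE sync_state (
    table_name  VARCHAR(64) NOT NULL PRIMARY KEY,
    last_synced DATETIME    NOT NULL
);

-- after each successful copy, move the marker forward
UPDATE sync_state SET last_synced = NOW() WHERE table_name = 'gigs';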
You will need to be careful with updated rows. If a row on the master is updated but the timestamp doesn't change, how will you tell which rows to fetch? If that's not an issue, the process is quite simple.
If it is an issue, then you need to be more sophisticated, but without knowing the data structure and update mechanism it's a wild goose chase to give pointers on it.
The script could be called by cron every so often to update the changes.
If the database structures must be different on the two servers, then a simple translation step may need to be added, but most of the time that can be done within the SQL SELECT statement, with maybe a join or two.