I have a stored procedure that simply runs a series of UPDATE statements against a CRM 2011 SQL Server database. The goal is to have it run every 30 minutes via a SQL Server Agent job. The stored procedure does not expect any parameters.
I create the job and add a step that calls the T-SQL statement "EXEC mystoredprocname". I right-click, choose "Start Job at this Step", and it completes successfully. However, none of the updates are reflected in the database.
If I run "EXEC mystoredprocname" manually in a query window, it executes fine and the database is updated as expected.
This seems like something that should be incredibly simple, so I am not sure where the breakdown in my process is.
Since you mention in the comments that your stored procedure uses a filtered view, I'm fairly willing to wager that the job is not running as a user who authenticates via Windows Authentication and who also has the correct CRM permissions. As has often been noted, filtered views enforce CRM's Windows-based security model.
So I have three suggestions:
Double-check that the job is running under the Windows account of a CRM user who has the correct read permissions.
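One quick way to check is to look at who owns the job in msdb. For T-SQL job steps, a non-sysadmin owner is the account the step runs as, while jobs owned by a sysadmin run their T-SQL steps as the SQL Server Agent service account. A sketch (the job name is a placeholder):

```sql
-- Check which account owns the Agent job ('MyCrmUpdateJob' is a placeholder name).
-- For T-SQL steps, a non-sysadmin owner becomes the execution context;
-- jobs owned by a sysadmin run T-SQL steps as the Agent service account.
SELECT j.name                   AS job_name,
       SUSER_SNAME(j.owner_sid) AS job_owner
FROM   msdb.dbo.sysjobs AS j
WHERE  j.name = N'MyCrmUpdateJob';
```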
Since you're committed to updating the tables directly, the only reason you'd want to use a filtered view is that it handles retrieving the string labels for OptionSets for you. You can instead query the StringMap table directly and reference the regular views, which you don't need to be a CRM user to access. You'll notice a speed improvement as well, since filtered views are slowed down by their security checks.
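For example, an OptionSet label can be resolved from the base table plus StringMap roughly like this (AccountBase, accountcategorycode, object type code 1, and LangId 1033 are illustrative values only; substitute your own entity, attribute, and language):

```sql
-- Rough sketch of resolving an OptionSet label without the filtered view.
-- Entity, attribute, object type code, and language are example values.
SELECT a.Name,
       a.AccountCategoryCode,
       sm.Value AS AccountCategoryLabel
FROM   dbo.AccountBase AS a
LEFT JOIN dbo.StringMap AS sm
       ON sm.AttributeName  = 'accountcategorycode'
      AND sm.AttributeValue = a.AccountCategoryCode
      AND sm.ObjectTypeCode = 1      -- account
      AND sm.LangId         = 1033;  -- English
```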
If you're not committed to updating the tables directly, why not rewrite your stored procedure as a small scheduled app that does the updating every 30 minutes? Unless you have a massive delta, this should be the preferred approach. You gain the built-in validation of the CRM web service, and though you lose the benefits of a set-based approach, I think the pros of working with the third-party system's supported interface outweigh the cons of potential hacks and breakage. If you are not a .NET developer (and even if you are), the CRM SDK has many examples that can help you get started.
Below are some other questions that relate to my points above and may help you.
How to get option set values from sql server in an application outside crm
Schedule workflows via external tool
Scheduling tasks in Microsoft CRM 2011
How to save a record and immediately use its GUID
I'm trying to make a live, reactive control panel: when you push a button on the web control panel, a value (true or false) is written to the SQL database (managed through phpMyAdmin), and when that data changes the database should trigger a script on the Raspberry Pi that turns the light on.
I know how to write data to the SQL database and how to control a lamp with a Raspberry Pi, but I don't know how to trigger or execute something when data in the SQL database gets updated.
It needs to be live, reacting within about 20 ms. Can anyone help me with this?
The SQL database (MySQL) runs on Ubuntu and is administered through phpMyAdmin.
Greets,
Jules
Schematic: [image: DataUpdateGraphical]
It's not a good idea to use a trigger in MySQL to activate any external process. The reason is that the trigger fires when the INSERT/UPDATE/DELETE executes, not when the transaction commits. So if the external process receives the event, it may immediately go query the database to get other details about that data change, and find it cannot see the uncommitted data.
Instead, I recommend whatever app is writing to the database should be responsible for creating the notification. Only then can the app wait until after the transaction is confirmed to be committed.
So your PHP code that handles the button press would insert/update some data in the database, check that the SQL completed without errors (always check the result of executing an SQL statement), and confirm the transaction committed.
Then the same PHP code subsequently calls your script, or posts an event to a message queue that the script is waiting on, or something like that.
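As a minimal sketch of the database side of that flow (the lamp_state table and its columns are hypothetical), the handler would run something like this, check for errors, and only notify the Pi after the COMMIT succeeds:

```sql
-- Hypothetical table holding the desired lamp state.
START TRANSACTION;

UPDATE lamp_state
SET    is_on      = TRUE,
       updated_at = NOW()
WHERE  lamp_id = 1;

COMMIT;
-- Only after a successful COMMIT should the PHP code notify the Raspberry Pi
-- (e.g. by calling the script or publishing to a message queue).
```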
Just don't use MySQL as a poor man's message queue! It's not the right tool for that.
The same advice applies to any other action you want to perform outside the database, like sending an email, writing a file, making an HTTP API call, etc.
Don't do it in an SQL trigger, because external actions don't obey transaction isolation. The trigger or one of the cascading data updates could get rolled back, but the effect of an external action cannot be rolled back.
MySQL doesn't have a way to deliver an event to external software from within a trigger, which is what you would need for the database to push events to your app.
(Actually, it is possible to install a user-defined function that sends an industry-standard STOMP message to a message broker like RabbitMQ. But you would need control of the entire server, AND the cooperation of your database administrator, to get that installed.)
The alternative: run a query every so often to retrieve changed information, and push it to your app. That's a nasty alternative: polling is a pain in the xxx neck.
Can you get your server app to detect changes as it UPDATEs the database? It'll take some programming and testing, but it's a good solution to your problem.
You could use Redis instead of, or in addition to, MySQL. Redis can publish keyspace notifications to subscribed clients whenever values change, which is close to perfect for what you want to do. https://redis.io/topics/notifications
I am trying to execute a batch file from a trigger in MySQL. Whenever data is inserted into a table, an INSERT trigger should fire and execute/call the batch file specified in the trigger. Is this scenario possible in MySQL? I have done this in SQL Server, where it is possible.
By default, it is not possible to run external commands (start external processes) in MySQL queries or stored procedures.
However, you can use a user-defined function (UDF) for the purpose. With a UDF, you can write a custom function in C that you can then use in any query or stored procedure. Such a function can of course also launch external processes, with a command line potentially received as a parameter.
Fortunately, this has already been implemented, but you have to install it. You can search on http://www.mysqludf.org/; for example, this one will allow you to execute OS commands.
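If the UDF in question is lib_mysqludf_sys (the OS-command UDF hosted on that site), a trigger calling it might look roughly like this; the table name and batch-file path are placeholders:

```sql
-- Assumes lib_mysqludf_sys is installed; sys_exec() runs a shell command
-- and returns its exit code. Table name and batch-file path are placeholders.
DELIMITER //

CREATE TRIGGER trg_mytable_after_insert
AFTER INSERT ON mytable
FOR EACH ROW
BEGIN
  DECLARE exit_code INT;
  SET exit_code = sys_exec('cmd /c "C:\\scripts\\process_new_row.bat"');
END //

DELIMITER ;
```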
Note, though, that this may introduce a serious security risk. If there is a SQL injection vulnerability in your application, having these UDFs loaded greatly increases the risk of a full server compromise (as opposed to "just" a full application compromise).
An alternative approach to a UDF is to populate a separate table (perhaps an in-memory one) from an AFTER INSERT trigger and then poll that table periodically with a cron job, invoking your script for every row it finds.
The script would also be responsible for removing each row once the operation completes successfully.
As an upside, the script can run on a different machine than the database server.
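A rough sketch of that pattern, with made-up table and column names:

```sql
-- Work-queue table populated by the trigger; a cron job polls it,
-- runs the external script for each row, and deletes rows on success.
CREATE TABLE pending_tasks (
  id         INT AUTO_INCREMENT PRIMARY KEY,
  source_id  INT NOT NULL,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=MEMORY;

DELIMITER //

CREATE TRIGGER trg_mytable_queue
AFTER INSERT ON mytable
FOR EACH ROW
BEGIN
  INSERT INTO pending_tasks (source_id) VALUES (NEW.id);
END //

DELIMITER ;
```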
It is usually good practice to keep trigger implementations as simple as possible, because they can drastically impact the performance of every insert.
You can also take a look at this question or this FAQ page on MySQL triggers.
I'm porting a reporting application from .net / MSSQL to php / MySQL and could use some advice from the MySQL experts out there.
I haven't done anything with MySQL for a few years, and when I was last using it stored procedures were brand new and I was advised to stay away from them because of that newness.
So, now that it's 2011, I was wondering if there's anything inherently "bad" about using them, as they worked so well for this app in MSSQL. I know it will depend on the needs of my app, so here are the high level points: (This will run on Linux if that matters)
The app generates a very complex report, however it is NOT a high concurrency app, typically 1-2 users at a time, 5 concurrent would shock me. I can even throttle it to prevent more than 2 or so users from using it simultaneously, so a lot of concurrent users is not going to be a concern.
Virtually 100% of the heavy lifting in this app is in the MSSQL stored procedure. The data is uploaded via the web front end, the stored procedure takes it from there, and it eventually spits out a CSV / Excel file for the user a few minutes later.
This works great as an MSSQL stored procedure. However, it's a good 2000 lines of SQL code, and I'm hesitant to submit the SQL statements one at a time via PHP as opposed to using a stored procedure. Most importantly, it works fine with the current architecture, and I'm not looking to change it unless I have to in order to accommodate MySQL / PHP.
Any gotchas in using a MySQL stored procedure? Are they buggier than submitting SQL statements directly, or anything odd like that?
Thanks in advance for everyone's thoughts on this.
Stored procedures in MySQL are quite verbose in syntax, and are hard to debug or profile. Personally I think they are very useful in some cases, but I would be very hesitant to try to maintain a 2000+ line stored procedure in MySQL.
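To illustrate the verbosity, even a trivial procedure carries a fair amount of boilerplate; this is a made-up example (report_data and its columns are hypothetical), not part of your report:

```sql
-- A trivial example procedure, just to show the DELIMITER / BEGIN...END
-- boilerplate involved; report_data and its columns are hypothetical.
DELIMITER //

CREATE PROCEDURE report_row_count(IN p_status VARCHAR(20), OUT p_count INT)
BEGIN
  SELECT COUNT(*)
  INTO   p_count
  FROM   report_data
  WHERE  status = p_status;
END //

DELIMITER ;

-- Usage: CALL report_row_count('complete', @n); SELECT @n;
```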
I have an exe configured under Windows Scheduler to perform timed operations on a set of data.
The exe calls stored procs to retrieve data, performs some calculations, and updates the data back to a different database.
I would like to know the pros and cons of using an SSIS package over a scheduled exe.
Do you mean: what are the pros and cons of using SQL Server Agent jobs for scheduling SSIS packages and command-shell executions, versus Windows Scheduler? I don't really know the pros of Windows Scheduler, so I'll stick to listing the pros of SQL Server Agent jobs.
If you are already using SQL Server Agent Jobs on your server, then running SSIS packages from the agent consolidates the places that you need to monitor to one location.
SQL Server Agent Jobs have built in logging and notification features. I don't know how Windows Scheduler performs in this area.
SQL Server Agent Jobs can run more than just SSIS packages. So you may want to run a T-SQL command as step 1, retry if it fails, eventually move to step 2 if step 1 succeeds, or stop the job and send an error if the step 1 condition is never met. This is really useful for ETL processes where you are trying to monitor another server for some condition before running your ETL.
SQL Server Agent Jobs are easy to report on since their data is stored in the msdb database (a sample query is sketched a little further below). We have regularly scheduled subscriptions for SSRS reports that provide us with data about our jobs. This means I can get an email each morning, before I come into the office, that tells me whether everything is going well or whether there are any problems that need to be tackled ASAP.
SQL Server Agent Jobs are used by SSRS subscriptions for scheduling purposes. I commonly need to start SSRS reports by calling their job schedules, so I already have to work with SQL Server Agent Jobs.
SQL Server Agent Jobs can be chained together. A common scenario for my ETL is to have several jobs run on a schedule in the morning. Once all the jobs succeed, another job is called that triggers several SQL Server Agent Jobs. Some jobs run in parallel and some run serially.
SQL Server Agent Jobs are easy to script out and load into our source control system. This allows us to roll back to earlier versions of jobs if necessary. We've done this on a few occasions, particularly when someone deleted a job by accident.
On one occasion we found a situation where Windows Scheduler was able to do something we couldn't do with a SQL Server Agent Job. During the early days after a SAN migration we had some scripts for snapshotting and cloning drives that didn't work in a SQL Server Agent Job. So we used a Windows Scheduler task to run the code for a while. After about a month, we figured out what we were missing and were able to move the step back to the SQL Server Agent Job.
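To give a flavour of the msdb reporting mentioned above, the SSRS reports can be built on queries along these lines (a sketch; run_status 0 = failed, 1 = succeeded):

```sql
-- Recent job outcomes from msdb; the basis for simple Agent-job reporting.
-- run_status: 0 = failed, 1 = succeeded, 2 = retry, 3 = canceled.
SELECT j.name  AS job_name,
       h.run_date,
       h.run_time,
       h.run_status,
       h.message
FROM   msdb.dbo.sysjobhistory AS h
JOIN   msdb.dbo.sysjobs       AS j
       ON j.job_id = h.job_id
WHERE  h.step_id = 0              -- job outcome rows, not individual steps
ORDER BY h.instance_id DESC;
```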
Regarding SSIS versus an exe that calls stored procedures:
If all you are doing is running stored procedures, then SSIS may not add much for you. Both approaches work, so it really comes down to the differences between what you get from a .exe approach and SSIS as well as how many stored procedures that are being called.
I prefer SSIS because we do so much on my team where we have to download data from other servers, import/export files, or do some crazy https posts. If we only had to run one set of processes and they were all stored procedure calls, then SSIS may have been overkill. For my environment, SSIS is the best tool for moving data because we move all kinds of types of data to and from the server. If you ever expect to move beyond running stored procedures, then it may make sense to adopt SSIS now.
If you are just running a few stored procedures, then you could get away with doing this from the SQL Server Agent Job without SSIS. You can even parallelize jobs by making a master job start several jobs via msdb.dbo.sp_start_job 'Job Name'.
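For example, a T-SQL step in a master job can kick off several child jobs along these lines; sp_start_job returns as soon as the job is started, so the (placeholder) jobs below end up running in parallel:

```sql
-- sp_start_job starts a job asynchronously and returns immediately,
-- so these placeholder jobs run in parallel.
EXEC msdb.dbo.sp_start_job @job_name = N'Load Customers';
EXEC msdb.dbo.sp_start_job @job_name = N'Load Orders';
EXEC msdb.dbo.sp_start_job @job_name = N'Load Products';
```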
If you want to parallelize a lot of stored procedure calls, then SSIS will probably beat out chaining SQL Server Agent Job calls. Although chaining is possible in code, there's no visual surface and it is harder to understand complex chaining scenarios that are easy to implement in SSIS with sequence containers and precedence constraints.
From a code maintainability perspective, SSIS beats out any exe solution for my team since everyone on my team can understand SSIS and few of us can actually code outside of SSIS. If you are planning to transfer this to someone down the line, then you need to determine what is more maintainable for your environment. If you are building in an environment where your future replacement will be a .NET programmer and not a SQL DBA or Business Intelligence specialist, then SSIS may not be the appropriate code-base to pass on to a future programmer.
SSIS gives you out of the box logging. Although you can certainly implement logging in code, you probably need to wrap everything in try-catch blocks and figure out some strategy for centralizing logging between executables. With SSIS, you can centralize logging to a SQL Server table, log files in some centralized folder, or use another log provider. Personally, I always log to the database and I have SSRS reports setup to help make sense of the data. We usually troubleshoot individual job failures based on the SQL Server Agent Job history step details. Logging from SSIS is more about understanding long-term failure patterns or monitoring warnings that don't result in failures like removing data flow columns that are unused (early indicator for us of changes in the underlying source data structure) or performance metrics (although stored procedures also have a separate form of logging in our systems).
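For reference, when SSIS logs via the SQL Server log provider it writes to a table you can query directly; a sketch (the table is dbo.sysssislog in SSIS 2008, dbo.sysdtslog90 in 2005):

```sql
-- Pull recent SSIS errors and warnings from the SQL Server log provider table.
-- Table name assumes SSIS 2008 (dbo.sysssislog); 2005 uses dbo.sysdtslog90.
SELECT TOP (100)
       source,        -- package or task name
       event,         -- OnError, OnWarning, OnInformation, ...
       starttime,
       message
FROM   dbo.sysssislog
WHERE  event IN ('OnError', 'OnWarning')
ORDER BY starttime DESC;
```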
SSIS gives you a visual design surface. I mentioned this briefly before, but it is a point worth expanding on in its own right. BIDS is a decent design surface for understanding what runs in what order. You won't get this from writing do-while loops in code. Maybe you have some form of visualizer that I've never used, but my experience with coding stored procedure calls has always been in a text editor, not in a visual design layer. SSIS makes it relatively easy to understand precedence and order of operations in the control flow, which is where you would be working if you are using Execute SQL Tasks.
The deployment story for SSIS is pretty decent. We use BIDS Helper (a free add-in for BIDS), so deploying changes to packages is a right click away on the Solution Explorer. We only have to deploy one package at a time. If you are writing a master executable that runs all the ETL, then you probably have to compile the code and deploy it when none of the ETL is running. SSIS packages are modular code containers, so if you have 50 packages on your server and you make a change in one package, then you only have to deploy the one changed package. If you setup your executable to run code from configuration files and don't have to recompile the whole application, then this may not be a major win.
Testing changes to an individual package is probably generally easier than testing changes in an application. Meaning, if you change one ETL process in one part of your code, you may have to regression test (or unit test) your entire application. If you change one SSIS package, you can generally test it by running it in BIDS and then deploying it when you are comfortable with the changes.
If you have to deploy all your changes through a release process and there are pre-release testing processes that you must pass, then an executable approach may be easier. I've never found an effective way to automatically unit test a SSIS package. I know there are frameworks and test harnesses for doing this, but I don't have any experience with them so I can't speak for the efficacy or ease of use. In all of my work with SSIS, I've always pushed the changes to our production server within minutes or seconds of writing the changes.
Let me know if you need me to elaborate on any points. Good luck!
If you have a dependency on Windows features such as logging, eventing, or access to Windows resources, go the Windows Scheduler / Windows service route. If it is just database-to-database movement, or you need heavy use of database functions, go the SSIS route.
As I said in a previous post, our Rails app has to interface with an E-A-V type of table in a third-party application that we're pulling data from. I had created a View to make the data normal but it is taking way too long to run. We had one of our offshore PHP developers create a stored procedure to help speed it up.
Now we run into the issue that we need to call this stored procedure from the Rails app, as well as provide searching and filtering. The view could do this because Rails was treating it as a traditional Rails model. How could I do this with the stored proc? Would we need to write custom searching and ordering (we were using Searchlogic)? Management is incapable of understanding the drawbacks of using a stored proc from Rails; all they say is that the current method is taking too long to load the data and needs to be fixed, but searching and filtering are critical functions.
EDIT I posted the code for this query here: Optimizing a strange MySQL Query. What is funny is that when I run this query in a GUI (Navicat) it runs in about 5 seconds, but on the web page it takes over a minute to run. The view is complicated for reasons I outline in the original post, but I would have thought MySQL optimizes and caches views the way SQL Server does (or rather, the way I've read that SQL Server does) to improve performance.
You can call stored procedures from Rails, but you are going to lose most of the benefits of ActiveRecord, as the standard generated SQL will not work. You can use the native database connection and call it, but it's going to be a leaky abstraction. You may want to consider DataMapper.
Looking back at your last question, I would get the DBA to create a trigger to build a more relational structure from the data. The trigger would insert the EAV data into a flat table, which is the only way I know of to do materialized views in MySQL. This way you pay only a small incremental cost on each insert, and the application can run normally.
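A rough sketch of that trigger approach; the EAV source table and the flattened columns below are entirely made up, since the real third-party schema isn't shown:

```sql
-- Hypothetical EAV source table: eav_values(entity_id, attribute, value).
-- The trigger keeps a flat, queryable copy up to date on every insert.
CREATE TABLE flat_entities (
  entity_id INT PRIMARY KEY,
  name      VARCHAR(255),
  email     VARCHAR(255)
);

DELIMITER //

CREATE TRIGGER trg_eav_flatten
AFTER INSERT ON eav_values
FOR EACH ROW
BEGIN
  INSERT INTO flat_entities (entity_id) VALUES (NEW.entity_id)
    ON DUPLICATE KEY UPDATE entity_id = entity_id;  -- ensure the row exists

  IF NEW.attribute = 'name' THEN
    UPDATE flat_entities SET name = NEW.value WHERE entity_id = NEW.entity_id;
  ELSEIF NEW.attribute = 'email' THEN
    UPDATE flat_entities SET email = NEW.value WHERE entity_id = NEW.entity_id;
  END IF;
END //

DELIMITER ;
```

Rails can then point a normal ActiveRecord model (and Searchlogic) at the flat table, so searching and filtering keep working as before.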
Anyway...
ActiveRecord::Base.connection.execute("call SP_name (#{param1}, #{param2}, ... )")
But there's an open ticket out there on Lighthouse indicating this approach may not work without changing some of the parameters to use the connection.