SQL Server sp_recompile on a table - sql-server-2008

When sp_recompile is run against a table, I understand that all stored procedures and triggers dependent on that table will be recompiled.
What I don't understand is what parameters SQL Server uses for this recompile. I can't see how parameter sniffing would factor in here. Does it compile an execution plan that is 'generic' using something similar to OPTIMIZE FOR UNKNOWN?
I feel like I'm missing something really obvious.
Does anyone have an understanding of this?

sp_recompile does not recompile the objects itself. It only deletes their cached execution plans. This forces a recompilation on the next call of each object (with the parameters of that next call).
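For illustration, the whole operation is a single call; the table name below is hypothetical:

-- Drops the cached plans of every procedure and trigger that references
-- dbo.Orders; each one compiles a fresh plan on its next execution,
-- sniffing whatever parameter values that call supplies.
EXEC sp_recompile N'dbo.Orders';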

Related

SSIS temp table exec proc

SSIS newbie here.
I have an SSIS package I created based on the wizard. I added a SQL task to run the script I was running previously separately, in order to reduce the process to one step. The script uses lots of temp tables, and one global ##temp at the end to make the result accessible outside the process.
When I try to execute the package, I get a complex "Package Validation Error" (error code 0x80040E14). I think the operative part of the error message is "Invalid object name '##roster5'."
I just realized it was the Data Flow task that was throwing the error, so I tried to put another SQL Task before everything else to create the table, so that the Data Flow task would see that the table is there; but it still gives me the error: "Invalid object name '##ROSTER_MEMBER_NEW5'."
What am I missing/doing wrong? I don't know what I don't know. It seems like this shouldn't be that complicated (As a newbie, I know that this is probably a duplicate of...something, but I don't know how else to ask the question.)
Based on your responses, another option would be to add a T-SQL step to a SQL Agent job that executes stand-alone T-SQL. You would need to rethink the flow control of your original SSIS package and split it into two separate packages: the first SSIS package would do everything needed before the T-SQL step; the next step would execute the actual T-SQL needed to aggregate; and the last step would call the second package, which would complete the process.
I offer this advice with the caveat that it isn't the ideal route. What would work best is to talk to your DBA, who can set up a service account to run your SSIS package with the elevated privileges needed to truncate the permanent staging table your process will need.
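A minimal sketch of that staging-table step (all object names are hypothetical): unlike a ##temp table, a permanent table survives across connections and job steps, so every package and T-SQL step can see it.

-- Run by the service account that has been granted truncate rights
TRUNCATE TABLE dbo.RosterStaging;
INSERT INTO dbo.RosterStaging (member_id, member_name)
SELECT member_id, member_name
FROM dbo.SourceRoster;   -- stands in for whatever the first package produces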
I actually want to post a non-answer. I tried to follow the advice above as well as I could, but nothing worked. My script was supposed to run, and then the data pump was supposed to essentially copy the contents of a global temp table to another server/table. I was doing this as two steps and tried to use SSIS to do it all in one step; there wasn't really a need to pass values from component to component within SSIS. It doesn't seem like this should be that hard.
In any event, as I said, nothing worked. OK, let me tell you what I think happened. After making a lot of mistakes, a lot of undos, and a lot of unsuccessful attempts, something started working. One of the things I think contributed is that I had set the ResultSetType to ResultSetType_None, since I wouldn't be using any results from that step. If anyone thinks that's not what happened, I'm happy to hear the actuality, since I want to learn.
I consider this a non-answer, because I have little confidence that I'm right, or that I got it by anything other than an accident.

SQL Server: stored procedure saves with errors

I am using SQL Server 2008 and SSMS 2012. I have a stored procedure that references a table that does not exist. The editor displays red underlines on the offending table to indicate that something is wrong.
However when I execute the query, I get the message
Command(s) completed successfully.
This is extremely annoying. I also connected to the engine from another machine, and it exhibited the same problem, which implies it's on the server, not SSMS. Is there some kind of setting on the database that determines whether the database checks the syntax of stored procedures? PLEASE HELP!
Clarification:
I know that the syntax is wrong. The problem is that SSMS allows me to execute the CREATE or ALTER statement without error even when it references a table that does not exist. I want it to fail. Usually it does, but for some reason it suddenly stopped giving errors. I want it to give me errors. How do I do this?
Your syntax is fine, and it is checked when you create the stored procedure. The existence of tables, however, is not checked until the stored procedure is compiled, and that happens when the stored procedure is executed.
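You can demonstrate this to yourself; dbo.NoSuchTable below is deliberately nonexistent:

-- The CREATE succeeds even though dbo.NoSuchTable does not exist,
-- because table names in a procedure body are resolved at execution time.
CREATE PROCEDURE dbo.DemoDeferred
AS
    SELECT * FROM dbo.NoSuchTable;
GO
-- Only now does SQL Server complain: Invalid object name 'dbo.NoSuchTable'.
EXEC dbo.DemoDeferred;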
What's going on is that the IDE in Management Studio hasn't refreshed its schema model. Since the local SSMS instance doesn't know the table exists, it puts a red line under the table name; when you actually run the sproc/query, the code sent to the database resolves properly and runs.
To refresh the SSMS local data, try pressing Ctrl-Shift-R, as described here.
Edit:
You might want to look into Deferred Name Resolution
You will not get an error message when you CREATE or ALTER, but you can check your SPs for missing dependencies with a script afterwards.
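A rough sketch of such a check using sys.sql_expression_dependencies (available from SQL Server 2008 onwards):

-- A NULL referenced_id usually means the referenced name could not be
-- resolved (a missing object, or a cross-database/caller-dependent reference).
SELECT OBJECT_NAME(d.referencing_id) AS referencing_object,
       d.referenced_entity_name      AS unresolved_name
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_id IS NULL;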
Please check my answer to the related question here (I just post a link to avoid duplication):
I'm looking for a reliable way to verify T-SQL stored procedures. Anybody got one?

Put trigger in MySQL database to update Oracle database?

I want to create an insert trigger on MySQL which will automatically insert the record into an Oracle database. I would like to know if there are people who have experience to share on this topic.
Cheers
Invoke a script as is done in this example that calls the Oracle code.
Note: you lose support for transactions (there will be no built-in rollback on the Oracle side) when you cascade this way, and you will also likely take a very large performance hit. The script could turn around and call Java code or some other executable that invokes generic insert code for Oracle, or it could be a raw query that gets passed arguments from the script.
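The usual way to let a MySQL trigger shell out is a UDF such as sys_exec() from the third-party lib_mysqludf_sys library; here is a rough sketch, with the table, column, and script names all hypothetical:

-- One-time setup: register the UDF (the compiled library must already
-- be installed in MySQL's plugin directory).
CREATE FUNCTION sys_exec RETURNS INTEGER SONAME 'lib_mysqludf_sys.so';

-- Hand each new row to an external script that performs the Oracle insert
-- (via sqlplus, JDBC, etc.). Beware: concatenating row values into a shell
-- command invites injection, and a failed insert will not roll anything back.
CREATE TRIGGER orders_to_oracle
AFTER INSERT ON orders
FOR EACH ROW
  DO sys_exec(CONCAT('/usr/local/bin/push_to_oracle.sh ', NEW.id));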
This is almost certainly a bad idea because of the odd side-effect behavior, but it can be implemented. I think you would be much better off writing the code against two different DataSources (in Java/.NET speak) rather than hiding a script in a MySQL trigger, which screams unmaintainability, as well as hidden failures, to future developers.

Use single Elmah.axd for multiple applications with single DB log

We have a single SQL Log for storing errors from multiple applications. We have disabled the elmah.axd page for each one of our applications and would like to have a new application that specifically displays errors from all of the apps that report errors to the common SQL log.
As of now, even though this all-errors application uses the common SQL log, it only displays errors from the current application. Has anyone done this before? What in the ELMAH code might need to be tweaked?
I assume by "SQL Log" you mean MSSQL Server... If so, probably the easiest way of accomplishing what you want would be to edit the stored procedures created in the SQL Server database that holds your errors.
To get the error list, the ELMAH dll calls the ELMAH_GetErrorsXML proc with the application name as a parameter, and the proc then filters the results with a WHERE [Application] = @Application clause.
Just remove the WHERE clause from the ELMAH_GetErrorsXML proc, and all errors should be returned regardless of application.
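For reference, with the filter gone the proc's result boils down to something like the query below (table and column names follow ELMAH's standard SQL Server DDL script):

-- Newest errors first, across all applications
SELECT TOP (50)
       [ErrorId], [Application], [Message], [TimeUtc]
FROM   [ELMAH_Error]
ORDER BY [Sequence] DESC;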
To get a single error record properly, you'll have to do the same with the ELMAH_GetErrorXML proc, as it also filters by application.
This, of course, will affect any application retrieving errors out of this particular database, but I assume in your case you'll only ever have the one, so this should be good.
CAVEAT: I have not tried this, so I can't guarantee the results...
It's not a problem to override the default Elmah handler factory so that it will filter Elmah logs by applications. I wrote a sample app that shows how to do it with MySql: http://diagnettoolkit.codeplex.com/releases/view/103931. You may as well check a post on my blog where I explain how it works.
Yes, it works easily. However, you can't see the application name in Elmah/Default.aspx. I haven't found whether that is configurable - it would just need to display one more column.

Executing shell command from MySQL

I know what I'm looking for is probably a security hole, but since I managed to do it in Oracle and SQL Server, I'll give it a shot:
I'm looking for a way to execute a shell command from a SQL script on MySQL. It is possible to create and use a new stored procedure if necessary.
Notice: I'm not looking for the SYSTEM command which the mysql command line tool offers. Instead I'm looking for something like this:
BEGIN
  IF COND1... THEN
    EXEC_OS cmd1;
  ELSE
    EXEC_OS cmd2;
  END IF;
END;
where EXEC_OS is the method that would invoke my command.
This isn't so much an answer to the question as it is a justification for this sort of functionality - and a pre-emptive reply to those who would say "you should do something else" or "why would you want to do that".
I have a database on which I am trying to keep strict rules - I don't want orphans anywhere. Referential integrity checks help me with this at the table level, but I have to keep some of the data as files within the filesystem (this is the result of a direct order from my boss not to store any binary data in the database itself).
The obvious solution here is to have a trigger which fires on deletion of a record, which then automatically deletes the associated external file.
Now, I do realise that UDFs may provide a solution, but that seems like a lot of C/C++ work just to delete a file. Surely the database permissions themselves would provide at least some security from would-be assailants.
Now, I do realise that I could write a shell script or some such which could delete the table record and then go and delete the related file, but again, that's outside the domain of the database itself. As an old instructor once told me "the rules of the business should be reflected in the rules of the database". As one can clearly see, I cannot enforce this using MySQL.
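For what it's worth, the third-party lib_mysqludf_sys library ships sys_exec() ready-made, so the C/C++ work is already done; a hedged sketch of the cleanup trigger (table and column names hypothetical):

CREATE FUNCTION sys_exec RETURNS INTEGER SONAME 'lib_mysqludf_sys.so';

-- Delete the external file whenever its owning row is deleted.
-- Note: the rm is not transactional and the path is not shell-escaped.
CREATE TRIGGER documents_file_cleanup
AFTER DELETE ON documents
FOR EACH ROW
  DO sys_exec(CONCAT('rm -f ', OLD.file_path));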
You might want to consider writing your scripts in a more featureful scripting language, like Perl, Python, PHP, or Ruby. All of these languages have libraries to run SQL queries.
There is no built-in method in the stored procedure language for running shell commands. This is considered a bad idea, not only because it's a security hole, but because the effects of shell commands do not obey transaction isolation or rollback the way the effects of SQL operations inside the stored procedure do:
START TRANSACTION;
CALL MyProcedure();
ROLLBACK;
If MyProcedure did anything like create or edit a file, or send an email, etc., those operations would not roll back.
I would recommend doing your SQL work in the stored procedure, and do other work in the application that calls the stored procedure.
see do_system() in http://www.databasesecurity.com/mysql/HackproofingMySQL.pdf
According to this post at forums.mysql.com, the solution is to use MySQL Proxy.