TCL Get The Name Of The Command To Be Deleted

I am extending Tcl with C++. My understanding is that when a command is about to be deleted, the Tcl_CmdDeleteProc specified at the creation of the command is called, but only the client data is supplied to it when it is invoked. I would like to know the name of the command being destroyed inside the Tcl_CmdDeleteProc. Is this possible?

It's possible, but you have to be a bit tricky about it.
If you're in the delete callback for a command, that pretty much means that you must have created it in the first place (it's tremendously bad form to insert deletion callbacks for commands created by others) and that means that the clientData can really be a pointer to a structure you control. If you put the Tcl_Command token (that you got when you created the command) into the structure somewhere, you can fish that out when the command is deleted and use that with Tcl_GetCommandName or Tcl_GetCommandFullName to get the current name of the command. (The deletion callback comes at a point when the command still has a name, BTW.)
Be very careful! The delete callback is called at a time when the command is actually only partially there; it's possible to trigger crashes if you aren't careful. You probably shouldn't call the command from the callback, or rename it, or delete it. It's possibly better to set a trace on the command so that you get notified of name changes. (This is also a mechanism you can use to find out about name changes and deletions of commands that you don't control.) That is because those traces happen slightly earlier, though still when the command is marked for death.

Related

ssis temp table exec proc

SSIS newbie here.
I have an SSIS package I created based on the wizard. I added a SQL task to run the script I was running previously separately, in order to reduce the process to one step. The script uses lots of temp tables, and one global ##temp at the end to make the result accessible outside the process.
When I try to execute the package, I get a complex "Package Validation Error" (error code 0x80040E14). I think the operative part of the error message is "Invalid object name '##roster5'."
I just realized it was the Data Flow task that was throwing the error, so I tried to put another SQL Task before everything else to create the table so that the Data Flow task would see that the table is there, but it still gives me the error: "Invalid object name '##ROSTER_MEMBER_NEW5'."
What am I missing/doing wrong? I don't know what I don't know. It seems like this shouldn't be that complicated (As a newbie, I know that this is probably a duplicate of...something, but I don't know how else to ask the question.)
Based on your responses, another option would be to add a T-SQL step in a SQL Agent job that executes stand-alone T-SQL. You would need to rethink the flow control of your original SSIS package and split that into 2 separate packages. The first SSIS package would execute all that is needed before the T-SQL step, the next step would execute the actual T-SQL needed to aggregate, then the last step would call the second package, which would complete the process.
I'm offering this advice with the caveat that it isn't advisable. What would work best is to communicate with your DBA, who will be able to offer you a service account to execute your SSIS package with the elevated privileges needed to truncate the staging table that your process will need.
I actually want to post a non-answer. I tried to follow the advice above as well as I could, but nothing worked. My script was supposed to run, and then the data pump was supposed to essentially copy the content of a global temp table to another server/table. I was doing this as two steps and tried to use SSIS to do it all in one step. There wasn't really a need to pass values within SSIS from component to component. It doesn't seem like this should be that hard.
In any event, as I said, nothing worked. OK, let me tell you what I think happened. After making a lot of mistakes, a lot of undos, and a lot of unsuccessful attempts, something started working. One thing I think contributed is that I had set the ResultSetType to ResultSetType_None, since I wouldn't be using any results from that step. If anyone thinks that's not what happened, I'm happy to hear the actual explanation, since I want to learn.
I consider this a non-answer, because I have little confidence that I'm right, or that I got it by anything other than an accident.

Drop and Restore database on package failure using SSIS

Say for example I have an SSIS package with more than 20 steps doing an assortment of tasks and I wish to do the following when the package fails:
1.) Drop the database
2.) Restore the backup taken at the beginning
3.) Send an email containing the log file
At the moment I have added these steps into the OnError event at package level, and this works apart from the fact that it generally runs them twice each time the package fails. I understand that OnError may fire multiple times before the whole package terminates, but I don't understand how else I can do what I want.
I essentially want to run the said steps on package termination, i.e. they should run once, not several times depending on the number of errors that caused the package to fail. I don't mind receiving two emails with the only difference being an extra error in one, but I don't think it is right to drop/restore the database twice for no reason. I cannot see a suitable event for this.
One solution is to put all the steps of your package in a container, change the OnError handler to increment an ErrorCount variable, and add another container that runs on completion of the main container, checks ErrorCount, and performs the actions in your current OnError handler if the count > 0.
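As a rough sketch of the two expressions involved (the variable name User::ErrorCount is just an example; create it yourself at package scope with a default of 0):

```
In the OnError event handler, an Expression Task increments the counter:
    @[User::ErrorCount] = @[User::ErrorCount] + 1

On the precedence constraint from the main container to the cleanup
container, set Evaluation operation to "Expression" with:
    @[User::ErrorCount] > 0
```

Because the constraint is evaluated once, on completion of the main container, the cleanup steps run at most once no matter how many OnError events fired.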

SSIS - File system task, Create directory error

I got an error after running an SSIS package that has worked for a long time.
The error was thrown in a task used to create a directory (like this http://blogs.lessthandot.com/wp-content/uploads/blogs/DataMgmt/ssis_image_05.gif) and says "Cannot create because a file or directory with the same name already exists", but I am sure a directory or file with the same name didn't exist.
Before throwing the error, the task created a file with no extension, named as the expected directory. The file has a modified date more than 8 hours prior to its created date, which is weird.
I checked the date in the server and it is correct. I also tried running the package again and it worked.
What happened?
It sounds like some other process or person made a mistake in that directory and created a file that then blocked your SSIS package's directory-create command; this isn't a problem within your package.
Did you look at the security settings of the created file? It might have shown an owner that wasn't the credentials your SSIS package runs under. That won't help if you have many packages or processes that all run under the same credentials, but it might provide useful information.
What was in the file? The contents might provide a clue how it got there.
Did any other packages/processes have errors or warnings within half a day of your package's error? Maybe it was the result of another error that you could trace through the logs of the other process.
Did your process fail to clean up after itself on the last run?
Does that directory get deleted at the start of your package run, at the end of your package run, or at the end of the run of the downstream consumer of the directory contents? If your package deletes it at the beginning, then something that slows the delete could present a race condition that normally resolves satisfactorily (the delete finishes before the create starts) but once in a while goes the wrong way.
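If the directory does turn out to be blocked by a leftover file now and then, one defensive option is to check for and clear the squatting file before creating the directory. A minimal sketch (the path is hypothetical, standing in for whatever your package creates; in SSIS this logic would live in a Script Task or an Execute Process Task rather than a shell script):

```shell
#!/bin/sh
# Hypothetical guard: if a plain file is squatting on the directory's
# name, report and remove it, then create the directory idempotently.
target="./staging_dir"   # stand-in for the directory your package creates

if [ -f "$target" ]; then
    echo "blocked by leftover file: $target"
    rm -- "$target"
fi
mkdir -p "$target"
echo "ready: $target"
```

This also sidesteps the race described above, since `mkdir -p` succeeds even if the directory already exists.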
Were you (or anyone) making a copy or scan of the directory in question? Copy programs (e.g., FTP) or scanning programs (antivirus, PII scans) sometimes make a temporary copy of a large item being processed (e.g., that directory); maybe one got interrupted and left the temp copy behind.
If it's not repeatable then finding out for sure what happened is tough, but if it happens again try exploring the above. Also, if you can afford to, you might want to increase logging. It takes more CPU and disk space and makes reviewing logs slower, but temporarily increasing log details can help isolate a problem like that.
Good luck!

Making MySQL ignore an error

Disclaimer: I'm trying to do something very bad and if you think I shouldn't be doing this, that's wonderful but I'm still going to do it because I'm being told explicitly to either do this or show that it can't be done.
I have a piece of code that I have absolutely 0.00% control over. This piece of code cannot be changed or edited by me in any way, I don't have the source and have no way of obtaining it. I only have control over the MySQL db itself. The code I am working with connects to the MySQL db and attempts to do an update on a table I do not want updated. The update is pointless and destructive and should not happen but the code will not proceed unless the update query goes through.
I need MySQL to lie about the update. Is there any setting or option or anything to make it so when something connects to MySQL with an account that doesn't have write access to a table tries to do an update... MySQL will just tell the client "Yeah sure no problem, update complete." and then just not do it?

Executing shell command from MySQL

I know what I'm looking for is probably a security hole, but since I managed to do it in Oracle and SQL Server, I'll give it a shot:
I'm looking for a way to execute a shell command from a SQL script on MySQL. It is possible to create and use a new stored procedure if necessary.
Notice: I'm not looking for the SYSTEM command which the mysql command line tool offers. Instead I'm looking for something like this:
BEGIN
  IF COND1 THEN
    EXEC_OS cmd1;
  ELSE
    EXEC_OS cmd2;
  END IF;
END;
where EXEC_OS is the method to invoke my code.
This isn't so much an answer to the question as it is justification for this sort of functionality - hence negating those who would say "you should do something else" or "why would you want to".
I have a database which I am trying to keep strict rules on - I don't want orphans anywhere. Referential integrity checks help me with this on the table level, but I have to keep some of the data as files within the filesystem (this is a result from a direct order from my boss to not store any binary data in the database itself).
The obvious solution here is to have a trigger which fires on deletion of a record, which then automatically deletes the associated external file.
Now, I do realise that UDFs may provide a solution, but that seems like a lot of C/C++ work simply to delete a file. Surely the database permissions themselves would provide at least some security from would-be assailants.
Now, I do realise that I could write a shell script or some such which could delete the table record and then go and delete the related file, but again, that's outside the domain of the database itself. As an old instructor once told me "the rules of the business should be reflected in the rules of the database". As one can clearly see, I cannot enforce this using MySQL.
You might want to consider writing your scripts in a more featureful scripting language, like Perl, Python, PHP, or Ruby. All of these languages have libraries to run SQL queries.
There is no built-in method in the stored procedure language for running shell commands. This is considered a bad idea, not only because it's a security hole, but because any effects of shell commands do not obey transaction isolation or rollback, as do the effects of any SQL operations you do in the stored procedure:
START TRANSACTION;
CALL MyProcedure();
ROLLBACK;
If MyProcedure did anything like create or edit a file, or send an email, etc., those operations would not roll back.
I would recommend doing your SQL work in the stored procedure, and do other work in the application that calls the stored procedure.
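To illustrate that split, here is a minimal sketch of the application side. It uses Python with sqlite3 purely as a runnable stand-in for MySQL (the table name `attachments` and the function name are hypothetical); the point is only the ordering: the row delete happens inside the transaction, and the file delete happens only after the transaction has committed, since filesystem work cannot be rolled back.

```python
import os
import sqlite3
import tempfile

def delete_record_and_file(conn, record_id):
    # Transactional part: look up the file path and delete the row.
    with conn:  # commits on success, rolls back on error
        row = conn.execute(
            "SELECT path FROM attachments WHERE id = ?", (record_id,)
        ).fetchone()
        if row is None:
            return
        conn.execute("DELETE FROM attachments WHERE id = ?", (record_id,))
        path = row[0]
    # Only reached after a successful commit: now it is safe to do the
    # non-transactional filesystem work the stored procedure must not do.
    if os.path.exists(path):
        os.remove(path)

# Demo setup: one row pointing at a real temp file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attachments (id INTEGER PRIMARY KEY, path TEXT)")
fd, tmp_path = tempfile.mkstemp()
os.close(fd)
conn.execute("INSERT INTO attachments VALUES (1, ?)", (tmp_path,))
conn.commit()

delete_record_and_file(conn, 1)
print(os.path.exists(tmp_path))  # the external file is gone with its row
```

If the transaction rolls back, the function never reaches the `os.remove` call, so the file and its row stay consistent, which is the orphan-prevention the original poster wanted.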
See do_system() in http://www.databasesecurity.com/mysql/HackproofingMySQL.pdf
According to this post at forums.mysql.com, the solution is to use MySQL Proxy.