I'm trying to program a database, and I'm using a mix of parameterized queries and stored procedures. Mostly I'm using pqs inside sprocs. I'm doing each correctly, and getting the proper results. However, each time I log out of the mysql server and back in, the sprocs are still there, but it acts like I never programmed any pqs. It only works if I do the pqs all over again from scratch. I haven't seen anything either in lectures or online about pqs being temporary, so is there something I'm doing wrong? Thank you.
You have an apples-and-asterisks category confusion.
Apples: Stored procedures are persistent server-side objects with names in the name space of a particular MySQL database. Just like table definitions, views, and table contents, they are part of your database.
Asterisks: Parameterized queries (prepared statements) are client-side objects that are created underneath a particular connection to the DBMS. They're objects in the class hierarchy of whatever connection library (in whatever language) you happen to be using. Their lifetimes cannot exceed the lifetime of the connection.
If your app happens to be using more than one connection (for example, if it's multithreaded) you need to create your parameterized query for the particular connection you're using.
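A minimal sketch at the SQL level (MySQL syntax; the table and names are hypothetical) showing the lifetime difference — the procedure survives reconnecting, the prepared statement does not:

-- Stored procedure: persisted in the database, still there after you reconnect.
CREATE PROCEDURE get_user(IN p_id INT)
    SELECT * FROM users WHERE id = p_id;

-- Prepared statement: exists only in the current session/connection.
PREPARE get_user_stmt FROM 'SELECT * FROM users WHERE id = ?';
SET @id = 42;
EXECUTE get_user_stmt USING @id;

-- After disconnecting and logging back in:
CALL get_user(42);              -- still works
-- EXECUTE get_user_stmt ...;   -- fails: the statement no longer exists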
I am implementing an SSIS package and am currently trying to do the following:
Truncate the destination table
Fetch the data by executing the stored procedure and insert it into the destination table.
I have created an Execute SQL Task to address step 1 and a Data Flow with an OLE DB source and OLE DB destination to address step 2. It has been working successfully so far, but it isn't working for one of my stored procedures that uses temp tables.
When I edit the OLE DB source and click the Preview button, I get the error "no column returned".
I know that SSIS has an issue with generating columns while executing stored procedures that depend on temp tables. I have converted the stored proc to use table variables, and it's now able to return columns in SSIS when I do a preview. The only downside is that the stored procedure is taking longer to execute: 1 hour 15 minutes, compared to 15 minutes when using temp tables.
I did see a suggestion to use SET FMTONLY before executing the stored procedure as an alternative to changing to table variables, but that didn't seem to work, as I get a syntax or permission-denied error.
Could somebody tell me a solution to my problem that does not compromise performance?
Sounds like you've already read all the approaches to using Temp tables in SSIS, including the IF 1=0... trick? If you haven't seen that one yet, google it.
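In case you haven't seen it, here is a rough sketch of that trick (the procedure and column names are hypothetical): the dummy SELECT never executes, but metadata-only parsing sees a result set of the right shape even though the real data comes from a temp table.

CREATE PROCEDURE dbo.usp_LoadSales
AS
BEGIN
    IF 1 = 0
    BEGIN
        -- Never executed; only describes the shape of the real result set below.
        SELECT CAST(NULL AS INT)         AS OrderID,
               CAST(NULL AS VARCHAR(50)) AS CustomerName;
    END;

    CREATE TABLE #work (OrderID INT, CustomerName VARCHAR(50));
    -- ... populate #work here ...
    SELECT OrderID, CustomerName FROM #work;
END;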
You say that using table variables causes your stored procedure to take about 5 times longer than using temp tables. The most likely reason is that you are indexing your temp tables but not your table variables. In case you didn't know, table variables can be indexed; you might try that.
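For what it's worth, a minimal sketch of an indexed table variable (the columns are hypothetical; the inline INDEX syntax needs SQL Server 2014 or later, while earlier versions only get indexes via inline PRIMARY KEY/UNIQUE constraints):

DECLARE @orders TABLE
(
    OrderID      INT NOT NULL PRIMARY KEY,           -- backed by a clustered index
    CustomerName VARCHAR(50) NOT NULL,
    INDEX ix_customer NONCLUSTERED (CustomerName)    -- inline index, SQL Server 2014+
);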
Finally, a solution that you haven't mentioned is that you can replace your temporary table with a real table that gets truncated when you're done using it.
Short comment:
Try EXEC WITH RESULT SETS and specify the metadata yourself for a proc with temp tables; or use the Script Component as a source and specify the Output columns yourself.
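For example (SQL Server 2012+; dbo.usp_LoadSales and its columns are hypothetical), the clause hands the driver the result-set shape up front, so it never has to probe the temp tables inside the procedure:

EXEC dbo.usp_LoadSales
WITH RESULT SETS
((
    OrderID      INT,
    OrderDate    DATETIME,
    CustomerName VARCHAR(50)
));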
Long comment:
Technically speaking, it is the driver/database you are using in SSIS that would decide the behavior when working with temp tables.
Metadata is an important factor when using SSIS's pipeline components. By metadata, I mean the names of the columns, their data types etc that a pipeline component uses. When designing a data flow, someone/something should provide this metadata to the components that require it.
In most cases, SSIS automatically retrieves the metadata. Components that do not connect to an external data source, like Conditional Split, get their metadata from the other components they are connected to. For the pipeline components that connect to an external data source (like OLE DB Source, OLE DB Destination, Lookup, etc.), SSIS provides a mechanism to get this metadata without human involvement. This mechanism involves the driver connecting to the database and retrieving the metadata of the output. If the driver/database is capable of returning the metadata, then that metadata is used. If the driver/database is incapable, then you get the errors you are seeing. The rest of my comments are based on the assumption that you are using a SQL Server database in your question.
When working with a SQL Server database in SSIS, we typically use the native client drivers provided by Microsoft. When trying to get the metadata, these drivers try to get it without actually executing the SQL statement (actual execution can have side effects, and it might take more than a few seconds/minutes/hours; you don't want side effects and long wait times during package design time). So to get the metadata, the driver relies on the metadata of the actual objects used in the SQL command. If the command uses a physical table or view, SQL Server already has the metadata available and can supply it to the driver. If it is a temp table, SQL Server does not have the metadata until it can create the temp table. With the SET FMTONLY option, you can use it in such a way as to create the temp tables but avoid any heavy processing/side effects, and thus retrieve metadata without penalties. Starting with SQL Server 2012, the native client drivers rely on newer functionality to retrieve metadata than the older drivers did: the driver uses the sp_describe_first_result_set proc. So whether you can get metadata or not is determined by the ability of the sp_describe_first_result_set proc.
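As a quick sanity check (SQL Server 2012+; dbo.usp_LoadSales is a hypothetical name), you can ask for the same metadata the driver would see; a proc whose output comes from a temp table will typically make this call fail:

EXEC sp_describe_first_result_set
     @tsql = N'EXEC dbo.usp_LoadSales',
     @params = NULL,
     @browse_information_mode = 0;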
So while SSIS can automatically get the metadata (because of the driver/database), it does not automatically get the metadata in some cases (again because of the driver/database). In cases involving the second scenario, some other process (typically a human) can help the driver infer metadata or provide the metadata to the component directly.
To help the driver, in the case of SQL Server 2012 and later, you can use the WITH RESULT SETS clause to specify the output metadata. When this clause is present, the driver uses it and doesn't try to query the metadata from system objects, and thus avoids the error you would otherwise get. If you are using the drivers that came with SQL Server 2008, you can use SET FMTONLY. This option is at the driver/database level.
Another option could be to use a Script Component as the Source and in the Output columns, you can specify the columns/metadata. SSIS would not try to retrieve metadata from the datasource in this case, but would rely on the definitions you provided in the Output section of the Script Component.
As you can see, both options involve a human (or some other process) specifying the metadata instead of SSIS trying to retrieve it in an automated fashion. I would prefer the first option if working with SQL Server and the second option if working with databases like MySQL.
I have a website using a MySQL database and I want to do common tasks like adding users, modifying their info, etc. I can do it perfectly with regular queries. I'm using prepared statements to increase security.
Should I use stored procedures to increase security, or will the results be the same? I thought that maybe by using stored procedures I can restrict the direct interaction that a possible attacker could have with the real query. Am I wrong?
I guess it would depend on what language you're using. Using a prepared statement with a SQL string that contains all of the SQL to be executed, or using a prepared statement with a SQL string that executes a stored procedure, are going to be about equivalent in most languages. The language should take care of the security around the prepared statement. C#, for example, will validate the input, so SQL injection vulnerabilities are greatly reduced unless your prepared statement is written so poorly that feeding it bad (but expected, i.e., 1 vs 0) variables will dramatically change the result set. Other languages may not provide the same level of validation, though, so there may be an advantage depending on exactly what your stored proc looks like.
Using a stored procedure is better for maintainability, but there are not many scenarios where it's going to provide any sort of change in security level, assuming the program is properly designed to begin with. The only example I can think of off the top of my head would be a stored procedure that takes raw SQL strings from user input and then executes that SQL against the db. This is actually less secure than using a prepared statement, unless you went to great lengths to validate the acceptable input, in which case you had better have a really good reason for using such a stored proc in the first place.
Basically, what I'm saying boils down to this: you need to read the documentation for your language about prepared statements, determine what vulnerabilities, if any, using prepared statements may have, and decide whether those can be eliminated in your specific scenario by switching to a prepared statement that calls a stored procedure instead of executing a SQL query directly.
The results would be the same (assuming that you set your stored procedure up right).
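For example (MySQL syntax; the procedure and table names are hypothetical), both of these bind the user-supplied value as a parameter, so neither concatenates untrusted text into the SQL string:

-- Parameterized query against the table directly
PREPARE direct_stmt FROM 'SELECT id, name FROM users WHERE email = ?';
-- Parameterized call to a stored procedure
PREPARE proc_stmt FROM 'CALL get_user_by_email(?)';
SET @email = 'alice@example.com';
EXECUTE direct_stmt USING @email;
EXECUTE proc_stmt USING @email;
DEALLOCATE PREPARE direct_stmt;
DEALLOCATE PREPARE proc_stmt;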
There appears to be a pretty good write-up on it here. Though I would never suggest you try to escape user input yourself. (They mention this as option 3.)
We have a large database and we do manipulations on it every day using basic MySQL queries.
Can anyone please tell me, what is the use of MySQL stored procedures?
The real use of stored procedures comes into the picture when you have an application accessing the database.
For example: Imagine that you have written all your database operations in the form of queries in your data access code.
Suppose you need to make a change to a query; then you need to rebuild and redeploy the entire application in order to see your changes.
But if you are using stored procs and referring to them in the application, you can just make the change in your database without needing to redeploy the application.
So you obviously get better security, maintainability and much more.
Note: This is one scenario where stored procs are better than normal queries.
Usage of stored procs (with values passed as parameters rather than concatenated into SQL) also helps avoid SQL injection attacks.
In very simple words, stored procedures allow you to store your queries along with the database, and you can combine multiple queries in a single procedure. Whenever you want to execute those queries, you just run "CALL yourProcedure;".
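A minimal sketch (MySQL; the table and column names are hypothetical):

DELIMITER //
CREATE PROCEDURE monthly_user_report(IN p_month INT)
BEGIN
    -- several queries combined in a single procedure
    SELECT COUNT(*) AS new_users  FROM users  WHERE MONTH(created_at) = p_month;
    SELECT COUNT(*) AS new_orders FROM orders WHERE MONTH(ordered_at) = p_month;
END //
DELIMITER ;

CALL monthly_user_report(6);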
Need to perform a specific query daily?
Read about MySQL events: stored procedures with scheduling capability!
https://dev.mysql.com/doc/refman/5.1/en/events.html
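A minimal sketch of such an event (purge_old_sessions is a hypothetical procedure; the event scheduler has to be enabled for it to run):

SET GLOBAL event_scheduler = ON;

CREATE EVENT nightly_cleanup
    ON SCHEDULE EVERY 1 DAY
    STARTS CURRENT_TIMESTAMP
    DO CALL purge_old_sessions();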
Red Gate has some pretty good tools, but I don't think that their Dependency Tracker shows how tables are affected by the stored procedures that touch them.
Is there any tool that can scan a database and determine what processes INSERT, UPDATE or DELETE records from a table, as opposed to just touching/being dependent on it? Seems like this should exist by now...
No, dependency tracking still isn't perfect. The reason is that procedures can reference tables via dynamic SQL, and dependencies can be broken if objects are dropped and re-created (I've written about how dependencies can break here). The best "first sweep" I have come to rely on is:
-- Search every module definition (procs, functions, views, triggers) for the table name.
SELECT OBJECT_NAME([object_id])
FROM sys.sql_modules
WHERE LOWER(definition) LIKE '%table_name%';
Again, this won't find objects that build statements using dynamic SQL, and it can produce false positives because table_name could be a generic string that is part of other object or parameter names, or included only in comments or commented-out code.
You can also check for plans that reference a table using sys.dm_exec_cached_plans and related DMFs/DMVs but note that this won't find any plans that have rolled out of the cache.
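For instance, a rough sketch of that plan-cache search (same placeholder table name, and the same false-positive caveats as above):

SELECT cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%table_name%';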
Using SQL Search, you can search for the column name and find all the stored procedures where it is used.
It's a third-party tool: Red Gate SQL Search.
Features:
Find fragments of SQL text within stored procedures, functions, views and more
Quickly navigate to objects wherever they happen to be on a server
Find all references to an object
Hope this will help you.
Is there a way to allocate a default database to a specific user in MySQL so they don't need to specify the database name while making a query?
I think you need to revisit some concepts: as Lmwangi points out, if you are connecting with the mysql client, then my.cnf can set it.
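For example, a minimal my.cnf sketch (the database name is hypothetical; the [mysql] group applies only to the mysql command-line client):

[mysql]
database = myapp_db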
However, your use of the word "query" suggests that you are talking about connecting from some programming environment; in that case you will always need a connection object, and having a default database to connect to will lead to no improvement (in terms of speed or simplicity). Efficiently managing your connection(s) might be interesting for you, but for that you should let us know exactly what your environment is.
If you select a default database (schema) for the session, you don't need to specify the database name in every query, but you still have to select the database once.
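For example (myapp_db is a hypothetical name):

-- once per connection/session
USE myapp_db;
-- after that, unqualified table names resolve against myapp_db
SELECT id, name FROM users;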
The best thing to do would be to use a MySQL trigger on the connection. However, MySQL only accepts triggers for updates, deletes and inserts. A quick Google search yielded an interesting stored procedure alternative; please see MySQL Logon trigger.
When you assign the permissions to each user group, you can also specify, in the same file, several things for that group, for example the database that group of users needs to use.
You can do this with a specification file, depending on the language you are working with, as a simple variable. Later, you only have to look up that variable to know which database you need to work with. But, I repeat, it depends on the language. The specification file can be an XML file, a phpspecs file, or anything like that.