prevent delete * from table unless primary key specified - mysql

I want to prevent users from deleting everything in a table unless a primary key is specified. One of our team members accidentally ran "delete * from table_name", and I want to prevent such scenarios in the future.

Would safe updates be viable for you? This is an option you can enable on the command line or in the option file, or by setting a variable in SQL, and it prevents UPDATEs and DELETEs whose WHERE clause does not include the key columns that define the rows to change.
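A minimal sketch of the variable form, assuming a hypothetical table table_name with primary key id (the matching mysql client command-line option is --safe-updates, also available under the alias --i-am-a-dummy):

-- reject UPDATE/DELETE statements that have neither a keyed WHERE nor a LIMIT
SET SESSION sql_safe_updates = 1;

DELETE FROM table_name;               -- now fails with error 1175
DELETE FROM table_name WHERE id = 42; -- still allowed (keyed WHERE)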
In MySQL Workbench there is a setting in Preferences -> SQL Editor -> Safe Updates (rejects UPDATEs and DELETEs with no restriction). I believe this is even on by default.

Related

MySQL - Enforcing update of row to ONLY be possible when a certain key is provided

This is something I can't seem to find information on.
Let's say I have a table users, and for security purposes, I want any SQL query to be executable only if it makes a reference to the id column.
E.g. this should NOT work:
UPDATE users SET source="google" WHERE created_time < 20210303;
The above update statement is syntactically valid, but because it isn't making a reference to the id column, it should not be executable.
Only the below would be executable:
UPDATE users SET source="google" WHERE id in (45,89,318);
Is there any way to enforce this from the MySQL server's end?
I think the only way you can really do what you want is to use a stored procedure, where you pass in the ids and do the update there. You would set up the security as follows:
Turn off updates to the underlying table for all-but-one user.
Run the stored procedure as the user with permissions to modify the table (using DEFINER).
This will be cumbersome because you will need to pass in all the values for the update.
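A minimal sketch of that setup, with hypothetical names (update_user_source, table_owner, app_user) and one id per call:

DELIMITER $$
CREATE DEFINER = 'table_owner'@'localhost'
PROCEDURE update_user_source(IN p_id INT, IN p_source VARCHAR(64))
SQL SECURITY DEFINER
BEGIN
  -- the WHERE clause is fixed to the primary key, so callers
  -- can never issue an unrestricted UPDATE
  UPDATE users SET source = p_source WHERE id = p_id;
END$$
DELIMITER ;

-- app_user gets EXECUTE on the procedure but no UPDATE privilege on users
GRANT EXECUTE ON PROCEDURE update_user_source TO 'app_user'@'%';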
You can come close with safe update mode. However, that also allows LIMIT as well as key comparisons, so that is not sufficient for your purposes.
Note: This sort of issue is usually handled in another way. Most users would not have permissions to modify such a table. Then "special" users who do would be assumed to be more knowledgeable and careful about changes. If the data is sensitive, then the changes would be logged, so it would be (relatively) easy to undo changes that have been made.

DELETE query results in 'Query Interrupted' MySQL Workbench?

I can successfully delete records manually by click-selecting & deleting row(s) but executing delete queries result in 'Query Interrupted'.
My deletion queries are in the form:
DELETE FROM table where column = value;
The select statement uses the same values:
SELECT * FROM table WHERE column = value;
and returns desired results.
What could be causing the delete statement to fail? Are there limits on the number of records you can delete at once in Workbench?
If you wish to delete the entire contents of a table you can use Truncate.
TRUNCATE [TABLE] tbl_name
Please see the docs: https://dev.mysql.com/doc/refman/5.7/en/truncate-table.html
The DELETE statement, by contrast, is usually used for deleting specific rows.
According to the documentation, under Preferences >> SQL Editor >> Other, the Safe Updates setting is on by default.
Safe Updates (rejects UPDATEs and DELETEs with no restrictions)
Enabled by default. Prevents UPDATE and DELETE queries that have neither a key comparison in the WHERE clause nor a LIMIT clause from executing. This option requires a MySQL server reconnection.
When selected, this preference makes it possible to catch UPDATE and DELETE statements that do not use keys properly and that could accidentally change or delete a large number of rows.
I think what this says is that if the setting is on, then the column you filter by in the DELETE or UPDATE statement must be a key column; it cannot be just any column.
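For example, with safe updates on and a hypothetical table t whose primary key is id:

DELETE FROM t WHERE status = 'old';           -- rejected: no key column, no LIMIT
DELETE FROM t WHERE status = 'old' LIMIT 10;  -- allowed: LIMIT present
DELETE FROM t WHERE id = 42;                  -- allowed: keyed WHERE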
If you change the setting to off, then you might need to restart MySQL Workbench for the change to take effect (at least under Linux).
There is a default thousand-row limit in MySQL Workbench. The SELECT query will return results, but the DELETE will fail if the number of records to be deleted exceeds one thousand. One option is to limit the results in the query itself, or you can adjust the setting as described in the documentation.
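A sketch of the first option, reusing the question's placeholder names: delete in batches of at most a thousand rows, rerunning the statement until it affects zero rows.

DELETE FROM `table` WHERE `column` = value LIMIT 1000;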

MySQL Add Column with Online DDL

I'm currently trying to add a column to a table of ~25m rows. I need near-zero downtime, so I was hoping to use online DDL. It runs for a while, but eventually runs into this issue:
"Duplicate entry '1234' for key 'PRIMARY'"
[SQL: u'ALTER TABLE my_table ADD COLUMN my_coumn BOOL NOT NULL DEFAULT false']
I think this is happening because I'm running INSERT ... ON DUPLICATE KEY UPDATE ... operations against the table while the ALTER is running. This seems to be a known limitation.
After this didn't work, I tried the Percona pt-online-schema-change tool, but because my table has generated columns, that didn't work either, failing with:
The value specified for generated column 'my_generated_column' in table '_my_table_new' is not allowed.
So, I'm now at a loss. What are my other options for adding a column without blocking DML operations?
Your ALTER statement is creating a non-nullable column with a default of false. I'd suspect this places an exclusive lock on your table, creates the column, and then sets it to false on every row.
If you don't have any available downtime, I'd suggest you
Add the column as nullable and with no default
ALTER TABLE my_table ADD COLUMN my_coumn BOOL NULL;
Update the values for existing rows to false
UPDATE my_table SET my_coumn = false;
Alter the table a second time to be not nullable and with a default.
ALTER TABLE my_table MODIFY my_coumn BOOL NOT NULL DEFAULT false;
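One caveat: on ~25m rows, step 2 is itself one long transaction. A hedged variant is to backfill in chunks (the batch size is illustrative), rerunning the statement until it reports zero rows affected:

UPDATE my_table SET my_coumn = false WHERE my_coumn IS NULL LIMIT 10000;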
Alternatively, you could use something like Percona's pt-online-schema-change, which manages schema changes using triggers and is meant to offer the ability to update schemas without locking the table.
Whichever option you choose, I'd suggest testing it in your development environment with some process writing to the table to simulate user activity.

Where clause using key column still gives an error

I have a table that serves as a foreign key lookup from another table. The table is very simple, containing an ID column, which is the primary key, and a JSON column. I wish to remove abandoned entries from this table.
I tried running this script:
DELETE
FROM `ate`.`test_configuration`
WHERE `ate`.`test_configuration`.`ID` NOT IN (SELECT DISTINCT `ate`.`index`.`TestID` from `ate`.`index`);
But I encountered an error stating that I wasn't using a WHERE clause that uses the key column:
Error Code: 1175. You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column To disable safe mode, toggle the option in Preferences -> SQL Editor and reconnect.
This is confusing as my where clause does use the primary key column. I am aware that I can disable safe mode as part of my script as a workaround, but would still like to understand why I'm getting this error. I'd like to avoid unsafe updates if possible.
I believe the optimizer is simply unable to use the index effectively for such a query, so it does a full table scan.
How many rows are in the test_configuration and how many of them will be deleted?
(You might try index hints to force the optimizer to use the index for the query; I'm just not sure whether they are supported in your version of MySQL.)
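For reference, the workaround the asker already mentions, disabling safe updates for just this one statement, looks like this:

SET SQL_SAFE_UPDATES = 0;
DELETE FROM `ate`.`test_configuration`
WHERE `ID` NOT IN (SELECT DISTINCT `TestID` FROM `ate`.`index`);
SET SQL_SAFE_UPDATES = 1;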

Restore DB from SQL script with Foreign Key Constraints

I am trying to restore a DB using an SQL script, but foreign key constraints get in the way.
I am taking a MySQL DB and bringing it over to PostgreSQL.
Since the MySQL CREATE TABLE syntax ended up being quite different, I took another PostgreSQL DB with the same schema but different data, and restored the schema only from that.
In other words, I now have a database with tables, constraints, sequences and all that shnaz but no data inside.
So, it's time to restore the data.
I take a backup of the MySQL DB with phpMyAdmin (data only) as an SQL script (pgAdmin does not seem to accept zip or gzip files for some reason) and run the SQL script.
Now, this is where the problems start to happen. It's only natural: I am going from MySQL to PostgreSQL, so syntax errors are bound to happen.
But there are other, non-syntax-related problems too, like this one:
ERROR: insert or update on table "_account" violates foreign key constraint "fk_1_account"
DETAIL: Key (accountid)=(2) is not present in table "_entity".
So, yeah, basically, a foreign key constraint exists, the query is trying to insert data into the _account table, but the corresponding data has not been inserted into the _entity table yet.
How do I get around that? Is there a way to make pgAdmin3/PostgreSQL disable ALL OF the constraints, insert the data, and then re-enable the constraints?
A syntax related error I encountered, was this one:
INSERT INTO _accounttype_seq (id) VALUES (11);
The PostgreSQL equivalent of that statement (if I am correct) is
ALTER SEQUENCE _accounttype_seq INCREMENT BY 11;
But, it's a bit of a pain to run through the whole script and change all 200+ Sequence insert statements. So, I am being lazy here, but is there an easier way to deal with the sequences as well?
Or, do you guys have any suggestions for a different set of tools to make this easier?
Thanks for your time, have a good day.
Do not try to get around the foreign key constraints. That is the way to make sure the data is bad.
First look at the constraints and make sure you are inserting into the tables in the correct order. If _entity is the parent of _account, then it should be populated first.
Next, you need to have the script move any failing records to an exception table. Then you can look at them and see what the data integrity issues are, and whether you need to throw the records away permanently or try to figure out what the missing parent value should be. If it is critical data, such as orders where the customer no longer exists (possible in any system that didn't have correct FKs to begin with), and you must keep the record and cannot determine what the parent value should have been, you can create an 'Unknown' record in the customer table and assign all bad orders to that customer id.
And manually changing the sequence statements shouldn't take long, even if it is boring. There will be plenty of other things you need to handle manually in a conversion of this type.
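One caveat while you are in there: ALTER SEQUENCE ... INCREMENT BY changes the step size, not the current value. The usual PostgreSQL idiom for positioning a sequence is setval(), e.g.:

SELECT setval('_accounttype_seq', 11);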
I would try to find a data import tool for PostgreSQL. I live in the SQL Server world, where I would use SSIS, but you need the equivalent of SSIS for the PostgreSQL world.
Apparently the foreign keys weren't actually enforced in MySQL (maybe because of using MyISAM), or the generated SQL just inserts in the wrong order.
If it's "only" the wrong order, I see two possible solutions:
Edit the generated script and move all FK definitions to the end of the script.
Edit the definition of each FK constraint and set them all to initially deferred, then run the script as one single transaction with only one COMMIT at the very end.
Edit (because this is too much to be put as a comment)
Using SET CONSTRAINTS ALL DEFERRED will only work if the constraints have been created with the option DEFERRABLE.
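If they were not, a sketch of retrofitting that, using the table and constraint names from the error above (ALTER ... ALTER CONSTRAINT requires PostgreSQL 9.4 or later; on older versions you would drop and re-create the FK with these options):

ALTER TABLE "_account" ALTER CONSTRAINT "fk_1_account" DEFERRABLE INITIALLY DEFERRED;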
To run everything in one single transaction, you have to make sure autocommit is turned off. Then simply run the INSERTs and issue a single COMMIT at the very end. (With autocommit on, each statement is committed as soon as its terminating ; is executed.)
If you want to be independent of the autocommit setting, then start your script with BEGIN and make sure there is only a single COMMIT at the very end.
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO table_one ... ;
INSERT INTO table_two ... ;
.....
COMMIT;