I have a MySQL table that needs restricted permissions for INSERT.
Specifically, I have written a stored function that handles all INSERT operations, and I need to restrict permissions so ONLY this stored function is allowed to insert rows in this specific table.
"But why would you do that?"
Because MySQL doesn't support Microsoft's INSTEAD OF INSERT triggers, and a custom stored function/procedure for insertion is the only viable workaround.
And I need it to work like a trigger - guaranteed to execute for every INSERT operation, with no exceptions or loopholes.
The function returns 0 if no errors, 1 if invalid parameters, 2 if unique index collision, etc.
(Or maybe the function will include an INSERT ... ON DUPLICATE KEY UPDATE statement, I haven't decided yet.)
"But why do you need INSTEAD OF INSERT triggers?"
For the same reason that Microsoft SQL Server developers need it ... because it does the job that needs to be done ...
Specifically, because my code includes GET_LOCK before insertion, and RELEASE_LOCK after insertion ... and I don't want a BEFORE INSERT trigger to end without releasing the lock (seems like a very bad idea).
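The wrapper the question describes might be sketched roughly as follows. Everything here is hypothetical (table `mytable`, columns `a`/`b`, lock name, timeout); the status codes follow the 0/1/2 convention above, and the point is that the lock is released on every path out of the function. Note that with binary logging enabled, MySQL may additionally require `log_bin_trust_function_creators` or a deterministic declaration for a function like this.

```sql
-- Hypothetical sketch: serialize inserts with GET_LOCK and release the
-- lock before every RETURN. Names are invented for illustration.
DELIMITER //
CREATE FUNCTION guarded_insert(p_a INT, p_b VARCHAR(64))
RETURNS INT
MODIFIES SQL DATA
BEGIN
    DECLARE rc INT DEFAULT 0;
    IF p_a IS NULL THEN
        RETURN 1;                          -- invalid parameters
    END IF;
    IF GET_LOCK('mytable_insert', 10) <> 1 THEN
        RETURN 3;                          -- could not acquire lock
    END IF;
    BEGIN
        DECLARE CONTINUE HANDLER FOR 1062  -- unique index collision
            SET rc = 2;
        INSERT INTO mytable (a, b) VALUES (p_a, p_b);
    END;
    DO RELEASE_LOCK('mytable_insert');     -- released on success and on rc = 2
    RETURN rc;
END//
DELIMITER ;
```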
You can create MySQL stored functions (and stored procedures) with the clause SQL SECURITY DEFINER. They'll then run with the permissions of the definer, not of the invoking user. Your definition might look like this:
CREATE PROCEDURE proc()
SQL SECURITY DEFINER
BEGIN
INSERT INTO tbl whatever....;
END;
When you do this, log in to MySQL with an administrator's account, not a user's account.
Then use MySQL's permission system to grant INSERT access for that table to the administrator's account and revoke it for other users' accounts.
GRANT SELECT, UPDATE, INSERT, DELETE ON mydatabase.mytable TO admin;
REVOKE INSERT ON mydatabase.mytable FROM user1;
REVOKE INSERT ON mydatabase.mytable FROM user2;
Then, INSERT queries from other users' accounts will fail, but the stored function will succeed. This is more-or-less conventional SQL stored code privilege handling.
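Putting the two halves together, a minimal end-to-end sketch (account names, host parts, and the `val` column are placeholders) could look like this:

```sql
-- Sketch: only the definer (admin) may INSERT directly; ordinary users
-- get EXECUTE on the procedure, which runs with admin's privileges.
DELIMITER //
CREATE DEFINER = 'admin'@'localhost' PROCEDURE insert_row(IN p_val INT)
SQL SECURITY DEFINER
BEGIN
    INSERT INTO mydatabase.mytable (val) VALUES (p_val);
END//
DELIMITER ;

GRANT SELECT, UPDATE, INSERT, DELETE ON mydatabase.mytable TO 'admin'@'localhost';
REVOKE INSERT ON mydatabase.mytable FROM 'user1'@'%';
GRANT EXECUTE ON PROCEDURE mydatabase.insert_row TO 'user1'@'%';
```

With this in place, `CALL mydatabase.insert_row(42);` succeeds for user1 while a direct `INSERT INTO mydatabase.mytable ...` is denied.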
MySQL triggers, of any flavor, can't start or end transactions.
I have a master node and a slave node running on the same system, replicating from the master to the slave. When I start the system, it generates sym_ tables on both the master and the slave nodes. Are these really required for one-way replication? I tried adding the following property on the slave side:
auto.config.database=false
but that stops the synchronisation altogether. My entries in the master's sym_ tables are as follows:
delete from sym_trigger_router;
delete from sym_trigger;
delete from sym_router;
delete from sym_channel where channel_id in ('item');
delete from sym_node_group_link;
delete from sym_node_group;
delete from sym_node_host;
delete from sym_node_identity;
delete from sym_node_security;
delete from sym_node;
insert into sym_channel
(channel_id, processing_order, max_batch_size, enabled, description)
values('item', 1, 100000, 1, 'Item and pricing data');
insert into sym_node_group (node_group_id) values ('corp');
insert into sym_node_group (node_group_id) values ('store');
insert into sym_node_group_link (source_node_group_id, target_node_group_id, data_event_action) values ('corp', 'store', 'W');
insert into sym_node_group_link (source_node_group_id, target_node_group_id, data_event_action) values ('store', 'corp', 'P');
insert into sym_trigger
(trigger_id,source_table_name,channel_id,last_update_time,create_time)
values('item','item','item',current_timestamp,current_timestamp);
insert into sym_router
(router_id,source_node_group_id,target_node_group_id,router_type,create_time,last_update_time)
values('corp_2_store', 'corp', 'store', 'default',current_timestamp, current_timestamp);
insert into sym_router
(router_id,source_node_group_id,target_node_group_id,router_type,create_time,last_update_time)
values('store_2_corp', 'store', 'corp', 'default',current_timestamp, current_timestamp);
insert into sym_trigger_router
(trigger_id,router_id,initial_load_order,last_update_time,create_time)
values('item','corp_2_store', 100, current_timestamp, current_timestamp);
insert into sym_node (node_id,node_group_id,external_id,sync_enabled,sync_url,schema_version,symmetric_version,database_type,database_version,heartbeat_time,timezone_offset,batch_to_send_count,batch_in_error_count,created_at_node_id)
values ('000','corp','000',1,null,null,null,null,null,current_timestamp,null,0,0,'000');
insert into sym_node (node_id,node_group_id,external_id,sync_enabled,sync_url,schema_version,symmetric_version,database_type,database_version,heartbeat_time,timezone_offset,batch_to_send_count,batch_in_error_count,created_at_node_id)
values ('001','store','001',1,null,null,null,null,null,current_timestamp,null,0,0,'000');
insert into sym_node_security (node_id,node_password,registration_enabled,registration_time,initial_load_enabled,initial_load_time,created_at_node_id)
values ('000','5d1c92bbacbe2edb9e1ca5dbb0e481',0,current_timestamp,0,current_timestamp,'000');
insert into sym_node_security (node_id,node_password,registration_enabled,registration_time,initial_load_enabled,initial_load_time,created_at_node_id)
values ('001','5d1c92bbacbe2edb9e1ca5dbb0e481',1,null,1,null,'000');
insert into sym_node_identity values ('000');
If the sym_ tables are not required on the slave side, please help me avoid creating them.
Thanks in advance.
You're talking about sym_ not sys_ tables, aren't you?
Yes, you need the sym_ tables at the destination node. For example, without sym_node your destination node won't be able to register and hold its registration with the source node; sym_incoming_batch holds all the batches of data synced to the destination node, and so on.
auto.config.database just tells SymmetricDS that you are going to manage the creation and maintenance of the SymmetricDS tables yourself. They are still required. However, you can put the SymmetricDS tables in a different catalog (database) than the target tables.
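For illustration, an engine properties file along those lines might look like the fragment below. The connection details are placeholders; the idea is that the sym_ tables live in whatever catalog `db.url` points at, and `auto.config.database=false` only disables their automatic creation and upkeep, so they must already exist there.

```properties
# Hypothetical SymmetricDS engine file. The sym_ tables are created in
# the catalog named in db.url; auto.config.database=false means you
# manage their creation yourself -- they are still required.
engine.name=store-001
group.id=store
external.id=001
registration.url=http://corp-host:31415/sync/corp-000
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://localhost/symmetric_meta
db.user=symmetric
db.password=secret
auto.config.database=false
```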
There is also an as-yet undocumented feature to set up a generic JDBC data loader and use a small local H2 database as the SymmetricDS database. This option only works for one-way synchronization to the client.
So I want to write a trigger that modifies (inserts into, updates, or deletes from) a table whenever another table (in another database) gets updated, something like this for example:
CREATE TRIGGER new_data
AFTER INSERT ON account
FOR EACH ROW
    INSERT INTO test4.bank3
    SET money = NEW.amount;
The problem is that I only have read access to the other database (in this example, the one where account lives).
Is there a way around it or do I have to use a completely different method?
Can you ask the admin of the other database to set up a connection/trigger so that a duplicate table is maintained in your database? Then you could attach your trigger to that table.
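That arrangement could look roughly like this. All names here are hypothetical; the first trigger would be created by the other database's admin (who has write access to your database), and the second one is yours, on the copy you own.

```sql
-- Created by the admin of the source database: mirror new account
-- rows into a table in your database (mydb.account_copy is invented).
CREATE TRIGGER account_mirror
AFTER INSERT ON account
FOR EACH ROW
    INSERT INTO mydb.account_copy (amount) VALUES (NEW.amount);

-- Your trigger, attached to the copy in your own database:
CREATE TRIGGER new_data
AFTER INSERT ON mydb.account_copy
FOR EACH ROW
    INSERT INTO test4.bank3 SET money = NEW.amount;
```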
I'm trying to insert rows into a table via a trigger or stored procedure without writing any data to the binary log. Is this possible? I know that for normal connections you can set SQL_LOG_BIN=0 to disable binary logging for the connection, but I haven't been able to get that to work for triggers. If there's a way to do this with a federated engine table, that would also be OK.
edit:
I can almost accomplish this via a stored procedure:
CREATE PROCEDURE nolog_insert(`id` INT, `data` blob)
BEGIN
SET SESSION SQL_LOG_BIN = 0;
INSERT INTO `table` (`id`, `data`) VALUES (id, data);
SET SESSION SQL_LOG_BIN = 1;
END
I insert a record by calling the procedure from the mysql prompt:
call nolog_insert(50, 'notlogged');
As expected, the record (50, 'notlogged') is inserted into the table, but is not written to the binary log.
However, I want to run this procedure from a trigger. When using a trigger as follows:
create trigger my_trigger before insert on blackhole_table for each row call nolog_insert(new.id, new.data);
The data is both inserted to the table and written to the binary log.
If you run statement-based replication, triggers are executed on both the master and the slave but are not themselves replicated. Perhaps that could solve your problem.
Other than that, it's not allowed to change sql_log_bin inside a transaction, so I would say there is no good way to keep the effects of a trigger from being replicated when using row-based replication.
I need an audit trail in MySQL; is there a way to configure the binary log to record not only the changes but also the user (connection) who made them? Or do I have to use MySQL Proxy?
TIA
Peter
I don't think it's possible to have the binlog show connection info. My approach is to set up triggers in the database that log to audit tables. For example, here is one from work:
CREATE TRIGGER whatever_audit_INSERT
AFTER INSERT ON whatever FOR EACH ROW
BEGIN
    INSERT INTO whatever_audit (
        audit_when_start, audit_who_start, col1, col2
    ) VALUES (
        NOW(), @app_user, NEW.col1, NEW.col2
    );
END
That's from memory; hopefully I got the syntax right...
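Assuming the trigger reads a session variable such as @app_user (note that `#` starts a comment in MySQL, so the variable form is what's intended), the application would set it once per connection before its DML, along these lines:

```sql
-- The application identifies itself on each connection; the audit
-- trigger then records @app_user (it logs NULL if the app forgets).
SET @app_user = 'alice@example.com';
INSERT INTO whatever (col1, col2) VALUES (1, 'x');
```

Application-level identity is needed because many apps share one MySQL account through a connection pool, so CURRENT_USER() would be the same for everyone.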