Today I attempted to turn on the Event Scheduler on my Amazon RDS instance.
I received the following error:
Access denied; you need (at least one of) the SUPER privilege(s) for
this operation.
I've been looking at a couple of posts around the internet on how to solve this, but I haven't found anything of real use. I'm not sure where to even start to figure out a solution, because these posts state that Amazon doesn't grant SUPER privileges to anyone.
To enable the Event Scheduler on RDS you will need to specify this in a parameter group.
You will need to either create a new parameter group or modify an existing one. This can be done via the web console or, as with many AWS things, via the CLI/API/SDK.
You want to change the value of event_scheduler to either 1 or ON.
Once this has been changed you can then apply the parameter group to an existing database instance either via the console or the CLI/API/SDK.
To make the database pick up the parameter change you will need to reboot the instance.
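Once the instance is back up, you can confirm the change from any ordinary connection, for example:

SHOW VARIABLES LIKE 'event_scheduler';   -- should now report ON
SHOW PROCESSLIST;                        -- should list an "event_scheduler" daemon thread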
I'm using MySQL 8.0.21 from the MySQL Community Installer on Windows 10 (updated to version 2004), and for some reason, if I create an event in the Event Scheduler which calls a procedure once every second (regardless of what that stored procedure actually does; I'll explain my test case), my CPU maxes out, and when I look at the active connections in MySQL Workbench, a ton of worker threads stack up, stalled in the "Opening Tables" state. My PC freezes, and I have to edit the event to disable it, stop the MySQL process in Task Manager, and start the service again.
TEST CASE
During setup of a brand new server, I used all default settings, except that I enabled the general log and I use the new 8.0+ mysql_sha2_password encryption (although I ALTER USER to mysql_native_password for phpMyAdmin, so that might revert it; I'm honestly not sure).
I create a new schema called "Test".
I create one table called "TestTable", which has only one column, "column1" (INT).
I then create a stored procedure "TestProc" which does "SELECT COUNT(*) FROM TestTable;" (adjust privileges; the definer is root@localhost and it reads SQL data).
And finally I create an event called "TestEvent" which does "CALL TestProc();", recurring every 1 second, with preserve on completion, and the definer is root@localhost.
I restart the server before the event is fired. (A SQL sketch of these steps follows below.)
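A rough SQL equivalent of the steps above (reconstructed from the description; the exact options Workbench generates may differ slightly):

CREATE SCHEMA Test;
USE Test;

CREATE TABLE TestTable (column1 INT);

DELIMITER $$
CREATE DEFINER = 'root'@'localhost' PROCEDURE TestProc()
READS SQL DATA
BEGIN
    SELECT COUNT(*) FROM TestTable;
END$$
DELIMITER ;

CREATE DEFINER = 'root'@'localhost' EVENT TestEvent
    ON SCHEDULE EVERY 1 SECOND
    ON COMPLETION PRESERVE
    DO CALL TestProc();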
Also, if I enable the event, or create it, it runs without issue; it's important to note that the issue begins when the Event Scheduler is left on and the event is left enabled, and the server is then restarted from the Services panel in Task Manager. Immediately the CPU jumps to max, and the active connections show threads stacking up without completing.
Any clues are appreciated; I find no actual errors, nor do I have any idea where to begin debugging anymore. I've tried skipping grant tables (but obviously that's not optimal, and it didn't work).
I did find a hint when reviewing the MySQL 8.0+ docs:
"If a repeating event does not terminate within its scheduling interval, the result may be multiple instances of the event executing simultaneously. If this is undesirable, you should institute a mechanism to prevent simultaneous instances. For example, you could use the GET_LOCK() function, or row or table locking." (from the MySQL documentation)
However, when analyzing, there do not appear to be any locks, nor should I need to implement such a mechanism manually just for this test case (or my actual program).
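For reference, the guard the manual is describing would look something like this inside the procedure (a sketch only; the lock name is arbitrary, and it is not something the test case above actually uses):

DELIMITER $$
CREATE PROCEDURE TestProcGuarded()
BEGIN
    -- Skip this run entirely if the previous invocation still holds the lock.
    IF GET_LOCK('TestProc_lock', 0) = 1 THEN
        SELECT COUNT(*) FROM TestTable;
        DO RELEASE_LOCK('TestProc_lock');
    END IF;
END$$
DELIMITER ;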
UPDATE
Up to this point, albeit a rather niche bug, I do believe that is exactly what this is, and I have posted it on the MySQL bug forum. The reference post is here:
The answer has actually turned out to be a bug which is reproducible - Bug#: 100449.
I have an Azure SQL DB where I am executing a change with a C# call (using await db.SaveChangesAsync();).
This works fine and I can see the update in the table, and in the APIs that I call which pull the data. However, roughly 30-40 minutes later, I run the API again and the value is back to the initial value. I check the database and see that it is indeed back to the initial value.
I can't figure out why this is, and I'm not sure how to go about tracking it down. I tried to use the Track Changes SQL command but it doesn't give me any insight into WHY the change is happening, or in what process, just that it is happening.
BTW, this is a test Azure instance that nobody has access to but me, and there are no other processes. I'm assuming this is some kind of delayed transaction rollback, but it would be nice to know how to verify that.
I figured out the issue.
I'm using an Azure Free Tier service, which runs on a shared virtual machine. When the app went inactive, it was shut down, and it was restarted on demand when I issued a new request.
In addition, I had a Seed method in my Entity Framework Migration set up to set the particular record I was changing to 0, and when it restarted, it re-ran the migration, because it was configured to do so in my web config.
Simply disabling the EF Migrations and republishing does the trick (or when I upgrade to a better tier for real production, it will also go away). I verified that records outside of those expressly mentioned in the Migration Seed method were not affected by this change, so it was clearly that, and after disabling the migrations, I am not seeing it any more.
For security purposes, we will create a database log that will contain all changes done to different tables in the database; to achieve this we will use triggers, as stated here. My concern is that if the system admin, or anyone who has the root privilege, changes the data in the logs for their own benefit, then having logs becomes meaningless. Thus, I would like to know if there is a way to prevent anyone, and I mean anyone at all, from making any changes to the logs table, i.e. dropping the table, or updating and deleting a row. Is this even possible? Also, in regard to my logs table, is it possible to keep track of the previous data that was changed by an update query? I would like to have the previous and new data in my logs table so that we know what changes were made.
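On the last part of the question: yes, an update trigger can see both versions of a row through the OLD and NEW aliases. A minimal sketch, in which the accounts table, the balance column, and the logs layout are all placeholders:

-- Placeholder table and column names; adapt to the real schema.
CREATE TRIGGER accounts_update_log
AFTER UPDATE ON accounts
FOR EACH ROW
    INSERT INTO logs (table_name, changed_at, old_value, new_value)
    VALUES ('accounts', NOW(), OLD.balance, NEW.balance);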
The problem you are trying to fix is hard, as you want someone who can administer your system, but you don't want them to be able to touch all parts of the system. That means you either need to administer the system yourself and give others only limited access, trust all administrators, or look for an external solution.
What you could do is write your logs to a system where only you (or at least a different administrator than the first) have access.
Then, if you only ever write (and don't allow changes/updates and deletes) on this system, you will be able to keep a trusted log and even spot inconsistencies in case of tampering.
A second method would be to use a specific mechanism to write logs, one that adds a signed message. In this manner you can be sure that the logs were added by that system. If you also save a (signed) message of the state of the complete system, you will probably be able to recognize any tampering. The 'system' used for signing should obviously live on another machine, making this somewhat equivalent to the first option.
There is no way to stop root access from having permission to make alterations. A combined approach can help you detect tampering, though. You could create another server that has more limited access and clone the database table there. Log all login activity on both servers and cross-backup the logs between servers. Also, make very regular off-server backups. You could also create a hashing table that matches each row of the log table; an attacker would not only have to find the code that creates the hash, but reverse engineer it and alter the timestamp to match. However, I think your best bet is to make a cloned server that has no network login, physical login only. If you think there has been any tampering, you will have to do some forensics. You can even add a USB key to the physical clone server and keep it with a CEO or something. Of course, if you can't trust the sysadmins, your job is very difficult no matter what. The trick is not to create a solid wall, but a fine net, and to scrutinize everything coming through it.
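A very rough sketch of that per-row hash idea (the names are made up; the point is only that the hash is computed and stored somewhere the admin of the main server cannot quietly rewrite):

CREATE TABLE log_hashes (
    log_id   INT PRIMARY KEY,
    row_hash CHAR(64) NOT NULL   -- SHA-256 of the log row's contents
);

-- Computed from the freshly written log row, ideally by a separate, restricted account:
INSERT INTO log_hashes (log_id, row_hash)
SELECT id, SHA2(CONCAT_WS('|', id, table_name, changed_at, old_value, new_value), 256)
FROM logs
WHERE id = 42;   -- the row just written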
Once you set up the master-slave relationship, and only give untrusted users access to the slave database, you won't need to alter your code; just use the master database as the primary in your code. The link below is information on setting up master-slave replication. To be fully effective, though, these need to be on different servers. I don't know how this solution would work on one server; it may be possible, I just don't know.
https://dev.mysql.com/doc/refman/5.1/en/replication.html
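A bare-bones sketch of the replica side (MySQL 5.x syntax; the host, credentials, and binlog coordinates are placeholders):

-- On the slave:
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'placeholder-password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;

-- Keep the slave read-only for ordinary (non-SUPER) accounts:
SET GLOBAL read_only = 1;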
Open phpMyAdmin,
open the table,
and assign table-level privileges on the table (the equivalent GRANT is sketched below).
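Under the hood that boils down to a GRANT statement along these lines (the user, database, and privilege list are only examples):

GRANT SELECT, INSERT ON mydb.logs TO 'app_user'@'localhost';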
I have an interface into which an arbitrary SQL SELECT statement (as an input string) can be entered; it will select data from a given table for use in an operation. I want to make sure that this statement does not make changes to the database.
string query = GetStringFromForm(...);  // arbitrary SELECT supplied by the user
DatabaseStatement statement(query);
statement.execute();
while (statement.fetch(...))
    ...                                  // process each result row
One way to implement this would be to create a new database user with the appropriate permissions and then execute the statement under that user. This would be a hassle as it would require setting up this new user and creating a new database connection for it and so on.
Is there a way to isolate the permissions for a single statement in MySQL 5.5? Or some other way to do this?
With MySQL 5.6 you can do:
START TRANSACTION READ ONLY;
https://dev.mysql.com/doc/refman/5.6/en/commit.html
I think it's what you're looking for, but you have to upgrade to 5.6 to use it.
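A quick illustration of how it behaves (the table name is arbitrary, and the exact error text may vary by version):

START TRANSACTION READ ONLY;
SELECT * FROM products;            -- allowed
INSERT INTO products VALUES (1);   -- rejected: cannot execute statement in a READ ONLY transaction
COMMIT;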
Don't connect to the database with the same login for everything.
At the very least you should use three logins for this architecture:
A development-level login for creating tables, etc.
The login used by your application to make the application run
The login used to execute user specified queries
This means that your application login only has the permissions it needs - to read or write to the tables necessary, not to do everything to every table; application logins shouldn't need to be able to CREATE or DROP tables, for example.
This limits the impact of mistakes in code, but also the scope to which someone could hack your system (such as with SQL Injection attacks).
It also means that the login for running user-specified queries needs only to be granted SELECT permissions, and only on the tables/views/functions that it should be able to use. If they try to run an INSERT or a DELETE that they don't have permissions for, you can catch the error and tell the user that they're a very naughty boy - secure in the knowledge that the RDBMS simply won't let the user damage anything that you haven't already given them permission to touch.
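As a sketch, the third login might be set up along these lines (the names, host, and password are placeholders):

CREATE USER 'report_user'@'localhost' IDENTIFIED BY 'placeholder-password';
GRANT SELECT ON app_db.orders TO 'report_user'@'localhost';
GRANT SELECT ON app_db.customer_summary_view TO 'report_user'@'localhost';
-- Any INSERT, UPDATE, or DELETE attempted through this login is refused by the server.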
In short, RDBMSs already have login permission architectures. Use those to limit the permissions and functionality of different aspects of your code.
I would not try to re-invent this wheel. It is extremely likely that there is a trick or hack that you missed that exposes a vulnerability in your application. I appreciate that you say this is a hassle, but it really is the right way of doing things, and the only reliable way of doing things. There's a reason that it's the standard approach to data security, sorry.
(And trust me, even if no one is trying to hack your system, eventually someone will type some screwball query in - accidentally bypassing your security and making a pig's ear of your database.)
I have been working on an eCommerce site (using Drupal). Until a few days ago, before I started getting this error, my site was working fine and there were no issues. But nowadays my site frequently goes offline with the error message ('max_user_connection').
I was using some custom code containing mysql_connect and mysql_query; I have now changed everything into a module, and no custom queries are left as such. The error is still there. On some of the pages, data is populated from two different databases, and to handle the two databases on the same page I am using the Drupal function db_set_active().
I have also discussed this with the hosting provider; they increased the 'connection_limit', but the error still occurs. What are the possible reasons for this kind of issue, and what are the ways to handle it?
In this case the DBMS is not able to serve all incoming connection requests to the database.
You can check the current number of connections with SHOW FULL PROCESSLIST (seeing all users' threads requires the PROCESS privilege).
You now have two choices: alter your application logic so that fewer connections are used overall, or try to raise the max_connections system variable so that your DBMS can serve more connections (changing it at runtime requires the SUPER privilege).
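For reference, checking the situation and (if you really must) raising the limit looks roughly like this (the number is only illustrative and depends on available memory):

SHOW FULL PROCESSLIST;                     -- what is currently connected and what it is doing
SHOW VARIABLES LIKE 'max_connections';     -- the configured limit
SHOW STATUS LIKE 'Max_used_connections';   -- the high-water mark since startup
SET GLOBAL max_connections = 300;          -- requires SUPER; lost on restart unless also set in my.cnf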
But if your provider has already told you that they increased the 'connection_limit', you should go for the first approach (altering your application logic).