So this morning I was looking at our company's database users and it was a real mess, with some big potential security issues. Since most colleagues were around, I decided to gather them and decide together which users to delete.
Now, I forgot that one colleague uses Power BI (a lot of different dashboards) and he wasn't around, and it will take some time for him to replace all the data sources with a new user. So I was wondering if there is any way to find the users I deleted, so I can see which one he was using, since you can't see which credentials he was using in Power BI.
If the general query log was enabled on your server, then you have a log of each query and the user that executed it.
Alternatively, enable the error log and set log-warnings to 1 or higher, then ask the user to try to refresh the dashboard. The access denied event will show up in the error log.
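For reference, a few statements that may help check both options; the `SET GLOBAL log_warnings` line assumes MySQL 5.6 or earlier (later versions use `log_error_verbosity` instead):

```sql
-- Was the general query log on, and where does it go (FILE and/or TABLE)?
SHOW VARIABLES LIKE 'general_log%';
SHOW VARIABLES LIKE 'log_output';

-- If log_output includes TABLE, past queries and their users are in mysql.general_log:
SELECT event_time, user_host, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 50;

-- For the error-log approach: raise the warning level, then have the
-- colleague refresh a dashboard and watch for "Access denied" entries.
SET GLOBAL log_warnings = 2;
SHOW VARIABLES LIKE 'log_error';
```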
We want to log every user action (display/print/export) on certain reports in our application. I found that SSRS already has its own log (select * from ExecutionLog) and most of the data we need is there, BUT that log can be turned off by anyone who has access to the server. Is there an API/class we could use to do that logging on our own?
We need a reliable solution where we can be 100% sure that the report usage is logged every time.
We are trying to avoid using client-side events to log the user actions because of the network performance impact. Not sure if that would be a recommendable option instead.
Any comment is appreciated :D
For security purposes, we will create a database log that will contain all changes made to different tables in the database. To achieve this we will use triggers, as stated here, but my concern is that if the system admin or anyone with root privileges changes the data in the logs for their own benefit, having logs becomes meaningless. Thus, I would like to know if there is a way to prevent anyone, and I mean anyone at all, from making changes to the logs table, i.e. dropping the table or updating and deleting rows. Is this even possible?

Also, regarding my logs table: is it possible to keep track of the previous data that was changed by an update query? I would like to have both the previous and the new data in my logs table so that we know exactly what changes were made.
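A minimal sketch of the trigger-based change log described above, with made-up table and column names, showing how OLD and NEW give you the previous and new data:

```sql
-- Sketch only: `employees`, `salary` and the log table are placeholders.
CREATE TABLE employees_log (
  log_id     INT AUTO_INCREMENT PRIMARY KEY,
  emp_id     INT NOT NULL,
  old_salary DECIMAL(10,2),
  new_salary DECIMAL(10,2),
  changed_by VARCHAR(128),
  changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

DELIMITER $$
CREATE TRIGGER employees_audit_update
AFTER UPDATE ON employees
FOR EACH ROW
BEGIN
  -- OLD.* holds the values before the UPDATE, NEW.* the values after it.
  INSERT INTO employees_log (emp_id, old_salary, new_salary, changed_by)
  VALUES (OLD.id, OLD.salary, NEW.salary, CURRENT_USER());
END$$
DELIMITER ;
```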
The problem you are trying to fix is hard: you want someone who can administer your system, but you don't want them to be able to touch every part of it. That means you either need to administer the system yourself and give others only limited access, trust all administrators, or look for an external solution.
What you could do is write your logs to a system where only you (or at least a different administrator than the first) have access.
Then, if you only ever write to this system (and don't allow updates or deletes), you will be able to keep a trusted log and even spot inconsistencies in case of tampering.
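As a concrete illustration of the append-only idea (account, schema and table names below are placeholders): give the application account INSERT on the log table and nothing else, so it can add entries but never rewrite history:

```sql
-- Placeholder names: adjust to your own schema and host restrictions.
CREATE USER 'app_logger'@'%' IDENTIFIED BY 'use-a-strong-password';
GRANT INSERT ON audit.change_log TO 'app_logger'@'%';
-- Deliberately no UPDATE, DELETE, DROP or ALTER on the log table.
```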
A second method would be to use a specific mechanism to write logs, one that adds a signed message. That way you can be sure the entries were added by that system. If you also save a (signed) message of the state of the complete system, you will probably be able to recognize any tampering. The 'system' used for signing should obviously live on another machine, making this roughly equivalent to the first option.
There is no way to stop root access from having permission to make alterations, but a combined approach can help you detect tampering. You could create another server with more limited access and clone the database table there. Log all login activity on both servers and cross-back-up the logs between them. Also, make very regular off-server backups.

You could also create a hashing table that matches each row of the log table. An attacker would not only have to find the code that creates the hash, but reverse engineer it and alter the timestamp to match. However, I think your best bet is a cloned server with no network login, physical login only. If you think there has been any tampering, you will have to do some forensics. You can even add a USB key to the physical clone server and keep it with the CEO or someone similar.

Of course, if you can't trust the sysadmins, your job is very difficult no matter what. The trick is not to build a solid wall, but a fine net, and to scrutinize everything coming through it.
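One way to flesh out the hashing-table idea is to chain each row's hash to the previous row's, so altering or removing any entry breaks every hash after it. A rough sketch with placeholder names (SHA2() needs MySQL 5.5+):

```sql
CREATE TABLE change_log_hash (
  log_id   INT PRIMARY KEY,
  row_hash CHAR(64) NOT NULL
);

-- When writing log entry 42, chain its payload to the newest existing hash.
SET @prev := (SELECT row_hash
              FROM change_log_hash
              ORDER BY log_id DESC
              LIMIT 1);

INSERT INTO change_log_hash (log_id, row_hash)
VALUES (42, SHA2(CONCAT('payload of log entry 42', COALESCE(@prev, '')), 256));
```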
Once you set up the master-slave relationship and only give untrusted users access to the slave database, you won't need to alter your code; just keep using the master database as the primary in your code. The link below has information on setting up master-slave replication. To be fully effective, though, these need to be on different servers. I don't know how this solution would work on a single server; it may be possible, I just don't know.
https://dev.mysql.com/doc/refman/5.1/en/replication.html
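The essentials from that page, as a rough sketch: give the master `server-id = 1` and `log-bin = mysql-bin` in its my.cnf, give the slave `server-id = 2` and `read_only = 1`, restart both, and then run the following (host names, account and log coordinates are placeholders):

```sql
-- On the master: create a replication account.
CREATE USER 'repl'@'slave-host' IDENTIFIED BY 'use-a-strong-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'slave-host';
SHOW MASTER STATUS;   -- note File and Position for the slave

-- On the slave: point it at the master and start replicating.
CHANGE MASTER TO
  MASTER_HOST = 'master-host',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'use-a-strong-password',
  MASTER_LOG_FILE = 'mysql-bin.000001',   -- from SHOW MASTER STATUS
  MASTER_LOG_POS = 4;                     -- from SHOW MASTER STATUS
START SLAVE;
SHOW SLAVE STATUS\G
```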
Open phpMyAdmin,
open the table,
and assign table-level privileges on the table.
I'm creating a database that registers working hours.
People can enter the start of the working day, when and how long they take their lunch break, and the end of the working day.
All works well, and I've created some tables that properly capture the times.
BUT my manager wants to prevent people from changing their working hours the next day (unless, of course, the field is empty because the user forgot to fill it in). The user should only be able to change his working hours when the admin (manager) grants access via a password.
Note that I've created a separate database for each user (which is automatically created when the user registers) due to the need for password protection.
How would I handle this best? I don't know if locking records would work (?).
Locking controls serves no purpose, because the user obviously has direct access to his own personalized, password-protected database.
I could provide code, but it would be useless for this specific problem...
(I've got hundreds of lines by now, none of which really have anything to do with this specific problem.)
Thanks for your suggestions
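If the back end is a SQL database that supports triggers (MySQL 5.5+ shown here purely as an assumption about your setup, with made-up table and column names), one option is a BEFORE UPDATE trigger that rejects changes to already-filled rows from previous days, leaving the admin's password-protected account as the only way to override it:

```sql
DELIMITER $$
CREATE TRIGGER work_hours_lock_past_days
BEFORE UPDATE ON work_hours
FOR EACH ROW
BEGIN
  -- Still allow filling in a field the user forgot (old value empty),
  -- but block edits to completed rows from earlier days.
  IF OLD.work_date < CURDATE() AND OLD.end_time IS NOT NULL THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Past working hours can only be changed by the admin';
  END IF;
END$$
DELIMITER ;
```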
Alright, so this morning I got a flood of automated mails from my vBulletin website with MySQL errors stating:
`Can't connect to MySQL server on '127.0.0.1'`
`Too many connections`
`User username already has more than 'max_user_connections' active connections`
I've never had this before on my host, and I don't get that many visitors on my two sites. One site, running vBulletin, gets between 300 and 700 daily visits; my second site is one I put together myself, so that's probably the source of the connections staying open. I started advertising it yesterday, but it doesn't get many visitors either, so I don't think it's simply too many users connecting; I think it's connections staying open or something like that.
Is there some way to figure out the source of this, or where connections stay open too long? Any information would be helpful, really.
Thanks
In a MySQL shell you could run `SHOW PROCESSLIST;`, which will show you the currently running processes, which user is logged in, which database they have selected, and which host they're coming from. This might give you some clues about the origin of your excess connections. Maybe you'll see queries that run for a very long time (combine that with an impatient user repeatedly hitting refresh).
Keep in mind that if any of your code runs with persistent connections there will be a bunch of idle processes in that list, which is perfectly normal in that case.
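A few concrete commands to run in that shell; all of them are standard MySQL and safe on a live server:

```sql
SHOW FULL PROCESSLIST;                       -- who is connected, from which host, doing what
SHOW STATUS LIKE 'Threads_connected';        -- connections open right now
SHOW STATUS LIKE 'Max_used_connections';     -- high-water mark since the last restart
SHOW VARIABLES LIKE 'max_connections';       -- the global limit
SHOW VARIABLES LIKE 'max_user_connections';  -- the per-user limit from your error message
SHOW VARIABLES LIKE 'wait_timeout';          -- how long idle connections are kept open
```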
I'm using phpMyAdmin and working on someone's site whose information is pulled from a database with a table called "profile_types". I had to add a row for a new type, but the website isn't reflecting the change. I've been reading around, and `have_query_cache` is set to YES, so I figured I should clear the cache and see if that helps.
So after reading, I was trying to use RESET QUERY CACHE but kept getting an error about needing RELOAD. After some more reading I still can't figure out how to get or use the RELOAD privilege. As far as I know this is the database's only user account, so I figured it was an admin account and had the necessary privileges. Am I missing something? Also, do you think running RESET QUERY CACHE would allow the site to show the new record? I've cleared my browser's cache and tried all of that with no luck, so I figured this was my last option.
The query cache is for the results of SELECTs. It doesn't "cache" inserts: any cached results for a table are invalidated as soon as that table is modified, so your new row will show up in subsequent queries regardless. If inserts could be hidden by the cache and not reflected in later results, the database wouldn't be ACID compliant.
In other words, imagine if this was a banking database, and it "cached" deposits but made sure withdrawals were reflected immediately. You'd be drowning in overdrafts. Oh... wait... That's how banks work these days.
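To answer the practical part anyway, the statements below show how to inspect the cache and obtain the RELOAD privilege (the account name is a placeholder, and this only applies to MySQL versions that still have the query cache, i.e. 5.7 and earlier):

```sql
-- Is the cache there at all, and is anything being served from it?
SHOW VARIABLES LIKE 'have_query_cache';
SHOW VARIABLES LIKE 'query_cache%';
SHOW STATUS LIKE 'Qcache%';

-- RESET QUERY CACHE needs the RELOAD privilege; a root/admin account
-- with GRANT OPTION has to hand it out.
GRANT RELOAD ON *.* TO 'someuser'@'localhost';

RESET QUERY CACHE;   -- empties the cache completely
```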