Monitoring Data if manually changed [closed] - mysql

Is there any tool to monitor data in SQL Server and detect when it has been changed manually, not through the system?
I want to check whether our DB administrator edited any records manually or not.

You can do this in many ways, but one feature really has to be set up in advance, on either the MSSQL server or the MySQL server. In the first case, you should switch on auditing for the specific database. If you didn't do that beforehand, you can dig through the transaction logs:
https://solutioncenter.apexsql.com/read-a-sql-server-transaction-log/
But that is time-consuming and may bring no conclusive results.
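If you are able to set auditing up ahead of time on SQL Server, a minimal sketch looks like the following (the audit name, database name, and file path are placeholders; note that database-level audit specifications require Enterprise edition on versions before SQL Server 2016 SP1):
# Create a server audit writing to a file, then attach a DML audit specification to one database.
sqlcmd -S localhost -Q "CREATE SERVER AUDIT DataChangeAudit TO FILE (FILEPATH = 'C:\Audit\'); ALTER SERVER AUDIT DataChangeAudit WITH (STATE = ON);"
sqlcmd -S localhost -d MyDatabase -Q "CREATE DATABASE AUDIT SPECIFICATION DmlAudit FOR SERVER AUDIT DataChangeAudit ADD (INSERT, UPDATE, DELETE ON SCHEMA::dbo BY public) WITH (STATE = ON);"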
For MySQL there are plug-ins built on the audit API, which MySQL has shipped since version 5.5.3. A number of plug-ins have been developed: Oracle delivers one with its Enterprise solution, Percona delivers one, and I believe there are others as well.
Here too you can dig through the binlogs and analyze the general query log and the slow query log (if they are set up). But again, it is time-consuming, and you may not be able to find definitive, 100% proof.
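For reference, the general query log can be switched on at runtime (MySQL 5.1 and later), so you can at least start collecting evidence going forward; the file path here is only an example:
# Log every statement the server receives to a file (heavy on busy servers).
mysql -u root -p -e "SET GLOBAL general_log_file = '/var/log/mysql/query.log'; SET GLOBAL general_log = 'ON';"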
Only auditing can deliver hard evidence, but it must be set up before the fact, not after.
You must also know that if your DB admin knows the passwords of the application users and can log in to the application servers, then he can log in to the database as that app user, make the changes, and there will be pretty much no way to show it wasn't the app that changed the record (unless application roles are set up in the database that can only be used by the application).
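On SQL Server, application roles are one way to close that hole: the role's permissions only become available after the application activates the role with its own password. A rough sketch (role name and password are placeholders):
# Create an application role; only a caller that knows its password can activate it.
sqlcmd -S localhost -d MyDatabase -Q "CREATE APPLICATION ROLE MyAppRole WITH PASSWORD = 'S3cretAppPwd!';"
# The application activates the role for its own session after connecting:
sqlcmd -S localhost -d MyDatabase -Q "EXEC sp_setapprole 'MyAppRole', 'S3cretAppPwd!';"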

You can create triggers.
A trigger fires when a record is inserted, updated, or deleted, and msdb.dbo.sp_send_dbmail can be used to send an email alert to a specified recipient immediately:
CREATE TRIGGER t_Pers
ON Person.Person
AFTER INSERT, UPDATE, DELETE
AS
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'ApexSQLProfile',
    @recipients = 'marko.radakovic@apexsql.com',
    @body = 'Data in AdventureWorks2012 is changed',
    @subject = 'Your records have been changed'
GO
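Note that sp_send_dbmail only works once Database Mail is enabled and a mail profile (like the one named above) exists; a minimal sketch of the enabling step, run as sysadmin:
# Enable the Database Mail extended stored procedures (a mail profile must still be configured separately).
sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'Database Mail XPs', 1; RECONFIGURE;"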

Related

Query log file of mySQL database for analysis [closed]

We have log files from a MySQL database, and we want to use those log files for analysis (data mining, machine learning, ...),
but I am very new to this.
Can you give me instructions on how to do that?
There are a number of tools that could be useful to you depending on your requirements.
But why use the MySQL logs rather than the DB directly? You could have the details you are after written to a dedicated 'information mining' table in the DB as the user interacts with the interface, which could be a lot more powerful.
If you wish to determine the rate of inserts and queries, you can actually set up MySQL to log these types of events to separate files (so some of the work is already done for you, rather than having to mine one complete log file of all events).
Otherwise you are going to want to make use of tools such as Grep.
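For instance, assuming the general query log is enabled and written to /var/log/mysql/query.log (both assumptions; check SHOW VARIABLES LIKE 'general_log%' on your server), a rough first pass at counting statement types:
# Rough per-type counts of statements in the general query log.
grep -ci 'insert into' /var/log/mysql/query.log
grep -ci 'select ' /var/log/mysql/query.log
# Or bucket all the main verbs at once:
grep -oiE '\b(select|insert|update|delete)\b' /var/log/mysql/query.log | tr '[:upper:]' '[:lower:]' | sort | uniq -c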
On top of that, there is a system called Lucene (from Apache) that will index the data and let you search for keywords. It has hooks for Java, C, and other languages. It's very similar to how Google trawls web pages.
Otherwise, if you intend to mine the data 'within' the database, then the logs are clearly not your best port of call.
The logs also contain a lot of information about the users. The IP address may be trickier, but you could cross-reference the name of the user running the query with the general server logs to determine the IP of the connection.

What engine type would be better in this scenario? [closed]

I'm writing an Android app that will sync with a MySQL db on my webserver (there will also be a website reading from/writing to the same DB). The Android app will store a copy of the data locally in a SQLite db to provide access while offline. If the user creates a row while offline, that record will be uploaded to the server the next time a data connection is available. I'm designing the app and website myself, so I have the ability to set it up as I see fit (meaning it doesn't have to conform to someone else's server).
The SQLite db will have a column for id (which will represent the id as stored on the server) and a localID column. When the server receives the data, it will acknowledge the new data by returning an array (in JSON format) of the id numbers as stored on the server.
What would be better for this type of scenario: a transaction-safe engine, or a non-transaction-safe one (such as MyISAM)? It's my understanding that MyISAM would be faster and take less space to store, but I can't deal with losing data. I'm thinking that if the Android app doesn't receive the confirmation, it would resubmit the data. It seems like that would prevent data loss, but I need a second (more experienced) opinion. If you would go with a transaction-safe db, which would you recommend, as I've never worked with one?
TIA!
A real, transaction-safe engine (such as InnoDB) should be your default choice until you've actually measured that it's not fast enough.
Consider using UUIDs to generate IDs on the client that are guaranteed to be unique on the server.
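A sketch of what that might look like on the MySQL side (the table and column names are made up for illustration): the device generates the UUID, so a row created offline already has its permanent key before it ever reaches the server, and resubmitting after a lost confirmation just overwrites the same row instead of duplicating it.
# Hypothetical server-side table keyed by a client-generated UUID.
mysql -u appuser -p myapp_db -e "
CREATE TABLE IF NOT EXISTS notes (
    id         CHAR(36)    NOT NULL PRIMARY KEY,  -- UUID generated on the device
    device_id  VARCHAR(64) NOT NULL,              -- which client created the row
    body       TEXT,
    created_at DATETIME    NOT NULL
) ENGINE=InnoDB;"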
Have you thought about how you would handle updates from multiple devices that both had off-line changes? You should consider some known patterns for dealing with this kind of synchronization.
Stack Overflow question
Data Replication book

Programming and computer science basics [closed]

Today, to make MySQL work on my Ubuntu machine, I found some commands that I had to type into the terminal to install the MySQL server. Then I had to enter something like:
mysql -u root -p /*enter*/
then type my password and press Enter again.
At that point I created a new database with:
mysql> create database MyFirstDatabase; /*enter*/
The fact is that I have no idea what I did or why. Why did I have to install the MySQL server, and why wasn't my Workbench working before? Why was my username root?
Where was the database file created, and in which folder?
I want to be able to answer questions like: "What happens in the machine when I declare a variable?", "What happens if I declare an array with a certain number of elements but no content in them?", and "What happens in the PC when I run a SQL query with an inner join?"
ADDITIONS
I also had no idea why one of the university's IT technicians asked me whether I had an Apache server. Why did he ask this? I admit I have no idea about client/server architecture from a technical perspective.
With the first command you start the MySQL CLI client, which talks to the MySQL server. The second command (at the mysql> prompt) is a statement entered directly in the CLI and submitted when you press Enter.
As for what MySQL is used for... well, you should already know: like Oracle, it is a relational database, used to store data in a relational way.
root is the username with ALL privileges; the root user is the one who holds all the power. Database files are usually stored in /var/lib/mysql (on Ubuntu), if I remember correctly.
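You can check the exact location on your own machine; MySQL exposes its data directory as a server variable:
# Ask the server where it keeps its data files.
mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir';"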
For the other questions I can't really give an answer here, since that would take a lot of time and mean starting from scratch.
Beware: asking for suggestions or recommendations on this site is a good way to get your question closed.

Two questions about backing up your website (mysql and files) [closed]

The method I like best, and which I think is the simplest to use, is mysqldump for backing up MySQL databases. Right now I'm using phpMyAdmin to back up the tables. Is there any way I can write a script that does it automatically (preferably every day)?
And how exactly do I back up files on my server? I have an images folder that I need to back up, and I'm not sure how to go about backing those up.
Of course -- use MySQLDumper. You can automatically back up your databases to another host if you like!
Features
Send dump files via FTP to up to 3 different servers. This also works with the multipart feature.
Automatic file deletion: set your own rules to delete old backups. Specify the number of backups you want to keep, and let MySQLDumper automatically delete the older ones to save server space.
MySQLDumper can do multipart backups. That means it can automatically split the dump file if it gets bigger than your chosen size. If you want to restore a backup and pick the wrong part, it doesn't matter: MySQLDumper will notice and find the correct start file automatically.
Security: MySQLDumper can generate a .htaccess file to protect itself and all of your backup files.
A good reading resource for alternatives:
10 Ways to Automatically & Manually Backup MySQL Database
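If you would rather script it yourself, here is a minimal sketch of a nightly mysqldump job (database name, credentials, and paths are placeholders); schedule it from cron with a line such as 0 3 * * * /usr/local/bin/db-backup.sh:
#!/bin/sh
# Dump one database, compressed and dated, and keep only the last 7 days of files.
BACKUP_DIR=/var/backups/mysql
DB=mydatabase
mkdir -p "$BACKUP_DIR"
mysqldump --user=backup --password=secret --single-transaction "$DB" \
    | gzip > "$BACKUP_DIR/${DB}_$(date +%F).sql.gz"
# Remove dumps older than 7 days.
find "$BACKUP_DIR" -name "${DB}_*.sql.gz" -mtime +7 -delete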
Since Gary answered your first question, I'll answer your second.
For backing up the server:
I'm assuming you are talking about your web applications and the images contained in folders used by those applications. Source control will work for this. Set up a Subversion server or something like it.
http://subversion.tigris.org/
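If full version control feels heavy for a folder of binary images, a dated archive produced by the same kind of cron job is a simpler alternative (paths are placeholders):
# Archive the images folder alongside the database dumps.
tar -czf /var/backups/files/images_$(date +%F).tar.gz /var/www/mysite/images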
Hope this helps. Good luck.

Automatically Monitor MySQL servers for crashed tables [closed]

Is there some ready, off-the-shelf solution that would periodically connect to a MySQL server, check for crashed tables, and automatically initiate a repair and/or send the administrator an email?
MySQL offers the MySQL Enterprise Monitor, a web application tool that helps you monitor MySQL servers. It can also be used to alert administrators about errors.
I resorted to writing a simple scheduled task that runs myisamchk periodically.
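A close cousin is mysqlcheck, which talks to the running server instead of the raw table files; a sketch of a periodic check-and-repair (credentials are placeholders, and --auto-repair only applies to engines that support REPAIR TABLE, such as MyISAM):
# Run from cron, e.g. hourly: 0 * * * * /usr/local/bin/check-tables.sh
mysqlcheck --all-databases --check --auto-repair --user=monitor --password=secret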
It is not ready off the shelf, but here is a very easy solution: every table crash is written to the MySQL error log (usually a .err file in the data directory). You can create a very simple script that wakes every X minutes and checks this log file (using the tail command, for example) for entries containing 'marked as crashed'. It can then alert you in any way you like; a sketch follows below.
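A minimal sketch of such a script (the log path and the mail command are assumptions; check the log_error server variable for the real path):
#!/bin/sh
# Alert by email if recent error-log entries mention a crashed table.
LOG=/var/lib/mysql/$(hostname).err    # assumption -- verify with SHOW VARIABLES LIKE 'log_error'
if tail -n 500 "$LOG" | grep -q 'marked as crashed'; then
    echo "Crashed MySQL table detected on $(hostname)" | mail -s "MySQL table crash" admin@example.com
fi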
Does it crash a lot? If it is crashing a lot, I think you have to find the reason why it is crashing. Maybe there is a hardware problem, or some other issue.