MySQL Get Time of Old Record Insert?

I have a table in my database which contains all of the users for my application. Unfortunately, when I launched my application, I didn't think to include a column which tracked the time at which a particular user signed up, and now I wish I had (bad idea, yes indeed).
Is there, by any shred of luck, a way that MySQL tracks when a particular record is inserted (such as in record metadata?) that would allow me to grab it and insert it into a new dedicated column for this purpose?
I am running on a shared cPanel host, so I doubt I have access to the MySQL logs.
Thank you for your time.

Only if you have binary logging enabled will you be able to trace exact times for the transaction.
http://dev.mysql.com/doc/refman/5.5/en/binary-log.html
It's not just for replication; it also serves as a form of transactional record in case of emergency.
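If the log is there, the mysqlbinlog utility will show each statement with its timestamp, and you can backfill a new column by hand. A minimal sketch, assuming a users table with an id column; the column name and the recovered date below are illustrative:
-- new rows get their insert time automatically from now on
ALTER TABLE users ADD COLUMN signed_up_at TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP;
-- backfill an existing row once you have dug its date out of the binary log
UPDATE users SET signed_up_at = '2012-03-15 10:23:00' WHERE id = 42;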

Related

Can multiple users insert into a server database simultaneously?

I have a MySQL database on my server, and a Windows WPF application from which my clients will be inserting and deleting rows corresponding to their data. There may be hundreds of users working on the application at the same time, and they will be inserting or deleting rows in the db.
My question is whether all of these database operations can run successfully at the same time, or whether I should adopt some other approach.
PS: There won't be any clashes on rows during insertion/deletion, as each user can only add/remove his/her own data.
My question is whether all of these database operations can run successfully ...
Yes, like most other relational database systems, MySQL supports concurrent inserts, updates and deletes so this shouldn't be an issue provided that the operations don't conflict with each other.
If they do, you need to find a way to manage concurrency.
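If they can conflict, one common option is pessimistic row locking inside a transaction. A minimal sketch, assuming an InnoDB table; the accounts table and values are illustrative, not from the question:
START TRANSACTION;
-- FOR UPDATE locks the row, so concurrent writers wait instead of clobbering it
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;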
MySQL concurrency, how does it work and do I need to handle it in my application

Connecting 3rd party reporting tools to MySQL

I have an application that runs on a MySQL database; the application is somewhat resource-intensive on the DB.
My client wants to connect QlikView to this DB for reporting. I was wondering if someone could point me to a white paper or URL regarding the best way to do this without causing locks etc. on my DB.
I have searched Google to no avail.
QlikView is an in-memory tool with preloaded data, so your client only has to pull data during periodic reloads, not all the time.
The best way is for your client to schedule the reload once per night and make it incremental. If your tables only ever gain new records, load each night only the records whose primary key is greater than the last one loaded.
If your tables have modified records, you need to add a last_modified_time field in MySQL and probably also an index on that field (the orders table name below is just an example):
ALTER TABLE orders
  ADD COLUMN last_modified_time TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  ADD INDEX idx_last_modified (last_modified_time);
If your rows get deleted, the best approach is to mark them with deleted=1 in MySQL instead; otherwise your client will need to reload everything from those tables just to find out which rows were deleted.
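As a hedged sketch, such a soft delete is just a flag column plus an UPDATE instead of a DELETE (again using the illustrative orders table):
ALTER TABLE orders ADD COLUMN deleted TINYINT(1) NOT NULL DEFAULT 0;
UPDATE orders SET deleted = 1 WHERE id = 123;  -- instead of DELETE
Because the UPDATE also bumps last_modified_time, the nightly incremental load will pick the deletion up automatically.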
Additionally, to save resources, your client should load the data in a really simple style, per table and without JOINs:
SELECT [fields] FROM TABLE WHERE `id` > $(vLastId);
QlikView is really good and fast at data modelling/joins, so your client can build the whole data model in QlikView.
Reporting can indeed cause problems on a busy transactional database.
One approach you might want to examine is to have a replica (slave) of your database. MySQL supports this very well, and your replica's data can be as up to date as you require. You could then attach any reporting system to the replica to run heavy reports that won't affect your main database. This also gives you a second copy of your data, and the replica can further be used to create offline backups, again without affecting your main database.
There's lots of information on the setup of MySQL replicas so that's not too hard.
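For orientation, the replica-side setup boils down to a few statements. This is a rough sketch in MySQL 5.x syntax; the host, user, password, and log coordinates are placeholders, not real values:
CHANGE MASTER TO
  MASTER_HOST='primary.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='replica_password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
-- then point QlikView at this replica instead of the primary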
I hope that helps.

How to prevent anyone from dropping, deleting, and changing the contents of a log table in MySQL

For security purposes, we will create a database log that will contain all changes made to different tables in the database. To achieve this we will use triggers, as stated here, but my concern is that if the system admin, or anyone who has the root privilege, changes the data in the logs for their benefit, then having logs becomes meaningless. Thus, I would like to know if there is a way to prevent anyone, and I mean anyone at all, from making any changes to the logs table, i.e. dropping the table or updating and deleting a row. Is this even possible? Also, regarding my logs table: is it possible to keep track of the previous data that was changed by an update query? I would like to have the previous and new data in my logs table so that we know what changes were made.
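(For the last part of the question, a minimal sketch of such a trigger using MySQL's OLD and NEW row aliases; the users table, email column, and logs layout are illustrative, not from the original schema:)
DELIMITER //
CREATE TRIGGER users_audit_update
AFTER UPDATE ON users
FOR EACH ROW
BEGIN
  -- store both the previous and the new value of the changed field
  INSERT INTO logs (table_name, row_id, old_email, new_email, changed_at)
  VALUES ('users', OLD.id, OLD.email, NEW.email, NOW());
END//
DELIMITER ;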
The problem you are trying to fix is hard: you want someone who can administer your system, but you don't want them to be able to actually do something with all parts of it. That means you either need to administer the system yourself and give others limited access, trust all administrators, or look for an external solution.
What you could do is write your logs to a system to which only you (or at least a different administrator than the first) have access.
Then, if you only ever write to this system (and don't allow changes, updates, or deletes), you will be able to keep a trusted log and even spot inconsistencies in case of tampering.
A second method would be to use a specific method to write logs, one that adds a signed message. That way you can be sure that the logs were added by that system. If you also save a (signed) message of the state of the complete system, you will probably be able to recognize any tampering. The 'system' used for signing should obviously live on another machine, making this roughly equivalent to the first option.
There is no way to stop root access from having permission to make alterations. A combination approach can help you detect tampering, though. You could create another server that has more limited access and clone the log table there. Log all login activity on both servers and cross-back-up the logs between them. Also make very regular off-server backups. You could also create a hashing table that matches each row of the log table: an attacker would not only have to find the code that creates the hash, but reverse engineer it and alter the timestamps to match. However, I think your best bet is to make a cloned server with no network login, physical login only. If you suspect tampering, you will have to do some forensics. You can even add a USB key to the physical clone server and keep it with a CEO or the like. Of course, if you can't trust the sysadmins, your job is very difficult no matter what. The trick is not to build a solid wall, but a fine net, and to scrutinize everything that comes through it.
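A hedged sketch of that hashing-table idea (all names and values are illustrative): chain each log row's digest to the previous row's digest, so that editing any row in place breaks every digest that follows it.
-- assumes log_digests(log_id INT, digest CHAR(64)) seeded with one starting row
INSERT INTO log_digests (log_id, digest)
SELECT 1001,
       SHA2(CONCAT('user 42 changed email', d.digest), 256)
  FROM log_digests d
 ORDER BY d.log_id DESC
 LIMIT 1;
-- verification re-walks the chain and recomputes each digest in order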
Once you set up the master-slave relationship and only give untrusted users access to the slave database, you won't need to alter your code; just keep using the master database as the primary in your code. The link below has information on setting up master-slave replication. To be fully effective, though, the two need to be on different servers. I don't know how this solution would work on one server; it may be possible, I just don't know.
https://dev.mysql.com/doc/refman/5.1/en/replication.html
Open phpMyAdmin, open the table, and assign table-level privileges on the table.
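The same idea in plain SQL, as a sketch with hypothetical database, table, and account names: grant the application account only SELECT and INSERT on the log table, so it cannot UPDATE, DELETE, or DROP it.
GRANT SELECT, INSERT ON mydb.logs TO 'app'@'%';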

Database offline control (prevent data conflict when back to online)

Sorry, I am a newbie programmer with insufficient database knowledge. I am not sure whether I now need to redesign the whole system flow or just add some features to it.
Currently I have a few clients running on different computers.
Current Architecture:
-When the network is on,
each client updates the same server, and the server then syncs the data back to the local database on each client.
-When the network is down,
each client works against its local database and saves an update list; once it reconnects to the server, the update list is sent to the server and applied there.
The architecture works fine as long as a customer only ever uses one of the clients (once), so the update for a given customer is applied only once and no data conflict occurs.
Now the problem comes with one of the new features of the system, the appointment function: if an appointment of a certain type fails, the customer is allowed to make an appointment again, but not of the type that failed before.
But if the network is down, the customer is allowed to make an appointment of the same type again; he might succeed on the second try, and that record will delete the previous failure information. Or, if this information is updated before the client holding the failed record reconnects to the server, the old record will overwrite the new one.
Currently I have some ideas, but I hope someone here can give me guidance, either another design or some other precautions I should take in this flow.
Some of the ideas:
If the network is down, functions that might cause conflicts should not be allowed.
-(Yet this somewhat decreases the availability of the system.)
The update list is only applied once all clients have reconnected to the server; it compares which data is fresh and which conflicts, chooses the fresh data for the update, and flags the conflicts (see the sketch after this list).
-(Somehow I think this is the worst idea: by binding all the clients together and holding back the update list, we can't be sure the data on the server is the latest, which might cause more problems for the clients that are online.)
When the connection is flagged as down, record all the updates in another table, then use some program/Excel(?) to compare and check for conflicts.
-(Probably the idea that costs the most.)
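A minimal sketch of the conflict check from the second idea, assuming an optimistic version column; the appointments table and all values are illustrative:
-- each row carries a version; the offline client remembers the version it synced
UPDATE appointments
   SET status = 'retry', version = version + 1
 WHERE id = 7 AND version = 3;  -- 3 = the version the client last saw
-- if this matches 0 rows, another client changed the record first:
-- flag it as a conflict instead of overwriting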
Is there any third-party software that can help in my situation? Or should I just change the system flow for updating and retrieving the data?

MySQL synchronization questions

I have a MySQL DB which manages users’ accounts data.
Each user can only query his own data.
I have a script that on initial login gets the user data and inserts it to the DB.
I scheduled a cron process which updates all users’ data every 4 hours.
Here are my questions regarding it:
(1) - Do I need to implement some kind of lock mechanism on the initial login script?
This script can be executed by a large number of users simultaneously, but every
user has a dedicated place in the DB, so it does not affect other DB rows.
(2) - Same question on the cron process, should I handle this scenario:
While the cron process updates user i data, user i tries to fetch his data
from the DB.
I mean, does MySQL already support and handle this scenario?
Any help would be appreciated.
Thanks.
No, you don't need to lock the database; the MySQL engine handles this task for you. If you were writing your own database engine, you would have to make sure that nothing gets in the way of or conflicts with a data update, but since you are running something as smart as MySQL, you don't need to worry about it.
While a row is being updated, conflicting queries will simply queue until the update finishes (MyISAM locks at the table level, InnoDB at the row level, and plain InnoDB reads don't block at all).
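As a hedged illustration of why no extra locking code is needed here (the user_data table and columns are assumptions, not from the question):
-- the cron job refreshes one user's row in a short transaction
START TRANSACTION;
UPDATE user_data SET refreshed_at = NOW() WHERE user_id = 42;
COMMIT;
-- a simultaneous SELECT ... WHERE user_id = 42 from the login script either
-- sees the row as it was before the COMMIT (InnoDB) or waits briefly for
-- the lock (MyISAM); either way MySQL resolves it without application help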