I have a MySQL DB which manages users' account data.
Each user can only query his own data.
I have a script that, on initial login, gets the user's data and inserts it into the DB.
I scheduled a cron process which updates all users’ data every 4 hours.
Here are my questions regarding it:
(1) - Do I need to implement some kind of lock mechanism in the initial login script?
This script can be executed by a large number of users simultaneously, but every
user has a dedicated place in the DB, so it does not affect other DB rows.
(2) - Same question for the cron process: should I handle this scenario?
While the cron process updates user i's data, user i tries to fetch his data
from the DB.
I mean, does MySQL already support and handle this scenario?
Any help would be appreciated.
Thanks.
No, you don't need to lock the database; the MySQL engine handles this task for you. If you were writing your own database engine, you would have to make sure that nothing conflicts with a data update, but since you are running something as smart as MySQL, you don't need to worry about it.
With the default InnoDB engine, updates take row-level locks and reads use a consistent snapshot (MVCC), so a user fetching his row while the cron job is updating it will simply see the last committed version; only two writes to the same row wait for each other.
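One thing that is still worth doing in the initial-login script, independent of locking: make the insert idempotent with an upsert, so two simultaneous logins by the same user cannot race each other into a duplicate-key error. A minimal sketch using Python's built-in sqlite3 as a stand-in for MySQL (MySQL's equivalent clause is INSERT ... ON DUPLICATE KEY UPDATE; the table and column names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_data (user_id INTEGER PRIMARY KEY, payload TEXT)")

def store_login_data(user_id: int, payload: str) -> None:
    # Upsert: insert the row, or overwrite it if this user already has one.
    # Two concurrent logins for the same user both succeed; the last write wins.
    conn.execute(
        "INSERT INTO user_data (user_id, payload) VALUES (?, ?) "
        "ON CONFLICT(user_id) DO UPDATE SET payload = excluded.payload",
        (user_id, payload),
    )
    conn.commit()

store_login_data(1, "first login")
store_login_data(1, "second login")  # no duplicate-key error
print(conn.execute("SELECT payload FROM user_data WHERE user_id = 1").fetchone()[0])
# prints "second login"
```

With this pattern the script does not care whether it ran first, second, or concurrently with itself for the same user.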
I have a MySQL database on my server, and a Windows WPF application from which my clients will be inserting and deleting rows corresponding to their data. There may be hundreds of users working on the application at the same time, and they will be inserting or deleting rows in the db.
My question is whether all these database operations can run successfully under that load, or should I adopt some other alternative?
PS: There won't be any clash on rows during insertion/deletion, as each user will be able to add/remove only his/her corresponding data.
My question is whether or not all the database execution can go successfully ...
Yes, like most other relational database systems, MySQL supports concurrent inserts, updates and deletes so this shouldn't be an issue provided that the operations don't conflict with each other.
If they do, you need to find a way to manage concurrency.
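One common way to manage such conflicts is optimistic locking with a version column: an update succeeds only if the row is still at the version the client last read. A sketch using Python's built-in sqlite3 as a stand-in for MySQL, with made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 'widget', 1)")
conn.commit()

def update_item(item_id: int, new_name: str, expected_version: int) -> bool:
    # The UPDATE only matches if nobody else bumped the version in the
    # meantime; rowcount tells us whether we won or lost the race.
    cur = conn.execute(
        "UPDATE items SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, item_id, expected_version),
    )
    conn.commit()
    return cur.rowcount == 1

print(update_item(1, "gadget", 1))  # True: version matched
print(update_item(1, "gizmo", 1))   # False: stale version, caller should retry
```

A client that gets False re-reads the row (picking up the new version) and decides whether to retry or report a conflict.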
MySQL concurrency, how does it work and do I need to handle it in my application
I have an application that runs on a MySQL database, the application is somewhat resource intensive on the DB.
My client wants to connect Qlikview to this DB for reporting. I was wondering if someone could point me to a white paper or URL on the best way to do this without causing locks etc. on my DB.
I have searched Google to no avail.
Qlikview is an in-memory tool with preloaded data, so your client only needs to pull data during periodic reloads, not all the time.
The best approach is for your client to schedule one reload per night and make it incremental. If your tables only ever receive new records, load each night only the records whose primary key is greater than the last one loaded.
If your tables contain modified records, you need to add a last_modified_time field in MySQL, and possibly an index on that field:
last_modified_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
If rows get deleted, it is best to mark them with deleted=1 in MySQL rather than deleting them; otherwise your client will need to reload everything from those tables just to find out which rows were deleted.
Additionally, to save resources, your client should load the data in a really simple style, one query per table without JOINs:
SELECT [fields] FROM TABLE WHERE `id` > $(vLastId);
Qlikview is really good and fast at data modelling/joins, so your client can build the whole data model inside Qlikview.
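The incremental pattern described above can be sketched as follows, using Python's built-in sqlite3 in place of MySQL and a made-up events table (in Qlikview itself this would be the $(vLastId)-style load script shown above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)", [(1, "a"), (2, "b")])
conn.commit()

def incremental_load(last_id: int):
    # Plain single-table select, no JOINs: fetch only rows newer than
    # the highest id loaded last time.
    return conn.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id", (last_id,)
    ).fetchall()

first = incremental_load(0)               # initial load: all rows so far
last_loaded = first[-1][0] if first else 0

conn.execute("INSERT INTO events VALUES (3, 'c')")  # new data arrives
conn.commit()

second = incremental_load(last_loaded)    # nightly reload: only the new row
print(second)  # [(3, 'c')]
```

Each nightly run only touches the rows added since the previous run, which is what keeps the load off the transactional database.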
Reporting can indeed cause problems on a busy transactional database.
One approach you might want to examine is to have a replica (slave) of your database. MySQL supports this very well, and your replica can be as up to date as you require. You could then attach any reporting system to the replica and run heavy reports there without affecting your main database. This also gives you a second copy of your data, which can in turn be used to take offline backups, again without touching your main database.
There's lots of information on the setup of MySQL replicas so that's not too hard.
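On the application side, a replica setup usually means routing reads and writes to different servers. A minimal, hypothetical sketch of that routing decision (the DSN strings and the choose_dsn helper are made up; a real setup would also have to tolerate replication lag, since a just-committed row may not be on the replica yet):

```python
# Hypothetical DSNs; substitute your real primary and replica hosts.
PRIMARY_DSN = "mysql://primary.example.com/app"
REPLICA_DSN = "mysql://replica.example.com/app"

def choose_dsn(sql: str) -> str:
    """Send read-only SELECTs to the replica; everything else must hit
    the primary, since replicas reject writes."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    return REPLICA_DSN if first_word == "SELECT" else PRIMARY_DSN

print(choose_dsn("SELECT * FROM reports"))      # replica
print(choose_dsn("UPDATE accounts SET x = 1"))  # primary
```

Many connectors and proxies (for example application-level routers or a load balancer in front of the replicas) implement this same split for you.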
I hope that helps.
We will be providing analytics; multiple writes will be made per second. Currently the databases are MariaDB. My aim is to write to the database as fast as I can, and to be able to query the data occasionally on user request (through the web application). The data read doesn't have to be the latest; I could query the analytics data and parse it every 5 minutes.
As I understand it, if I set up a master/slave database relationship, I will be able to read from the slave database and write as fast as I can to the master, without the reads locking the writes. Is that right?
Are there any better ideas?
I have a table in my database which contains all of the users for my application. Unfortunately, when I launched my application, I didn't think to include a column which tracked the time at which a particular user signed up, and now I wish I had (bad idea, yes indeed).
Is there, by any shred of luck, a way that MySQL tracks when a particular record is inserted (such as in record metadata?), and would allow me to grab it and insert it into a new dedicated column for this purpose?
I am running on a shared cPanel host, so I doubt I have access to the MySQL logs.
Thank you for your time.
Only if you have binary logging enabled will you be able to trace exact times for the transaction.
http://dev.mysql.com/doc/refman/5.5/en/binary-log.html
It's not just for replication, but also a form of transactional record in case of emergency.
We like the simplicity of sqlite3 but are concerned about its ability to handle concurrent updates gracefully. Our web app serves about 30 users (50 maximum) who have rights to update, and a number of web users (let's say 500) who can only read the page. Those 30 (50) users will likely not update simultaneously. Daily updates to the db should number no more than 1000 (counting saving one db record into a table as ONE update) on a regular basis. The update activity most likely happens during the 9am-5pm work hours.
Since sqlite3 locks the whole db for an update (not sure if it locks for read requests), our question is whether sqlite3 is powerful enough to handle concurrent updates gracefully in our situation without throwing exception errors.
Thanks so much.
I think you already have enough information about how SQLite works, so the answer to your question is yes, it can handle it. But the real question is what the performance would be. That depends on the frequency of updates/inserts to your database: in the default journal mode, updates lock the database file and keep reads waiting.
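The performance picture improves a lot with SQLite's write-ahead-log (WAL) journal mode, in which a writer does not block readers. A minimal demonstration using Python's built-in sqlite3 module (the file path and table name are made up for illustration):

```python
import os
import sqlite3
import tempfile

# WAL needs a real file; two connections to :memory: see separate databases.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, val TEXT)")
writer.execute("INSERT INTO records VALUES (1, 'old')")
writer.commit()

# The writer starts an update but does not commit yet.
writer.execute("BEGIN IMMEDIATE")
writer.execute("UPDATE records SET val = 'new' WHERE id = 1")

# A concurrent reader is not blocked: it sees the last committed snapshot.
reader = sqlite3.connect(path)
value = reader.execute("SELECT val FROM records WHERE id = 1").fetchone()[0]
print(value)  # prints "old" because the writer has not committed

writer.commit()
reader.close()
writer.close()
```

In the default rollback-journal mode the same read would instead wait for the writer (or fail with "database is locked"), which is why WAL is usually recommended for web apps with mixed readers and writers.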
Let's say the performance is acceptable and you use it. What if your database gets corrupted? Even the most advanced DBMSs can end up with corrupted data; there are many possible causes, from server shutdowns to bugs. If your SQLite database gets corrupted, as far as I know the file is harder to recover.
I'd strongly suggest not taking the risk, and using a non-embedded DBMS instead.