My problem is: how can I prevent duplicate IDs in my database when two or more users save records simultaneously? When I tested saving data at the same time, I always got duplicate IDs. I'm using MySQL.
I think your problem is related to database locking. What you need to do is lock the table for writing, so that only one session can write at a time and everyone else is prevented from writing until the first session has finished. See this info on table locking: https://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
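For illustration, here is a rough JDBC sketch of that approach; the table name mytable, its column, and the connection details are placeholders, not from the question:

import java.sql.*;

public class LockedInsert {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "pass");
             Statement st = conn.createStatement()) {
            // Take an exclusive write lock; other sessions block until it is released.
            st.execute("LOCK TABLES mytable WRITE");
            try {
                st.executeUpdate("INSERT INTO mytable (name) VALUES ('example')");
            } finally {
                st.execute("UNLOCK TABLES"); // always release, even if the insert fails
            }
        }
    }
}

Note that if the duplicate IDs come from generating the ID in application code, declaring the column as an AUTO_INCREMENT primary key lets MySQL assign unique IDs itself, with no explicit locking needed.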
Related
I have a MySQL database hosted on a webserver which has a set of tables with data in it. I am distributing my front-end application, which is built using HTML5 / JavaScript / CSS3.
Now, when multiple users try to make an insert/update into one of the tables at the same time, is it going to create a conflict, or will MySQL handle the locking of the table for me automatically? For example, when one user is using it, will it lock the table for him, let the rest queue up, and then release the lock to the next in the queue once he finishes? Is this going to happen, or do I need to handle this case in the MySQL database?
EXAMPLE:
When a user wants to make an insert into the database, he calls a PHP file located on a webserver which runs an insert command to post the data into the database. I am concerned whether, if two or more people make an insert at the same time, the inserts will all go through correctly.
mysqli_query($con, "INSERT INTO cfv_postbusupdate
        (BusNumber, Direction, StopNames, Status, comments, username, dayofweek, time)
    VALUES (" . trim($busnum) . ", '" . trim($direction3) . "', '" . trim($stopname3) . "',
        '" . $status . "', '" . $comments . "', '" . $username . "',
        '" . trim($dayofweek3) . "', '" . trim($btime3) . "')");
MySQL handles table locking automatically.
Note that with the MyISAM engine, the entire table gets locked, and statements will block ("queue up") waiting for the lock to be released.
The InnoDB engine provides more concurrency and can do row-level locking rather than locking the entire table.
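As a sketch of what row-level locking looks like in practice (the accounts table, its columns, and the connection details are invented for illustration): inside a transaction, SELECT ... FOR UPDATE locks only the matching rows, so other sessions can keep working on different rows of the same table.

import java.sql.*;

public class RowLock {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "pass")) {
            conn.setAutoCommit(false);
            try (PreparedStatement lock = conn.prepareStatement(
                     "SELECT balance FROM accounts WHERE id = ? FOR UPDATE");
                 PreparedStatement update = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                lock.setInt(1, 42);
                lock.executeQuery();    // locks only the row with id = 42
                update.setInt(1, 100);
                update.setInt(2, 42);
                update.executeUpdate();
                conn.commit();          // commit releases the row lock
            } catch (SQLException e) {
                conn.rollback();        // give up the lock on failure
                throw e;
            }
        }
    }
}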
There may be some cases where you want to take locks on multiple MyISAM tables, for example if you want to maintain referential integrity and need to disallow other sessions from making changes to any of the tables while your session does its work. But this really kills concurrency; it should be more of an "admin" type operation, not something a concurrent application does routinely.
If you are making use of transactions (InnoDB), the issue your application needs to deal with is the sequence in which rows in which tables are locked. It's possible for an application to hit "deadlock" exceptions, when MySQL detects two (or more) transactions that can't proceed because each needs locks held by the other. The only thing MySQL can do is detect that, and its only recovery is to choose one of the transactions as the victim: that transaction gets the "deadlock" exception, because MySQL killed it to allow at least one of the transactions to proceed.
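A common way to deal with this is to catch the deadlock error and retry the whole transaction. A minimal sketch, assuming MySQL's deadlock error code 1213 and a caller-supplied unit of work:

import java.sql.Connection;
import java.sql.SQLException;

public class DeadlockRetry {
    static final int MAX_ATTEMPTS = 3;

    // Retries the unit of work when MySQL chooses this transaction
    // as the deadlock victim (MySQL error code 1213).
    static void runWithRetry(Connection conn, SqlWork work) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                conn.setAutoCommit(false);
                work.run(conn);
                conn.commit();
                return;
            } catch (SQLException e) {
                conn.rollback();
                if (e.getErrorCode() != 1213 || attempt >= MAX_ATTEMPTS) {
                    throw e; // not a deadlock, or out of retries
                }
            }
        }
    }

    interface SqlWork {
        void run(Connection conn) throws SQLException;
    }
}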
I have gone through the manual, and it mentions that this option issues a BEGIN statement before it starts taking the dump. Can someone explain this in a more understandable manner?
Here is what I read:
"This option issues a BEGIN SQL statement before dumping data from the server. It is useful only with transactional tables such as InnoDB and BDB, because then it dumps the consistent state of the database at the time when BEGIN was issued without blocking any applications."
Can someone elaborate on this?
Since the dump runs in a single transaction, you get a consistent view of all the tables in the database. This is probably best explained by a counterexample. Say you dump a database with two tables, Orders and OrderLines:
1. You start the dump without a single transaction.
2. Another process inserts a row into the Orders table.
3. Another process inserts a row into the OrderLines table.
4. The dump processes the OrderLines table.
5. Another process deletes the Orders and OrderLines records.
6. The dump processes the Orders table.
In this example, your dump would have the rows for OrderLines, but not Orders. The data would be in an inconsistent state and would fail on restore if there were a foreign key between Orders and OrderLines.
If you had done it in a single transaction, the dump would have neither the order nor the lines (but it would be consistent), since both were inserted and then deleted after the transaction began.
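You can reproduce the single-transaction behavior yourself outside mysqldump. A sketch in JDBC (connection details are placeholders; the two SET/START statements are essentially what --single-transaction issues before dumping):

import java.sql.*;

public class ConsistentRead {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "pass");
             Statement st = conn.createStatement()) {
            // This is essentially what mysqldump --single-transaction
            // issues before it starts dumping.
            st.execute("SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ");
            st.execute("START TRANSACTION WITH CONSISTENT SNAPSHOT");

            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM Orders")) {
                rs.next();
                System.out.println("Orders: " + rs.getLong(1));
            }
            // Rows inserted or deleted by other sessions after the snapshot
            // began are invisible here, so the two reads agree with each other.
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM OrderLines")) {
                rs.next();
                System.out.println("OrderLines: " + rs.getLong(1));
            }
            st.execute("COMMIT");
        }
    }
}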
I used to run into problems where mysqldump without the --single-transaction parameter would consistently fail because data changed during the dump. As far as I can tell, running it within a single transaction prevents changes that occur during the dump from causing problems. Essentially, when you use --single-transaction, it takes a snapshot of the database at that moment and dumps that, rather than dumping data that could be changing while the utility runs.
This can be important for backups because it means you get all the data, exactly as it is at one point in time.
So, for example, imagine a simple blog database, where a typical bit of activity might be:
Create a new user
Create a new post by the user
Delete the user, which also deletes the post
Now when you back up your database, the backup may dump the tables in this order:
1. Posts
2. Users
What happens if someone deletes a User, which is required by the Posts, just after your backup finishes #1?
When you restore your data, you'll find that you have a Post, but the user doesn't exist in the backup.
Putting a transaction around the whole thing means that all the updates, inserts, and deletes that happen on the database during the backup aren't seen by the backup.
As our Rails application deals with increasing user activity and load, we're starting to see issues with simultaneous transactions. We've used JavaScript to disable/remove buttons after clicks, and this works for the most part, but it isn't an ideal solution. In short, users are performing an action multiple times in rapid succession. Because the action results in a row insert into the DB, we can't just lock one row in the table. Given the high level of activity on the affected models, I can't use the usual locking mechanisms (http://guides.rubyonrails.org/active_record_querying.html#locking-records-for-update) that you would use for an update.
This question (Prevent simultaneous transactions in a web application) addresses a similar issue, but it uses file locking (flock) for its solution, so it won't work with multiple application servers, as we have. I suppose we could do something similar with Redis or another data store available to all of our application servers, but I don't know if that fully solves the problem either.
What is the best way to prevent duplicate database inserts from simultaneously executed transactions?
Try adding a unique index to the table where you are having the issue. It won't prevent the system from attempting to insert duplicate data, but it will prevent duplicates from being stored in the database. You will just need to handle the error when the insert fails.
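A sketch of both halves (the user_actions table, its columns, and the connection details are invented, since the question doesn't show a schema): create the unique index once, then treat MySQL's duplicate-key error as the signal that an identical row already exists.

import java.sql.*;

public class UniqueInsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "pass")) {
            // One-time schema change: no two rows may share (user_id, action).
            try (Statement st = conn.createStatement()) {
                st.execute("ALTER TABLE user_actions "
                         + "ADD UNIQUE KEY uniq_user_action (user_id, action)");
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO user_actions (user_id, action) VALUES (?, ?)")) {
                ps.setInt(1, 7);
                ps.setString(2, "signup");
                ps.executeUpdate();
            } catch (SQLIntegrityConstraintViolationException e) {
                // MySQL error 1062: another request already inserted this row.
                System.out.println("Duplicate ignored: " + e.getMessage());
            }
        }
    }
}

If you'd rather not handle the exception, INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE achieve the same effect at the SQL level.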
Hi, I am developing a site with JSP/Servlets running on Tomcat for the front end and a MySQL DB for the back end, accessed through JDBC.
Many users of the site can access and write to the database at the same time. My question is:
Do I need to explicitly take locks before each write/read access to the DB in my code?
Or does Tomcat handle this for me?
Also, do you have any suggestions on how best to implement this? I have written a significant amount of JDBC code already without taking any locks :/
I think you are thinking about transactions when you say "locks". At the lowest level, your database server already ensures that parallel reads and writes won't corrupt your tables.
But if you want to ensure consistency across tables, you need to employ transactions. Simply put, what transactions give you is an all-or-nothing guarantee. That is, if you want to insert an Order in one table and related OrderItems in another table, you need an assurance that if the insertion of OrderItems fails (in step 2), the changes made to the Orders table (step 1) will also get rolled back. This way you'll never end up in a situation where a row in the Orders table has no associated rows in OrderItems.
This, of course, is a very simplified description of what a transaction is. You should read more about it if you are serious about database programming.
In Java, you usually handle transactions with roughly the following steps (a minimal sketch follows the list):
Set autocommit to false on your JDBC connection
Do several inserts and/or updates using the same connection
Call conn.commit() when all the inserts/updates that go together are done
If there is a problem somewhere during step 2, call conn.rollback()
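Here is a minimal sketch of those four steps, reusing the Order/OrderItems example from above (table and column names are assumptions):

import java.sql.*;

public class OrderTransaction {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "pass")) {
            conn.setAutoCommit(false);                        // step 1
            try (PreparedStatement order = conn.prepareStatement(
                     "INSERT INTO Orders (customer_id) VALUES (?)",
                     Statement.RETURN_GENERATED_KEYS);
                 PreparedStatement item = conn.prepareStatement(
                     "INSERT INTO OrderItems (order_id, product) VALUES (?, ?)")) {
                order.setInt(1, 7);
                order.executeUpdate();                        // step 2: first insert
                int orderId;
                try (ResultSet keys = order.getGeneratedKeys()) {
                    keys.next();
                    orderId = keys.getInt(1);
                }
                item.setInt(1, orderId);
                item.setString(2, "widget");
                item.executeUpdate();                         // step 2: second insert
                conn.commit();                                // step 3: all-or-nothing
            } catch (SQLException e) {
                conn.rollback();                              // step 4: undo both
                throw e;
            }
        }
    }
}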
I was wondering if anyone knew how to lock a database and then dequeue the waiting jobs. I have a hashtable in a database, and I'm storing data in the database where multiple users send requests to edit that data at the same time, but the data needs to be persistent across all users and only one user can access/edit it at a time.
thanks a bunch = )
You might find something interesting at http://dev.mysql.com/doc/refman/5.0/en/innodb-deadlocks.html. One technique for serializing transactions is to create a table with just one row in it, and have every transaction update this row before commencing its real work.
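A rough sketch of that technique in JDBC (the app_mutex table name and connection details are invented for illustration):

import java.sql.*;

public class SerializedWork {
    // One-time setup (SQL):
    //   CREATE TABLE app_mutex (id INT PRIMARY KEY, locked_at DATETIME) ENGINE=InnoDB;
    //   INSERT INTO app_mutex (id, locked_at) VALUES (1, NOW());
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "pass")) {
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement()) {
                // Every transaction locks the single row first, so concurrent
                // transactions queue up here and run one at a time.
                st.executeUpdate("UPDATE app_mutex SET locked_at = NOW() WHERE id = 1");

                // ... do the real work on the shared data here ...

                conn.commit(); // releases the row lock for the next waiter
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }
}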
Can you elaborate on "but the data needs to be persistent across all users"?
I think that InnoDB, which is transactional, will keep things consistent for all of your users. It's ACID-compliant.