MySQL: ensure only one access at a time

I'm connecting to a MySQL database from several threads in Java. Sometimes the threads read and update the same column of a table, and inconsistencies appear.
In Java there is the synchronized keyword, which limits access to a resource to one thread at a time. Is there a comparable mechanism for the MySQL database, so that these inconsistencies do not occur?

You should use transactions with the appropriate isolation level.
Simplified example:
START TRANSACTION;
...
COMMIT;
See the MySQL docs about transactions and the MySQL docs about isolation levels.
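
For concreteness, a minimal sketch of the pattern in MySQL syntax (the table and column names here are made up for illustration):

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
-- FOR UPDATE locks the row, so other threads doing the same block until COMMIT
SELECT value FROM counters WHERE id = 1 FOR UPDATE;
UPDATE counters SET value = value + 1 WHERE id = 1;
COMMIT;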

Try using a version column; that way you can tell whether someone changed your data while you were working on it.
Meaning: add a column named "updated" of type DATETIME, and whenever you update, check that the "updated" value you originally read is still the same as the one on the row in the DB. If they differ, you know someone else has modified your record.
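
A sketch of that check as a single atomic UPDATE (the table, columns, and values below are hypothetical):

UPDATE orders
SET status = 'shipped', updated = NOW()
WHERE id = 42
  AND updated = '2013-05-01 10:00:00';  -- the "updated" value we read earlier
-- if the affected-row count is 0, someone else changed the record: re-read and retry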

Is there ANY way to retrieve the original input of an SQL int field higher than 2147483647?

I have a big table which stores data with an ID based on input from an external API. The ID is stored in an int field. When I developed the system I encountered no problems, because the IDs of records in the external API were always below 2147483647.
The system has been fetching data from the API for the last few months, and apparently the ID crossed the 2147483647 mark. I now have a database with thousands of unusable records with ID 2147483647.
It is not possible to fetch this information from the API again (basically, the API allows us to look up data from max x days ago).
I am pretty sure that I am doomed. But might there be any backlog, or any other way, to retrieve the original input queries, or numbers that were truncated by MySQL to fit in the int field?
As already discussed in the comments, there is no way to retrieve the information from the table. It was silently(?!!!) truncated to 32 bits.
First, call the API provider, explain your situation, and see if you can redo the queries. Best that happens is they say yes and you don't have to try to reconstruct things from logs. Worst that happens is they say no and you're back where you are now.
Then there are some logs I would check.
First is the MySQL General Query Log. If you had this turned on, it may contain the queries that were run. Another possibility is the Slow Query Log, which is more often enabled, if your queries happened to be slow.
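To check whether either log is enabled, you can query the server variables (these are standard MySQL system variables):

SHOW VARIABLES LIKE 'general_log%';     -- is the general query log on, and where does it go
SHOW VARIABLES LIKE 'slow_query_log%';  -- same for the slow query log
SHOW VARIABLES LIKE 'log_output';       -- FILE, TABLE, or both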
In MySQL, data truncation is a warning by default. It's possible those warnings went into a log and included the original data. The MySQL Error Log is one possibility. On Windows it may have gone into the Windows Event Log. On a Mac, it might be in a log visible in the Console. On Unix, it might have gone to syslog.
Then it's possible the API queries themselves are logged somewhere. If you used a proxy it might contain them in its log. The program fetching from the API and adding to the database may also have its own logs. It's a long shot.
As a last resort, try grepping all of /var/log and /var/local/log and anywhere else you might think could contain a log.
In the future there are some things you can do to prevent this sort of thing from happening again. The most important is to turn on strict SQL mode, which turns warnings, such as data having been truncated, into errors.
Set UNIQUE constraints on unique columns. Had your API ID column been declared UNIQUE, the error would have been detected: the second insert of 2147483647 would have failed.
Use UNSIGNED BIGINT for numeric IDs. 2 billion is a number easily exceeded these days. It will mean 4 extra bytes per row or about 8 gigabytes extra to store 2 billion rows. Disk is cheap.
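Putting the strict-mode, UNIQUE, and BIGINT advice together, a sketch of the fix (the table and column names are made up):

-- in my.cnf: sql_mode = STRICT_ALL_TABLES, or at runtime:
SET GLOBAL sql_mode = 'STRICT_ALL_TABLES';

ALTER TABLE api_records
  MODIFY api_id BIGINT UNSIGNED NOT NULL,
  ADD UNIQUE KEY uq_api_id (api_id);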
Consider turning on ANSI SQL mode. This will disable a lot of MySQL extensions and make your SQL more portable.
Finally, consider switching to PostgreSQL. Over the years MySQL has accumulated a lot of bad ideas, mish-mashes of functions, and bad default behaviors. You just got bit by one. PostgreSQL is far better designed, more powerful and flexible, and usually as fast or faster.
In Postgres, you would have gotten an error.
test=# CREATE TABLE foo ( id INTEGER );
CREATE TABLE
test=# INSERT INTO foo (id) VALUES (2147483648);
ERROR: integer out of range
If you have binary logging enabled, you still have backups of the binlogs, and your binlog_format is not set to ROW, then your original INSERT and/or UPDATE statements should be preserved there. You could extract them and replay them into another server with a more appropriate table definition.
If you don't have the binlog enabled and/or you aren't archiving the binlogs in perpetuity... this is one of the reasons why you should consider doing it.
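A sketch of how to dig around in the binlogs from the SQL side (the file name shown is hypothetical; SHOW BINARY LOGS tells you the real ones):

SHOW BINARY LOGS;                                    -- list the binlog files the server still has
SHOW BINLOG EVENTS IN 'mysql-bin.000042' LIMIT 100;  -- peek at the events in one file
-- for bulk extraction, the mysqlbinlog command-line tool converts a binlog file
-- back into SQL text that you can edit and replay against another server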

Explain the idea behind consistent nonlocking reads in MySQL

I've just read the MySQL docs, where I found this sentence: "A consistent read means that InnoDB uses multi-versioning to present to a query a snapshot of the database at a point in time."
I've read a lot of MySQL doc pages, but still can't clarify to myself what exactly "to a query" means here. It definitely relates to a SELECT statement, but what if my transaction starts with an UPDATE, INSERT, or DELETE statement?
Thanks!
I found the answer myself, and I think it may be useful to others. After days of searching through the Oracle docs, I finally found it:
InnoDB creates a consistent read view, or consistent snapshot, either when the statement
mysql> START TRANSACTION WITH CONSISTENT SNAPSHOT;
is executed, or when the first SELECT query is executed in the transaction.
https://blogs.oracle.com/mysqlinnodb/entry/repeatable_read_isolation_level_in
When a query can change data, the database also uses locks to synchronize queries.
So between queries that change data, locks are used to make sure that only one query at a time can change a specific item. Between a query that reads data and a query that changes data, multi-versioning is used to present the data as it was before the change to the query that reads it.
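A small two-session sketch of the difference, under the default REPEATABLE READ isolation level (the table and values are hypothetical):

-- session A
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT balance FROM accounts WHERE id = 1;  -- reads from the snapshot, takes no locks

-- session B, meanwhile
UPDATE accounts SET balance = balance + 100 WHERE id = 1;  -- takes a row lock
COMMIT;

-- session A again
SELECT balance FROM accounts WHERE id = 1;  -- still sees the old value: multi-versioning
COMMIT;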

Mysql DB Table Rows Disappearing

A really weird (for me) problem has been occurring lately. In an application that accepts user-submitted data, the following occurs at random:
Rows from the database table where the user-submitted data is stored are disappearing.
Please note that there is NO DELETE, DROP, TRUNCATE or other destructive SQL statement issued on the table; only INSERT statements are run.
Could this be a MySQL bug? I did some research on mysql.com (forums, bugs, etc.) and found 2 similar cases, but without getting a solid answer (just suggestions).
Some info you might find useful:
Storage Engine: InnoDB
User-submitted data is sanitized and checked for SQL injection attempts
Appreciate any suggestions, info.
regards,
Here are 3 possibilities:
1. The data never got to the database in the first place; something happened elsewhere, so the data disappeared. Maybe intermittent network issues, an overloaded server, or an application bug.
2. A database transaction was not committed and got rolled back. Maybe a bug in your application code, maybe some invalid data screwed things up, maybe a concurrency exception occurred, etc.
3. A bug in MySQL.
I'd look at 1 and 2 first.
A table on which you only ever insert (and presumably select) and never update or delete should be really stable. Are you absolutely certain you're protecting thoroughly against SQL injection attacks? Because those could (of course) delete rows and such if successful.
You haven't mentioned which table engine you're using (there are several), but it's well worth running whatever diagnostic tools there are for it on the table in question. For instance, on a MyISAM table, run myisamchk. Or more generically (this works for several table types), use the CHECK TABLE statement.
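For example (the table name here is a placeholder):

CHECK TABLE user_submissions;
-- for MyISAM tables, myisamchk can also be run against the table files while the server is stopped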
Have you had issues with the underlying storage? It may be worth checking for those.
Activating the binlog and periodically monitoring DELETE queries can help identify the culprit.
One more case to add to the above: there can also be separate client-side and server-side parts of the application, where client-side changes are processed on the server side by additional code logic.
For example, in our case, a local admin panel updated order information with pay_date = NULL, and the PHP website processed this table to clean up overdue orders. As the PHP logic was developed by another programmer, it looked strange when updating orders resulted in records disappearing after some time.
The same applies to cron jobs operating on the MySQL database on a schedule.

concurrent MySQL database queries

I'm running a MySQL server that is updated every 4 hours.
In the meantime, data can be retrieved.
Does MySQL handle this scenario, where the DB is being updated
and a query from a user is received,
or should I handle it myself?
Is it possible to create a snapshot of the DB just before the update takes place
and query that snapshot?
Thanks
If you are worried about how concurrent writes are handled, that's one thing, but you can safely read from a MySQL database that is being updated by another thread without worrying about corrupted data.
Of course, if the writing thread is performing multiple writes that need to be atomic (i.e. all or none), make sure it uses a transaction.
You can use transactions and/or locking to make sure that every user sees a consistent view (i.e. completely updated or not updated at all). The database engine itself is very well suited to the scenario. If you are not worried about what readers see at the exact time the updates are running, you don't need to do anything.
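For instance, if the 4-hourly job wraps its writes like this (the table and values are hypothetical), InnoDB readers will see either none or all of the changes:

START TRANSACTION;
UPDATE prices SET amount = 10 WHERE product_id = 1;
UPDATE prices SET amount = 20 WHERE product_id = 2;
COMMIT;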
Considering my scenario, I think no further actions are required.
If the write comes before the read, the user will get the most up-to-date data;
otherwise he will get the older data until the next query.
This fits my requirements.
Thank you all

when will a select statement without FOR UPDATE cause a lock?

I'm using MySQL.
I sometimes see a SELECT statement whose status is 'locked' when running 'show processlist',
but after testing it locally, I can't reproduce the 'locked' status.
It probably depends on what else is happening. I'm no MySQL expert, but in SQL Server various lock levels control when data can be read and written. For example, in production your SELECT statement might want to read a record that is being updated; it has to wait until the update is done. Vice versa: an update might have to wait for a read to finish.
Messing with default lock levels is dangerous. And since dev environments don't have nearly as much traffic, you probably don't see that kind of contention.
If you spot that again, see if any update is being made against one of the tables your SELECT is referencing.
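Two standard statements that help when you catch it again:

SHOW FULL PROCESSLIST;      -- every connection, its state, and the statement it is running
SHOW ENGINE INNODB STATUS;  -- includes details of current InnoDB lock waits (InnoDB tables only)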
I'm no expert in MySQL, but it sounds like another user is holding a lock on a table/row while you're trying to read it.
I'm no MySQL expert either, but locking behavior strongly depends on the isolation level / transaction isolation. I would suggest searching for those terms in the MySQL docs.