My colleague provided me with a code segment that simulates an Oracle-style sequence:
// generate ticket
pstmt = conn.prepareStatement("insert into seq_pkgid values (NULL)");
if (pstmt.executeUpdate() > 0) {
    success = 1;
} else {
    throw new Exception("Generating seq_pkgid sequence failed!");
}
pstmt.close();
pstmt = null;

// get ticket
pstmt = conn.prepareStatement("select last_insert_id() as maxid");
rs = pstmt.executeQuery();
if (rs.next()) {
    nSeq = rs.getInt("maxid");
}
rs.close();
rs = null;
pstmt.close();
pstmt = null;
But I wonder: what happens if this code segment is executed from two instances at about the same time? Will they get the same generated auto-increment value? Does MySQL have any concurrency control, e.g. a critical section or semaphore, when generating a new auto-increment value?
Yes! If the column is defined with AUTO_INCREMENT, MySQL takes an auto-increment lock while generating a new value. Please refer to
https://dev.mysql.com/doc/refman/5.7/en/innodb-auto-increment-handling.html
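So two inserts running at the same time each get their own value, and because LAST_INSERT_ID() is maintained per connection, the "get ticket" SELECT in your code only sees the value generated on its own connection. If seq_pkgid has an AUTO_INCREMENT column you can also skip the second round trip entirely and read the generated key straight from the INSERT; a minimal sketch, assuming a table like CREATE TABLE seq_pkgid (id BIGINT AUTO_INCREMENT PRIMARY KEY):
// A sketch, assuming seq_pkgid has an AUTO_INCREMENT column as described above.
try (PreparedStatement pstmt = conn.prepareStatement(
        "INSERT INTO seq_pkgid VALUES (NULL)",
        Statement.RETURN_GENERATED_KEYS)) {
    pstmt.executeUpdate();
    try (ResultSet keys = pstmt.getGeneratedKeys()) {
        if (keys.next()) {
            nSeq = keys.getInt(1);   // the ticket generated for this statement
        } else {
            throw new Exception("Generating seq_pkgid sequence failed!");
        }
    }
}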
If you really need to generate a counter from the DB, you can read here how to do it on MySQL using InnoDB and locking reads:
https://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html
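A rough JDBC translation of the locking-read counter shown on that page (the table seq_counter and column next_val are placeholders):
// Sketch of the locking-read counter pattern from the linked page.
// Assumes a one-row table: CREATE TABLE seq_counter (next_val BIGINT NOT NULL).
conn.setAutoCommit(false);
long nSeq;
try (PreparedStatement sel = conn.prepareStatement(
         "SELECT next_val FROM seq_counter FOR UPDATE");
     ResultSet rs = sel.executeQuery()) {
    if (!rs.next()) {
        throw new SQLException("seq_counter row is missing");
    }
    nSeq = rs.getLong(1) + 1;   // the row is now exclusively locked
}
try (PreparedStatement upd = conn.prepareStatement(
         "UPDATE seq_counter SET next_val = ?")) {
    upd.setLong(1, nSeq);
    upd.executeUpdate();
}
conn.commit();                  // releases the lock; other callers can proceed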
But I'm wondering whether you really need an auto-increment field or just a unique identifier for the object you want to store; in that case maybe UUID() is just fine:
https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid
Related
@Test
public void transaction() throws Exception {
    Connection conn = null;
    PreparedStatement ps = null;
    try {
        String sql = "insert into `1` values(?, ?, ?, ?)";
        conn = JDBCUtils.getConnection();
        ps = conn.prepareStatement(sql);
        conn.setAutoCommit(false);
        for (int i = 1; i <= 10000; i++) {
            ps.setObject(1, i);
            ps.setObject(2, 10.12345678);
            ps.setObject(3, "num_" + i);
            ps.setObject(4, "2021-12-24 19:00:00");
            ps.addBatch();
        }
        ps.executeBatch();
        ps.clearBatch();
        conn.commit();
    } catch (Exception e) {
        if (conn != null) {
            conn.rollback();
        }
        e.printStackTrace();
    } finally {
        JDBCUtils.closeResources(conn, ps);
    }
}
When setAutoCommit = true, local MySQL and distributed MySQL insert speeds are very slow.
When I set the transaction to commit manually, just like the code above, the local MySQL speed has increased a lot, but the insertion speed of distributed MySQL is still very slow.
Are there any additional parameters I need to set?
Setting parameters probably won't help (much).
There are a couple of reasons for the slowness:
With autocommit=true you are committing on every insert statement. That means that each new row must be written to disk before the database server returns the response to the client.
With autocommit=false there is still a client -> server -> client round trip for each insert statement. Those round trips add up to a significant amount of time.
One way to make this faster is to insert multiple rows with each insert statement, but that is messy because you would need to generate complex (multi-row) insert statements.
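For illustration only, a hand-built multi-row INSERT for the table from the question might look like the following sketch (the chunk size of 1000 is arbitrary):
// Sketch: one INSERT statement carrying many rows (here 1000 per round trip).
// Table and column layout are taken from the question; the chunk size is arbitrary.
StringBuilder sql = new StringBuilder("INSERT INTO `1` VALUES ");
int chunk = 1000;
for (int r = 0; r < chunk; r++) {
    sql.append(r == 0 ? "(?, ?, ?, ?)" : ", (?, ?, ?, ?)");
}
try (PreparedStatement multi = conn.prepareStatement(sql.toString())) {
    int p = 1;
    for (int r = 1; r <= chunk; r++) {
        multi.setInt(p++, r);
        multi.setDouble(p++, 10.12345678);
        multi.setString(p++, "num_" + r);
        multi.setString(p++, "2021-12-24 19:00:00");
    }
    multi.executeUpdate();
}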
A better way is to use JDBC's batch feature to reduce the number of round-trips. For example:
PreparedStatement ps = c.prepareStatement("INSERT INTO employees VALUES (?, ?)");
ps.setString(1, "John");
ps.setString(2,"Doe");
ps.addBatch();
ps.clearParameters();
ps.setString(1, "Dave");
ps.setString(2,"Smith");
ps.addBatch();
ps.clearParameters();
int[] results = ps.executeBatch();
(Attribution: the above code was copied from this answer by @Tusc)
If that still isn't fast enough, you should get even better performance using MySQL's native bulk-insert mechanism, e.g. LOAD DATA INFILE; see High-speed inserts with MySQL.
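A rough sketch of that route (assuming the rows have already been written out to /tmp/rows.csv, and that LOCAL INFILE is enabled on the server and on the connection, e.g. allowLoadLocalInfile=true on the JDBC URL):
// Sketch: bulk-load a pre-generated CSV file instead of sending INSERTs.
// Assumes local_infile is enabled on the server and allowLoadLocalInfile=true
// is set on the JDBC URL; the file path is only an example.
try (Statement stmt = conn.createStatement()) {
    stmt.execute(
        "LOAD DATA LOCAL INFILE '/tmp/rows.csv' " +
        "INTO TABLE `1` " +
        "FIELDS TERMINATED BY ',' " +
        "LINES TERMINATED BY '\\n'");
}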
For completeness, I am adding this suggestion from @Wilson Hauck:
"In your configuration [mysqld] section, innodb_change_buffer_max_size=50 # from 25 (percent) for improved INSERT rate per second. SHOW FULL PROCESSLIST; to monitor when the instance has completed adjustment, then do your inserts and put it back to 25 percent for typical processing speed."
This may increase the insert rate depending on your table and its indexes, and on the order in which you are inserting the rows.
But the flip-side is that you may be able to achieve the same speedup (or more!) by other means; e.g.
by sorting your input so that rows are inserted in index order, or
by dropping the indexes, inserting the records and then recreating the indexes.
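For the second option, a rough sketch (the index name idx_num and column num are placeholders for whatever secondary indexes the table actually has):
// Sketch: drop a secondary index, load the data, then rebuild the index.
// idx_num and num are placeholders for the table's real secondary indexes.
try (Statement stmt = conn.createStatement()) {
    stmt.execute("ALTER TABLE `1` DROP INDEX idx_num");
    // ... run the batched inserts shown earlier ...
    stmt.execute("ALTER TABLE `1` ADD INDEX idx_num (num)");
}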
You can read about the change buffer here and make your own judgements.
I need to update multiple rows in my MySQL database using Hibernate. I have done this with JDBC, where we have support for batched queries. I want something like this in Hibernate.
Does Hibernate support batched queries?
Batched query example in JDBC:
// Create statement object
Statement stmt = conn.createStatement();

String SQL = "INSERT INTO Employees (id, first, last, age) " +
             "VALUES(200, 'Zia', 'Ali', 30)";
// Add the SQL statement to the batch.
stmt.addBatch(SQL);

// Create one more SQL statement
SQL = "INSERT INTO Employees (id, first, last, age) " +
      "VALUES(201, 'Raj', 'Kumar', 35)";
// Add the SQL statement to the batch.
stmt.addBatch(SQL);

int[] count = stmt.executeBatch();
Now when we call stmt.executeBatch(), both SQL statements are executed in a single JDBC round trip.
You may check the Hibernate documentation. Hibernate has some configuration properties that control (or disable) the use of JDBC batching.
If you issue the same INSERT multiple times and your entity does not use an identity generator, Hibernate will use JDBC batching transparently.
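That caveat about identity generators matters a lot on MySQL: with GenerationType.IDENTITY, Hibernate must execute each INSERT immediately to learn the generated id, so insert batching is silently disabled. A sketch of an entity whose inserts can be batched (the TABLE generator here is only one possible choice):
import javax.persistence.*;

// Sketch: an entity whose INSERTs Hibernate is able to batch.
// With GenerationType.IDENTITY, Hibernate disables JDBC insert batching
// because it must run each INSERT at once to obtain the generated id.
@Entity
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.TABLE)
    private Long id;

    private String name;

    public Person() {
    }

    public Person(String name) {
        this.name = name;
    }
}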
The configuration must enable the use of JDBC batching. Batching is disabled by default.
Configuring Hibernate
The hibernate.jdbc.batch_size property defines the number of statements that Hibernate will batch before asking the driver to execute the batch. Zero or a negative number will disable the batching.
You can define a global configuration, e.g. in the persistence.xml, or define a session-specific configuration. To configure the session, you can use code like the following
entityManager
.unwrap( Session.class )
.setJdbcBatchSize( 10 );
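For the global setting, the usual place is simply a property in persistence.xml; if you bootstrap the factory in code, a sketch along these lines should do the same (the persistence-unit name my-unit is a placeholder):
// Sketch: enabling JDBC batching globally when creating the factory yourself.
// "my-unit" is a placeholder persistence-unit name.
Map<String, Object> props = new HashMap<>();
props.put("hibernate.jdbc.batch_size", "25");
EntityManagerFactory emf =
        Persistence.createEntityManagerFactory("my-unit", props);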
Using JDBC batching
As mentioned before, Hibernate uses JDBC batching transparently. If you want to control the batching, you can use the flush() and clear() methods of the session.
The following is an example from the documentation. It calls flush() and clear() when the number of insertions reaches the batchSize value. It works efficiently if batchSize is less than or equal to the configured hibernate.jdbc.batch_size.
EntityManager entityManager = null;
EntityTransaction txn = null;
try {
    entityManager = entityManagerFactory().createEntityManager();
    txn = entityManager.getTransaction();
    txn.begin();

    // define a batch size less than or equal to the JDBC batch size
    int batchSize = 25;

    for (int i = 0; i < entityCount; ++i) {
        Person person = new Person(String.format("Person %d", i));
        entityManager.persist(person);

        if (i > 0 && i % batchSize == 0) {
            // flush a batch of inserts and release memory
            entityManager.flush();
            entityManager.clear();
        }
    }

    txn.commit();
} catch (RuntimeException e) {
    if (txn != null && txn.isActive()) {
        txn.rollback();
    }
    throw e;
} finally {
    if (entityManager != null) {
        entityManager.close();
    }
}
I want to delete a record from the database in Hibernate, where the regBroj parameter I pass in matches the one in the database.
This is my method in the controller for deleting, but I constantly get an
SQLGrammarException:
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'BG026CH' in 'where clause'
'BG026CH' is the value of regBroj that I use as a parameter to find the vehicle in the database and delete it. I enter it in a text area on adminPage.
public String izbrisi(String regBroj) {
    List<Vozilo> lista = listaj();
    Session s = HibernateUtil.getSessionFactory().getCurrentSession();
    Transaction t = s.beginTransaction();
    for (int i = 0; i < lista.size(); i++) {
        if (regBroj.equals(lista.get(i).getRegBroj())) {
            String izbrisiquery = "DELETE FROM Korisnik WHERE brojLk=" + regBroj + "";
            Query q = s.createQuery(izbrisiquery);
            int a = q.executeUpdate();
            t.commit();
            return "adminPage";
        }
    }
    t.commit();
    return "error";
}
Please replace the string below with this one:
String izbrisiquery = "DELETE FROM Korisnik WHERE brojLk='" + regBroj + "'";
You should consider using prepared statements because they will automatically take care of escaping field values with quotes, and they will also protect you from SQL injection.
// obtain a Connection object using your Hibernate session, or through some other means
Connection conn = getDBConnection();

for (int i = 0; i < lista.size(); i++) {
    if (regBroj.equals(lista.get(i).getRegBroj())) {
        String izbrisiquery = "DELETE FROM Korisnik WHERE brojLk = ?";
        PreparedStatement ps = conn.prepareStatement(izbrisiquery);
        ps.setString(1, regBroj);
        ps.executeUpdate();
        ps.close();
        t.commit();   // the Transaction from the surrounding method
        return "adminPage";
    }
}
To see how SQL injection works, or how a malicious user could wreck the Korisnik table, imagine that someone hacks the UI to pass a value of '' OR TRUE for brojLk. This is what the resulting DELETE statement would look like:
DELETE FROM Korisnik WHERE brojLk = '' OR TRUE
In other words, this injected query would delete every row in your table! With a prepared statement the whole input is treated as a single literal value, so the injected condition never becomes part of the query.
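Since the method in the question already has a Hibernate Session open, you can get the same protection without dropping to raw JDBC by using a named parameter in the HQL delete; a sketch, reusing the entity and property names from the question:
// Sketch: a parameterised bulk delete through the existing Hibernate Session.
// Entity and property names (Korisnik, brojLk) are taken from the question.
Query q = s.createQuery("DELETE FROM Korisnik WHERE brojLk = :regBroj");
q.setParameter("regBroj", regBroj);
int deleted = q.executeUpdate();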
I have a table with 1,400,000 entries. It is a simple list of documents:
Table - Document
  ID            int
  DocumentPath  nvarchar
  DocumentValid bit
I scan a directory and set any document found in the directory as valid.
public void SetReportsToValidated(List<int> validatedReports)
{
    SqlConnection myCon = null;
    try
    {
        myCon = new SqlConnection(_conn);
        myCon.Open();
        foreach (int id in validatedReports)
        {
            SqlDataAdapter myAdap = new SqlDataAdapter("update_DocumentValidated", myCon);
            myAdap.SelectCommand.CommandType = CommandType.StoredProcedure;
            SqlParameter pId = new SqlParameter("@Id", SqlDbType.Int);
            pId.Value = id;
            myAdap.SelectCommand.Parameters.Add(pId);
            myAdap.SelectCommand.ExecuteNonQuery();
        }
    }
    catch (SystemException ex)
    {
        _log.Error(ex);
        throw;
    }
    finally
    {
        if (myCon != null)
        {
            myCon.Close();
        }
    }
}
The performance of the updates is OK, but I want more. It takes more than an hour to mark 1,000,000 documents as valid. Is there any good way to speed up the updates? I am thinking of using some kind of batching (like table-valued parameters).
Each update takes some 5-10 ms when profiled on SQL Server.
Read the reports in and append them together in a DataTable (since they have the same dimensions), then use the SqlBulkCopy object to upload the entire thing. It will probably work better for you. I don't think you will have memory issues, given the small number of columns and rows.
At the moment you are calling the DB for each record individually. You can use the SqlDataAdapter to do bulk updates (in a very brief nutshell):
1) Define one SqlDataAdapter.
2) Set the .UpdateCommand on the adapter to your update sproc.
3) Call the .Update method on the adapter, passing it a DataTable containing the IDs of the documents to be updated. This will batch up the updated rows from the DataTable into the DB, calling the sproc for each record in a batched manner. You can control the batch size via the .UpdateBatchSize property.
4) This removes the manual, row-by-row looping, which is inefficient for batched updates.
See examples:
http://support.microsoft.com/kb/308055
http://www.c-sharpcorner.com/UploadFile/61b832/4430/
Alternatively, you could:
1) Use SqlBulkCopy to bulk insert all the IDs into a new table in the database (highly efficient)
2) Once loaded into that staging table, run a single SQL statement to update your main table from that staging table to validate the documents.
See examples:
http://www.adathedev.co.uk/2010/02/sqlbulkcopy-bulk-load-to-sql-server.html
http://www.adathedev.co.uk/2011/01/sqlbulkcopy-to-sql-server-in-parallel.html
Instead of creating the adapter and parameter every time in the loop, just create them once and assign a different value to the parameter:
SqlDataAdapter myAdap = new SqlDataAdapter("update_DocumentValidated", myCon);
myAdap.SelectCommand.CommandType = CommandType.StoredProcedure;
SqlParameter pId = new SqlParameter("@Id", SqlDbType.Int);
myAdap.SelectCommand.Parameters.Add(pId);
foreach (int id in validatedReports)
{
    myAdap.SelectCommand.Parameters[0].Value = id;
    myAdap.SelectCommand.ExecuteNonQuery();
}
This might not result in a very dramatic improvement, but it is better than the original code. Also, as you are executing the command manually, you do not need the adapter at all; just use a SqlCommand directly.
I'm having a problem with deadlock on SELECT/UPDATE on SQL Server 2008.
I read answers from this thread: SQL Server deadlocks between select/update or multiple selects but I still don't understand why I get deadlock.
I have recreated the situation in the following test case.
I have a table:
CREATE TABLE [dbo].[SessionTest](
[SessionId] UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL,
[ExpirationTime] DATETIME NOT NULL,
CONSTRAINT [PK_SessionTest] PRIMARY KEY CLUSTERED (
[SessionId] ASC
) WITH (
PAD_INDEX = OFF,
STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON
) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[SessionTest]
ADD CONSTRAINT [DF_SessionTest_SessionId]
DEFAULT (NEWID()) FOR [SessionId]
GO
First I try to select a record from this table, and if the record exists, I set its expiration time to the current time plus some interval. This is accomplished using the following code:
protected Guid? GetSessionById(Guid sessionId, SqlConnection connection, SqlTransaction transaction)
{
    Logger.LogInfo("Getting session by id");
    using (SqlCommand command = new SqlCommand())
    {
        command.CommandText = "SELECT * FROM SessionTest WHERE SessionId = @SessionId";
        command.Connection = connection;
        command.Transaction = transaction;
        command.Parameters.Add(new SqlParameter("@SessionId", sessionId));
        using (SqlDataReader reader = command.ExecuteReader())
        {
            if (reader.Read())
            {
                Logger.LogInfo("Got it");
                return (Guid)reader["SessionId"];
            }
            else
            {
                return null;
            }
        }
    }
}

protected int UpdateSession(Guid sessionId, SqlConnection connection, SqlTransaction transaction)
{
    Logger.LogInfo("Updating session");
    using (SqlCommand command = new SqlCommand())
    {
        command.CommandText = "UPDATE SessionTest SET ExpirationTime = @ExpirationTime WHERE SessionId = @SessionId";
        command.Connection = connection;
        command.Transaction = transaction;
        command.Parameters.Add(new SqlParameter("@ExpirationTime", DateTime.Now.AddMinutes(20)));
        command.Parameters.Add(new SqlParameter("@SessionId", sessionId));
        int result = command.ExecuteNonQuery();
        Logger.LogInfo("Updated");
        return result;
    }
}
public void UpdateSessionTest(Guid sessionId)
{
    using (SqlConnection connection = GetConnection())
    {
        using (SqlTransaction transaction = connection.BeginTransaction(IsolationLevel.Serializable))
        {
            if (GetSessionById(sessionId, connection, transaction) != null)
            {
                Thread.Sleep(1000);
                UpdateSession(sessionId, connection, transaction);
            }
            transaction.Commit();
        }
    }
}
Then, if I execute the test method from two threads and they try to update the same record, I get the following output:
[4] : Creating/updating session
[3] : Creating/updating session
[3] : Getting session by id
[3] : Got it
[4] : Getting session by id
[4] : Got it
[3] : Updating session
[4] : Updating session
[3] : Updated
[4] : Exception: Transaction (Process ID 59) was deadlocked
on lock resources with another process and has been
chosen as the deadlock victim. Rerun the transaction.
I can't understand how this can happen using the Serializable isolation level. I think the first SELECT should lock the row/table and not let another SELECT obtain any locks. The example is written using command objects, but that is just for test purposes; originally I'm using LINQ, but I wanted to show a simplified example. SQL Server Profiler shows that the deadlock is a key lock. I will update the question in a few minutes and post the graph from SQL Server Profiler. Any help would be appreciated. I understand that the solution may be to create a critical section in code, but I'm trying to understand why the Serializable isolation level doesn't do the trick.
And here is the deadlock graph:
deadlock http://img7.imageshack.us/img7/9970/deadlock.gif
Thanks in advance.
It's not enough to have a serializable transaction; you need to hint the locking for this to work.
The serializable isolation level will still usually acquire the "weakest" type of lock it can that ensures the serializable conditions are met (repeatable reads, no phantom rows, etc.).
So you are grabbing a shared lock on your table, which you later (in your serializable transaction) try to upgrade to an update lock. The upgrade will fail if another thread is also holding a shared lock (it only works if nobody else is holding one).
You probably want to change it to the following:
SELECT * FROM SessionTest WITH (UPDLOCK) WHERE SessionId = @SessionId
That will ensure an update lock is acquired when the SELECT is performed (so you will not need to upgrade the lock).