How to prevent lock timeouts in MySQL when updating a table with millions of requests

I have a problem with one table in MySQL.
The table structure is like this:
Every table_detail gets an insert every minute, and users need to see all of the detail data on one page. So I created table_header, which collects the latest row from every table_detail.
But the problem is that table_header locks up because of the millions of update requests hitting it.
How can I resolve this problem?
I have already set:
innodb_lock_wait_timeout = 100
innodb_write_io_threads = 24
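For reference, a minimal sketch of how these two settings are applied (the values are the ones from the question): innodb_lock_wait_timeout is dynamic, while innodb_write_io_threads is read-only at runtime and has to go in the server configuration.

-- innodb_lock_wait_timeout can be changed at runtime:
SET GLOBAL innodb_lock_wait_timeout = 100;
-- innodb_write_io_threads cannot; put innodb_write_io_threads = 24 in my.cnf
-- under [mysqld] and restart the server, then verify:
SHOW VARIABLES LIKE 'innodb_write_io_threads';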
Thanks anyway
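One common way to ease contention on a summary table like table_header is to make each write a single-row, primary-key upsert, so InnoDB locks only that one row and holds it briefly. A minimal sketch, with hypothetical column names since the real structure is not shown:

-- One header row per detail table, keyed by detail_id (assumed primary key).
INSERT INTO table_header (detail_id, last_value, last_updated)
VALUES (42, 'latest reading', NOW())
ON DUPLICATE KEY UPDATE
  last_value = VALUES(last_value),
  last_updated = VALUES(last_updated);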

Thanks to Mr. #shadow.
After moving to a SaaS MySQL server, and on a recommendation from Google Cloud SQL, we created a replication setup: one server for INSERT/UPDATE and another server for SELECT.
But the server cost was very high.
After further research we found that not all of the transactional data had to be stored in MySQL, so we now use ScyllaDB to handle the high request volume.
Thanks all

Related

Does SELECT FOR UPDATE in MySQL have a limit?

In our application, we are using SELECT FOR UPDATE statement to ensure locking for our entities from other threads. One of our original architects who implemented this logic put a comment in our wiki that MySQL has a limit of 200 for select for update statements. I could not find anything like this anywhere on the internet. Does anyone know if this is true and if so is there any way we can increase the limit?
The primary reason SELECT FOR UPDATE is used is concurrency prevention, for the case when two users try to access the same data at the same time. If both users then try to update that data, there can be a serious problem in the database.
In some database systems this can seriously affect database integrity. To help prevent concurrency problems, database management systems like SQL Server and MySQL use locking in most cases to keep serious data integrity problems from occurring.
These locks delay the execution of a transaction if it conflicts with a transaction that is already running.
In SQL Server and MySQL, the locks taken by SELECT FOR UPDATE are held until the transaction is committed or rolled back.
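As an illustration, a minimal sketch (hypothetical table and id) of how the row lock lives for the span of the transaction:

START TRANSACTION;
-- Locks the matching row; other writers to this row now block.
SELECT balance FROM accounts WHERE account_id = 7 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 7;
COMMIT; -- the row lock is released here (or on ROLLBACK)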
In MySQL NDB Cluster, however, transaction records are allocated per data node, so there is a configured ceiling on the total number of concurrent transactions in the cluster.
The MySQL documentation gives this formula:
TotalNoOfConcurrentTransactions = (maximum number of tables accessed in any single transaction + 1) * number of SQL nodes.
Each data node can then handle TotalNoOfConcurrentTransactions / number of data nodes; the documentation's example cluster has 4 data nodes, so the result of the formula is divided by 4 to get MaxNoOfConcurrentTransactions per node.
The documentation's example uses 10 SQL nodes, where a transaction joins up to 10 tables (11 transaction records) and each SQL node runs about 10 such transactions, which works out to 275 as MaxNoOfConcurrentTransactions.
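A quick sanity check of that arithmetic, with the factors from the documentation example above:

-- 11 transaction records per join * 10 transactions per SQL node * 10 SQL nodes,
-- spread over 4 data nodes:
SELECT (10 + 1) * 10 * 10 / 4 AS max_concurrent_transactions; -- 275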
A LIMIT clause in a SELECT FOR UPDATE statement only restricts the number of rows that are selected and locked; it is not a server-imposed cap.
I am not sure, but your architects probably derived the figure of 200 from numbers like these in the MySQL documentation.
Please check the link below for more information.
https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster-ndbd-definition.html#ndbparam-ndbd-maxnoofconcurrentoperations

Update records from MySQL to SQL Server

I have the same tables in two different databases, one in MySQL and the other in SQL Server. I want to run a query that gets the data from a MySQL table into a SQL Server table to update the records on a daily basis.
E.g. I have 200 records in MySQL today; by tomorrow it might be 300. I want to copy the 200 records today and only the 100 new records tomorrow.
Can anyone help me please?
Thanks in advance
Probably the best way to manage this is from the SQL Server database. This allows that database to pull the data in every day, rather than having the MySQL database push the data.
The place to start is by linking the servers. Start with the documentation on the subject. Next, set up a job in SQL Server Agent. This job would do the following (a sketch of steps 2-4 appears after the list):
Connect to the MySQL server.
Load the data into a staging table.
Validate the data.
Update or insert the data into the final table.
You can schedule this job to run every day.
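A minimal sketch of what the job's T-SQL might look like, assuming a linked server named MYSQL_LINK and hypothetical staging and target tables:

-- Step 2: load the data into a staging table via the linked server.
TRUNCATE TABLE dbo.StagingCustomers;
INSERT INTO dbo.StagingCustomers (CustomerId, Name, Email)
SELECT customer_id, name, email
FROM OPENQUERY(MYSQL_LINK, 'SELECT customer_id, name, email FROM customers');

-- Step 3: validate the data (here: discard rows with no key).
DELETE FROM dbo.StagingCustomers WHERE CustomerId IS NULL;

-- Step 4: update or insert into the final table.
MERGE dbo.Customers AS target
USING dbo.StagingCustomers AS source
  ON target.CustomerId = source.CustomerId
WHEN MATCHED THEN
  UPDATE SET target.Name = source.Name, target.Email = source.Email
WHEN NOT MATCHED THEN
  INSERT (CustomerId, Name, Email)
  VALUES (source.CustomerId, source.Name, source.Email);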
Note that 200 or 300 records is very small by the standards of databases (unless the records are really, really big).
There is no straightforward way to do this, but you can approach it like this (the update step is sketched below):
Use mysqldump to create a dump of the table data.
Restore that in your SQL Server in a temporary / auxiliary table.
Perform the update on the main table with a JOIN to that temporary table.
Delete the temporary table.
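In SQL Server syntax, the update step might look like this (table names hypothetical):

-- Update the main table from the freshly restored auxiliary table.
UPDATE m
SET m.Name = a.Name, m.Email = a.Email
FROM dbo.MainCustomers AS m
INNER JOIN dbo.AuxCustomers AS a
  ON a.CustomerId = m.CustomerId;

-- Clean up the auxiliary table.
DROP TABLE dbo.AuxCustomers;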

SQL Server Linked Server To MySQL "Too many connections"

I have attempted to find the answer here and via Google on how to control connections for a linked server ODBC connection.
Overview
I have a linked server from SQL Server 2014 to MySQL for the purpose of extracting data for our data warehouse. I've queried the database quite a few times without issue. Then yesterday the query reading from the table suddenly became slow, and I got reports that the applications using this MySQL database were getting a "too many connections" error.
Details
The following query selects the data from MySQL and inserts to the SQL Server table.
INSERT INTO tmpCustomers
(fieldlist)
SELECT
myc.contact_id,
myl.franchise_id,
myl.lead_source,
LEFT(RTRIM(myc.first_name) + ' ' + RTRIM(myc.last_name),100) AS Name,
myc.first_name,
myc.last_name,
myc.company,
myc.Email,
myc.primary_phone,
myc.home_phone,
myc.mobile_phone,
myc.work_phone,
myc.fax,
myc.address1,
myc.Address2,
myc.City,
myc.[state],
myc.zip_code,
myc.created_date,
myc.updated_date
FROM [MYSQLDB]...[franchise] myf
INNER JOIN [MYSQLDB]...[leads] myl
ON myl.franchise_id = myf.franchise_id
INNER JOIN [MYSQLDBE]...[contact] myc
ON myc.contact_id = myl.contact_id
This query returns about 200K rows of data, and will grow. The MySQL database is used by our customer base, and this is a back-end process to pull data into our data warehouse.
The query had been working without issue over the past week of testing, until yesterday, when it caused our MySQL support team to restart the MySQL server twice.
The ODBC setup was done using the "mysql-connector-odbc-5.3.6-win64.msi" version. I can't find any settings there to limit the number of connections. ODBC does show an "Allow multiple statements" option, which is not enabled. It also has "Enable automatic reconnect", which I can't imagine a single query would need.
Summary
I can't afford to stop customers from connecting, and I need to keep the import process from using too many connections.
Any input on this would be greatly appreciated.
Thanks
KDS
Update: 2016-Oct-05
AWS server - M3.xlarge
4 CPU
15 GiB RAM
2 × 40 GiB SSD drives
It's better to optimize the MySQL server if you can't afford to stop customers from connecting.
With this little information it is hard to suggest anything specific for MySQL optimization.
https://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
It would be better to update your configuration file: raise the max_connections limit and, if you are using InnoDB, tune the InnoDB variables, taking the available RAM into account.
Could you add the above information to the question section?
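For illustration, checking and raising the limit at runtime (the value 500 is just an example; size it to your RAM):

-- See the current limit and how close the server has come to it.
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';
-- Raise it at runtime; also put max_connections = 500 in my.cnf under
-- [mysqld] so the change survives a restart.
SET GLOBAL max_connections = 500;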
I'm going to mark this as answered, as it's been about a year with no real solution. The issue was locks on the MySQL server while the SQL Server linked server was reading the data. SQL Server hints like NOLOCK had no impact on resolving this.
So what we did was take a nightly backup of the MySQL database and restore it to a separate database that the SQL Server linked server points to, and process the data from there. The reads usually finish in a minute or two. Before this, SQL Server was still putting a lock on the MySQL table, and user connections then stacked up until all the connections to MySQL were used up.
Since I only needed the data daily for reporting purposes, this separate database copy worked, but I don't know of any other fix for this.
Thanks
KD

Syncing / maintaining updated data in 2 database tables (MySQL)

I have 2 databases:
Database 1,
Database 2.
Each database has a table, say Table 1 (in Database 1) and Table 2 (in Database 2).
Table 1 is basically a copy of Table 2 (just for backup).
How can I sync Table 2 when Table 1 is updated?
I am using MySQL with the InnoDB storage engine, and for back-end programming I am using PHP.
I could check for updates every 15 minutes using a PHP script, but that takes too much time because each table has 51,000 rows.
So, how can I achieve something like: if an administrator/superuser updates Table 1, that update is immediately applied to Table 2?
Also, is there a way bi-directional updates can work, i.e. both tables can be masters?
Instead of Table 1 being the only master, could both Table 1 and Table 2 be masters, so that if an update is done to either table the other one updates accordingly?
If I'm not wrong, what you are looking for is replication, which does this exact thing for you. If you configure transactional replication, every DML operation gets cascaded automatically to the mirrored DB, so there is no need for continuous polling from your application.
Quoted from MySQL Replication document
Replication enables data from one MySQL database server (the master)
to be replicated to one or more MySQL database servers (the slaves).
Replication is asynchronous - slaves need not be connected permanently
to receive updates from the master. This means that updates can occur
over long-distance connections and even over temporary or intermittent
connections such as a dial-up service. Depending on the configuration,
you can replicate all databases, selected databases, or even selected
tables within a database.
Per your comment: yes, bi-directional replication can also be configured.
See Configuring Bi-Directional Replication
As Rahul stated, what you are looking for is replication.
The standard replication mode in MySQL is master -> slave, which means that one of the databases is the "master" and the rest are slaves. All changes must be written to the master DB and will then be copied to the slaves. More info can be found in the MySQL documentation on replication.
There is also an excellent guide on the DigitalOcean community forums on master <-> master replication setup.
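As a minimal sketch, pointing a slave at its master looks like this (host, credentials, and log coordinates are placeholders; the master needs binary logging enabled and each server a unique server-id):

-- Run on the slave, using the coordinates from SHOW MASTER STATUS on the master.
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;
SHOW SLAVE STATUS\G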
If the requirement about an "Administrator/Superuser" weren't in your question, you could simply use MySQL's replication features on the databases.
If you want the data synced to Table 2 immediately upon inserting into Table 1, you can use a trigger on the table. In that trigger you can check which user submitted the data (if you have a column in the table specifying which user inserted it). If the user is an admin, have the trigger duplicate the data; if the user is a normal user, do nothing (a sketch of this follows below).
For normal users entering data, you could instead keep a counter on each row, increased by 1 for each new "normal" user's row. In the same trigger you can check what the counter has reached; say, when it hits 10, duplicate all the pending rows to the other table, then reset the counter and remove the old counter values from the just-duplicated rows.
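A minimal sketch of the admin-check trigger, assuming both databases live on the same MySQL server and table1 has a created_by column (all names hypothetical):

DELIMITER //
CREATE TRIGGER trg_table1_copy_admin
AFTER INSERT ON database1.table1
FOR EACH ROW
BEGIN
  -- Duplicate the row into the backup table only for admin inserts.
  IF NEW.created_by = 'admin' THEN
    INSERT INTO database2.table2 (id, payload, created_by)
    VALUES (NEW.id, NEW.payload, NEW.created_by);
  END IF;
END//
DELIMITER ;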

What happens when multiple simultaneous update requests are received for a SQL table?

I have a table in a SQL Server database in which I am recording the latest activity time of users. Can somebody please confirm that SQL Server will automatically handle the scenario where multiple update requests are received simultaneously for different users? I am expecting 25-50 concurrent update requests on this table, but each request is responsible for updating a different row. Do I need something extra like connection pooling, etc.?
Yes, SQL Server will handle this scenario.
It is a DBMS, and it is designed for scenarios like this one.
When you insert/update/delete a row, SQL Server will lock the table/row/page to guarantee that you will be able to do what you want. This lock is released when you are done inserting/updating/deleting the row.
Check this Link
And introduction-to-locking-in-sql-server
But there are a few things you should do:
1 - Keep your work fast. Because of locking, if you stay inside a transaction for too long, other requests to the same table may be blocked until you are done, and this can lead to a timeout.
2 - Always use a transaction (see the sketch after this list).
3 - Make sure to adjust the fill factor of your indexes. Check Fill Factor on MSDN.
4 - Adjust the isolation level according to what you need.
5 - Get rid of unused indexes to speed up your inserts/updates.
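For the last-activity use case, a minimal sketch of a short, single-row update inside a transaction (table, column, and parameter names hypothetical):

BEGIN TRANSACTION;
-- Each request touches only its own user's row, so under row-level
-- locking these updates do not contend with one another.
UPDATE dbo.UserActivity
SET LastActivity = SYSUTCDATETIME()
WHERE UserId = @UserId;
COMMIT TRANSACTION;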
Connection pooling is not really related to your question. Connection pooling is a technique that avoids the extra overhead of creating a new connection to the database every time you send a request. In C# and other languages that use ADO, this is done automatically. Check this out: SQL Server Connection Pooling.
Other links that may be useful:
best-practices-for-inserting-updating-large-amount-of-data-in-sql-2008
Speed Up Insert Performance