How to quickly insert a big collection into MySQL using Waterline (sails-mysql)

I want to insert 10,000+ records into MySQL. If I process 30,000+ records, the application crashes with:
RangeError: Maximum call stack size exceeded
I am using sails 0.10.5 and sails-mysql 0.10.8, and inserting with Model.create([....]).exec();
How can I resolve this?

Related

Getting "Mysql::Error: Lock wait timeout exceeded" on insert

I am getting a MySQL lock wait timeout on an insert, which is very unusual; normally we would expect this issue during an update.
The insert statement we are using is a very simple one: INSERT INTO Document (`title`, `page_number`) VALUES ('some name here', '12');
We are not locking any tables during this period.
What is the possible cause of this behavior?
How can we debug this condition to figure out what is holding the lock?
We are using AWS MySQL Aurora. Please let us know if you need any other information.
Update
I tried to get the table lock details using SHOW ENGINE INNODB STATUS, but it did not help much, since the issue occurs only in production, at random, and we are not able to reproduce it.
Check this link: https://forums.aws.amazon.com/thread.jspa?threadID=234301
It covers the possible causes.
As for how to debug it: create a customized parameter group, enable slow_query_log, set a suitable value for long_query_time, and also enable log_queries_not_using_indexes.
When the issue happens again, look into the slow query log to find which query is holding the lock (it will exceed the lock wait time).
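If you can catch the problem while it is happening, you can also inspect the lock waits live from InnoDB's transaction tables. A minimal JDBC sketch, assuming MySQL 5.5-5.7 semantics (which Aurora MySQL is based on; in MySQL 8.0 the equivalent data moved to performance_schema) - the endpoint, user and password are placeholders:

import java.sql.*;

// Hedged sketch: lists transactions currently waiting on a lock, together
// with the transaction that is blocking them, via the information_schema
// tables available on MySQL 5.5-5.7. Connection details are placeholders.
public class LockWaitInspector {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://your-aurora-endpoint:3306/yourdb";
        String sql =
            "SELECT r.trx_id AS waiting_trx, r.trx_query AS waiting_query, " +
            "       b.trx_id AS blocking_trx, b.trx_query AS blocking_query " +
            "FROM information_schema.innodb_lock_waits w " +
            "JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id " +
            "JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("waiting: %s (%s) is blocked by: %s (%s)%n",
                        rs.getString("waiting_trx"), rs.getString("waiting_query"),
                        rs.getString("blocking_trx"), rs.getString("blocking_query"));
            }
        }
    }
}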

Some executions fail with "Unable to connect to any of the specified MySQL hosts" in VB.NET

I have looked around regarding this issue. Recently, in the middle of executing a series of MySQL operations from VB.NET, execution is sometimes interrupted with the error "Unable to connect to any of the specified MySQL hosts", and some of the operations are lost. For example, if I execute three MySQL operations (an update, an insert and a delete), the error can pop up partway through and the update or insert is skipped; sometimes only some columns get written, leaving the execution incomplete. I tried increasing the connection timeout to 90 seconds, but it still doesn't help. Any suggestions?
Do you mean that instead of doing this:
sqlQuery = "Update..."
executeQuery(sqlQuery, Connection2)
sqlQuery = "Update..."
executeQuery(sqlQuery, Connection2)
sqlQuery = "Insert..."
executeQuery(sqlQuery, Connection2)
it would be better to do this, sending all three statements to the server in a single call (assuming your connection allows multiple statements per command):
' Join the statements with semicolons and execute them once
sqlQuery = "Update...;" & "Update...;" & "Insert...;"
executeQuery(sqlQuery, Connection2)

Logback batch writing to a database

We have a requirement to record (in a database) each call to one of our methods. However, because the method is called a very large number of times, we want to "batch" the inserts, e.g. only insert every x seconds, or only insert once we have, say, 100 entries, using a JDBC batch insert (statement.executeBatch()).
I understand I can use an async DB appender, and that this uses an AsyncAppender backed by a BlockingQueue; however, this seems to be more about not blocking the producing thread than about controlling how often the consuming thread performs an action.
Could someone please advise whether my requirement is possible with Logback?
Many thanks
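For what it's worth, Logback's stock DBAppender does not batch, so one possible approach is a custom appender with its own consumer thread that flushes either when a batch fills up or when a timeout elapses. A rough sketch, assuming a hypothetical method_calls table and plain DriverManager connections (a pooled DataSource and a shutdown flush are omitted for brevity):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;

// Hedged sketch: a custom appender that buffers events in a queue and lets a
// consumer thread flush them by size (100 entries) or by time (5 seconds).
public class BatchingDbAppender extends AppenderBase<ILoggingEvent> {

    private static final int BATCH_SIZE = 100;   // flush when this many events are queued
    private static final long FLUSH_MS = 5_000;  // ...or at most every 5 seconds

    private final BlockingQueue<ILoggingEvent> queue = new LinkedBlockingQueue<>(10_000);

    @Override
    public void start() {
        super.start();
        Thread consumer = new Thread(this::drainLoop, "batching-db-appender");
        consumer.setDaemon(true);
        consumer.start();
    }

    @Override
    protected void append(ILoggingEvent event) {
        // Drop on overflow rather than block the producing thread.
        queue.offer(event);
    }

    private void drainLoop() {
        List<ILoggingEvent> batch = new ArrayList<>(BATCH_SIZE);
        while (isStarted()) {
            try {
                // Wait up to FLUSH_MS for the first event, then grab the rest.
                ILoggingEvent first = queue.poll(FLUSH_MS, TimeUnit.MILLISECONDS);
                if (first != null) {
                    batch.add(first);
                    queue.drainTo(batch, BATCH_SIZE - 1);
                }
                if (!batch.isEmpty()) {
                    flush(batch);
                    batch.clear();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    private void flush(List<ILoggingEvent> batch) {
        // Placeholder table/columns; in practice use a pooled DataSource.
        String sql = "INSERT INTO method_calls (logged_at, message) VALUES (?, ?)";
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/logs", "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            for (ILoggingEvent event : batch) {
                ps.setLong(1, event.getTimeStamp());
                ps.setString(2, event.getFormattedMessage());
                ps.addBatch();
            }
            ps.executeBatch();
        } catch (Exception e) {
            addError("Failed to flush " + batch.size() + " events", e);
        }
    }
}

You would then register it in logback.xml like any other appender class.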

Magento Throwing Error Under High Loads: UNQ_SALES_FLAT_INVOICE_INCREMENT_ID

I am trying to figure out why, under high load (not under normal load), our Magento store throws the following error at random intervals:
Payment transaction failed. Reason
SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry
'INV1392428' for key 'UNQ_SALES_FLAT_INVOICE_INCREMENT_ID'
This results in the card being processed but the order not going through. My guess is that transactions are colliding in the DB (we are running InnoDB), but I can't figure out how to make it "lock" the key properly to keep duplicates from being created.
Any help is greatly appreciated!
Thanks,
Rick
The increment is done in PHP (Mage_Eav_Model_Entity_Type::Mage_Eav_Model_Entity_Type()), not in the database, so there is a defined period of time during which it is possible for two requests to get the same increment ID. Normally this window is very, very small, though there are two scenarios that can widen it (a sketch of the race follows the list):
1. Extremely high database load that is slowing down save operations.
2. A customization (or possibly Magento itself) that starts a transaction earlier in the save process. Transactions may lock tables and widen the window of opportunity for duplicate increment IDs to be generated. Couple this with #1 and the window for this error to occur grows further.
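To make the race concrete, here is a hedged, framework-agnostic sketch (in Java rather than Magento's PHP) of the same read-then-write increment pattern, together with the usual mitigation of catching the duplicate-key error and retrying. The sales_invoice table and INV prefix are placeholders, not Magento's actual schema:

import java.sql.*;

// Hypothetical illustration of the race: the increment ID is computed in
// application code from the last stored value, so two requests that read the
// same last value will both try to insert e.g. 'INV1392428'.
public class IncrementIdExample {

    // Retries with a freshly computed ID when the unique key on increment_id
    // rejects a duplicate (SQLSTATE 23000 / MySQL error 1062).
    static String insertInvoice(Connection conn) throws SQLException {
        for (int attempt = 0; attempt < 5; attempt++) {
            String nextId;
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT MAX(CAST(SUBSTRING(increment_id, 4) AS UNSIGNED)) FROM sales_invoice")) {
                rs.next();
                nextId = "INV" + (rs.getLong(1) + 1);   // race window opens here
            }
            try (PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO sales_invoice (increment_id) VALUES (?)")) {
                ps.setString(1, nextId);
                ps.executeUpdate();                      // ...and closes here
                return nextId;
            } catch (SQLIntegrityConstraintViolationException dup) {
                // Another request won the race; loop and compute a new ID.
            }
        }
        throw new SQLException("Could not allocate a unique increment ID");
    }
}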

Would you expect MySQL to drop records if bombed by inserts on the same table?

I don't have a testing environment for this yet, but before I think too much about solutions I'd like to know whether people think this would be a problem.
I will have 10-20 Java processes connected to a MySQL DB via JDBC, each inserting unique records, all going to the same table. The rate of inserts will be on the order of thousands per second.
My expectation is that some process will attempt an insert while another process holds a table lock, and that this will result in a JDBC exception and the insert failing.
Clearly if you increase the insert rate sufficiently there eventually will be a point where some buffer somewhere fills up faster than it can be emptied. When such a buffer hits its maximum capacity and can't contain any more data some of your insert statements will have to fail. This will result in an exception being thrown at the client.
However, assuming you have high-end hardware, I wouldn't expect this to happen at 1000 inserts per second; it does depend on the specific hardware, how much data there is per insert, how many indexes you have, what other queries are running on the system at the same time, and so on.
Regardless of whether you are doing 10 inserts per second or 1000 you still shouldn't blindly assume that every insert will succeed - there's always a chance that an insert will fail because of some network communication error or some other problem. Your application should be able to correctly handle the situation where an insert fails.
Use InnoDB, as it supports concurrent reads and writes. MyISAM uses table-level locking, so inserts and long-running SELECTs block each other; this causes issues if you're doing reporting or visualization of the data while trying to do inserts.
If you have a natural primary key (no auto_increment), using it will help you avoid some locking issues: older InnoDB versions took a table-level AUTO-INC lock for auto_increment inserts. (Newer versions have fixed this; see the link below.)
http://www.mysqlperformanceblog.com/2007/09/26/innodb-auto-inc-scalability-fixed/
You might also want to see if you can queue your writes in memory and send them to the database in batches; lots of little inserts will be much slower than batches wrapped in transactions. A sketch of this follows below.
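A minimal sketch of that batching pattern with JDBC, assuming a hypothetical events table with a single payload column; the rewriteBatchedStatements flag lets Connector/J rewrite the batch into multi-row INSERTs, and the rollback path is where you would re-queue or retry failed records:

import java.sql.*;
import java.util.List;

// Hedged sketch: inserting queued records in one transaction as a JDBC batch.
public class BatchWriter {
    public static void writeBatch(List<String> records) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            conn.setAutoCommit(false);   // one transaction for the whole batch
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO events (payload) VALUES (?)")) {
                for (String record : records) {
                    ps.setString(1, record);
                    ps.addBatch();
                }
                ps.executeBatch();
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();         // on failure, re-queue or retry the records
                throw e;
            }
        }
    }
}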
Good presentation on getting the most out of the MySQL Connector/J JDBC driver:
http://assets.en.oreilly.com/1/event/21/Connector_J%20Performance%20Gems%20Presentation.pdf
What engine do you use? That can make a difference.
http://dev.mysql.com/doc/refman/5.5/en/concurrent-inserts.html