I have an event which runs every morning at 1 AM.
The aim of the event is to do some calculation on the database and save it in a table.
I have a procedure, say spCalc, which takes a unique mission_id as a parameter. I have to run the calculation for 5 missions, so I have to execute the procedure like this:
call spCalc(1);
call spCalc(2);
..
call spCalc(5);
Can I run these stored procedures concurrently so that the inserts into the table finish faster?
Update:
The reason why I'm inserting data into a table is:
I have to generate reports over more than 10 lakh (1 million) records, joining more than 30 tables; as you can imagine, it takes a couple of hours to generate a report. We want to reduce this delay.
I'm doing calculations like using the MAX() function, which has to process all the data to find the highest value, and that takes a lot of time.
So what we decided is to run the calculations every morning and save the results, so they are faster to retrieve.
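For context, the event described in the question would look roughly like the following. The event name, start expression and DELIMITER handling here are only a sketch, and the key point is that the five calls inside one event body execute one after another rather than in parallel.

DELIMITER $$
CREATE EVENT daily_mission_calc
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_DATE + INTERVAL 1 DAY + INTERVAL 1 HOUR  -- first run at 1 AM tomorrow (illustrative)
DO
BEGIN
    -- the calls run sequentially within the single event body
    CALL spCalc(1);
    CALL spCalc(2);
    CALL spCalc(3);
    CALL spCalc(4);
    CALL spCalc(5);
END$$
DELIMITER ;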
Related
I've built several stored procedures in MySQL. I would like to run them in a specific order to ensure that tables are updated properly and efficiently. What would be the best way to call the procedures to run in a specific order? I would like to have them run once every 30 minutes or so.
Thanks
After some research, I found that a recurring event for each procedure is a good way to go. Creating each event about 10 seconds apart ensures they run in sequence every 30 minutes.
CREATE EVENT `[Event_Name]`
ON SCHEDULE EVERY 30 MINUTE
DO
CALL `[Stored_Procedure_Name]`();
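If it helps, the "about 10 seconds apart" staggering can also be written into the schedule itself rather than depending on when each CREATE EVENT statement happens to be issued. This is only a sketch with placeholder event and procedure names:

CREATE EVENT step_one_event
ON SCHEDULE EVERY 30 MINUTE
STARTS CURRENT_TIMESTAMP
DO
CALL spStepOne();

CREATE EVENT step_two_event
ON SCHEDULE EVERY 30 MINUTE
STARTS CURRENT_TIMESTAMP + INTERVAL 10 SECOND
DO
CALL spStepTwo();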
I am planning to split the rows returned by a procedure and load them into a table in batches, looping until all the rows have been loaded.
I am unsure how to do it.
My Process: Table Input (calling a procedure - returns 900 million records) -> Data conversion -> Insert / update step (incremental loading to a target table).
Now I have to retrieve a subset of records (say 1 million at a time) from the procedure, based on some field in the procedure, then load them into the table. This has to be repeated until all the rows from the procedure have been processed.
Kindly help me on this.
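One way to express the "retrieve a range of rows based on some field" idea is to pass the range to the procedure as parameters, so each call returns one slice. The table and column names below are made up for illustration:

DELIMITER $$
CREATE PROCEDURE spGetChunk(IN p_from_id BIGINT, IN p_to_id BIGINT)
BEGIN
    -- return only the rows whose key falls inside the requested range
    SELECT *
    FROM source_table
    WHERE id >= p_from_id
      AND id < p_to_id;
END$$
DELIMITER ;

-- the job then loops: CALL spGetChunk(1, 1000001); CALL spGetChunk(1000001, 2000001); and so on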
I don't really see a problem with this, other than the time it takes to process that many rows. PDI (Spoon/Kettle) works with streams, not "data sets" like in SQL, and rows are processed as soon as they are received. Because of this, PDI will likely never have to deal with all 900 million rows at once, and you will not have to wait for all of them to be returned from SQL before it starts processing.
The Table output step has a Commit size value to control how many records are sent to your target table in one transaction. The trick is to balance the amount of time it takes to start new connections with the time it takes to process a large number of rows in one transaction. I run values from 200 to 5000, depending on my needs and the system's ability, but you may be able to go higher than that.
It sounds like your bigger problem will be returning that many rows from a Stored Procedure. Using an SP instead of a SELECT or VIEW means you will have to find ways to keep memory pressure low.
I have a few multi-million row tables, and I create TEMP tables (not table variables) to store data while processing, using a single SELECT * FROM temp..table at the end of the SP. This streams the data from the server as expected and uses the minimum amount of memory.
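In MySQL terms, that same pattern might look roughly like this; the table and column names are invented for illustration:

DELIMITER $$
CREATE PROCEDURE spBuildReport()
BEGIN
    -- scratch table that exists only for this connection
    CREATE TEMPORARY TABLE tmp_report (
        mission_id INT,
        total DECIMAL(18,2)
    );

    -- heavy processing writes into the temporary table in stages
    INSERT INTO tmp_report (mission_id, total)
    SELECT mission_id, SUM(amount)
    FROM big_source_table
    GROUP BY mission_id;

    -- single result set streamed back to the caller at the end
    SELECT * FROM tmp_report;

    DROP TEMPORARY TABLE tmp_report;
END$$
DELIMITER ;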
I have an application which does around 20,000 DATA-OPERATIONS per hour.
A DATA-OPERATION has 30 parameters overall (across all 10 queries). Some are text, some are numeric. Some text parameters are as long as 10,000 characters.
Every DATA-OPERATION does the following:
A single DATA-OPERATION inserts into / updates multiple tables (around 10) in the database.
For every DATA-OPERATION, I take one connection,
Then I use a new prepared statement for each query in the DATA-OPERATION.
The prepared statement is closed every time a query is executed.
The connection is reused for all 10 prepared statements.
The connection is closed when the DATA-OPERATION is completed.
Now, to perform this DATA-OPERATION, I need:
10 queries, 10 prepared statements (create, execute, close), 10 network calls.
1 connection (open, close).
I personally think that if I create a stored procedure from the above 10 queries, it will be a better choice.
With an SP, a DATA-OPERATION will need:
1 connection, 1 callable statement, 1 network call.
I suggested this, but I was told that:
It might be more time-consuming than plain SQL queries.
It will put additional load on the DB server.
I still think the SP is the better choice. Please let me know your thoughts.
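To make the comparison concrete, a stored procedure wrapping the 10 statements might look roughly like this. The table, column and parameter names are invented, and only two of the ten statements are shown:

DELIMITER $$
CREATE PROCEDURE spDataOperation(
    IN p_account_id BIGINT,
    IN p_payload TEXT,          -- long text parameters (up to ~10,000 chars) fit in TEXT
    IN p_amount DECIMAL(18,2)
    -- ... the remaining parameters for the other statements would go here
)
BEGIN
    -- statement 1 of 10
    INSERT INTO orders (account_id, amount) VALUES (p_account_id, p_amount);

    -- statement 2 of 10
    UPDATE accounts
    SET last_payload = p_payload
    WHERE account_id = p_account_id;

    -- ... the other eight statements would follow the same pattern
END$$
DELIMITER ;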
Benchmarking is an option; I will have to look for tools that can help with this.
Also, can anyone suggest existing benchmarks for this kind of problem?
Any recommendation depends partially on where the script executing the queries resides. If the script executing the queries is on the same server as the MySQL instance then you won't see that much of a difference, but there will still be a small overhead in executing 200k queries compared to 1 stored procedure.
Either way, my advice would be to make it a stored procedure. You would probably need a couple of procedures:
A procedure that combines the 10 statements you run per operation into 1 call
A procedure that can iterate over a table of arguments using a CURSOR to feed into procedure 1
Your process would be:
Populate a table with arguments which would be fed into procedure 1 by procedure 2
Execute procedure 2
This would yield performance benefits as there is no need to connect to the MySQL server 20000*10 times. While the overhead per-request may be small, milliseconds add up. Even if the saving is 0.1ms per request, that's still 20 seconds saved.
Another option could be to modify your requests to perform all 20k data operations at once (if viable) by adjusting your 10 queries to pull data from the database table mentioned above. The key to all of this is to get the arguments loaded in a single batch insert, and then use statements on the MySQL server within a procedure to process them without further round trips (a rough sketch of the cursor approach is below).
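A rough sketch of procedure 2, assuming the arguments have been batch-inserted into a table (here called data_op_args) and that procedure 1 looks something like the spDataOperation sketch shown earlier:

DELIMITER $$
CREATE PROCEDURE spProcessAllOperations()
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE v_account_id BIGINT;
    DECLARE v_payload TEXT;
    DECLARE v_amount DECIMAL(18,2);
    DECLARE cur CURSOR FOR
        SELECT account_id, payload, amount FROM data_op_args;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO v_account_id, v_payload, v_amount;
        IF done = 1 THEN
            LEAVE read_loop;
        END IF;
        -- feed each row of arguments into procedure 1
        CALL spDataOperation(v_account_id, v_payload, v_amount);
    END LOOP;
    CLOSE cur;
END$$
DELIMITER ;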
I'm currently building a system that does running computations, and every 5 seconds inserts or updates information based on those computations into a few rows in MySQL. I'm now working on running this system on a few different servers at once, with a few agents that each do similar processing and then write to the same set of rows. I already randomize the order in which each agent writes its set of rows, but there are still a lot of deadlocks happening. What's the best/fastest way to get through those deadlocks? Should I just rerun the query each time one happens, use row locks, or something else entirely?
I suggest you try something that won't require more than one client to update your 'few rows.'
For example, you could have each agent that produces results do an INSERT into a staging table that uses the MEMORY storage engine.
Then, every five seconds you can run a MySQL event (a scheduled routine that runs inside the server) that loops through all the rows in that table, posting their results to your 'few rows' and then deleting them. If it's important for the rows in your staging table to be processed in order, you can use an AUTO_INCREMENT id field. But it might not be important for them to be in order.
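A sketch of that staging arrangement, with invented table and column names; note the event only drains rows that existed when it started, so results arriving mid-run are picked up on the next tick:

-- agents insert their results here instead of updating the shared rows directly
CREATE TABLE result_staging (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,  -- preserves arrival order if that matters
    target_id INT,
    delta DECIMAL(18,2)
) ENGINE = MEMORY;

DELIMITER $$
CREATE EVENT drain_staging
ON SCHEDULE EVERY 5 SECOND
DO
BEGIN
    DECLARE v_max_id BIGINT;
    SELECT COALESCE(MAX(id), 0) INTO v_max_id FROM result_staging;

    -- only this event ever touches the shared rows, so the agents cannot deadlock on them
    UPDATE shared_results s
    JOIN (
        SELECT target_id, SUM(delta) AS total
        FROM result_staging
        WHERE id <= v_max_id
        GROUP BY target_id
    ) r ON r.target_id = s.id
    SET s.value = s.value + r.total;

    DELETE FROM result_staging WHERE id <= v_max_id;
END$$
DELIMITER ;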
If you want to get fancier and more scalable than that, you'll need a queue management system like Apache ActiveMQ.
I have a resource table for a game I'm trying to code, and each resource has a fixed income rate over time. But I can't find any description of how to increase the stored values of a MySQL table over time automatically.
I'm using NetBeans to connect the program to the database, but I want the values to be updated on the server without needing to run the program. Otherwise I would just have recorded the time and added the value for the elapsed time difference.
Is there a way of doing this?
Table:
Player ID: 1
Gold: 100
Wood: 100
Increase rate: 50 per hour
One way of doing this is to use cron jobs to schedule a script that runs periodically.
Otherwise you can simply calculate the time elapsed from the beginning and (without updating your DB) compute the values based on that time whenever your program is running.
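A sketch of that compute-on-read approach; it assumes a timestamp column (here called last_updated) recording when the stored values were last valid, which the table shown above does not include, so treat the column names as hypothetical:

-- current values without ever writing to the table
SELECT gold + increase_rate * TIMESTAMPDIFF(HOUR, last_updated, NOW()) AS current_gold,
       wood + increase_rate * TIMESTAMPDIFF(HOUR, last_updated, NOW()) AS current_wood
FROM resources
WHERE player_id = 1;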
You can define a cron job on the server that runs a query to update the values.
Yes, you can, by adding a scheduled event like the one sketched below. However, if you update the value in the database, the value/variable stored by the program will not be updated in real time: you have to query the database for the updated value.
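A minimal sketch of such an event, assuming a resources table whose columns correspond to the fields listed above (the names are hypothetical); the event scheduler must be enabled for it to run:

SET GLOBAL event_scheduler = ON;

CREATE EVENT grow_resources
ON SCHEDULE EVERY 1 HOUR
DO
    UPDATE resources
    SET gold = gold + increase_rate,
        wood = wood + increase_rate;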