I'm on a shared hosting platform and would like to throttle the queries in my app: if the total execution time exceeds a certain amount over a variable time period, the app should cool off and then resume later on.
To do this I would like to find out how long each of my queries takes in real time and manage it within the app, not by profiling it externally.
I've seen examples in PHP where the time is recorded before and after the query (even phpMyAdmin does this), but it isn't obvious how that approach would carry over to Node.js or anything else that runs the query asynchronously.
So the question is: how would I go about getting the actual execution time of a query in Node.js?
For reference I am using this module to query the MySQL db: https://github.com/felixge/node-mysql/
One option is just to timestamp before the query and timestamp after the query and check the difference like this:
// get a timestamp before running the query
var pre_query = new Date().getTime();

// run the query
connection.query(query, function (err, rows, fields) {
  // get a timestamp after the query completes
  var post_query = new Date().getTime();

  // calculate the duration in seconds
  var duration = (post_query - pre_query) / 1000;
});
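Note that the callback closes over pre_query, so this works even though the query runs asynchronously. If you want to time every query without repeating that boilerplate, you could wrap the call once; a minimal sketch (timedQuery is a hypothetical helper, using process.hrtime for sub-millisecond resolution):

// Sketch: wrap connection.query so every call reports its duration.
// timedQuery is a hypothetical helper, not part of node-mysql.
function timedQuery(connection, sql, callback) {
  var start = process.hrtime(); // high-resolution start time
  connection.query(sql, function (err, rows, fields) {
    var diff = process.hrtime(start); // [seconds, nanoseconds] since start
    var durationMs = diff[0] * 1000 + diff[1] / 1e6;
    callback(err, rows, fields, durationMs); // duration passed as an extra argument
  });
}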
If you are calling the query through a REST API, you can also use Postman: run the request there and check the response time Postman displays next to the status code.
console.time('100-elements');
for (let i = 0; i < 100; i++) {}
console.timeEnd('100-elements');
// prints: 100-elements: <elapsed time in ms>
The label must be unique, and the same label must be passed to both console.time and console.timeEnd.
This works well in Node.js; see the console.time documentation in the Node.js docs.
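Applied to the query example from the first answer, that could look like the following sketch (the 'query' label is arbitrary, and would need to be unique per query if several run concurrently):

console.time('query');
connection.query(query, function (err, rows, fields) {
  console.timeEnd('query'); // prints: query: <elapsed time in ms>
});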
I need to perform some actions in my app based on the user's input. For example, if the user selects 17:00-18:00, the timeframe is updated in the MySQL db and something should happen during that specific time.
Could someone tell me how I can achieve that?
You can use cron jobs for this.
For example, if you need to perform actions based on time values in your DB, you can write code that gets the current date, selects from MySQL all rows whose date range contains the current date, and processes them.
Then just set up a cron job that calls your code every minute, every hour, or whatever you need.
You can use the cron package for that.
Here is an example:
const { CronJob } = require('cron');

// the function to run on every tick
const yourFunction = require('./yourFile.js');

// fire at second 0 of every minute; the trailing `true` starts the job on creation
const cron = new CronJob('0 * * * * *', yourFunction, null, true);
cron.start();
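Inside yourFile.js you would run the query described above; a minimal sketch, where the table and column names (timeframes, start_time, end_time) and the connection settings are assumptions:

// yourFile.js — sketch: process rows whose timeframe contains the current time.
// Table and column names are assumptions; adjust them to your schema.
var mysql = require('mysql');
var pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'app' });

module.exports = function () {
  pool.query(
    'SELECT * FROM timeframes WHERE start_time <= NOW() AND end_time >= NOW()',
    function (err, rows) {
      if (err) return console.error(err);
      rows.forEach(function (row) {
        // perform whatever action belongs to this timeframe
      });
    }
  );
};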
Or you can use Linux's crontab; its man page explains how to use it.
The frequency is configured with a standard cron expression; an online cron expression editor makes it easy to experiment with schedules.
I am getting the following error in my Google sheet:
Service invoked too many times for one day: urlfetch
I know for a fact I am not making 100k calls, but I do have quite a few custom functions in my sheet. I tried to make a new sheet and copy/paste the script into that one, but I still get the same error. I then switched my account, made a new sheet, added the code, and I still got the error.
Is this just because I am on the same computer? Is Google smart enough to realize I am the same person trying to do it? I highly doubt that, so I am wondering why it would be throwing this error, even after switching accounts and making a new sheet.
In addition to that, is there any way to make sure I don't go over the limit in the future? This error sets me back at least a day with what I was working on. I do plan to write a script to just copy/paste the imported HTML as values into another sheet, but until I get that working, I need a temporary fix.
Sample code:
function tbaTeamsAtEvent(eventcode) {
  // ImportJSON and auth_key are defined elsewhere in the script project
  return ImportJSON("https://www.thebluealliance.com/api/v3/event/" + eventcode + "/teams?X-TBA-Auth-Key=" + auth_key);
}
function ImportJSONForTeamEvents(url, query, options) {
  var includeFunc = includeXPath_;
  var transformFunc = defaultTransform_;
  var jsondata = UrlFetchApp.fetch(url);
  var object = JSON.parse(jsondata.getContentText());
  var newObject = [];
  for (var i = 0; i < object.length; i++) {
    var teamObject = {};
    teamObject.playoff = object[i].alliances;
    newObject.push(teamObject);
  }
  return parseJSONObject_(object, query, "", includeFunc, transformFunc);
}
That is one "set" of code that is used for a specific function. I am pulling two different functions multiple times. I have about 600 of one function, and 4 of another. That would only be just over a thousand calls if all were run simultaneously.
I should note that I also have another sheet in my Drive that automatically updates every hour with a UrlFetch call. I do not believe this should matter here, though, given its very low fetch rate.
I had a similar issue even though my functions only made two fetch calls each, once per data row. The call count grew fast: because my data kept changing, every recalculation also re-ran those functions, which VERY quickly hit the max.
My solution? I started using the Cache Service to temporarily store the results of the fetch calls, even if only for a few seconds, so that all the cells triggered by the same recalculation event could be filled from a single call. This simple addition saved me thousands of fetch calls each time I accessed my sheets.
For reference:
https://developers.google.com/apps-script/reference/cache?hl=en
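As a sketch of the idea (cachedFetch is a hypothetical helper name; note that cache keys are limited to 250 characters, so very long URLs would need shortening or hashing):

// Sketch: serve repeated fetches from the cache so one recalculation
// event triggers only a single real UrlFetch call per URL.
function cachedFetch(url) {
  var cache = CacheService.getScriptCache();
  var cached = cache.get(url);
  if (cached != null) {
    return cached; // served from cache: no UrlFetch quota consumed
  }
  var response = UrlFetchApp.fetch(url).getContentText();
  cache.put(url, response, 60); // keep for 60 seconds (assumed TTL)
  return response;
}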
I am building an application in Google App Maker that takes in a user-input Excel CSV file with 3 columns and 370,573 rows, so in total 1,111,719 data values. I am trying to efficiently input this data into a MySQL database by sending Batch Requests. However, I am unsure of how to properly optimize this process to minimize the amount of time it takes.
This is how I am currently completing the process:
var file = DriveApp.getFileById(fileID);
var data = Utilities.parseCsv(file.getBlob().getDataAsString());
var stmt = conn.prepareStatement('INSERT INTO report '
    + '(createdDate, accountFullID, lsid) values (?, ?, ?)');
for (var i = 1; i < data.length; i++) {
  stmt.setString(1, data[i][0]);
  stmt.setString(2, data[i][1]);
  stmt.setString(3, data[i][2]);
  stmt.addBatch();
}
var batch = stmt.executeBatch();
conn.commit();
conn.close();
When testing my code, it took upwards of 3 minutes to complete when I set the for-loop to iterate until variable i was less than 500. When I set the limit to a small number like 5, it took several seconds. When I set it to data.length (as in the code above), it never completed and timed out with a deadlock exception. How should I edit my code to execute the batches more efficiently and reduce the total time it takes to insert all of the data entries from the CSV file, not just a small portion of the spreadsheet?
If this is a one-time import, I would use App Maker's native import function. Create a data model that matches the structure of your CSV document, then open the CSV in a Google Sheet. Make sure the formatting matches the data model and that the fields and column names match exactly, then use the import function in the top left of the App Maker screen. Select the Google Sheet and the data model you created, then click import. This should get your data loaded; it may still take some time, as 1M items is a lot. I see this question is 10 months old, so this feature might not have been available back then.
https://developers.google.com/appmaker/models/import-export
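If the import has to stay in code, one common alternative is to flush the batch in chunks so no single executeBatch grows unbounded; a sketch based on the question's code (the chunk size of 1000 is an assumption to tune against your timeout):

var BATCH_SIZE = 1000; // assumed chunk size; tune for your timeout/quota
for (var i = 1; i < data.length; i++) {
  stmt.setString(1, data[i][0]);
  stmt.setString(2, data[i][1]);
  stmt.setString(3, data[i][2]);
  stmt.addBatch();
  if (i % BATCH_SIZE === 0) {
    stmt.executeBatch(); // flush this chunk
    conn.commit();
  }
}
stmt.executeBatch(); // flush the final partial chunk
conn.commit();
conn.close();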
I have a 'Tournament' SQL table that contains start_time and end_time for my tournaments. I also have another table with playerId and tournamentId columns, so I can tell which players played in which tournament.
What I'm trying to do is run a cron task that checks my tournament table to see whether a tournament has ended, so it can fetch the players' results from an external API. The problem is that the external API is rate limited and I have to send my requests every 1.5 seconds.
What I tried is a cron job that runs every 10 seconds to check my tournament table (I couldn't come up with any solution other than polling the db):
cron.job("*/10 * * * * *", function(){
result = Query tournament table Where EndTime=<Now && EndTime+10second>=Now
if(result is not empty)
{
cron.job("*/1.5 * * * * *",function(){
send API requests for that userId
parse & store result in db
});
}
});
I don't feel right about this and it seems buggy to me, because the inner cron job might take longer than 10 seconds. Is there a better way to do this? I'm using ExpressJS & MySQL.
The problem you are facing can be solved by scheduling one-off jobs. There is a very useful module, node-schedule, on npm that can help in this scenario. What you have to do is schedule a job to fire at the tournament's deadline; that job will hit the third-party API and check for results. You can schedule a job like this:
var schedule = require('node-schedule');

// "TimeToFire" can be a Date object, e.g. the tournament's end_time
schedule.scheduleJob("jobName", "TimeToFire", function () {
  // Put your API hit here.
  // Finally, remove the schedule:
  if ("jobName" in schedule.scheduledJobs) {
    schedule.scheduledJobs["jobName"].cancel();
    delete schedule.scheduledJobs["jobName"];
  }
});
Make sure you also store all scheduled jobs in the database, since a server crash will invalidate every in-memory schedule and you will have to recreate them on restart.
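For the 1.5-second rate limit itself, the scheduled job can simply space its API calls out; a minimal sketch, where getPlayers, fetchResult, and storeResult are hypothetical helpers standing in for your DB and API code:

// Sketch: space external API calls ~1.5 s apart to respect the rate limit.
// getPlayers, fetchResult, and storeResult are hypothetical helpers.
function sleep(ms) {
  return new Promise(function (resolve) { setTimeout(resolve, ms); });
}

async function processTournament(tournamentId) {
  var players = await getPlayers(tournamentId); // e.g. SELECT playerId FROM ...
  for (var i = 0; i < players.length; i++) {
    var result = await fetchResult(players[i].playerId); // external API call
    await storeResult(players[i].playerId, result);      // write back to MySQL
    await sleep(1500); // wait 1.5 s between requests
  }
}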
I have some tables in a MySQL database that represent records from a sensor. One of the features of the system I'm developing is to display these records to the web user, so I used an ADO.NET Entity Data Model to create an ORM, used LINQ to SQL to get the data from the database, and stored it in a ViewModel I designed so I can display it using the MVCContrib Grid Helper:
public IQueryable<TrendSignalRecord> GetTrends()
{
    var dataContext = new SmgerEntities();
    var trendSignalRecords = from e in dataContext.TrendSignalRecords
                             select e;
    return trendSignalRecords;
}

public IQueryable<TrendRecordViewModel> GetTrendsProjected()
{
    var projectedTrendRecords = from t in GetTrends()
                                select new TrendRecordViewModel
                                {
                                    TrendID = t.ID,
                                    TrendName = t.TrendSignalSetting.Name,
                                    GeneratingUnitID = t.TrendSignalSetting.TrendSetting.GeneratingUnit_ID,
                                    //{...}
                                    Unit = t.TrendSignalSetting.Unit
                                };
    return projectedTrendRecords;
}
I call the GetTrendsProjected method and then use LINQ to SQL to select only the records I want. It works fine in my development scenario, but when I test it in a real scenario, where the number of records is much greater (around a million), it stops working.
I put in some debug messages to test it, and everything works fine until it reaches the return View() statement, where it simply stops, throwing a MySQLException: Timeout expired. That left me wondering whether the data I send to the page is retrieved lazily by the page itself (it only queries the database for the displayed items when the page needs them, or something like that).
All of my other pages use the same set of tools: MVCContrib Grid Helper, ADO.NET, Linq to SQL, MySQL, and everything else works alright.
With millions of records you absolutely should paginate your data set before executing the query. This can be done with the .Skip and .Take extension methods, and they should be applied before the query runs against the database, so the paging happens in SQL rather than in memory.
Trying to fetch millions of records from a database without pagination will very likely cause a timeout at best.
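For example, a paged version of the projection above might look like this sketch (page and pageSize are assumed to come from the request):

public IQueryable<TrendRecordViewModel> GetTrendsPage(int page, int pageSize)
{
    return GetTrendsProjected()
        .OrderBy(t => t.TrendID)      // paging requires a stable order
        .Skip((page - 1) * pageSize)  // both translate to SQL, so only one
        .Take(pageSize);              // page of rows crosses the wire
}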
Well, assuming the information in this blog is correct, the .AsPagination method requires you to sort the data by a particular column. It's possible that doing an OrderBy on a table with millions of records is simply a time-consuming operation and times out.