Is there a limitation on Web3.js API calls? - ethereum

I am trying to traverse over blocks and get their transaction information like this:
var endOfLoop = app.web3.eth.blockNumber;
var latestBlockNumberInDb = 1;
for (var i = latestBlockNumberInDb; i <= endOfLoop; i++) {
    var block = app.web3.eth.getBlock(i, true);
    console.log(i);
    if (block.transactions.length) {
        /*TODO*/
    } else {
        /*TODO*/
    }
}
The problem is that I correctly fetch around 525-545 blocks, and then the app.web3.eth.getBlock(i) call blocks execution and the loop gets stuck there. It is not a problem with the specific block being fetched: when I start the loop from block 1000, it correctly fetches blocks 1000-1540 and gets stuck at 1540.
Also, app.web3.eth.blockNumber is around 3 million, so the problem is not with the loop's termination condition.
The interesting thing is that the iteration count is not stable; it always varies between roughly 525 and 545.
I tried to put a delay between requests but that also didn't work.
Also when I double the API call as below:
var block = app.web3.eth.getBlock(i, true);
block = app.web3.eth.getBlock(i, true);
The loop then iterates only around 270 times.
Is there a limit on API calls? If so, how can it be worked around?

Some functions interact with the blockchain, and these require a node.
Unless you're running your own node and have configured web3 to use it, web3 may be using infura.io by default.
Infura is a paid service: it allows a limited number of free API calls to query the blockchain before you have to register, get a key, and upgrade your plan if needed.
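If you do run your own node (geth, parity, etc.), a minimal sketch of pointing web3 at it could look like this, assuming web3 0.x and the node's default RPC port (both assumptions, adjust to your setup):
var Web3 = require('web3');
// talk to a locally running node instead of a hosted provider
var web3 = new Web3(new Web3.providers.HttpProvider('http://localhost:8545'));
// subsequent calls such as web3.eth.getBlock(...) now go to your own node,
// so the only throttling is your node's own throughput
With your own node there is no third-party request quota to hit.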

Related

Why is it every time we deploy our custom function users experience #ERROR! for some time?

Each time we deploy our Google Apps Script project, our custom function starts returning #ERROR!. This happens regardless of whether a code change was made.
See photo below:
NOTE: "Internal error executing the custom function." is not one of our error strings.
This is very strange because the function does not seem to be executing. I say this because #ERROR! is returned immediately, with 0 processing time. See the failures in the photo below:
This issue resolves itself after some seemingly arbitrary amount of time, after which the custom function runs normally again.
This has become a very large problem because we have uncontrollable downtime after each deployment, and it does not seem to be an issue with our code, considering it happens every time we deploy, regardless of whether the code actually changed.
This Google document states: "A custom function call must return within 30 seconds. If it does not, the cell will display an error: Internal error executing the custom function." Our custom function does not take 30s to run. We actually can't even find an instance where our function runs longer than 5s.
NOTE: the only thing that fails is our custom function, our task pane that interacts with the Google Sheet remains functional.
According to the Optimization section of Custom Functions:
Each time a custom function is used in a spreadsheet, Google Sheets makes a separate call to the Apps Script server.
Having multiple custom functions means you are making multiple simultaneous calls to the server, which can make the process slow or sometimes produce an error.
Solution:
The workaround is to use fewer custom function calls. A custom function can accept a range as a parameter, which is passed to Apps Script as a two-dimensional array. Use that array to calculate the values and return a two-dimensional array that overflows into the appropriate cells in your spreadsheet.
Example:
Code:
function multiplicationTable(row, col) {
    var table = [];
    var rowArr = row.flat();
    var colArr = col.flat();
    for (var i = 0; i < rowArr.length; i++) {
        table.push([]);
        // iterate over the flattened column values (colArr), not the raw 2-D range
        for (var j = 0; j < colArr.length; j++) {
            table[i].push(rowArr[i] * colArr[j]);
        }
    }
    return table;
}
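You would then call the function once over whole ranges from the sheet instead of once per cell; for example (the range addresses here are just an illustration):
=multiplicationTable(A2:A11, B1:K1)
The single call returns the full table, which overflows into the neighbouring cells, so Sheets makes only one round trip to the Apps Script server.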

In Google Apps Script, can a timeout be caught with try/catch or does it occur at a higher level?

So I have some code running off a button click. There are a number of functions, called one after another, and some of them make calls to REST services. Occasionally the whole shebang runs over 6 minutes, thus hitting the timeout barrier, which, unlike the sound barrier, cannot be broken.
Is the timeout trappable? Can I wrap the most likely offender in a try/catch and have the catch block evaluate at the appropriate moment? Or am I stuck with trying to make things just so incredibly performant that I never hit the wall?
Given that you're calling multiple functions, I'd recommend creating a trigger with ScriptApp.newTrigger instead, driven by a helper function that checks whether the script has exceeded a predetermined execution time (if the maximum is 6 minutes, you could set the threshold to 5 to be safe).
Referring to something like this -
// var today = new Date();
// 'today' is declared in the original script
function isTimeUp(today) {
    var now = new Date();
    return now.getTime() - today.getTime() > 30000;
    // 30000 ms = 30 seconds; this is the threshold limit
    // you are free to set up your own threshold
}
I've written more on this topic here, where I've used a .timeBased() trigger; however, you may use a different kind of trigger altogether.
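For completeness, here is a minimal sketch of how isTimeUp could be combined with such a trigger to resume long work in a later run (the work-item and progress helpers are hypothetical placeholders, not part of the original script):
function longRunningTask() {
    var start = new Date();
    var items = getWorkItems();      // hypothetical: fetch the remaining work
    for (var i = 0; i < items.length; i++) {
        processItem(items[i]);       // hypothetical: one unit of work
        if (isTimeUp(start)) {
            saveProgress(i + 1);     // hypothetical: persist where we stopped
            // schedule this same function to run again in one minute
            ScriptApp.newTrigger('longRunningTask')
                .timeBased()
                .after(60 * 1000)
                .create();
            return;
        }
    }
}
The hard execution-time limit itself generally cannot be caught with try/catch from inside the script, so splitting the work across triggered runs like this is the usual way around the 6-minute wall.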

Service Worker slow response times

In Google Chrome on Windows and Android (I haven't tested other browsers yet), the response time of a service worker increases linearly with the number of items stored in that specific cache storage when you use the Cache.match() function with the following option:
ignoreSearch = true
Dividing items across multiple caches helps, but it is not always convenient to do so, and even a small increase in the number of stored items makes a large difference in response times. According to my measurements, the response time roughly doubles for every tenfold increase in the number of items in the cache.
The official answer to my question in the Chromium issue tracker reveals that the problem is a known performance issue with the Cache Storage implementation in Chrome, which only occurs when you use Cache.match() with the ignoreSearch parameter set to true.
As you might know, ignoreSearch is used to disregard the query parameters in a URL while matching the request against cached responses. Quoting MDN:
...whether to ignore the query string in the url. For example, if set to
true the ?value=bar part of http://example.com/?value=bar would be ignored
when performing a match.
Since it is not really convenient to stop matching on query parameters, I have come up with the following workaround, and I am posting it here in the hope that it will save someone some time:
// if the request has query parameters, `hasQuery` will be set to `true`
var hasQuery = event.request.url.indexOf('?') != -1;
event.respondWith(
    caches.match(event.request, {
        // ignore the query section of the URL based on our variable
        ignoreSearch: hasQuery,
    })
    .then(function(response) {
        // handle the response, e.g. fall back to the network on a cache miss
        return response || fetch(event.request);
    })
);
This works great because it handles every request with a query parameter correctly while handling others still at lightning speed. And you do not have to change anything else in your application.
According to the guy in that bug report, the issue was tied to the number of items in a cache. I made a solution and took it to the extreme, giving each resource its own cache:
var cachedUrls = [
    /* CACHE INJECT FROM GULP */
];
//update the cache
//don't worry StackOverflow, I call this only when the site tells the SW to update
function fetchCache() {
    return Promise.all(
        //for all urls
        cachedUrls.map(function(url) {
            //add a cache dedicated to this single resource
            return caches.open('resource:' + url).then(function(cache) {
                //add the url
                return cache.add(url);
            });
        })
    );
}
In the project we have here, there are static resources served with high cache expirations set, and we use query parameters (repository revision numbers, injected into the html) only as a way to manage the [browser] cache.
It didn't really work to use your solution to selectively use ignoreSearch, since we'd have to use it for all static resources anyway so that we could get cache hits!
However, not only did I dislike this hack, but it still performed very slowly.
Okay, so, given that it was only a specific set of resources I needed ignoreSearch for, I decided to take a different route:
just remove the query parameters from the request URLs manually, instead of relying on ignoreSearch.
self.addEventListener('fetch', function(event) {
    //find urls that only have numbers as parameters
    //yours will obviously differ, my queries to ignore were just repo revisions
    var shaved = event.request.url.match(/^([^?]*)[?]\d+$/);
    //extract the url without the query
    shaved = shaved && shaved[1];
    event.respondWith(
        //try to get the url from the cache.
        //if this is a resource, use the shaved url,
        //otherwise use the original request
        //(I assume it [can] contain post-data and stuff)
        caches.match(shaved || event.request).then(function(response) {
            //respond
            return response || fetch(event.request);
        })
    );
});
I had the same issue, and the previous approaches caused some errors with requests that should have ignoreSearch:false. An easy approach that worked for me was to apply ignoreSearch:true only to certain requests, by checking event.request.url.includes('A') && ... See the example below:
self.addEventListener("fetch", function(event) {
var ignore
if(event.request.url.includes('A') && event.request.url.includes('B') && event.request.url.includes('C')){
ignore = true
}else{
ignore = false
}
event.respondWith(
caches.match(event.request,{
ignoreSearch:ignore,
})
.then(function(cached) {
...
}

Exchange Webservices Managed API - using ItemView.Offset for FindItems() hits performance of LoadPropertiesForItems method

I'm doing a little research on possible applications of EWS in our existing project, which makes heavy use of MAPI, and I have found something disturbing about the performance of the LoadPropertiesForItems() method.
Consider this scenario:
we have 10,000 (ten thousand) messages in the Inbox folder
we want to get approximately 30 properties of every message to see if they satisfy our conditions for further processing
messages are retrieved from the server in batches of 100
So the code looks like this:
ItemView itemsView = new ItemView(100);
PropertySet properties = new PropertySet();
properties.Add(EmailMessageSchema.From);
/*
add all necessary properties...
*/
properties.Add(EmailMessageSchema.Sensitivity);
FindItemsResults<Item> findResults;
List<EmailMessage> list = new List<EmailMessage>();
do
{
    findResults = folder.FindItems(itemsView);
    _service.LoadPropertiesForItems(findResults, properties);
    foreach (Item it in findResults)
    {
        // ... do something with every item
    }
    if (findResults.NextPageOffset.HasValue)
    {
        itemsView.Offset = findResults.NextPageOffset.Value;
    }
} while (findResults.MoreAvailable);
The problem is that every increase of the itemsView.Offset property makes the LoadPropertiesForItems method take longer to execute. For the first couple of iterations it is not very noticeable, but by around the 30th iteration the call time increases from under 1 second to 8 or more seconds, and memory allocation hits physical limits, causing an out-of-memory exception.
I'm pretty sure that my problems are "offset related" because I changed the code a little, to this:
itemsView = new ItemView(100, offset, OffsetBasePoint.Beginning);
// ...rest of loop
if (findResults.NextPageOffset.HasValue)
{
    offset = findResults.NextPageOffset.Value;
}
and I manipulated the offset variable (declared outside the loop) so that its value was 4500 at the start, and then, in debug mode after the first iteration, I changed its value to 100. As I suspected, the first call to LoadPropertiesForItems took a very long time to execute and the second call (with offset = 100) was very quick.
Can anyone confirm this and perhaps propose a solution?
Of course I can do my work without using an offset, but why should I? :)
Changing the offset is expensive because the server has to iterate through the items from the beginning -- it isn't possible to have an ordinal index for messages because new messages can be inserted into the view in any order (think of a view over name or subject).
Paging through all the items once is the best approach.

Best way to reuse an element from an Array?

I have an Array of Character objects (Character extends Sprite).
public static var charlist:Array;
When a Character dies, I set a flag to indicate it.
char.death = true;
When I spawn a new player, I check this flag in order to reuse the slot. This is where the problem arises.
What is the best way to do that? Actually I have this approach:
char = new Char();
/* ... */
for (var i:Number = 0; i < Char.charlist.length; i++) {
    if (Char.charlist[i].death) {
        Char.charlist[i] = char;
        return;
    }
}
But the problem is that I come from C++, and I think looking the element up by index on every access is wasteful.
I would do this instead, but it doesn't work in AS3, since I can't take a reference to the item:
char = new Char();
/* ... */
for (var i:Number = 0; i < Char.charlist.length; i++) {
    var char_it:Char = Char.charlist[i];
    if (char_it.death) {
        char_it = char;
        return;
    }
}
Just a note: charlist is a static member of class Char.
Do you have any ideas or better approaches?
Thanks!
The "optimized" version doesn't optimize much, I think.
YOur code does not perform 2 accesses in every iteration. It does just when you happen to find a dead Char; and after that you return. So, no big deal, I think.
Personally, I don't think reusing array slots is going to make a big difference either. And, maybe it could be degrade (but the worst part is that your code could be simpler if you avoid it). Suppose you have 100 chars, 60 of which are dead, and you have a loop that runs every frame and performs some check / action on every live char. You'd be looping over 100 chars, when you could loop over 40 if you mantained a list of live objects and a separte list of dead objects, ready to reuse them. Also, the "dead list" could be handled as a stack, so no iteration is needed to get a char.
Anyway, I see 2 things you could easily optimize in your code:
1) retrieve the list length outside the loop:
Do this:
var len:int = Char.charlist.length;
for (var i:Number = 0; i < len; i++) {
Instead of this:
for (var i:Number = 0; i < Char.charlist.length; i++) {
The compiler won't optimize away the call to length (it has to follow the ECMAScript spec).
2) Avoid static access (for charlist). It's known to be considerably slower than instance access.
Edit:
Re-reading your question, I realize I misunderstood part of it. You're not trying to reuse Char objects, but rather array slots. Still, I don't think reusing array slots is worth it (arrays in ActionScript are not fixed-length); but something that could help performance is reusing the Char objects themselves. Resetting a Sprite is generally cheaper than creating a new one.
You could manage this with a pool of "dead" objects, from which you can take one and "bring it back to life" when you need one, instead of creating a new one (a minimal sketch follows below).
This post talks about object pools (the blog itself is a good source for ActionScript stuff, especially for performance and data structure topics; it is worth taking a look):
http://lab.polygonal.de/2008/06/18/using-object-pools/
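For reference, here is a minimal sketch of the pool idea, written in plain JavaScript since the pattern is language-agnostic (the Char stand-in and the reset/spawn/kill names are purely illustrative):
// illustrative stand-ins for the real Char class and its reset logic
function Char() {}
Char.prototype.reset = function () { /* restore default state here */ };

var pool = []; // dead objects, handled as a stack

function spawnChar() {
    // reuse a dead Char if one is available, otherwise create a new one
    var c = pool.length > 0 ? pool.pop() : new Char();
    c.reset();
    return c;
}

function killChar(c) {
    // instead of discarding the object, push it back into the pool for reuse
    pool.push(c);
}
Because the pool behaves like a stack, acquiring and releasing an object are both O(1), with no need to scan the whole character list.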
Why not use a list instead of an array, and simply remove dead ones from the list and add a new one when needed?
Is there a good reason to try to reuse the slot?
What are you trying to optimize?
Is there a reason you can't just splice out the desired character and push a new character onto the array?
var characters:Array = ["leo", "raph", "don", "mike"];
characters.splice(1, 1);
trace(characters);
characters.push("splinter");
trace(characters);
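For reference, the first trace prints leo,don,mike (splice(1, 1) removes "raph"), and the second prints leo,don,mike,splinter after the push.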
If you're looking for storage by reference, check out AS3's Dictionary object or even Flash 10's Vector (if storage is consistent by type; quite fast):
http://www.gskinner.com/blog/archives/2006/07/as3_dictionary.html