I have a complex Angular application which uses plain Cesium (no wrapper library).
The problem is that at startup I have a lot of warnings like this:
Violation ‘requestAnimationFrame’ handler took 742ms.
Violation ‘load’ handler took 80ms.
I tried to use setTimeout for the creation of the Viewer object, but it just makes the warnings occur later.
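Roughly what I tried (a simplified sketch; "cesiumContainer" is just my container element's id):

import { Viewer } from "cesium";

// Deferring the heavy Viewer construction only postpones it, it doesn't make it any shorter.
setTimeout(() => {
  const viewer = new Viewer("cesiumContainer");
}, 0);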
What do you recommend I do?
Thanks in advance
I had the same problem and found this answer by Albertos on https://forum.playcanvas.com/t/solved-chrome-warnings/12558/2.
I think it will help you:
“Chrome violations” don’t represent errors in either Chrome or your own web app. They are instead warnings to help you improve your app. In this case, the “Long running JavaScript” message and the “took 83ms” runtime figure are alerting you that there’s probably an opportunity to speed up your script.
(“Violation” is not the best terminology; it’s used here to imply the script “violates” a pre-defined guideline, but “warning” or similar would be clearer. These messages first appeared in Chrome in early 2017 and should ideally have a “More info” prompt to elaborate on the meaning and give suggested actions to the developer. Hopefully those will be added in the future.)
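If the long handler is mostly your own startup work, one common way to act on that advice is to split the initialization into smaller chunks so that no single requestAnimationFrame or load handler runs for hundreds of milliseconds. A rough, non-Cesium-specific sketch (the step bodies are placeholders):

// Run heavy startup steps one per animation frame instead of all at once
// inside a single handler.
const startupSteps: Array<() => void> = [
  () => { /* create the Cesium Viewer */ },
  () => { /* configure terrain and imagery */ },
  () => { /* add entities and event handlers */ },
];

function runNextStep(): void {
  const step = startupSteps.shift();
  if (!step) {
    return; // all steps done
  }
  step();
  requestAnimationFrame(runNextStep); // yield back to the browser between steps
}

requestAnimationFrame(runNextStep);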
Related
I'd like to do this (search the JavaScript heap for a specific object) in Chrome Developer Tools, but I'd take anything at this point.
The question
If I know I am looking for a specific object, how can I search the entire JS hierarchy to find it?
The situation
I know from my time on irc://irc.freenode.net/bash that when people reduce their question to what they think they should do, they waste a lot of time.
I use Confluence Cloud and their WYSIWYG editor is terrible. Because it is TinyMCE, I can get a lot done by editing the DOM in the Chrome Developer Tools. I ought to be able to hack the JS objects and get even more done. But the first step is to find the TinyMCE.settings object.
If you've got a reference to the object's constructor, the queryObjects() API might be able to help you locate the object. You might need to set a breakpoint and pause the code at a point where you know that the object still exists in the heap.
Check what version of Chrome you're using at chrome://version. queryObjects() was introduced in Chrome 62, which should be hitting Stable right about now but you might not have it quite yet.
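For example, in the Console (a quick sketch; note that queryObjects() exists only in the DevTools Console, and the tinymce.Editor line is an assumption about how the page exposes the editor):

// Sanity check: queryObjects() takes a constructor and prints the reachable
// instances of that constructor on the heap.
class Widget {}
const w = new Widget();
queryObjects(Widget); // should list the Widget instance created above

// Then, assuming the Confluence page exposes the TinyMCE constructor globally:
queryObjects(tinymce.Editor); // inspect an instance to find its settings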
When I am debugging broken code, after a while the browser announces that the Flash plugin has crashed, and I can't continue debugging my code. Can I prevent the browser from killing Flash?
I am using Firefox.
Going to the debugger on a breakpoint makes the plugin "freeze". This is intentional, it's a breakpoint after all!
However, from the browser's perspective, the plugin seems to be stuck in some kind of infinite loop. The timeout value varies; my Firefox installation is set to 45 seconds.
To change the timeout value, enter about:config in the URL field and look for the setting dom.ipc.plugins.timeoutSecs. Increase it, or set it to -1 to disable the timeout altogether.
When the plugin crashes, it is in fact not because the browser is "killing" it; rather, the plugin terminates itself when a fatal error occurs. This is necessary to prevent the browser, or even your entire machine, from crashing - there is no way to tell what is going to happen after an error like that. And besides: after the first uncaught error, your program will likely not execute even correct code the way you intended, so you won't do any good by continuing a broken debugging session. So it is not a flaw; it is actually a good thing this happens!
However, you can do some things in order to work more effectively (and make your programs better). The most important I can think of right now are:
Learn to use good object-oriented programming techniques and get acquainted with design patterns, if you haven't already done so.
Take extra care to prevent error conditions from happening (e.g. test if an object is null before accessing its properties, give default values to variables when possible, etc.)
Use proper error handling to gracefully catch errors at runtime.
Use unit tests to thoroughly test your code for errors one piece at a time, before debugging in the browser. Getting to know FlexUnit is a good place to start.
EDIT
I should also have said this: A Debugger is a useful tool for stepping through your code to find the source of an error, such as a variable not being properly initialized, or unexpected return values. It is not helpful when trying to find out what's happening after a fatal error has occurred - which will also not help you to fix the code.
Imagine I have a function with a bug in it:
Pseudo-code:
void Foo(LPVOID o)
{
//implementation details omitted
}
The problem is the user passed null:
Object bar = null;
...
Foo(bar);
Then the function might crash due to an access violation, but it could also happen to work fine. The bug is that the function should have been checking for the invalid case of passing null, but it never did. It was never an issue, because developers were trusted to know what they're doing.
If I now change the function to:
Pseudo-code:
void Foo(LPVOID o)
{
if (o == null) throw new EArgumentNullException("o");
//implementation details omitted
}
then people who were happily using the function, and who happened to pass null without getting an access violation, will now suddenly begin seeing an EArgumentNullException.
Do I continue to let people use the function improperly, and create a new version of the function? Or do I fix the function to include what it should have had originally?
So now the moral dilemma: do you ever add new sanity checks, safety checks, or assertions to existing code? Or do you declare the old function abandoned and create a new one?
Consider a bug so common that Microsoft had to fix it for developers:
MessageBox(GetDesktopWindow, ...);
You never, ever, ever want to make a window modal against the desktop. You'll lock up the system. Do you continue to let developers lock up the user's computer? Or do you change the function to:
MessageBox(HWND hWndParent, ...)
{
if (hWndParent == GetDesktopWindow)
throw new Exception("hWndParent cannot be the desktop window. Use NULL instead.");
...
}
In reality Microsoft changed the Window Manager to auto-fix the bad parameter:
MessageBox(HWND hWndParent, ...)
{
if (hWndParent == GetDesktopWindow)
hWndParent = 0;
...
}
In my made-up example there is no way to patch the function - if I wasn't given an object, I can't do what I need to do with it.
Do you risk breaking existing code by adding parameter validation? Do you let existing code continue to be wrong, getting incorrect results?
The problem is that not only are you fixing a bug, but you are changing the semantic signature of the method by introducing an error case.
From a software engineering perspective I would advocate that you try to specify methods as best as possible (for instance using pre and post-conditions) but once the method is out there, specification changes are a no-go (or at least you would have to check all occurrences of the method) and a new method would be better.
I'd keep the old function and simply have it emit a warning that notifies you of every (possibly) wrong use, and then I'd just kick the developer who used it wrong until he uses it properly.
You cannot catch everything. What if someone wrote "MakeLocation("Ian Boyd", "is stupid");"? Would you create a new function or change the function to catch that? No, you would fire the developer (or at least punish him).
Of course this requires that you document what your function requires as input.
This is where having automated tests (unit tests, integration tests, automated functional tests) is great: they give you the power to change existing code with confidence.
When making changes like this, I would suggest finding all usages and ensuring they are behaving how you believe they should.
I myself would make bug fixes to the existing function rather than duplicating it 99% of the time. If the change alters behaviour a lot and there are a lot of calls to the function, you need to be very sure of your change.
So go ahead and make your change, run your unit tests, then your automated functional tests. Fix any errors and you're golden!
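For instance, if the change is the null check from the question, a regression test for the new contract might look roughly like this (foo here is just a stand-in for the function being changed, using Node's built-in assert):

import assert from "node:assert";

// Stand-in for the function gaining the new null check.
function foo(o: object | null): void {
  if (o === null) {
    throw new TypeError("foo: argument 'o' must not be null");
  }
  // ...original implementation...
}

// New contract: null is rejected loudly instead of failing somewhere deeper.
assert.throws(() => foo(null), TypeError);
// Existing correct callers keep working exactly as before.
assert.doesNotThrow(() => foo({}));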
If your code has a bug in it, you should do what you normally do when any bug is reported. One part of that is assessing the impact of fixing it and of not fixing it. Sometimes the right thing to do with a bug is not to fix it, because the behaviour it exposes has become accepted. Sometimes the cost of fixing it, or the inconvenience of releasing a fix outside the normal release cycle, stops you from releasing the fix for a while. This isn't a moral dilemma; it's an economic question of costs and benefits. If you are disturbed at the thought of having known bugs in your published code, publish a known-bugs list.
One option none of the other respondents seems to have suggested is to wrap the buggy function in another function which imposes the new behaviour that you require. In a world where functions can run to many lines, it is sometimes less likely to introduce new bugs if you preserve a 99%-correct piece of code and address the change without modifying the existing code. Of course, this is not always possible.
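A rough sketch of that wrapping idea (the names are illustrative, not from the question):

// The existing, 99%-correct function stays untouched.
function fooLegacy(o: unknown): void {
  // ...original implementation, no validation...
}

// New callers go through a thin validating wrapper instead.
function fooChecked(o: unknown): void {
  if (o === null || o === undefined) {
    throw new TypeError("fooChecked: argument 'o' must not be null");
  }
  fooLegacy(o);
}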
Two choices:
Give the error checking version a new name and deprecate the old version (one version later have it start issuing warnings (compile time if possible, run time if necessary), two versions later remove it).
[not always possible] Place the newly introduced error check in such a way that it only triggers if the unmodified version would crash or produce undefined behavior. (So that users who were taking care in their code don't get any unpleasant surprises.)
It entirely depends on you, your codebase, and your users.
If you are Microsoft and you have a bug in your API that is used by millions of devs around the world, then you will probably want to just create a new function and update the docs on the old one. If you can, you would also want to update the compiler to give warnings as well. (Though even then you may be able to change the existing system; remember when MS switched VC++ to the C++ standard and you had to update all of your #include <iostream> lines and add using namespace std; to get simple, existing console apps working again?)
It basically depends on what the function is. If it is something basic that will have massive ripple effects, then it could break a lot of code. If it is just an ancillary function, then you may as well fix it. Of course if you are Microsoft and your other code depends on a bug in one of your functions, then you probably should fix it since that is just plain embarrassing to keep. If other devs rely on the bug (that you created), then you may have an obligation to the users to not break their code that you caused to be buggy.
If you are a small company or independent developer, then sure, go ahead and fix the function. If you only need to update yourself or a few people on the new usage then fixing it is the best solution, especially since it is not even a big deal because all it really requires is an added note to the docs for the function. eg do not pass NULL or an exception is thrown if hWnd is the desktop, etc.
Another option, as a sort of compromise, would be to create a wrapper function. You could create a small, inline function that checks the args and then calls the existing function. That way you don't really have to do much in the short term, and eventually, when people have moved to the new one, you can deprecate or even remove the old one, moving its body into the new function below the checks.
In most scenarios it is better to fix a buggy function, particularly if you are merely adding argument checks as opposed to completely changing the behavior of the function. It is not really a good idea to facilitate (read: encourage) bad coding just because it would break some existing code (especially if the code is free!). Think about it: if someone is creating a new program, then they can do it right from the start instead of relying on a bug. If they are re-compiling an old program that depends on the bug, then they can just update the code. Again, it depends on how messy and convoluted the code is, how many people are affected, and whether or not they pay you, but it is quite common to have to update old code to, for example, initialize variables that hadn't been, or check for error codes, etc.
To sum up, in your specific example (given the information provided), you should just fix it.
This one has been puzzling me for some time now.
Let's imagine a class which represents a resource, and in order to be able to use this resource one needs to first call the 'Open' method on it, or an InvalidOperationException will be thrown.
Should my code also check whether someone tries to open an already open resource, or close an already closed one?
Should code prevent a logically invalid invocation even when no harm would be done?
I think that programming this way would help the callers write better code, but I feel that I might be taking on too much responsibility and hurting reusability.
What do you guys think?
Edit:
I don't think this could be called defensive programming, because it won't let a possibly bad use slip by either; another InvalidOperationException will be thrown.
This is called defensive programming. That's a good programming practice because you ensure that your application doesn't crash on misbehaviour.
Requiring that some method be called before another method is not a good programming practice. It adds a lot of complexity, which is better handled by the class itself.
This is called sequential coupling. The Wikipedia article says that whether it's a bad practice depends on the context, but improper use shouldn't crash the application. Sometimes it's necessary to throw an exception to make things clear.
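A small sketch of what "handled by the class itself" can look like (a hypothetical Resource class): instead of requiring callers to remember to call Open() before Read(), the class opens itself on first use.

class Resource {
  private opened = false;

  private ensureOpen(): void {
    if (!this.opened) {
      // acquire the underlying handle here
      this.opened = true;
    }
  }

  read(): string {
    this.ensureOpen(); // no sequential coupling: call order is handled internally
    return "data";
  }
}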
This really depends on what the class actually does. In some cases failing silently is a good idea (eg, you want your DVD player to continue working, not show an error message if it opens the DVD tray that is already open) and in other cases you want as much information as possible (eg, if an airplane tries to close a door that is reportedly already closed, then something is wrong and the pilot should be alerted).
In most cases throwing an error when a logically invalid action is performed is useful for developers, so implementing those exceptions depends on who will use the code. If it is used internally for one application, then it's not vital. But if it is used by many different projects or developers, then I would look into it.
If your example is really the case, then the Open functionality should probably be invoked by the class's constructor.
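In other words (a hypothetical sketch, not from the question): acquire the resource in the constructor, so that a constructed object is always in a usable state.

class Connection {
  private readonly handle: string;

  constructor(target: string) {
    // open/acquire here; throw from the constructor if that fails
    this.handle = `open:${target}`;
  }

  send(data: string): void {
    // no "did you call Open() first?" check needed; construction guarantees it
    console.log(`${this.handle} <- ${data}`);
  }
}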
If you consider the C++ iostream library (which is very widely used and considered quite a good example) you can call any operation on a stream class, whether it is open or not. The called function will simply return a failure indicator of some sort if the operation could not be performed. The functions must of course test the stream state in order to do this.
What you must not do is allow your programs to silently accept any old input as parameters. For example, this would be a broken implementation of strlen()
int strlen( const char * s )
{
    if ( s == 0 )
    {
        return 0; // bad
    }
    else
    {
        // calculate length not shown
    }
}
as it fields bad inputs without causing a fuss - it should instead throw an exception or use an assert(), depending on your exact development philosophy.
There's no substitute for taste, talent and experience in figuring out exactly how many safety checks should be in your code for best cost/benefit ratio for your organization.
Good quality APIs are expected to be fool-proof and to guide the user with the proper amount of warnings.
Sometimes, safety precautions may impair performance. Performance is one of the most counter-intuitive things in programming. Optimize with care, only when performance really matters.
If this is part of a public SDK that you're releasing to the wild, then the exposed API calls should have strong validation. It will help your 'users' (who are developers) and ensure you aren't stuck supporting usage you never intended to support.
Otherwise, I would not add such checks. I think they make the code harder to read, and these checks are rarely tested. In the past I would add a lot of code like this to make sure my code doesn't do the wrong thing. Now I write unit tests to verify my code does the right thing. The difference? I think tests are more maintainable, more readable, and they don't clutter your production code.
In the case of opening a file that is already open, it depends on knowing the effect of the request: will it reset the current read location, for example?
In the case of closing a file that is already closed, think of it as a request for the file to be put into a known state. The code doesn't have to do anything, but the desired state is achieved, so the code can return a success condition. This is not true if there is some sort of file buffering that needs to be taken care of, or an interlinked resource to coordinate, like a modem/serial port or a printer/spooler.
Step back and think of the problem in terms of the desired outcome including any side-effects.
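For example, a close that treats "already closed" as success, because the requested end state already holds (a hypothetical FileWrapper, ignoring the buffering concerns above):

class FileWrapper {
  private open = false;

  close(): boolean {
    if (!this.open) {
      return true; // already in the desired state; nothing to do
    }
    // flush buffers and release the handle here
    this.open = false;
    return true;
  }
}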
We once put a 'logout' link on an app menu that was displayed regardless of your login status. Why? Because it only took a simple (and very short) method to return you to the login screen from the login screen, and it saved a large number of checks needed to track the login status just so the 'logout' menu item was displayed only when you were logged in.
Logically invalid invocations should always be reported to the user in debug mode.
When compiled in release mode, your code should not throw any unneeded exceptions or do anything else which could endanger the whole application.
Personally I prefer having some kind of logfile, and logging such logically invalid invocations surely will do no harm (at least when performance is not important).
Do you use the debugger of the language that you work in to step through code to understand what the code is doing, or do you find it easy to look at code written by someone else to figure out what is going on? I am talking about code written in C#, but it could be any language.
I use the unit tests for this.
Yes, but generally only to investigate bugs that prove resistant to other methods.
I write embedded software, so running a debugger normally involves having to physically plug a debug module into the PCB under test, add/remove links, solder on a debug socket (if not already present), etc - hence why I try to avoid it if possible. Also, some older debugger hardware/software can be a bit flaky.
I will for particularly complex sections of code, but I hope that generally my fellow developers would have written code that is clear enough to follow without it.
Depending on who wrote the code, even a debugger doesn't help to understand how it works: I have a co-worker who prides himself on being able to get as much as possible done in every single line of code. This can lead to code that is often hard to read, let alone understand what it does in the long run.
Personally I always hope to find code as readable as the code I try to write.
I will mostly use the debugger to set up breakpoints on exceptions.
That way I can execute any test or unit test I wrote, and still be able to be right where the code fails if any exception should occur.
I won't say I use it all the time, but I do use it fairly often. The domain I work in is automation and controls. You often need the debugger to see the various internal states of the system. It is usually difficult or impossible to determine these simply by looking at the code.
Yes, but only as a last resort when there's no unit test coverage and the code is particularly hard to follow. Using the debugger to step through code is a time consuming process and one I don't find too fun. I tend to find myself using this technique a lot when trying to follow VBA code.