I'm at a bit of a loss here. I have an issue that I think may be due to mouse events taking precedence. I have a function f being invoked on mouse clicks - f does some work, then invokes another function g. Is it even possible that f runs, then another click happens - invoking f again - and then g is executed?
If my phrasing is hard to understand, I'll try to show what I think may be happening:
click1:  f ..........................> g
click2:           f ----> g
|---------------- timeline ----------------|
(the suspicion: click1's f runs, click2's f - and its g - execute next, and click1's g only runs afterwards)
I can say for certain the issue only arises (out of ~50 slow and ~50 quick double-clicks) when clicking twice in very quick succession (and not always even then). I realize my figure may confuse more than it clarifies, but I'm not sure how else to communicate my thoughts. Any input is greatly appreciated!
AS3 is a single-threaded code-execution environment that executes all relevant code as it presents itself. If a click triggers the execution of a chain of methods, all of those methods will run before any other code can execute. As a result, there cannot be a race condition in the execution of AS3 code, because of its single-threaded nature.
Events in AS3 are no special case in that regard: when a listener triggers, all of its code is executed the same way, and no other code can run until it's done.
Special cases are:
You can suspend execution by using timers and such, so that code runs at a later time. In that case there's no guarantee that those timers will fire in the same order they were started.
Executing asynchronous commands (like loading something): in that case there's no guarantee that loading operations will happen in order either.
But those special cases do not violate the code-execution principle of AS3: all code executes in one thread, so there cannot be overlapping of any kind.
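To make that concrete, here's a minimal frame-script sketch (names are illustrative, not from the question): the second click's listener cannot start until f, including its call to g, has returned.

import flash.events.MouseEvent;

// f always runs to completion, including its call to g, before any
// queued second click can invoke f again.
stage.addEventListener(MouseEvent.CLICK, f);

function f(e:MouseEvent):void {
    trace("f start");
    g();             // finishes before the listener returns
    trace("f end");  // always printed before a second click's "f start"
}

function g():void {
    trace("g");
}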
We have an application that allows a user to pass an arbitrary Tcl code block (as a callback) to a custom API that invokes it on individual elements of a large data tree. For performance, this is done using a thread pool, so things can get ripping.
The problem is, we have no control over user code, and in one case they are doing a puts that causes memory to explode and the app to crash. I can prevent this by redirecting stdout to /dev/null, which leads me to believe that Tcl's internal buffers can't be emptied fast enough, so it keeps buffering. Heap analysis seems to confirm this.
What I don't understand is that I haven't messed with any of stdout's options, so it should be line buffered, blocking, 4k. So, my first question would be: why is this happening? Shouldn't there already be backpressure applied to prevent this?
My second question would be: how do I prevent this? If the user wants to do something stupid, I'm more than willing to throttle their performance, but I don't want the app to crash. I suppose one solution would be to redefine puts to write to a file (or simply do nothing) before the callback is invoked, but I'd be interested to know if there is a way to ensure backpressure on the channel to prevent it from continuing to buffer.
Thanks for any thoughts!
It depends on the channel type and how you've configured it. However, the normal model is that writes to a synchronous channel (-blocking true) will either buffer or write immediately (according to the -buffering option) and writes to an asynchronous channel (-blocking false) will, if not processed immediately, be queued to be carried out later by an internal event handler. For most applications, that does the right thing; it sounds like you've passed an asynchronous channel to code that doesn't call into the event loop (or at least not frequently). Try chan configure-ing the channel to be synchronous before starting the user code; you're in a separate thread, so the blocking behaviour shouldn't be a problem for the rest of the application.
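As a minimal sketch of that configuration step (the callback variable below is a placeholder for however you invoke the user code):

# Make stdout synchronous in this worker thread so a fast writer blocks
# instead of queueing unbounded output in Tcl's internal buffers.
chan configure stdout -blocking 1 -buffering line

# ... then hand control to the untrusted user callback ...
uplevel #0 $userCallback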
Some channels are more tricky. The one that people most normally encounter is the console channel in Tk on platforms such as Windows, where the channel ends up writing into a widget that doesn't have a maximum number of retained lines.
There is an external C++ function that is called from Tcl/Tk and does some stuff that takes a noticeable amount of time. The Tcl caller has to get the result of that function, so it waits until it has finished. To avoid blocking the GUI, that C++ function has some kind of event loop implemented in its body:
while (m_curSyncProc.isRunning()) {
    const clock_t tm = clock();
    // Drain all pending Tcl events without blocking.
    while (Tcl_DoOneEvent(TCL_ALL_EVENTS | TCL_DONT_WAIT) > 0) {} // <- stuck here in case of tkwait/vwait
    // Pause for 10 ms to avoid 100% CPU usage when the loop was idle.
    if (double(clock() - tm) / CLOCKS_PER_SEC < 0.005) {
        struct timespec ts = {0, 10 * 1000 * 1000}; // 10 ms
        nanosleep(&ts, NULL);
    }
}
Everything works great unless tkwait/vwait is in action in Tcl code.
For example, for dialogs, tkwait variable someVariable is used to wait until the Ok/Close/<whatever> button is pressed. I see that even the standard Tk bgerror uses the same method (it uses vwait).
The problem is that, once called, Tcl_DoOneEvent does not return while the Tcl code is waiting at a tkwait/vwait line; otherwise it works well. Is it possible to fix this in that event loop without totally redesigning the C++ code? That code is rather old and complicated, and its author is not accessible anymore.
Beware! This is a complex topic!
The Tcl_DoOneEvent() call is essentially what vwait, tkwait and update are thin wrappers around (passing different flags and setting up different callbacks). Nested calls to any of them create nested event loops; you don't really want those unless you're supremely careful. An event loop only terminates when it is not processing any active event callbacks, and if those event callbacks create inner event loops, the outer event loop will not get to do anything at all until the inner one has finished.
As you're taking control of the outer event loop (in a very inefficient way, but oh well) you really want the inner event loops to not run at all. There are three possible ways to deal with this; I suspect that the third (coroutines) will be most suitable for you and that the first is what you're really trying to avoid, but that's definitely your call.
1. Continuation Passing
You can rewrite the inner code into continuation-passing style — a big pile of procedures that hands off from step to step through a state machine/workflow — so that it doesn't actually call vwait (and friends). The only one of the family that tends to be vaguely safe is update idletasks (which is really just Tcl_DoOneEvent(TCL_IDLE_EVENTS | TCL_DONT_WAIT)) to process Tk internally-generated alterations.
This option was your main choice up to Tcl 8.5, and it was a lot of work.
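To illustrate the shape of it, here's a hypothetical sketch with invented procedure names:

# Instead of blocking with [vwait done], each step does its piece of
# work and schedules the next, returning to the event loop in between.
proc step1 {} {
    # ... first piece of work ...
    after idle step2    ;# continue in step2 once the event loop is idle
}
proc step2 {} {
    # ... the work that previously followed the vwait ...
}
step1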
2. Threads
You can move to a multi-threaded application. This can be easy… or very difficult; the details depend on an examination of what you're doing throughout the application.
If going this route, remember that Tcl interpreters and Tcl values are totally thread-bound; they internally use thread-specific data so that they can avoid big global locks. This means that threads in Tcl are comparatively expensive to set up, but actually use multiple CPUs very efficiently afterwards; thread pooling is a very common approach.
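If you go this route, a rough sketch with the Thread extension might look like this (do_long_operation stands in for your slow job):

package require Thread

# Each Tcl thread has its own interpreter and its own event loop.
set worker [thread::create {thread::wait}]

# Run the job in the worker; its result is written to ::result when done,
# so this thread's event loop keeps servicing the GUI in the meantime.
thread::send -async $worker {do_long_operation} ::result
vwait ::result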
3. Coroutines
Starting in 8.6, you can put the inner code in a coroutine. Almost everything in 8.6 is coroutine-aware (“non-recursive” in our internal lingo) by default (including commands you wouldn't normally think of, such as source) and once you've done that, you can replace the vwait calls with equivalents from the Tcllib coroutine package and things will typically “just work”. (For example, vwait var becomes coroutine::vwait var, and after 123 becomes coroutine::after 123.)
The only things that don't have direct replacements are tkwait window and tkwait visibility; you'll need to simulate those with waiting for a <Destroy> or <Visibility> event (the latter is uncommon as it is unsupported on some platforms), which you do by binding a trivial callback on those that just sets a variable that you can coroutine::vwait on (which is essentially all that tkwait does internally anyway).
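For instance, here's a sketch of simulating tkwait window that way, following the command naming used above (the helper name is invented):

package require coroutine

# Wait, inside a coroutine, for a window to be destroyed; this is
# essentially what [tkwait window $w] does internally.
proc coroWaitWindow {w} {
    # <Destroy> also fires for the window's children, so check %W.
    bind $w <Destroy> [list apply {{w dest} {
        if {$dest eq $w} { set ::windowGone 1 }
    }} $w %W]
    coroutine::vwait ::windowGone
}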
Coroutines can become messy in a few cases, such as when you've got C code that is not coroutine-aware. The main places in Tcl where these come into play are in trace callbacks, inter-interpreter calls, and the scripted implementations of channels; the issue there is that the internal APIs these sit behind are rather complicated already (especially channels) and nobody's felt up to wading in and enabling a non-recursive implementation.
Summary:
I'm trying to find out whether a single method can be executed twice in overlap when executing on a single thread, or whether two different methods can execute in overlap such that, when they share access to a particular variable, some unwanted behaviour can occur.
Example of a single method:
var ball:Date;

function method1():Date {
    ball = new Date();
    // <some code here>
    return ball;
}
Questions:
1) If method1 gets fired every 20ms using the event system, and the whole method takes more than 20ms to execute, will the method be executed again in overlap?
2) Are there any other scenarios in a single thread environment where a method(s) can be executed in overlap, or is the AVM2 limited to executing 1 method at a time?
Studies: I've read through https://www.adobe.com/content/dam/Adobe/en/devnet/actionscript/articles/avm2overview.pdf which explains that the AVM2 has a stack for running code, and the description of methods makes it seem that, unless there is a second stack, the stack system can only accommodate one method execution at a time. I'd just like to double-check with the Stack Overflow experts to be sure.
I'm dealing with some time sensitive data, and have to make sure a method isn't changing a variable that is being accessed by another method at the same time.
ActionScript is single-threaded, although it can support concurrency through ActionScript workers, which are multiple SWF applications that run in parallel.
There are asynchronous patterns, if you want a nested or anonymous function to execute within the scope chain of a function.
What I think you're referring to is how AVM2 executes event-driven code; for that, you should research the AVM2 marshalled slice. Player events are executed at the beginning of the slice.
Heavy code execution will slow frame rate.
It's linear - blocking synchronously. Each frame does not invoke code in parallel.
AVM2 executes in 20-millisecond marshalled slices which, depending on frame rate, execute user actions, invalidations, and rendering.
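A quick illustrative sketch of what that means for your 20 ms case (frame-script style, not production code):

import flash.utils.Timer;
import flash.utils.getTimer;
import flash.events.TimerEvent;

// Even though this handler takes ~50 ms, the 20 ms timer never runs it
// in overlap; pending ticks simply wait until the handler returns.
var t:Timer = new Timer(20);
t.addEventListener(TimerEvent.TIMER, onTick);
t.start();

function onTick(e:TimerEvent):void {
    var start:int = getTimer();
    while (getTimer() - start < 50) {} // busy work longer than the interval
    trace("tick done at", getTimer());
}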
Are fork...join_none blocks allowed in SystemVerilog functions? They are definitely allowed in tasks, but I could not find whether they are allowed in functions.
Thanks in advance for your help.
Yes, fork...join_none is allowed within functions.
A fork block can only be used in a function if it is matched with a join_none. The reason is that functions must execute in zero time. Because a fork...join_none will be spawned into a separate thread/process, the function can still complete in zero time.
This is clearly stated in IEEE 1800-2012, section 13.4.4 (Background processes spawned by function calls):
Functions shall execute with no delay. Thus, a process calling a function shall return immediately. Statements that do not block shall be allowed inside a function; specifically, nonblocking assignments, event triggers, clocking drives, and fork - join_none constructs shall be allowed inside a function.
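For example, this kind of function is legal; a minimal sketch:

module tb;
  // The fork...join_none spawns a background process; the function
  // itself still completes in zero time, as the standard requires.
  function void start_background();
    fork
      begin
        #10;  // delays are legal here: this runs as a separate process
        $display("background process ran at %0t", $time);
      end
    join_none
  endfunction

  initial begin
    start_background();  // returns immediately, in zero time
    $display("function returned at %0t", $time);
  end
endmodule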
My simulation tool allows fork...join_none in functions but issues a warning that fork...join (and probably fork...join_any) will be converted to begin...end. I couldn't find anything in the standard about this which most likely is why I don't get a strict compile error.
Be careful as different simulator vendors may implement different rules. In two of the big 3 simulators fork...join_none in functions definitely works. fork...join/join_any doesn't make sense in the context of a function so I would avoid it altogether.
For some months I've been working on a "home-made" operating system.
Currently, it boots and goes into 32-bit protected mode.
I've loaded the interrupt table, but haven't set up paging (yet).
Now while writing my exception routines I've noticed that when an instruction throws an exception, the exception routine is executed, but then the CPU jumps back to the instruction which threw the exception! This does not apply to every exception (for example, a div by zero exception will jump back to the instruction AFTER the division instruction), but let's consider the following general protection exception:
MOV EAX, 0x8
MOV CS, EAX
My routine is simple: it calls a function that displays a red error message.
The result: MOV CS, EAX fails -> My error message is displayed -> CPU jumps back to MOV CS -> infinite loop spamming the error message.
I've talked about this issue with a teacher in operating systems and unix security.
He told me he knows Linux has a way around it, but he doesn't know which one.
The naive solution would be to parse the throwing instruction from within the routine, in order to get the length of that instruction.
That solution is pretty complex, and I feel a bit uncomfortable adding a call to a relatively heavy function in every affected exception routine...
Therefore, I was wondering if there is another way around the problem. Maybe there's a "magic" register that contains a bit that can change this behaviour?
--
Thank you very much in advance for any suggestion/information.
--
EDIT: It seems many people wonder why I want to skip over the problematic instruction and resume normal execution.
I have two reasons for this:
First of all, killing a process would be a possible solution, but not a clean one. That's not how it's done in Linux, for example, where (AFAIK) the kernel sends a signal (I think SIGSEGV) but does not immediately break execution. It makes sense, since the application can block or ignore the signal and resume its own execution. It's a very elegant way to tell the application it did something wrong IMO.
Another reason: what if the kernel itself performs an illegal operation? Could be due to a bug, but could also be due to a kernel extension. As I've stated in a comment: what should I do in that case? Shall I just kill the kernel and display a nice blue screen with a smiley?
That's why I would like to be able to jump over the instruction. "Guessing" the instruction size is obviously not an option, and parsing the instruction seems fairly complex (not that I mind implementing such a routine, but I need to be sure there is no better way).
Different exceptions have different causes. Some exceptions are normal, and the exception only tells the kernel what it needs to do before allowing the software to continue running. Examples of this include a page fault telling the kernel it needs to load data from swap space, an undefined instruction exception telling the kernel it needs to emulate an instruction that the CPU doesn't support, or a debug/breakpoint exception telling the kernel it needs to notify a debugger. For these it's normal for the kernel to fix things up and silently continue.
Some exceptions indicate abnormal conditions (e.g. that the software crashed). The only sane way of handling these types of exceptions is to stop running the software. You may save information (e.g. core dump) or display information (e.g. "blue screen of death") to help with debugging, but in the end the software stops (either the process is terminated, or the kernel goes into a "do nothing until user resets computer" state).
Ignoring abnormal conditions just makes it harder for people to figure out what went wrong. For example, imagine instructions to go to the toilet:
enter bathroom
remove pants
sit
start generating output
Now imagine that step 2 fails because you're wearing shorts (a "can't find pants" exception). Do you want to stop at that point (with a nice easy to understand error message or something), or ignore that step and attempt to figure out what went wrong later on, after all the useful diagnostic information has gone?
If I understand correctly, you want to skip the instruction that caused the exception (e.g. mov cs, eax) and continue executing the program at the next instruction.
Why would you want to do this? Normally, shouldn't the rest of the program depend on the effects of that instruction being successfully executed?
Generally speaking, there are three approaches to exception handling:
1. Treat the exception as an unrepairable condition and kill the process. For example, division by zero is usually handled this way.
2. Repair the environment and then execute the instruction again. For example, page faults are sometimes handled this way.
3. Emulate the instruction in software and skip over it in the instruction stream. For example, complicated arithmetic instructions are sometimes handled this way.
What you're seeing is characteristic of the General Protection Exception. The Intel System Programming Guide clearly states this (6.15 Exception and Interrupt Reference / Interrupt 13 - General Protection Exception (#GP)):
Saved Instruction Pointer
The saved contents of CS and EIP registers point to the instruction that generated the exception.
Therefore, you need to write an exception handler that will skip over that instruction (which would be kind of weird), or just simply kill the offending process with "General Protection Exception at $SAVED_EIP" or a similar message.
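A rough sketch of the report-and-kill variant (the frame layout assumes a 32-bit same-privilege fault, and kprintf/kill_current_task are stand-ins for your kernel's own helpers, not a fixed API):

#include <stdint.h>

extern void kprintf(const char *fmt, ...);  /* hypothetical kernel printf */
extern void kill_current_task(void);        /* hypothetical task killer   */

/* Stack frame as the CPU leaves it for a 32-bit #GP through an interrupt
 * gate, with no privilege-level change (a user->kernel transition would
 * additionally push ESP and SS). */
struct gp_frame {
    uint32_t error_code; /* pushed by the CPU for #GP */
    uint32_t eip;        /* points AT the faulting instruction, not after it */
    uint32_t cs;
    uint32_t eflags;
};

void gp_handler(struct gp_frame *frame)
{
    kprintf("General Protection Exception at %08x\n", frame->eip);
    kill_current_task(); /* never IRET back to the faulting instruction */
}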
I can imagine a few situations in which one would want to respond to a GPF by parsing the failed instruction, emulating its operation, and then returning to the instruction after. The normal pattern would be to set things up so that the instruction, if retried, would succeed, but one might e.g. have some code that expects to access some hardware at addresses 0x000A0000-0x000AFFFF and wish to run it on a machine that lacks such hardware. In such a situation, one might not want to ever bank in "real" memory in that space, since every single access must be trapped and dealt with separately. I'm not sure whether there's any way to handle that without having to decode whatever instruction was trying to access that memory, although I do know that some virtual-PC programs seem to manage it pretty well.
Otherwise, I would suggest that you have, for each thread, a jump vector which should be used when the system encounters a GPF. Normally that vector should point to a thread-exit routine, but code which is about to do something "suspicious" with pointers could set it to an error handler suitable for that code (the code should unset the vector when leaving the region where the error handler would have been appropriate).
I can imagine situations where one might want to emulate an instruction without executing it, and cases where one might want to transfer control to an error-handler routine, but I can't imagine any where one would want to simply skip over an instruction that would have caused a GPF.