Operations that occur during model initialization don't end up in the undo history, so what is the purpose of Model.beginCreationCompoundOperation, as opposed to Model.beginCompoundOperation? My best guess is that it is used internally to wrap the initialization function call into a compound operation and was not meant to be public.
Your guess is correct - beginCreationCompoundOperation is used to wrap the initialization function call so that we know that the changes made were used to create the initial document state, which (among other things) can't be undone.
This function was not meant to be public. Calling it during model initialization will fail immediately, and if it is called after model initialization the changes will refuse to commit on the server and you will need to reload (a creation compound operation can only be applied once - this is what prevents the creation from occurring multiple times if you create a document and then immediately load it on multiple machines).
This function will be removed in a future update to the Realtime API.
I would like to have my code create a dump for unhandled exceptions.
I had thought of using SetUnhandledExceptionFilter, but what are the cases in which it may not work as expected? For example, what about stack corruption issues, such as when a buffer overrun occurs on the stack?
What will happen in that case? Are there any additional solutions that will always work?
I've been using SetUnhandledExceptionFilter for quite a while and have not noticed any crashes/problems that are not trapped correctly. And, if an exception is not handled somewhere in the code, it should get handled by the filter. From MSDN regarding the filter...
After calling this function, if an exception occurs in a process that is not being debugged, and the exception makes it to the unhandled exception filter, that filter will call the exception filter function specified by the lpTopLevelExceptionFilter parameter.
There is no mention that the above applies to only certain types of exceptions.
I don't use the filter to create a dump file because the application uses the Microsoft WER system to report crashes. Rather, the filter is used to provide an opportunity to collect two additional files to attach to the crash report (and dump file) that Microsoft will collect.
Here's an example of Microsoft's crash report dashboard for the application, with module names redacted.
You'll see that there's a wide range of crash types collected, including stack buffer overrun.
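If you do want the filter itself to write the dump, as the question asks, a minimal sketch would look something like this (the file name and dump type are placeholders, error handling is omitted, and you need to link against dbghelp.lib):

    #include <windows.h>
    #include <dbghelp.h>

    /* Top-level filter: writes a minidump for the faulting exception,
       then lets the process terminate. */
    static LONG WINAPI TopLevelFilter(EXCEPTION_POINTERS *info)
    {
        HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, NULL,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file != INVALID_HANDLE_VALUE) {
            MINIDUMP_EXCEPTION_INFORMATION mei;
            mei.ThreadId = GetCurrentThreadId();
            mei.ExceptionPointers = info;
            mei.ClientPointers = FALSE;

            MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(),
                              file, MiniDumpNormal, &mei, NULL, NULL);
            CloseHandle(file);
        }
        return EXCEPTION_EXECUTE_HANDLER;
    }

    /* Call this once, early in startup. */
    void InstallCrashFilter(void)
    {
        SetUnhandledExceptionFilter(TopLevelFilter);
    }

Keep the work inside the filter to a minimum: by the time it runs the process may already be in a bad state (for example with a corrupted stack), so avoid allocating memory or doing anything elaborate there.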
Also make sure no other code calls SetUnhandledExceptionFilter() after you have set it to your handler.
I had a similar issue, and in my case it was caused by another linked library (ImageMagick), which called SetUnhandledExceptionFilter() from its Magick::InitializeMagick(), which was only called in some situations in our application. It then replaced our handler with ImageMagick's handler.
I found it by setting a breakpoint on SetUnhandledExceptionFilter() in gdb and checked the backtrace.
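Another way to spot this, assuming the handler from the sketch above is called TopLevelFilter, is to exploit the fact that SetUnhandledExceptionFilter() returns the previously installed filter:

    /* Re-install our filter and look at what was there before.  If it
       wasn't ours, some other code (e.g. a linked library) replaced the
       handler after we installed it; 'prev' points at the interloper. */
    LPTOP_LEVEL_EXCEPTION_FILTER prev = SetUnhandledExceptionFilter(TopLevelFilter);
    if (prev != TopLevelFilter) {
        /* log the address of 'prev' to identify the offending module */
    }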
The IsolatedStorageFile.CopyFile() documentation doesn't really say much about this, so I was wondering whether the CopyFile() (and also DeleteFile(), for that matter) method is an atomic action on the Isolated Storage.
Meaning, can I be sure that when the CopyFile() method returns and execution leaves the Isolated Storage's using() block, the file has actually been copied to the flash memory, and that I can, in subsequent operations, for example delete the original file(s)?
IsolatedStorageFile.CopyFile is synchronous: after executing this method, if no exception has been thrown, it means the file has been successfully copied. You can then safely delete the original file.
Note that you can directly use IsolatedStorageFile.MoveFile to do both operations (copying the file and deleting the original) at once.
We have inherited some code that makes use of Linq2Sql, and we've just found that an instance of the data context is being stored in a class member that is defined as private static. As this class is used inside a web application, once the class is created an instance of the data context will be created and assigned to the static member, resulting in an instance of the data context that will be used by all instances of that class (so across all users, a potential problem in itself) and that will also exist for the duration of the web application (as it is held in a static class member).
Ignoring the bad design decisions that were originally taken, my question here is: what happens to the data read into the data context? I know the session in NHibernate keeps a reference to any objects it reads or creates so it can track changes, and that the session can slowly grow and grow and never clears out unless you explicitly tell it to. Does Linq2Sql do something similar? If the web application lived forever (without recycling), would this Linq2Sql context slowly grow until the machine either ran out of memory or, potentially, it had read the entire database by satisfying the incoming requests? It's my understanding that a context isn't usually like a cache, which will remove items that "expire" or, when hitting a memory limit, start removing the least-used items; because of what it does, I don't think the data context is ever able to do that. Has anyone had experience of this who can either confirm my understanding or provide a reference showing how a long-lived data context avoids this?
Yes the DataContext will keep track of objects it has read if its ObjectTrackingEnabled property is set to true. It is enabled by default. You cannot change the value of ObjectTrackingEnabled to false if the DataContext is already tracking state.
We set ObjectTrackingEnabled to false when we are just doing reads with no intention of making any changes. You cannot call SubmitChanges when ObjectTrackingEnabled is set to false.
I hope that helps.
Yes, a DataContext will retain references to all of the objects that have ever been associated with it. Objects that have been deleted may be expelled from the cache, but otherwise it holds on to them, no matter how much memory that costs.
Depending on the architecture and the size of the data set, the worker process will, at some point, run out of memory and crash. The only remedies are to disable object tracking (through ObjectTrackingEnabled), to refactor the code so it uses a single DataContext per request, or (and this is not recommended) to regularly recycle the application-wide DataContext.
I'm using a dynamic language (Clojure) to create CUDA contexts in an interactive development style using JCuda. Often I will call an initializer that includes the call to jcuda.driver.JCudaDriver/cuInit. Is it safe to call cuInit multiple times? In addition, is there something like a destroy method for cuInit? I ask since it's possible for the error code CUDA_ERROR_DEINITIALIZED to be returned.
To answer the question, yes it is probably safe to call cuInit multiple times. I haven't noticed any side effects from doing so.
Note, however, that cuInit only triggers one-time initialisation processes inside the API. It doesn't do anything with devices or contexts, and it definitely can't return CUDA_ERROR_DEINITIALIZED. Doing the steps you would do after calling cuInit in an application (i.e. creating a context) would have real implications: that creates a new context each time you call it, and resource exhaustion will occur if contexts are not actively destroyed. There is no equivalent deinitialisation call for the API. I guess the intention is that, once initialised, the API is expected to stay in that state until the application terminates.
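For what it's worth, the distinction is easier to see in the plain C driver API, which JCuda mirrors closely (JCudaDriver.cuInit, cuCtxCreate and cuCtxDestroy wrap the same calls). A rough sketch, with error checking omitted:

    #include <cuda.h>    /* CUDA driver API */

    int main(void)
    {
        CUdevice  dev;
        CUcontext ctx;

        cuInit(0);                  /* one-time API initialisation          */
        cuInit(0);                  /* repeating it appears to be harmless  */

        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);  /* this is what actually allocates
                                       per-context resources                */

        /* ... kernels, memory allocations, ... */

        cuCtxDestroy(ctx);          /* must be paired with cuCtxCreate, or
                                       resources leak until process exit    */
        return 0;
    }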
Non-trivial native extensions will require per-interpreter data structures that are dynamically allocated. I am currently using Tcl_SetAssocData, with a key corresponding to the extension's name and an appropriate deletion routine, to prevent this memory from leaking away.

However, Tcl_PkgProvideEx also allows one to record such information, and this information can be retrieved by Tcl_PkgRequireEx. Associating the extension's data structures with its package seems more natural than keeping them in the "grab-bag" AssocData; yet the Pkg*Ex routines do not provide an automatically invoked deletion routine. So I think I need to stay with the AssocData approach.
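For concreteness, the AssocData pattern I have in mind looks roughly like this (names are illustrative; stubs initialisation and error handling are omitted):

    #include <tcl.h>

    typedef struct MyExtState {       /* per-interpreter extension data */
        Tcl_HashTable handles;
        /* ... */
    } MyExtState;

    /* Called automatically when the interpreter is deleted. */
    static void MyExtDeleteProc(ClientData clientData, Tcl_Interp *interp)
    {
        MyExtState *state = (MyExtState *) clientData;
        Tcl_DeleteHashTable(&state->handles);
        ckfree((char *) state);
    }

    int Myext_Init(Tcl_Interp *interp)
    {
        MyExtState *state = (MyExtState *) ckalloc(sizeof(MyExtState));
        Tcl_InitHashTable(&state->handles, TCL_STRING_KEYS);

        /* Key the data on the extension's name; the deletion routine
           keeps it from leaking when the interp goes away. */
        Tcl_SetAssocData(interp, "myext", MyExtDeleteProc, state);

        return Tcl_PkgProvide(interp, "myext", "1.0");
    }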
For which situations were the Pkg*Ex routines designed?
Additionally, the Tcl library allows one to install ExitHandlers and ThreadExitHandlers. Paraphrasing the manual, this is for flushing buffers to disk, etc. Are there any other situations requiring the use of ExitHandlers?
When Tcl calls exit, are Tcl_PackageUnloadProcs called?
The whole-extension ClientData is intended for extensions that want to publish their own stub table (i.e., an organized list of functions that represent an exact ABI) that other extensions can build against. This is a very rare thing to want to do; leave at NULL if you don't want it (and contact the Tcl core developers' mailing list directly if you do; we've got quite a bit of experience in this area). Since it is for an ABI structure, it is strongly expected to be purely static data and so doesn't need deletion. Dynamic data should be sent through a different mechanism (e.g., via the Tcl interpreter or through calling functions via the ABI).
Exit handlers (which can be registered at multiple levels) are things that you use when you have to delete some resource at an appropriate time. The typical points of interest are when an interpreter (a Tcl_Interp structure) is being deleted, when a thread is being deleted, and when the whole process is going away. What resources need to be specially deleted? Well, it's usually obvious: file handles, database handles, that sort of thing. It's awkward to answer in general as the details matter very much: ask a more specific question to get tailored advice.
However, package unload callbacks are only called in response to the unload command. Like package load callbacks, they use “special function symbol” registration, and if they are absent then the unload command will refuse to unload the package. Most packages do not use them. The use case is where there are very long-lived processes that need to have extra upgradeable functionality added to them.
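As a concrete illustration of the exit-handler registration discussed above, the "flush or close a resource at shutdown" case might look like the following sketch (the database handle and its close function are hypothetical):

    #include <tcl.h>

    extern void MyDbClose(void *dbHandle);    /* hypothetical resource */

    /* Runs when Tcl_Exit / Tcl_Finalize tears the process down. */
    static void MyExtAtExit(ClientData clientData)
    {
        MyDbClose(clientData);
    }

    void MyExtRegisterCleanup(void *dbHandle)
    {
        /* Process-wide cleanup; use Tcl_CreateThreadExitHandler instead
           for resources that belong to a single thread. */
        Tcl_CreateExitHandler(MyExtAtExit, dbHandle);
    }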