Windows Phone - Background Task - Broken at DataContractJsonSerializer.WriteObject - json

I am using a Background Task in Windows Phone Mango. I need to send data to the server in JSON format, but when the DataContractJsonSerializer.WriteObject function is executed, nothing happens afterwards.
Has anyone experienced the same with a Background Task in Windows Phone Mango?

It is possible that the operation is taking your app over the 6MB memory limit, and the phone is killing it.
You can run with the debugger attached: http://msdn.microsoft.com/en-us/library/microsoft.phone.scheduler.scheduledactionservice.launchfortest(v=vs.92).aspx
This will let you see what is happening. Also consider logging the amount of memory your app is using to see if you are approaching the limit: http://msdn.microsoft.com/en-us/library/microsoft.phone.info.devicestatus(v=vs.92).aspx

Be careful calling any kind of serialization library (or any other library, for that matter), as it can very quickly bump your memory usage over the 6MB limit, which will silently kill your agent with no errors.
Also note that on a real device your agent will typically start with 4-4.5 MB already used, significantly higher than on the emulator. That means all your code and the libraries it calls need to fit in less than 1.5 MB in a worst-case scenario.

Related

My Periodic Task just does not work

While attached to the debugger it runs just fine: the Periodic Task is invoked and runs over and over. But when I deploy it to my device, it seems to run 1-2 times and then stops.
What it does is set the live tile background image from isolated storage. The images are created in the application and then saved to isolated storage. As mentioned, it works well while attached to the debugger.
The only constraint I could think of that could break it is the memory cap. The application creates and saves 40 images of ~25 kB each, and that isn't even 1 MB! The application itself is maybe <4 MB, so that is 5 MB in total... a lot less than the 11 MB minimum requirement.
So it can't be the memory cap kicking in. Two consecutive unhandled crashes should also break the task, but I've wrapped all the code in the task's OnInvoke() in a try/catch.
Now I'm out of ideas as to what is stopping my periodic task when it runs without being attached to the Visual Studio debugger. Any clues?
First of all, are you using a Windows Phone 8.1 device by any chance? There is a known issue where periodic tasks do not run on Windows Phone 8.1 devices, as you can see on this forum.
A background agent can't use more than 6MB of memory. You can get the remaining available memory using the following snippet:
var memory = DeviceStatus.ApplicationMemoryUsageLimit
             - DeviceStatus.ApplicationCurrentMemoryUsage;
The other constraints that apply to periodic tasks are:
- they are automatically executed by the OS every 30 minutes
- each run can't exceed 25 seconds
- if the phone switches to battery saver mode, the background agent may not be executed
- on some devices, only 6 background agents may be scheduled simultaneously
- agents can't use more than 6MB of memory
- agents have to be rescheduled every 2 weeks
- an agent that crashes twice is automatically disabled by the system
Periodic tasks are unscheduled after two consecutive crashes. You need to make sure that this doesn't happen (check internet connectivity if required, set a timeout on web requests, etc.).
You should place your code in a try/catch block and log exceptions to isolated storage so you can see afterwards what happened.
Here is the list of constraints that apply on scheduled agents (MSDN): Constraints for all Scheduled Task Types
Here is also a series of blog posts that could help you: Windows Phone: Background Agents Pitfalls
Have you actually measured and logged the memory that's being used? What you're saying isn't quite correct:
When the background agent starts it has already taken 5-6MB to load what it needs from the .NET framework.
If you mean that the compressed files are 25KB each, you should know that the images in memory are not compressed (at least not that much).
There are two things you can try:
Use this property and check the peak memory usage: DeviceStatus.ApplicationPeakMemoryUsage. Write it to some file (maybe every 5 images or so) and check if it's okay. Paste the results, please.
Note: When testing the memory usage, it's best to build the app in "Release" and run it without debugging on a device. That's most accurate. There are some minor variations, so you should run the agent several times to be sure it's working within the limits. You can force start it from the app using ScheduledActionService.LaunchForTest.
Also, I'd suggest you subscribe to the Application.Current.UnhandledException event and mark all exceptions as handled (and log them, so that you can fix them). That's for extra safety.
P.S. When the background agent stops executing, is it "blocked" in the list of background tasks on the device?

Knowing the resources taken by libgdx

I've made a game using libgdx. When I launch it, it takes ~30 MB of RAM, but the memory usage keeps growing over time, even though all of the textures are already loaded.
Is there a way to know the resources used by libgdx?
Yes there is. Start jvisualvm.exe from your Java SDK (Program Files/Java/jdk_x_x/bin) and connect to your running application. It shows you the real RAM usage, the created classes and so on (see the Monitor tab).
Moreover, you can profile with it to check whether there are performance issues, and it can also track RAM usage over time. Check out the Sampler tab for profiling: simply start the sampler, play a bit, and shut down the game. After that it asks whether you would like a snapshot of the current status, or you can just stop sampling and check the data. Use it to see which of your objects take the most RAM.
Otherwise, log your asset loading and check whether you might be making mistakes there.

Why does Flash Builder 4.6 Profiler seem to leak Strings, whereas Debug mode GC's as expected

While unit profiling my classes I noticed that the String class endlessly accumulates (eating up over 90% of the memory in my sizable app). Luckily this is only while running in Profiler mode of Flash Builder 4.6. In debug or deployment (as AIR), memory usage levels off as expected according to an embedded on-screen memory profiler (Mr Doob's Stats).
To verify, I made a test app that was simply a URLLoader continuously loading a text file. When running in Profiler mode using URLLoaderDataFormat.String, the String data is never GC'd and grows continuously, whereas using URLLoaderDataFormat.BINARY the data is almost immediately GC'd and stays level.
I hesitate to call this a bug, because it may be a necessary part of the way the Profiler works… but perhaps this is abnormal for the Profiler? This is the essence of my StackOverflow inquiry.
At any rate, this burned up a couple of work-days for me, so if you're Googling and wondering why the String class is growing like crazy and never getting GC'd, consider measuring your app's memory usage outside the Profiler to verify. In my case I was misled into thinking I had run into some problem with Master Strings; while it's good to understand Master Strings and their impact on memory (see:), don't get misled like I did.

Run my application in a simulated low memory, slow CPU environment

I want to stress-test my application this way, because it seems to be failing in some very old client machines.
At first I read a bit about QEmu and thought about hardware emulation, but it seems a long shot. I asked at superuser, but didn't get much feedback (yet).
So I'm turning to you guys... How do you do this kind of testing?
I'm not sure about slowing down a CPU, but if you use a virtual machine like VMware, you can control how much RAM is actually used. I run it on an MBP at home with 8GB, and my WinXP VM is capped at 1.5 GB of RAM.
EDIT: I just checked my version of VMware, and I can control the number of cores it can use. It's definitely not the same as a slower CPU, but it might highlight some issues for you.
Since it's not entirely clear whether your app is failing because of the old hardware or the old OS, a VM client should allow you to test various versions of OSes rather quickly. It came in handy for me a few years back when I was trying to get a .NET 2.0 app to run on Win98 (it can be done, though I don't remember how I got it working...).
VirtualBox is a free virtual machine similar to VMware. It also has the ability to reduce available memory. It can restrict how many CPUs are available, but not how fast those CPUs are.
Try cpulimit; most distros include it (Ubuntu does): http://www.digipedia.pl/man/doc/view/cpulimit.1
If you want to lower the effective speed of your CPU, you can do this by modifying a fork bomb program:
#include <unistd.h>   /* fork() */

int main(){
    int x = 0;
    int limit = 10;            /* number of busy children to spawn */
    while( x < limit ){
        pid_t pid = fork();
        if( pid == 0 )
            while( 1 ){}       /* child: spin forever, keeping one core busy */
        else
            x++;               /* parent: go spawn the next child */
    }
    return 0;
}
This will slow down your computer quite quickly; you may want to change the limit variable to a higher number. I must warn you, though, that this can be dangerous: if implemented incorrectly you could fork-bomb your system, leaving it unusable until you restart it. Read this first if you don't understand what this code will do.
On POSIX (Unix) systems you can apply run limits to processes (that is, to executions of a program). The system call to do this is called setrlimit(), and most shells enable you to use the ulimit built-in to set them from the command-line (plain POSIX ulimit is not very useful). Using these you can run a program with low limits to simulate a smaller computer.
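As a small sketch of that approach (the 256 MB figure and the launcher itself are just an illustration, not a standard tool), you can apply the limit in a tiny C++ wrapper and then exec the program under test, since rlimits are inherited across exec:
#include <sys/resource.h>
#include <unistd.h>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s program [args...]\n", argv[0]); return 1; }

    struct rlimit rl;
    rl.rlim_cur = rl.rlim_max = 256UL * 1024 * 1024;    // cap the address space at 256 MB
    if (setrlimit(RLIMIT_AS, &rl) != 0) { std::perror("setrlimit"); return 1; }

    execvp(argv[1], &argv[1]);                          // run the real program under the limit
    std::perror("execvp");
    return 1;
}
From a bash shell, ulimit -v 262144 && ./yourapp should achieve roughly the same thing (ulimit -v takes kilobytes).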
POSIX systems also provide the nice command for running a program at a lower CPU priority, which can simulate a slower CPU if you also ensure there is another CPU-intensive program running at the same time.
I think it's pretty unlikely that CPU speed alone is going to exercise very many bugs; on the other hand, it's much more likely for different CPU features to matter. Many VM implementations provide ways of toggling certain CPU features on and off; qemu in particular permits a high level of control over what's available to the CPU.
Think outside the box. Which application of the ones you use regularly does this?
A debugger, of course! But how can you achieve such behavior and emulate a slow CPU?
The secret to your question is asm int 3. This is the assembly "pause me" instruction that is sent from the attached debugger to the application you are debugging.
More about int 3 in this question.
You can use the code from this tool to pause/resume your process continuously. You can add an interval and make that tool pause your application for that amount of time.
The emulated CPU speed would be roughly (YourCPU/Interval), minus a tiny fraction lost to the signalling and other processes running on your machine, but it should do the trick.
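For what it's worth, here is a rough POSIX sketch of the same pause/resume idea, using SIGSTOP/SIGCONT instead of a debugger's int 3 (the 100 ms period and the command-line interface are arbitrary choices of mine, not the tool mentioned above):
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc < 3) return 1;                      // usage: throttle <pid> <percent>
    pid_t pid = (pid_t)std::atoi(argv[1]);
    int percent = std::atoi(argv[2]);            // e.g. 25 => target roughly 25% CPU
    const useconds_t period = 100000;            // 100 ms control period

    for (;;) {
        kill(pid, SIGCONT);                      // let the target run...
        usleep(period * percent / 100);
        kill(pid, SIGSTOP);                      // ...then freeze it for the rest of the period
        usleep(period * (100 - percent) / 100);
        // Note: if you Ctrl+C this tool while the target is stopped,
        // send it SIGCONT manually to wake it up again.
    }
}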
About the low memory emulation:
You can create a wrapper class that allocates memory for the application and replace each allocation with a call to this class. You would then be able to set exactly the amount of memory your application can use before it fails to allocate more memory.
Something such as: MyClass* foo = AllocWrapper(new MyClass(arguments or whatever));
Then you can have the AllocWrapper allocating/deallocating memory for you.
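A minimal sketch of what such a wrapper could look like (the names, the byte budget and the sizeof-based bookkeeping are illustrative assumptions, not an existing library):
#include <cstddef>
#include <new>

static std::size_t g_used   = 0;
static std::size_t g_budget = 32 * 1024 * 1024;   // pretend only 32 MB are available

template <typename T>
T* AllocWrapper(T* p) {
    // Count the freshly allocated object against the simulated budget;
    // give it back and fail if we are "out of memory".
    if (g_used + sizeof(T) > g_budget) {
        delete p;
        throw std::bad_alloc();
    }
    g_used += sizeof(T);
    return p;
}

template <typename T>
void FreeWrapper(T* p) {
    g_used -= sizeof(T);
    delete p;
}

// Usage, as in the answer above:
//   MyClass* foo = AllocWrapper(new MyClass(/*args*/));
//   ...
//   FreeWrapper(foo);
Note that sizeof(T) only counts the object itself, so anything the constructor allocates internally would need to go through the wrapper as well.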
On Linux, you can use ulimit as Raedwald said. On Windows, you can use the SetProcessWorkingSetSize system call. But these only set a limit on a per process basis. In reality, parts of the system will start to fail in a stressed environment. I would suggest using the Sysinternals' testlimit tool to stress the entire machine.
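For reference, a rough sketch of the Windows call mentioned above (the 16 MB / 64 MB figures are arbitrary examples; the cap only constrains the working set, so allocations beyond it tend to get paged out rather than refused):
#include <windows.h>

int main() {
    SIZE_T minWs = 16 * 1024 * 1024;   // minimum working set: 16 MB
    SIZE_T maxWs = 64 * 1024 * 1024;   // maximum working set: 64 MB
    if (!SetProcessWorkingSetSize(GetCurrentProcess(), minWs, maxWs)) {
        return 1;                      // GetLastError() tells you why it failed
    }
    // ... start the memory-hungry part of the application here ...
    return 0;
}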
See https://serverfault.com/questions/36309/throttle-down-cpu-speed-of-vmware-image
where it is claimed that the free-as-in-beer VMware vSphere Hypervisor™ (ESXi) allows you to select the virtual CPU speed, on top of setting the memory size of the virtual machine.

CUDA apps time out & fail after several seconds - how to work around this?

I've noticed that CUDA applications tend to have a rough maximum run-time of 5-15 seconds before they fail and exit out. I realize it's ideal not to have a CUDA application run that long, but assuming that CUDA is the correct choice and that, due to the amount of sequential work per thread, it must run that long, is there any way to extend this amount of time or to get around it?
I'm not a CUDA expert; I've been developing with the AMD Stream SDK, which AFAIK is roughly comparable.
You can disable the Windows watchdog timer, but that is strongly discouraged, for reasons that should be obvious.
To disable it, use regedit to navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Watchdog\Display, create a REG_DWORD named DisableBugCheck, and set it to 1.
You may also need to do something in the NVidia control panel. Look for some reference to "VPU Recovery" in the CUDA docs.
Ideally, you should be able to break your kernel operations up into multiple passes over your data to break it up into operations that run in the time limit.
Alternatively, you can divide the problem domain up so that it's computing fewer output pixels per command. I.e., instead of computing 1,000,000 output pixels in one fell swoop, issue 10 commands to the gpu to compute 100,000 each.
The basic unit that has to fit within the time slice is not your entire application, but the execution of a single command buffer. In the AMD Stream SDK, a long sequence of operations can be broken up into multiple time slices by explicitly flushing the command queue with a CtxFlush() call. Perhaps CUDA has something similar?
You should not have to read all of your data back and forth across the PCIX bus on every time slice; you can leave your textures, etc. in gpu local memory; you just have some command buffers complete occasionally, to prove to the OS that you're not stuck in an infinite loop.
Finally, GPUs are fast, so if your application is not able to do useful work in that 5 or 10 seconds, I'd take that as a sign that something is wrong.
[EDIT Mar 2010 to update:] (outdated again, see the updates below for the most recent information) The registry key above is out-of-date. I think that was the key for Windows XP 64-bit. There are new registry keys for Vista and Windows 7. You can find them here: http://www.microsoft.com/whdc/device/display/wddm_timeout.mspx
or here: http://msdn.microsoft.com/en-us/library/ee817001.aspx
[EDIT Apr 2015 to update:] This is getting really out of date. The easiest way to disable TDR for Cuda programming, assuming you have the NVIDIA Nsight tools installed, is to open the Nsight Monitor, click on "Nsight Monitor options", and under "General" set "WDDM TDR enabled" to false. This will change the registry setting for you. Close and reboot. Any change to the TDR registry setting won't take effect until you reboot.
[EDIT August 2018 to update:]
Although the NVIDIA tools allow disabling the TDR now, the same question is relevant for AMD/OpenCL developers. For those: The current link that documents the TDR settings is at https://learn.microsoft.com/en-us/windows-hardware/drivers/display/tdr-registry-keys
On Windows, the graphics driver has a watchdog timer that kills any shader programs that run for more than 5 seconds. Note that the Xorg/XFree86 drivers don't do this, so one possible workaround is to run the CUDA apps on Linux.
AFAIK it is not possible to disable the watchdog timer on Windows. The only way to get around this on Windows is to use a second card that has no displayed screens on it. It doesn't have to be a Tesla but it must have no active screens.
Resolve Timeout Detection and Recovery - WINDOWS 7 (32/64 bit)
Create a registry key in Windows to change the TDR settings to a higher amount, so that Windows will allow a longer delay before the TDR process starts.
Open regedit from Run or a command prompt.
In Windows 7, navigate to the correct registry area to create the new key: HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Control > GraphicsDrivers.
There will probably already be one key in there called DxgKrnlVersion, as a DWORD.
Right-click and create a new REG_DWORD value, and name it TdrDelay. The value assigned to it is the number of seconds before TDR kicks in; it is currently 2 by default in Windows (even though the registry value doesn't exist until you create it). Assign it a new value (I tried 4 seconds), which doubles the time before TDR. Then restart the PC; the value won't take effect until you do.
Source: Win7 TDR (Driver Timeout Detection & Recovery)
I have also verified this and it works fine.
The most basic solution is to pick a point in the calculation, some percentage of the way through, that I am sure the GPU I am working with can complete in time, save all the state information there and stop, then start again from that point.
Update:
For Linux: Exiting X will allow you to run CUDA applications as long as you want. No Tesla required (A 9600 was used in testing this)
One thing to note, however, is that if X is never entered, the drivers probably won't be loaded, and it won't work.
It also seems that on Linux, simply not having any X displays up at the time will also work, so X does not need to be exited as long as you switch to a non-X full-screen terminal.
This isn't possible. The time-out is there to prevent bugs in calculations from taking up the GPU for long periods of time.
If you use a dedicated card for CUDA work, the time limit is lifted. I'm not sure if this requires a Tesla card, or if a GeForce with no monitor connected can be used.
The solution I use is:
1. Pass all information to the device.
2. Run iterative versions of the algorithm, where each iteration invokes the kernel on the memory already stored on the device.
3. Finally, transfer the memory back to the host only after all iterations have ended.
This enables control over iterations from CPU (including option to abort), without the costly device<-->host memory transfers between iterations.
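A rough sketch of that pattern in CUDA C++ (the kernel, sizes and iteration count are invented for illustration; the point is that the data stays on the device and each individual launch finishes well within the watchdog limit):
#include <cuda_runtime.h>

__global__ void step(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 0.5f + 1.0f;     // stand-in for one iteration of real work
}

int main() {
    const int n = 1 << 20;
    const int iterations = 1000;

    float* d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));       // 1. data lives on the device the whole time

    for (int it = 0; it < iterations; ++it) {       // 2. many short launches instead of one long one
        step<<<(n + 255) / 256, 256>>>(d_data, n);
        cudaDeviceSynchronize();                    // each launch completes well inside the watchdog
        // the host could check a condition here and abort early if needed
    }

    float* h_data = new float[n];
    cudaMemcpy(h_data, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);   // 3. copy back once at the end

    delete[] h_data;
    cudaFree(d_data);
    return 0;
}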
The watchdog timer only applies on GPUs with a display attached.
On Windows the timer is part of the WDDM, it is possible to modify the settings (timeout, behaviour on reaching timeout etc.) with some registry keys, see this Microsoft article for more information.
It is possible to disable this behavior in Linux. Although the "watchdog" has an obvious purpose, it may cause some very unexpected results when doing extensive computations using shaders / CUDA.
The option can be toggled in your X-configuration (likely /etc/X11/xorg.conf)
Adding: Option "Interactive" "0" to the device section of your GPU does the job.
See CUDA Visual Profiler 'Interactive' X config option? for details on the config, and ftp://download.nvidia.com/XFree86/Linux-x86/270.41.06/README/xconfigoptions.html#Interactive for a description of the parameter.