I have an application that uses the Google Maps JavaScript API v3 and a UIWebView to display a map onscreen. While on this map I can use the app to collect multiple points of GPS data to represent a line.
After collecting 1460-1480 points the app quits unexpectedly (pinch zooming on the map makes the app quit before the 1400+ threshold is reached). It appears to be a memory issue (my app is the blue wedge of the pie chart).
The map screen does receive multiple memory warnings, which are handled in an overridden DidReceiveMemoryWarning method on this screen. That method already contained code that calls NSUrlCache.SharedCache.RemoveAllCachedResponses.
public override void DidReceiveMemoryWarning()
{
    // BEFORE - capture cache statistics before flushing
    uint diskUsage = NSUrlCache.SharedCache.CurrentDiskUsage;
    uint memUsage = NSUrlCache.SharedCache.CurrentMemoryUsage;
    int points = _currentEntityManager.GeometryPointCount;
    Console.WriteLine(string.Format("BEFORE - diskUsage = {0}, memUsage = {1}, points = {2}", diskUsage, memUsage, points));

    NSUrlCache.SharedCache.RemoveAllCachedResponses();

    // AFTER - capture cache statistics after flushing
    diskUsage = NSUrlCache.SharedCache.CurrentDiskUsage;
    memUsage = NSUrlCache.SharedCache.CurrentMemoryUsage;
    points = _currentEntityManager.GeometryPointCount;
    Console.WriteLine(string.Format("AFTER - diskUsage = {0}, memUsage = {1}, points = {2}", diskUsage, memUsage, points));

    base.DidReceiveMemoryWarning();
}
I added the BEFORE and AFTER sections so I could track cache contents before and after RemoveAllCachedResponses is called.
The shared cache is configured when the application starts (prior to my working on this issue it was not being configured at all).
uint cacheSizeMemory = 1024 * 1024 * 4;  // 4 MB in-memory cache
uint cacheSizeDisk = 1024 * 1024 * 32;   // 32 MB on-disk cache
NSUrlCache sharedCache = new NSUrlCache(cacheSizeMemory, cacheSizeDisk, "");
NSUrlCache.SharedCache = sharedCache;
When we're on this screen collecting point data and we receive a low memory warning, RemoveAllCachedResponses is called and the Before/After statistics are printed to the console. Here are the numbers for the first low memory warning we receive.
BEFORE - diskUsage = 2258864, memUsage = 605032, points = 1174
AFTER - diskUsage = 1531904, memUsage = 0, points = 1174
Which is what I would expect to happen: flushing the cache reduces disk and memory usage (though I would expect the disk usage number to also go to zero).
All subsequent calls to RemoveAllCachedResponses display these statistics (this Before/After is immediately prior to the app crashing).
BEFORE - diskUsage = 1531904, memUsage = 0, points = 1471
AFTER - diskUsage = 1531904, memUsage = 0, points = 1471
This leads me to believe one of two things: 1. RemoveAllCachedResponses is not working (unlikely), or 2. there's something in the disk cache that can't be removed because it's currently in use, something like the current set of map tiles.
Regarding #2, I'd like to believe this explanation, figuring the reduction in disk usage on the first call represents a set of tiles that were no longer being used after a pinch zoom in. But no pinch zooming or panning at all was done on this map, i.e. only one initial set of tiles should have been downloaded and cached.
Also, we are loading the Google Maps JavaScript API file as local HTML, so it could be that this file is what remains resident in the cache. But the file is only 18,192 bytes, which doesn't jibe with the 1,531,904 bytes remaining in the disk cache.
I should also mention that the Android version of this app (written with Xamarin.Android) has no such memory issue on its map screen: it is possible to collect 5500+ points without incident.
So why does the disk cache not go to zero when cleared?
Thanks in advance.
I am confused about this too. You can take a look at the question I asked:
removeAllCachedResponses can not clear sharedURLCache?
One answer there said that "it has some additional overhead in the cache and it's not related to real network data."
I am using the LibTiff.NET library to load GeoTiff data in C# (inside Unity).
NOTE: I looked at GDAL as well, but faced similar issues to those outlined below, and would much prefer to use LibTiff if possible.
I would ultimately like to be able to take a lat/long value and have a function that returns a chunk of pixel data for a 50m area around that point, streamed from a GeoTiff image on disk (not storing the whole image in RAM).
I have a test file that is representative of what my software will be given in production.
I am trying to figure out how to read or compute the lat/long extents of the test file image, as I can't find a good tutorial or sample online which contains this functionality.
I can read the width and height of the file from the TiffTags, but many other values that seem critical for computing the extents, such as the X and Y resolutions, are not present.
It also appears like the lat/long extents (or a bounding box) are not present in the tags.
At this point I am led to believe there may be more tags or header data that I am not familiar with, because when I load the test file into Caris EasyView I can see a number of properties that I would like to read or compute from the file.
Is it possible to obtain this data using LibTiff?
Or is there a better system I should use? (wrapped GDAL maybe?)
NOTE: I cannot link the test file due to the NDA, plus it's enormous.
This is for a 32-bit GeoTiff:
int width = tiff.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
int height = tiff.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
int samplesPerPixel = tiff.GetField(TiffTag.SAMPLESPERPIXEL)[0].ToInt();
int bitsPerSample = tiff.GetField(TiffTag.BITSPERSAMPLE)[0].ToInt();
int bytesPerSample = bitsPerSample / 8;

byte[] scanline = new byte[tiff.ScanlineSize()];
float[] scanline32Bit = new float[tiff.ScanlineSize() / 4]; // 4 bytes per 32-bit sample

// ModelTiePoint holds six doubles (i, j, k, x, y, z): raster point (i, j, k)
// maps to model-space point (x, y, z). Index [1] of the field holds the raw bytes.
FieldValue[] modelTiePointTag = tiff.GetField(TiffTag.GEOTIFF_MODELTIEPOINTTAG);
byte[] modelTiePoint = modelTiePointTag[1].GetBytes();
double originLon = BitConverter.ToDouble(modelTiePoint, 24); // x of the tie point (offset 3 * 8 bytes)
double originLat = BitConverter.ToDouble(modelTiePoint, 32); // y of the tie point (offset 4 * 8 bytes)

// ModelPixelScale holds three doubles (scaleX, scaleY, scaleZ): model units (here degrees) per pixel.
FieldValue[] modelPixelScaleTag = tiff.GetField(TiffTag.GEOTIFF_MODELPIXELSCALETAG);
byte[] modelPixelScale = modelPixelScaleTag[1].GetBytes();
double pixPerLong = BitConverter.ToDouble(modelPixelScale, 0);      // degrees of longitude per pixel
double pixPerLat = BitConverter.ToDouble(modelPixelScale, 8) * -1; // degrees of latitude per pixel, negated (latitude decreases as rows increase)
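From these values the lat/long extents follow directly: combine the tie-point origin with the pixel scale and the image dimensions. Below is a minimal sketch (my own addition, not part of the code above) that assumes the tie point anchors pixel (0, 0), the raster is north-up (no ModelTransformation tag), and the model units are degrees; the middle-row read is just a hypothetical example of streaming one scanline at a time:
// Extents of the image (assumes tie point at pixel (0,0), north-up, units in degrees).
double west = originLon;
double east = originLon + width * pixPerLong;
double north = originLat;
double south = originLat + height * pixPerLat; // pixPerLat is negative, so south < north

// Streaming a single row keeps memory use low: ReadScanline fills the byte
// buffer for one row, which can then be reinterpreted as 32-bit floats.
int row = height / 2; // hypothetical: read the middle row
tiff.ReadScanline(scanline, row);
Buffer.BlockCopy(scanline, 0, scanline32Bit, 0, scanline.Length);
Mapping a lat/long back to pixel indices is then just the inverse: column = (lon - originLon) / pixPerLong and row = (lat - originLat) / pixPerLat, which is what you'd use to locate the 50m window around a point.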
Take this bit of a method:
int CMeetingScheduleAssistantApp::DoMessageBox(LPCTSTR lpszPrompt, UINT nType, UINT nIDPrompt)
{
    CString strContent = CString(lpszPrompt);
    CString strTitle = CString();

    if (!CTaskDialog::IsSupported())
        return CWinAppEx::DoMessageBox(lpszPrompt, nType, nIDPrompt);

    ENSURE(strTitle.LoadString(AFX_IDS_APP_TITLE));
    CTaskDialog dlgTaskMessageBox(strContent, _T(""), strTitle);

    // =================================
    // Can this be calculated just once?
    HDC screen = GetDC(nullptr);
    auto hSize = static_cast<double>(GetDeviceCaps(screen, HORZSIZE));
    auto hRes = static_cast<double>(GetDeviceCaps(screen, HORZRES));
    auto PixelsPerMM = hRes / hSize; // pixels per millimeter
    auto MaxPixelWidth = PixelsPerMM * 150.0;
    auto PixelWidth = (hRes / 100.0) * 30.0;
    // =================================

    int iDialogUnitsWidth = MulDiv(
        min(static_cast<int>(PixelWidth), static_cast<int>(MaxPixelWidth)), 4, LOWORD(GetDialogBaseUnits()));
    dlgTaskMessageBox.SetDialogWidth(iDialogUnitsWidth);

    // Code snipped
}
Is it possible to adjust this function so that it calculates the MaxPixelWidth value only once, without the need to add other variables to my class?
My objective is to allow multiple calls to DoMessageBox and only calculate the max width once.
That can easily be done by marking the function-local value static. Static variables are initialized at most once. Since you need to initialize the variable to the result of executing code, another powerful feature of C++ is required: immediately invoked lambda expressions.
Replacing the code between the // ================================= comments with the following accomplishes what's being asked for.
static auto const MaxPixelWidth = []() {
    HDC screen = GetDC(nullptr);
    auto hSize = static_cast<double>(GetDeviceCaps(screen, HORZSIZE));
    auto hRes = static_cast<double>(GetDeviceCaps(screen, HORZRES));
    auto PixelsPerMM = hRes / hSize; // pixels per millimeter
    ReleaseDC(nullptr, screen); // ReleaseDC takes the window handle and the DC
    return PixelsPerMM * 150.0;
}();
This ensures that MaxPixelWidth is initialized at most once, is not accessible from outside the function, and can be made const to prevent accidental changes to it. Note that the PixelWidth calculation, which also sits between the comments, still needs to stay in the function body; only MaxPixelWidth is hoisted into the one-time initializer.
Live demo to illustrate the concept.
A few things to keep in mind to help you understand the consequences, and make a judicious decision on whether you want to use the above:
Using function-local static variables isn't free. If you look at Compiler Explorer's disassembly you'll observe two details:
The compiler allocates a hidden flag that stores whether the static has been initialized.
The hidden flag is updated under a lock to prevent concurrent threads from initializing the static multiple times.
Prior to any access of the static variable, the compiler emits code that checks the initialization flag. Both Clang and GCC appear to use double-checked locking to avoid taking a lock (which has non-trivial cost) on every access, so the space and time overhead is negligible in this case. I tried to understand MSVC's generated code as well, but couldn't make much sense of it; access to the static might be more expensive there if it always takes a lock.
Bottom line: Using so-called magic statics isn't strictly a zero-cost abstraction. You're always paying for the flag and a thread-safe implementation, even if neither is strictly required in your use case.
I'm building a simple application using portaudio, but there is something I can't quite explain when it comes to ASIO drivers and the behavior of Pa_GetDeviceInfo() and PaAsio_GetAvailableBufferSizes().
Let's say I use an external application to record audio from my ASIO device at 48 kHz.
At this point, if I run the "pa_devs" example that ships with portaudio, I get the following:
[ Default ASIO Input, Default ASIO Output ]
Name = xxx
Host API = ASIO
Max inputs = 8, Max outputs = 8
Default low input latency = 0.0196
Default low output latency = 0.0196
Default high input latency = 0.0196
Default high output latency = 0.0196
ASIO minimum buffer size = 864
ASIO maximum buffer size = 864
ASIO preferred buffer size = 864
ASIO buffer granularity = 0
Default sample rate = 44100.00
Now if I make a second recording at 192 kHz, and subsequently run "pa_devs" again, I get this:
[ Default ASIO Input, Default ASIO Output ]
Name = xxx
Host API = ASIO
Max inputs = 8, Max outputs = 8
Default low input latency = 0.0784
Default low output latency = 0.0784
Default high input latency = 0.0784
Default high output latency = 0.0784
ASIO minimum buffer size = 3456
ASIO maximum buffer size = 3456
ASIO preferred buffer size = 3456
ASIO buffer granularity = 0
Default sample rate = 44100.00
So what seems to be happening is that the buffer size is automatically adjusted based on the sample rate, to maintain a fixed latency in absolute terms:
864/48000 = 3456/192000 = 18 ms
Now here is the real question. As opposed to "pa_devs", my own application doesn't start and stop between recordings, but is constantly running on the side.
However, if between recordings my app calls Pa_GetDeviceInfo() and PaAsio_GetAvailableBufferSizes(), I invariably get the same latencies and the same ASIO buffer sizes. Basically it doesn't seem to see that the external recording application has changed the sample rate between successive calls. Any ideas what could be causing this?
[edit]: Using pyaudio builds from here (with ASIO support) as a different observation mechanism, I find that executing pyaudio.PyAudio().get_device_info_by_index(N) also invariably returns the same answer, even though the sample rate has been changed (which I can also monitor in the ASIO control panel).
As yet another observation mechanism, I have an Audio Precision box. In its ASIO connector settings, it instantly reflects any change of sample rate or buffer size as soon as recording starts in the external application. How come pyaudio and my own app do not get notified of these changes too?
At least now I think I understand the root cause of the problem. I'm not too sure whether it's a bug or design intent.
When device parameters are changed externally (e.g. the current sample rate or the default sample rate), portaudio won't reflect these changes (in sample rate, but also in buffer sizes and latencies) until every previous call to Pa_Initialize() has been closed by a matching Pa_Terminate() and Pa_Initialize() is eventually called again.
So if an app calls Pa_Initialize() once at startup, it will get the same replies from all subsequent calls to Pa_GetDeviceInfo() and PaAsio_GetAvailableBufferSizes(), even if device properties are changed externally in the meantime.
Rather, it seems an app should call Pa_Initialize() and Pa_Terminate() in pairs every time it wants to do anything, be it as simple as Pa_GetDeviceInfo(). This raises further questions:
What about pyaudio? It exposes terminate() but doesn't expose initialize(). I haven't found a way to force an update of device properties; even reload(pyaudio) doesn't do the trick.
What about apps that run concurrently and load the same shared portaudio library? If they make interleaved calls to Pa_Initialize() and Pa_Terminate(), they presumably won't get correct values?
Does it make sense to set batchSize = 1 in case I would like to process files one at a time?
I tried batchSize = 1000 and batchSize = 1; both seem to have the same effect.
{
  "version": "2.0",
  "functionTimeout": "00:15:00",
  "aggregator": {
    "batchSize": 1,
    "flushTimeout": "00:00:30"
  }
}
Edit:
I added this to the app settings:
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT = 1
The function is still triggered simultaneously (it uses a blob trigger; two more files were uploaded).
From https://github.com/Azure/azure-functions-host/wiki/Configuration-Settings
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT = 1
Set a maximum number of instances that a function app can scale to. This limit is not yet fully supported - it does work to limit your scale out, but there are some cases where it might not be completely foolproof. We're working on improving this.
I think I can close this issue. There is no easy way to get one-message-at-a-time behavior across multiple function app instances.
I think you misunderstand the meaning of batchSize in the aggregator. That batchSize is the maximum number of requests to aggregate; you can check here. The aggregator configures how the runtime aggregates data about function executions over a period of time.
From your description, what you want is similar to the Azure Queue batchSize. That setting controls the number of queue messages the Functions runtime retrieves simultaneously and processes in parallel. If you want to avoid parallel execution for messages received on one queue, you can set batchSize to 1 (this means one message at a time).
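For reference, here is a sketch (my own illustration, not from your host.json) of what that could look like on the 2.x runtime, where the queue settings live under extensions; newBatchThreshold = 0 keeps the runtime from fetching a new batch while a message is still running:
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
  }
}
Note this only serializes processing within a single instance; scale-out across instances still has to be limited separately (e.g. with WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT as above).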
I am going to save fairly big amounts of data in my WP8 app using the handy IsolatedStorageSettings dictionary. However, the first question that arises is: how big can it get?
Second, in the documentation for the IsolatedStorageSettings.Save method we can find this:
If more space is required, use the IsolatedStorageFile.IncreaseQuotaTo
method to request more storage space from the host.
Can we estimate the amount of required memory and increase the room for IsolatedStorageSettings accordingly? What if we need to do that dynamically, as the user enters new portions of data to store persistently? Or maybe we need to use another technique for that (though I would like to stay with the handy IsolatedStorageSettings class)?
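For the "increase the room" part, I imagine something like the following sketch (my own illustration of the documented API; the 5 MB figure is a made-up estimate, and as far as I understand, in Silverlight IncreaseQuotaTo must be called from a user-initiated event handler and returns false if the user declines):
using System.IO.IsolatedStorage;

// Sketch: request more quota before a large save (hypothetical sizes).
using (var store = IsolatedStorageFile.GetUserStoreForApplication())
{
    long required = 5 * 1024 * 1024; // hypothetical estimate: 5 MB of new data
    if (store.AvailableFreeSpace < required)
    {
        // Must run in a user-initiated event (e.g. a button click handler).
        if (!store.IncreaseQuotaTo(store.Quota + required))
        {
            // The user declined the request; fall back or save less data.
        }
    }
}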
I have found the answer to the first part of my question in this article: How to find out the Space in isolated storage in Windows Phone?. Here is the code, with some enhancements, to get the required values on a particular device:
long availableSpace, quota;
using (var store = IsolatedStorageFile.GetUserStoreForApplication())
{
    availableSpace = store.AvailableFreeSpace;
    quota = store.Quota;
}
MessageBox.Show("Available : " + availableSpace.ToString("##,#") + "\nQuota : " + quota.ToString("##,#"));
The 512 MB WP8 emulator gave me the values for a minimal app with a few strings saved in IsolatedStorageSettings.
A Lumia 920 reports an even bigger value, about 20 GB, which gladdens my heart. Such a big value (which, I think, depends on the total memory available on the device) will allow me to use the IsolatedStorageSettings object for huge amounts of data.
As for a method to estimate the amount of stored data, I guess this can only be done experimentally. For instance, when I added some strings to my IsolatedStorageSettings, the available space was reduced by 4 KB. However, adding the same portion of data again did not show any new memory allocation. As far as I can see, space is allocated in blocks of 4 KB.