AS3: Is there a limit to the number of SharedObjects? - actionscript-3

Is there a limit to the NUMBER of SharedObject files a single application can store by default? Is there an overall size limit? All I can find information on is the size limit of an individual file, which defaults to 100 KB.
I apologize if this has been asked, I just can't find it!

As far as I know, the overall size is limited only by the user's file system at the location where shared data is stored. For example, if the user's Windows C: drive is only 40 GB and the OS plus software takes 35 GB, there is only 5 GB of space left to store various "temporary" files, including shared object data. The number of SharedObjects is not limited by the Flash or browser engine.
The size limit is not per individual file (a single file, too, can get pretty large) but per site: it applies to the domain where your SWF is hosted and is shared among all of the SWFs from that site. This indeed defaults to 100 KB per site, which is harsh, but it is remediable by asking the user to allow the site to store more.
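For illustration, here is a minimal AS3 sketch of asking for a larger quota through flush()'s minDiskSpace parameter; requesting more than the current allowance makes Flash Player show the storage dialog. The object name "saveData" and the 500 KB figure are arbitrary choices for the example.

```actionscript
import flash.net.SharedObject;
import flash.net.SharedObjectFlushStatus;
import flash.events.NetStatusEvent;

var so:SharedObject = SharedObject.getLocal("saveData"); // "saveData" is an illustrative name
so.data.lastLevel = 12;

// Ask the player to reserve ~500 KB. If the current quota is smaller,
// the storage dialog appears and flush() returns PENDING.
var status:String = so.flush(500 * 1024);

if (status == SharedObjectFlushStatus.PENDING) {
    so.addEventListener(NetStatusEvent.NET_STATUS, onFlushStatus);
}

function onFlushStatus(e:NetStatusEvent):void {
    // "SharedObject.Flush.Success" if the user granted the space,
    // "SharedObject.Flush.Failed" if they declined.
    trace(e.info.code);
    so.removeEventListener(NetStatusEvent.NET_STATUS, onFlushStatus);
}
```

If the user grants the request, subsequent flush() calls up to the new quota succeed silently.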

Related

Storing objects/variables outside of volatile memory

OVERVIEW
I don’t have a lot of experience programming, but I’m working on a hybrid mobile app using Cordova. This app is going to have a large amount of static (unchanging) data. About once every minute, some of this data will be referenced, some simple operations will be performed based on that reference, and the result will determine which object is referenced in the next iteration of the loop.
From what I understand, all that an object or variable is, is a reserved space within memory, identified by a name, which in hardware terms is synonymous with volatile storage, or RAM. Because I will be working with mobile devices, I am afraid that the massive number of objects I predict I will be working with (close to 10,000) will max out the device's memory pretty fast.
My initial thought is to store this collection of static data in local storage instead of declaring these objects within the code itself. Then I would reference that file for the data as needed with each iteration of my loop, which runs once every minute. I don’t have experience with JSON, but from what I know about it, this seems like it could be a good option.
BREAKDOWN
• I’m using TypeScript and Cordova.
• I will possibly be working with 10s of thousands of static objects.
• These objects will all be using one of a few interfaces as an outline.
• A few of these objects will be referenced for some information about once every minute.
• That information will be used to perform very simple operations.
• The Id of the object that was referenced may need to be saved permanently for future use.
• Those operations will determine what objects need to be referenced in the next iteration.
QUESTION(S)
So, my question is this: am I correct in my understanding of how objects are stored? If so, will this number of objects be enough to max out a mobile device's RAM? Is my idea of storing all the static information in something like a JSON file and then referencing the individual objects in that file as needed plausible?
Not quite correct. Modern operating systems don't always map an application's memory directly to physical RAM.
Let's say you have a phone that only has 256 MB of total RAM, but your application ends up loading 128 MB of data into memory. Does that mean only one more application can load 128 MB of memory? What about the memory the OS itself uses? The answer is that the OS will move some of the data from primary memory (RAM) into secondary storage (e.g. a solid-state drive), making room for your app and other apps to do their work as needed. If the data that was moved out of RAM is needed again, the OS can move it back in from the SSD. This is called paging, and it's one of the many pieces that make up the operating system's memory management. Most of it is done without your application code having to be aware of it.
Of course, even though the OS does a pretty great job of making memory available to your application, you still want to write memory-efficient code, especially on mobile phones.
For your specific example, your suggestion of storing the static data in local storage is a good start, but it has some drawbacks you should be aware of, and some questions you should answer (see the sketch after this list):
• Can you divide up the data so that you only load the portion you need at a time, or do you need all of it loaded anyway?
• Can you store your data in a more compressed data structure? (See, for example, Tries.)
• How frequently will you be loading the data from local storage?
• Will loading the data from local storage take too long? (For example, if your loop does a thousand iterations and each iteration loads a lot of static data from disk, it might end up being really slow.)
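To make the first question concrete, here is a minimal TypeScript sketch of splitting the static data into JSON chunk files and loading each chunk on demand with a small cache. The Item shape, the chunk-file naming, and the chunk size are all assumptions for illustration.

```typescript
// Assumed shape of one static object; the fields are invented for the sketch.
interface Item {
  id: number;
  next: number;     // id of the object to reference on the next iteration
  payload: string;
}

const CHUNK_SIZE = 500;                  // items per chunk file (a guess to tune)
const cache = new Map<number, Item[]>(); // chunk index -> already-loaded items

async function loadChunk(chunkIndex: number): Promise<Item[]> {
  const cached = cache.get(chunkIndex);
  if (cached) return cached;

  // In a Cordova app the chunks can ship as bundled assets; fetch() can read
  // app-local files served through the web view. The path is an assumption.
  const response = await fetch(`data/chunk-${chunkIndex}.json`);
  const items: Item[] = await response.json();
  cache.set(chunkIndex, items);
  return items;
}

// Look up a single object without ever holding all 10,000+ items in memory.
async function getItem(id: number): Promise<Item | undefined> {
  const chunk = await loadChunk(Math.floor(id / CHUNK_SIZE));
  return chunk.find(item => item.id === id);
}
```

Since loaded chunks stay in the cache, each chunk is read from storage at most once, no matter how many iterations reference its objects.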
Good luck!

Request unlimited increase in local storage allocation from actionscript?

I have been researching the SharedObject local storage option for graphics and player-state information in a game I am creating. It seems that if you pass a number of bytes to the flush() method, Flash will prompt the user to increase the amount of data they will allow to be stored locally, with a slider to pick an amount. In studying FarmVille, I saw they have just implemented an option to decrease load times by doing this. However, they request the amount to be unlimited. Is there a parameter in flush(), or a method on SharedObject, that lets you request a specific amount of increase, or the aforementioned unlimited amount?
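For context, the only documented knob I'm aware of is flush()'s single minDiskSpace argument (in bytes); there is no documented "unlimited" value to pass from ActionScript. A hedged sketch of the two approaches that do exist: requesting a specific size, and opening the player's Local Storage settings panel, where the user can pick a larger or unlimited allocation. The object name "gameState" and the 10 MB figure are illustrative.

```actionscript
import flash.net.SharedObject;
import flash.system.Security;
import flash.system.SecurityPanel;

var so:SharedObject = SharedObject.getLocal("gameState"); // illustrative name

// flush() takes a single minDiskSpace argument (in bytes). Requesting more
// than the current quota makes Flash Player show the permission dialog,
// but you can only ask for a concrete size, not "unlimited".
so.flush(10 * 1024 * 1024); // ask for roughly 10 MB

// Alternatively, open the player's Local Storage settings panel, where the
// user themselves can choose a larger, or unlimited, allocation.
Security.showSettings(SecurityPanel.LOCAL_STORAGE);
```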

Are 0 bytes files really 0 bytes?

I have a simple question.
When we create a file, let's say "abc.txt", and leave it blank, the OS will show that the file is 0 bytes in size and takes 0 bytes on disk.
If we save 100 of these 0-byte files into a folder, the OS will also say that the folder's total size is 0 bytes.
This may sound logical because there is nothing in the file, but shouldn't these files take up at least a few bytes on the storage device?
After all, we save them somewhere and name them something. Shouldn't the file's name, and possibly some other headers, at least take up some space?
No, they still occupy a few bytes on the file system. Otherwise I would implement a magic file system that stored everything encoded in the filenames of empty files.
It actually boils down to a matter of definition: either the "size of a file" refers to the size of the file's content, or it refers to the "difference" it makes in free bytes on the underlying file system (that is, the size of the content, rounded up to the nearest block or cluster size, plus the bytes used for its inode).
Traditionally these details are stored in what is known as the File Allocation Table (speaking in the Windows FAT context). It is created when the drive is formatted, and some predefined space is allocated for it; I don't think its size changes afterwards.
For example, when you format a 100 GB hard drive, only 90-something GB is available for you to use. The remaining space is used by the file system to track each file and folder saved on the drive and where it is stored.
The answer to this question is file-system dependent.
For example, on NTFS an empty file takes up a cluster, and the cluster size depends on the size of your partition.
Here you can read about some common cluster sizes for Windows file systems.
The fact that they are present on disk means that a record has been created for them, which of course requires some amount of storage. The 0 bytes simply corresponds to the logical size of the file, rounded to the granularity displayed in the UI; but even then, the file will likely contain a header that depends on the file format.
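To see this distinction concretely, here is a small POSIX sketch (so it speaks to inodes rather than FAT/NTFS specifics): st_size reports the logical content size of 0, st_blocks reports the disk blocks actually allocated for data, and the existence of an inode number shows that metadata is stored regardless.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* Create an empty file, mirroring the "abc.txt" example above. */
    int fd = open("abc.txt", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);

    struct stat st;
    if (stat("abc.txt", &st) != 0) { perror("stat"); return 1; }

    printf("logical size: %lld bytes\n", (long long)st.st_size);   /* 0 */
    printf("blocks allocated: %lld\n", (long long)st.st_blocks);   /* typically 0 */
    printf("inode number: %llu\n", (unsigned long long)st.st_ino); /* metadata exists */
    return 0;
}
```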

On a Windows Mobile device, does it make sense to cache data "in memory"?

I'm writing a Windows CE application, and I want to play a sound (a short wav file) when something happens. Since this sound will be played often, my first instinct was to load the wav file into a memory stream and reuse that stream instead of reading the file every time.
But then it occurred to me that these Windows Mobile devices only have one kind of memory, which is used both for data storage (i.e., the file system) and for program memory; there's even a nice slider in the control panel which you can use to divide memory between storage and program execution. So, theoretically, reading a file from the file system (or some value from a SQL Server CE database) should take (almost) the same amount of time as reading that value from some in-memory object, right?
Is this assumption correct (i.e., in-memory caching on application level doesn't make sense here) or did I miss something? For simplicity, let's assume that only the internal memory of the device is used (no memory card).
The assumption may or may not be valid. Where in storage does it reside? If it's persistent storage (like a storage card folder or anything else that remains when you hard reset) then it's backed by Flash, which is way, way slower than RAM and there will be a difference in load perf, though how much it might impact your app I can't say - only testing will tell you that.
When I want to play a short WAV file on Windows Mobile (e.g. a notification sound), I usually add it as a resource to my executable. AFAIK resources are loaded into RAM, since they are part of the executable image. You can then conveniently call PlaySound() with the SND_RESOURCE flag (probably OR'd with SND_ASYNC too, so the call doesn't block while the file is being played).
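A small sketch of that approach, assuming the WAV was added to the executable's resources under the invented identifier IDR_NOTIFY_WAV (defined to match the .rc file). On desktop Windows you would link winmm.lib; AFAIK on Windows CE PlaySound lives in coredll.

```c
#include <windows.h>
#include <mmsystem.h>

#define IDR_NOTIFY_WAV 101  /* hypothetical; must match the resource script */

void play_notification(HINSTANCE hInstance) {
    /* SND_RESOURCE: interpret the first argument as a resource identifier.
     * SND_ASYNC: return immediately instead of blocking until playback ends. */
    PlaySound(MAKEINTRESOURCE(IDR_NOTIFY_WAV), hInstance,
              SND_RESOURCE | SND_ASYNC);
}
```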

What are the advantages of memory-mapped files?

I've been researching memory mapped files for a project and would appreciate any thoughts from people who have either used them before, or decided against using them, and why?
In particular, I am concerned about the following, in order of importance:
concurrency
random access
performance
ease of use
portability
I think the advantage is really that you reduce the amount of data copying required over traditional methods of reading a file.
If your application can use the data "in place" in a memory-mapped file, it can come in without being copied; if you use a system call (e.g. Linux's pread()), that typically involves the kernel copying the data from its own buffers into user space. This extra copying not only takes time, but decreases the effectiveness of the CPU's caches by accessing an extra copy of the data.
If the data actually has to be read from the disk (as in physical I/O), the OS still has to read it in, and a page fault probably isn't any better performance-wise than a system call; but if it doesn't (i.e. the data is already in the OS cache), performance should in theory be much better.
On the downside, there's no asynchronous interface to memory-mapped files - if you attempt to access a page which isn't mapped in, it generates a page fault then makes the thread wait for the I/O.
The obvious disadvantage to memory mapped files is on a 32-bit OS - you can easily run out of address space.
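As an illustration of the "in place" access described above, here is a minimal POSIX sketch (the filename "data.bin" is just an example): the file's pages are mapped into the address space and read directly, with no read()/pread() copy into a user-space buffer.

```c
#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return 1; }

    /* Map the whole file read-only; no user-space buffer is allocated. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);  /* the mapping remains valid after the descriptor is closed */

    /* Touching a page that isn't resident triggers a page fault and the
     * kernel loads it; pages already in the page cache are reused directly. */
    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += (unsigned char)p[i];
    printf("checksum: %ld\n", sum);

    munmap(p, st.st_size);
    return 0;
}
```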
I have used a memory mapped file to implement an 'auto complete' feature while the user is typing. I have well over 1 million product part numbers stored in a single index file. The file has some typical header information but the bulk of the file is a giant array of fixed size records sorted on the key field.
At runtime the file is memory mapped, cast to a C-style struct array, and we do a binary search to find matching part numbers as the user types. Only a few memory pages of the file are actually read from disk -- whichever pages are hit during the binary search.
Concurrency - I had an implementation problem where it would sometimes memory-map the file multiple times in the same process space. This was a problem, as I recall, because sometimes the system couldn't find a large enough free block of virtual memory to map the file into. The solution was to map the file only once and thunk all calls to it through that one mapping. In retrospect, using a full-blown Windows service would have been cool.
Random Access - The binary search is certainly random access and lightning fast
Performance - The lookup is extremely fast. As users type a popup window displays a list of matching product part numbers, the list shrinks as they continue to type. There is no noticeable lag while typing.
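A hedged sketch of that lookup; the PartRecord layout and field sizes are invented for illustration. The mapping is treated as a plain array of key-sorted, fixed-size records and searched with the C library's bsearch(), so only the pages the search path touches get faulted in.

```c
#include <string.h>
#include <stdlib.h>

#define KEY_LEN 24

typedef struct {
    char key[KEY_LEN];     /* part number, NUL-padded */
    char description[104]; /* fixed-size payload */
} PartRecord;

static int cmp_part(const void *k, const void *r) {
    return strncmp((const char *)k, ((const PartRecord *)r)->key, KEY_LEN);
}

/* base points just past the file header inside the mapping. Only the memory
 * pages the binary search actually touches are read from disk. */
const PartRecord *find_part(const PartRecord *base, size_t count,
                            const char *part_number) {
    return bsearch(part_number, base, count, sizeof(PartRecord), cmp_part);
}
```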
Memory mapped files can be used to either replace read/write access, or to support concurrent sharing. When you use them for one mechanism, you get the other as well.
Rather than lseeking and writing and reading around in a file, you map it into memory and simply access the bits where you expect them to be.
This can be very handy, and depending on the virtual memory interface can improve performance. The performance improvement can occur because the operating system now gets to manage this former "file I/O" along with all your other programmatic memory access, and can (in theory) leverage the paging algorithms and so forth that it is already using to support virtual memory for the rest of your program. It does, however, depend on the quality of your underlying virtual memory system. Anecdotes I have heard say that the Solaris and *BSD virtual memory systems may show better performance improvements than the VM system of Linux--but I have no empirical data to back this up. YMMV.
Concurrency comes into the picture when you consider the possibility of multiple processes using the same "file" through mapped memory. In the read/write model, if two processes wrote to the same area of the file, you could be pretty much assured that one process's data would arrive in the file, overwriting the other's. You'd get one or the other, but not some weird intermingling. I have to admit I am not sure whether this behavior is mandated by any standard, but it is something you could pretty much rely on. (It's actually a good follow-up question!)
In the mapped world, in contrast, imagine two processes both "writing". They do so by doing "memory stores", which result in the O/S paging the data out to disk--eventually. But in the meantime, overlapping writes can be expected to occur.
Here's an example. Say I have two processes both writing 8 bytes at offset 1024. Process 1 is writing '11111111' and process 2 is writing '22222222'. If they use file I/O, then you can imagine that, deep down in the O/S, there is a buffer full of 1s and a buffer full of 2s, both headed to the same place on disk. One of them is going to get there first and the other one second; in this case, the second one wins. However, if I am using the memory-mapped file approach, process 1 is going to do a memory store of 4 bytes, followed by another memory store of 4 bytes (let's assume that's the maximum memory-store size). Process 2 will be doing the same thing. Based on when the processes run, you can expect to see any of the following:
11111111
22222222
11112222
22221111
The solution to this is to use explicit mutual exclusion--which is probably a good idea in any event. You were sort of relying on the O/S to do "the right thing" in the read/write file I/O case, anyway.
The classic mutual exclusion primitive is the mutex. For memory-mapped files, I'd suggest you look at a memory-mapped (process-shared) mutex, available using (e.g.) pthread_mutex_init().
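A hedged sketch of that process-shared mutex, assuming the SharedRegion struct sits at the start of a MAP_SHARED, file-backed mapping visible to both processes: the mutex lives inside the mapping itself and is initialized once with the PTHREAD_PROCESS_SHARED attribute so every mapping process can lock it.

```c
#include <pthread.h>
#include <stddef.h>

typedef struct {
    pthread_mutex_t lock;  /* must live inside the shared mapping itself */
    char data[4096];
} SharedRegion;

/* Run once, by the process that creates the file-backed mapping. */
void init_region(SharedRegion *region) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&region->lock, &attr);
    pthread_mutexattr_destroy(&attr);
}

/* Any process that maps the same file can now serialize its writes,
 * avoiding the intermingled '1111'/'2222' outcomes described above. */
void write_bytes(SharedRegion *region, size_t off, const char *src, size_t n) {
    pthread_mutex_lock(&region->lock);
    for (size_t i = 0; i < n; i++)
        region->data[off + i] = src[i];
    pthread_mutex_unlock(&region->lock);
}
```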
Edit with one gotcha: When you are using mapped files, there is a temptation to embed pointers to the data in the file, in the file itself (think linked list stored in the mapped file). You don't want to do that, as the file may be mapped at different absolute addresses at different times, or in different processes. Instead, use offsets within the mapped file.
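A tiny sketch of the offset scheme, with an invented Node layout: links are stored as byte offsets from the start of the mapping and turned into pointers only at access time, so the structure still works if the file is mapped at a different base address.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint64_t next_off;  /* byte offset of the next node; 0 means "none" */
    int32_t  value;
} Node;

/* Convert a stored offset into a usable pointer for this mapping. */
static Node *node_at(void *map_base, uint64_t off) {
    return off ? (Node *)((char *)map_base + off) : NULL;
}
```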
Concurrency - would be an issue.
Random access - easier.
Performance - good to great.
Ease of use - not as good.
Portability - not so hot.
I've used them on a Sun system a long time ago, and those are my thoughts.