AIR FileStream larger than 10GB - actionscript-3

I am developing a program that must load very large files, larger than 10 GB.
AIR has no problem opening them, reading bytes, etc.
However, it seems I can't process anything past the 10 GB point or so by setting the FileStream.position property... any idea why?
Is there any way to read/process bytes from a file of arbitrary size (as long as the OS supports it) in AIR?

Three.js freezes Chrome completely, huge texture in GLTF model

I want to load a ludicrous binary glTF object with only a few polygons (~250) and a huge texture of 10,000 x 5,500 pixels. The file is "only" 20 MB in size.
When I load it using Three.js, Chrome hangs in its entirety for nearly 15 seconds. Looking at the profiler, pretty much nothing is going on during the freeze.
If you want to load the file yourself, you can download it at https://phychi.com/uni/threejs/models/freezing-monster.glb, and the whole scene can be visited at https://phychi.com/uni/threejs/ (until I've found a solution or given up).
The behavior stays the same whether I call GLTFLoader.load(), GLTFLoader.loadAsync(), or create my own Promise and call .then(addToScene), without any awaits.
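For reference, the promise-based variant boils down to this (simplified sketch of my code):

    import { Scene } from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const scene = new Scene();
    const loader = new GLTFLoader();

    // Promise-based variant; the freeze happens the same way with load().
    loader.loadAsync('models/freezing-monster.glb')
      .then((gltf) => scene.add(gltf.scene))
      .catch((err) => console.error(err));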
Does somebody have a magical solution? Or if not, how could I profile it more efficiently, seeing the internal calls? Or should I just open a bug report for Chrome/Three.js?
PS: Windows 10 Personal, Ryzen 5 2600, 32 GB RAM, RX 580 8GB.
The issue should be resolved by upgrading the library to r135 (the current release).
Releases r133 and r134 contained a change that introduced a performance regression on Windows when using sRGB-encoded textures.
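If you are unsure which revision your project actually pulls in, the npm package exposes it at runtime (assuming you installed three via npm):

    import { REVISION } from 'three';

    // Should print "135" or higher once the upgrade has taken effect.
    console.log(`three.js r${REVISION}`);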

Storing objects/variables outside of volatile memory

OVERVIEW
I don't have a lot of experience programming, but I'm working on a hybrid mobile app using Cordova. This app is going to have a large amount of static (unchanging) data. About once every minute, the app will reference some of this data, perform some simple operations based on it, and the result will determine which object gets referenced in the next iteration of the loop.
From what I understand, an object or variable is just a reserved space within memory identified by a name, which in hardware terms is synonymous with volatile storage, or RAM. Because I will be working with mobile devices, I am afraid that the massive number of objects I predict I will be working with (close to 10,000) will max out a device's memory pretty fast.
My initial thought is to store this collection of static data in local storage instead of declaring these objects within the code itself. Then I would reference that file for the data as needed with each iteration of my loop, which runs once every minute. I don't have experience with JSON, but from what I know about it, this seems like it could be a good option.
BREAKDOWN
• I'm using TypeScript and Cordova.
• I will possibly be working with tens of thousands of static objects.
• These objects will all use one of a few interfaces as an outline.
• A few of these objects will be referenced for some information about once every minute.
• That information will be used to perform very simple operations.
• The ID of the object that was referenced may need to be saved permanently for future use.
• Those operations will determine which objects need to be referenced in the next iteration.
QUESTION(S)
So, my questions are these. Am I correct in my understanding of how objects are stored? If so, will this number of objects be enough to max out a mobile device's RAM? Is my idea of storing all the static information in something like a JSON file and then referencing the individual objects in that file as needed plausible?
Not quite correct. Modern operating systems don't always map the application's memory to the hardware RAM.
Let's say you have a phone that only has 256 MB of total RAM, but your application ends up loading 128 MB of data into memory. Does that mean you can only run one more application that loads 128 MB? What about the OS itself using memory? The answer is no: the OS will move some of the data from primary memory (RAM) into secondary storage (e.g. a solid-state drive), making room for your app and other apps to do their work as needed. If the data that was moved out of RAM is needed again, the OS can move it back in from the SSD. This is called paging, and it's one of the many pieces that make up the operating system's memory management. Most of it happens without your application code having to be aware of it.
Of course, even though the OS does a pretty good job of making memory available to your application, you still want to write memory-efficient code, especially on mobile phones.
For your specific example, your suggestion of storing the static data in local storage is a good start. But it has some drawbacks you should be aware of, and some questions you should answer (a sketch of the first idea follows the list):
• Can you divide up the data so that you can load only the portion you need at a time? Or do you need to have all of it loaded anyway?
• Can you store your data in a more compressed data structure? (See, for example, tries.)
• How frequently will you be loading the data from local storage?
• Will loading the data from local storage take too long? (For example, if your loop does a thousand iterations and each iteration loads a lot of static data from disk, it might end up being really slow.)
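To illustrate the first question, here is a minimal sketch of loading one chunk of the static data at a time. The chunk file layout (data/chunk-<n>.json), the StaticItem shape, and ITEMS_PER_CHUNK are assumptions for the sketch, not something from your app:

    interface StaticItem {
      id: number;   // hypothetical shape; adjust to your interfaces
      next: number; // id of the object to reference in the next iteration
    }

    const cache = new Map<number, StaticItem[]>();
    const ITEMS_PER_CHUNK = 500; // assumption: data pre-split into fixed-size chunks

    // Load one JSON chunk on demand and keep it cached for later iterations;
    // assumes files named data/chunk-0.json, data/chunk-1.json, ...
    async function loadChunk(chunkIndex: number): Promise<StaticItem[]> {
      let chunk = cache.get(chunkIndex);
      if (chunk === undefined) {
        const response = await fetch(`data/chunk-${chunkIndex}.json`);
        chunk = (await response.json()) as StaticItem[];
        cache.set(chunkIndex, chunk);
      }
      return chunk;
    }

    // One lookup per minute only ever touches a single small chunk,
    // not all 10,000 objects at once.
    async function getItem(id: number): Promise<StaticItem | undefined> {
      const chunk = await loadChunk(Math.floor(id / ITEMS_PER_CHUNK));
      return chunk.find((item) => item.id === id);
    }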
Good luck!

Are 0-byte files really 0 bytes?

I have a simple question.
When we create a file, let's say "abc.txt", and leave it blank,
the OS will show that the file is 0 bytes in size and takes 0 bytes on disk.
If we save 100 of these 0-byte files into a folder, the OS will also say that the folder's total size is 0 bytes.
This may sound logical, because there is nothing in the file. But shouldn't these files take up at least a few bytes on the storage device?
After all, we saved each one somewhere and named it something. Shouldn't the file's name, and possibly some other metadata, take up at least some space?
No, they still occupy a few bytes on the file system. Otherwise I would implement a magic filesystem that stored everything encoded in the filenames of empty files.
It actually boils down to a matter of definition. Either the "size of a file" refers to the size of the content of the file, or it refers to the "difference" it makes in terms of free bytes on the underlying file system (that is, the size of the content, rounded up to the closest block or cluster size, plus the bytes used for its inode).
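You can observe both definitions from a script. A quick Node.js sketch (the blocks and ino fields are only meaningful on POSIX file systems):

    import { writeFileSync, statSync } from 'fs';

    writeFileSync('abc.txt', ''); // create an empty file

    const st = statSync('abc.txt');
    console.log(st.size);   // 0 -> size of the content
    console.log(st.blocks); // 0 -> no data blocks allocated for content
    console.log(st.ino);    // non-zero -> a metadata record (inode) exists anyway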
Traditionally (speaking in Windows FAT terms), these details are stored in what is known as the File Allocation Table. It is created when you format the drive, with some predefined space allocated for it; I don't think its size changes afterwards.
For example, when you format a 100 GB hard drive, only 90-something GB is available for you to use. The rest is used by the file system to manage/remember each file and folder saved on the drive, and where it is saved.
The answer to this question is file-system dependent.
For example, on NTFS a file always consumes a Master File Table (MFT) record (typically 1 KB) even when it is empty, and any actual content is allocated in whole clusters, whose default size depends on the size of the volume.
You can look up the common default cluster sizes for Windows' file systems.
The fact that they are present on disk means that a record has been created for them, which of course requires some amount of storage. The 0 bytes simply corresponds to the logical size of the file's content, rounded to the granularity displayed in the UI; a non-empty file would typically also contain a format-specific header, but that counts as content too.

On a Windows Mobile device, does it make sense to cache data "in memory"?

I'm writing a Windows CE application, and I want to play a sound (a short wav file) when something happens. Since this sound will be played often, my first instinct was to load the wav file into a memory stream and reuse that stream instead of reading the file every time.
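In other words, the pattern I have in mind is just a read-once cache, roughly like this (TypeScript sketch for illustration only; the real app is native Windows CE code):

    import { readFileSync } from 'fs';

    // Naive read-once cache: hit storage on first access, RAM afterwards.
    const soundCache = new Map<string, Buffer>();

    function getSound(path: string): Buffer {
      let bytes = soundCache.get(path);
      if (bytes === undefined) {
        bytes = readFileSync(path); // file system read happens only once
        soundCache.set(path, bytes);
      }
      return bytes;
    }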
But then it occurred to me that these Windows Mobile devices only have one kind of memory, which is used both for data storage (i.e., the file system) and for program memory; there's even a nice slider in the control panel that you can use to delegate memory to either storage or program execution. So, theoretically, reading a file from the file system (or some value from a SQL Server CE database) should take (almost) the same amount of time as reading that value from some in-memory object, right?
Is this assumption correct (i.e., does in-memory caching at the application level not make sense here), or did I miss something? For simplicity, let's assume that only the internal memory of the device is used (no memory card).
The assumption may or may not be valid. Where in storage does it reside? If it's in persistent storage (like a storage card folder, or anything else that survives a hard reset), then it's backed by flash, which is way, way slower than RAM, so there will be a difference in load performance. How much it might impact your app I can't say - only testing will tell you that.
When I want to play a short WAV file on Windows Mobile (e.g. a notification sound), I usually add it as a resource to my executable. AFAIK resources are loaded into RAM, since they are part of the executable image. You can then conveniently call PlaySound() with the SND_RESOURCE flag (probably OR'd with SND_ASYNC too, so the call doesn't block while the sound is playing).

How much is File I/O a performance factor in web development?

I know the mantra is that the database is always the long pole in the tent any time a page is being generated server-side.
But there's also a good bit of file I/O going on on a web server. Scripted code is replete with include/require statements. Moreover, it's typical practice to store templated HTML outside the application in files, which are loaded and filled in accordingly.
How much of a role does file I/O play in web development? Does it ever become an issue? When is it too much? Do web servers/languages cache anything?
Has it ever really mattered in your experience?
Ten years ago, disks were fast relative to processors, so you didn't have to worry about it much: you'd run out of CPU (or saturate your NIC) before disk became an issue. Nowadays, CPUs and gigabit NICs can make disk the bottleneck, BUT....
Most non-database disk usage is so easily parallelizable. If you haven't designed your hosting architecture to scale horizontally by adding more systems, that's more important than fine-tuning disk access.
If you have designed to scale horizontally, usually just buying more servers is cheaper than trying to figure out how to optimize disk access. Not to mention, things like SSDs or even RAM disks for your templates will make it a non-issue.
It's very rare to have a serving architecture that scales horizontally, is popular enough to cause scalability problems, but is not profitable enough to afford another 1U in your rack.
File I/O will only become a factor (for static content and static page includes) if your bandwidth to the outside world is similar to your disk bandwidth. This would imply either you have a really fast connection, are serving content on a fast LAN, or have really slow disks (or are having a lot of disk contention). So most likely the answer is no.
Of course, this assumes that you are not loading a large file only for a small portion of the file.
File I/O is one of many factors, including bandwidth, network connectivity, memory, etc., that may affect the performance of a web application. The most effective way to determine whether file I/O is causing you any issues is to run some profiling on the server and see if it is the bounding factor on your performance.
A lot of this will depend upon what types of files you are loading from disk; lots of small files have very different properties from a few large files. Web servers can cache files internally in memory, and can also indicate to a client that a file (e.g. an image) may be cached and so does not need to be requested every time.
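As one concrete illustration (this assumes a Node.js/Express stack; other servers have equivalents), telling clients a static file is cacheable looks like this:

    import express from 'express';

    const app = express();

    // Serve files from ./public and let clients cache them for a day,
    // so repeat visits never hit the disk (or the server) again.
    app.use(express.static('public', {
      maxAge: '1d', // sends Cache-Control: public, max-age=86400
      etag: true,   // enables cheap 304 Not Modified revalidation
    }));

    app.listen(3000);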
Do not prematurely optimize. It's evil, or something.
However, I/O is about the slowest thing you can do on a computer. Try to keep it to a minimum, but don't let Knuth see what you're doing.
I would say that file I/O speed only becomes an issue if you are serving tons of static content. When you are processing data and executing code to render pages, the time to read the page itself from disk is negligible. File I/O matters in cases where the static files you are serving cannot fit into memory, such as when you are serving video or image files. It could also happen with HTML files, but since HTML files are so small, it's less likely.
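For the files-too-big-for-memory case, the usual remedy is streaming instead of buffering. A minimal Node.js sketch (the file name is hypothetical):

    import { createServer } from 'http';
    import { createReadStream, statSync } from 'fs';

    createServer((_req, res) => {
      const path = 'video.mp4'; // hypothetical large static file
      res.writeHead(200, {
        'Content-Type': 'video/mp4',
        'Content-Length': statSync(path).size,
      });
      // pipe() sends the file chunk by chunk; backpressure keeps memory
      // usage flat no matter how large the file is.
      createReadStream(path).pipe(res);
    }).listen(8080);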