In our Windows Store app we save the files that users create into an EPUB file, which is a ZIP archive with the file extension .epub.
The app is written primarily in HTML and JS, but we use some C# in a helper component to handle writing to the ZIP archive.
This all works, but I have found that the archive can become corrupted if the app suspends while writing to it: when adding a particularly large file, say a 100 MB video, the operation sometimes does not complete within the five seconds allowed from oncheckpoint.
Are there any ways to avoid this problem? As far as I can see, there is just no way to write a large file to a ZIP archive and be 100% sure that it won't get corrupted if the app suspends.
I agree with you that there is no way to write a large file to a ZIP archive and be 100% sure that it won't get corrupted if the app suspends.
As far as I know, when an app is suspended, the memory it owns is not released, so you don't need to worry about data in memory being lost during suspension.
What you do need to worry about is the user quitting the app before the data has been persisted.
Some extra design work can improve the user experience and help avoid data loss:
Auto-save
For example, persist the changes as soon as the user modifies an object.
Show saving progress
Use a progress UI to let the user know that saving is in progress and that they will lose data if they quit the app.
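On top of auto-saving, you can make each individual save suspend-tolerant by never writing into the live file directly. Here is a minimal sketch in Python of the write-to-temp-then-rename pattern (the function and file names are illustrative, not from the original app); since the rename is atomic on the same volume, a half-written archive can never replace the last good one:

```python
import os
import tempfile

def save_atomically(path, data):
    """Write data to a temporary file in the same directory, then
    atomically replace the target. If the process is suspended or
    killed mid-write, the original file is left untouched."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())  # make sure the bytes hit the disk
        os.replace(tmp_path, path)  # atomic on the same volume
    except BaseException:
        os.remove(tmp_path)
        raise
```

The same pattern is available in WinRT C# via `StorageFile` temp-file-plus-rename; the point is that corruption is then limited to the throwaway temp file, and at worst the user loses the most recent save, not the whole archive.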
Related
I am saving data as part of a game using a CSV file, and I want to set it to read-only so that the user cannot modify it (the system is designed for not very experienced users).
Is there any way to save these files so that they are read-only?
Unfortunately, it seems that Godot's File API does not provide a mechanism to change file permissions. You could try using an encrypted file, which prevents the user from trivially viewing it as a CSV file (e.g. it won't open by default in their spreadsheet program). However, an encrypted file can still be overwritten and corrupted, and it will hinder modding for players who enjoy digging around in game files.
You could write a proposal to include permissions functionality in the File API, or write the saving code in a language other than GDScript, where you'd have access to a standard library with this functionality. For example, you could write a GDNative extension that supports it.
Ultimately you have to decide how important it is to fool-proof your system. A determined user will find ways to break things.
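To illustrate what such a standard library gives you, here is a sketch in Python of toggling the write-permission bits (the function names are made up for this example; in a GDNative extension you'd call the equivalent C/C++ APIs):

```python
import os
import stat

def make_read_only(path):
    """Clear the write bits so casual users can't modify the file.
    A determined user can still change the permissions back, so
    treat this as a deterrent, not real protection."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

def make_writable(path):
    """Restore the owner's write bit before the game saves again."""
    os.chmod(path, os.stat(path).st_mode | stat.S_IWUSR)
```

Note that the game itself would have to call `make_writable` before each save and `make_read_only` after, which is exactly the kind of dance the "how fool-proof does it need to be" question should weigh.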
I would like to know if there is a way to automate the process of saving a spreadsheet into an HTML file.
I have an OpenOffice spreadsheet that is located on a public file server inside my company's LAN. A group of people work by editing and entering data into that spreadsheet, but others should only have read access. Since permissions policies can get a little complicated with OpenOffice, I thought it might be convenient for those who should only read the data to open the file in their web browsers, by entering the path to the file or via a shortcut (especially since a lot of the read-only users are spreadsheet-illiterate).
How can I arrange for an HTML file to be updated every time the spreadsheet is saved by the editing users, so that the read-only users always see the latest version?
Ideally, use a document management system that will keep older versions of the file to prevent mistaken edits by multiple people. Most DMS's provide some users with the ability to edit and others with the ability to read the document.
However, I once worked on a project where we used a large commercial DMS that was too complex for the read-only users. So we also ran a web server that provided read-only access to the documents.
Running a web server such as Apache HTTP Server, it is possible to do what you are asking even without a DMS. Provide a web form for people to submit edits to the document. When an edited document is submitted:
Save a copy of the old file.
Update the main version to the new file.
Run a command line job to convert the document to HTML.
Then read only users can view the HTML file by browsing to the web server.
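For step 3, LibreOffice/OpenOffice can do the conversion itself from the command line via the `soffice` binary in headless mode. A small Python wrapper, assuming `soffice` is on the server's PATH (the file and directory names here are placeholders):

```python
import subprocess
from pathlib import Path

def conversion_command(spreadsheet, out_dir):
    """Build the headless LibreOffice/OpenOffice conversion command;
    assumes the `soffice` binary is on PATH."""
    return ["soffice", "--headless", "--convert-to", "html",
            "--outdir", str(out_dir), str(spreadsheet)]

def publish_html(spreadsheet, out_dir):
    """Run the conversion and return the path of the generated page."""
    subprocess.run(conversion_command(spreadsheet, out_dir), check=True)
    return Path(out_dir) / (Path(spreadsheet).stem + ".html")
```

Pointing `out_dir` at the web server's document root means step 3 and "publish" collapse into one command.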
Without a DMS or a web server, the best that can be done is to set file system permissions, as mb21 said. That would certainly be easier to set up, and might be good enough depending on your needs.
I have a Flash application which runs on the web. I need to store images and audio files on the client's local disk (I don't want to store them on the web) without prompting the client. I have already tried a shared object, but since shared object space is limited to 100 KB per domain, I am searching for alternatives.
If someone has better solutions please let me know.
Thanks.
You can't do this with the Flash Player by itself. SharedObject and FileReference/save() are intentionally designed to allow the user to have authority over local storage. It would be a security concern if users did not.
Using an AIR application, though, you can do this using File and FileStream, or EncryptedLocalStore.
I had the same problem with creating log files and writing to them.
The only solution for me was to create a localhost web service (I used WCF), so I could send a URLRequest to localhost and pass the data to the service, which then created or updated the file.
In your case, if you want to store things from the user on your disk, you could play with web services in a similar way. You just need to try.
Visual Studio has an almost complete web service template; just edit it for your purposes.
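The same idea can be sketched in Python instead of WCF: a tiny localhost HTTP service that accepts data POSTed by the Flash client and writes it to disk (the directory name and URL scheme are placeholders for this example):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

LOG_DIR = "app_logs"  # hypothetical target directory on the client's disk

class SaveHandler(BaseHTTPRequestHandler):
    """Accept POSTed data from the Flash client and append it to a
    local file named by the request path, e.g. POST /session.log."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        os.makedirs(LOG_DIR, exist_ok=True)
        name = os.path.basename(self.path) or "default.log"
        with open(os.path.join(LOG_DIR, name), "ab") as f:
            f.write(body)
        self.send_response(200)
        self.end_headers()

def run(port=8000):
    """Serve on the loopback interface only, so nothing outside the
    machine can reach the file-writing endpoint."""
    HTTPServer(("127.0.0.1", port), SaveHandler).serve_forever()
```

Note that this requires the user to have the helper service installed and running, so it only fits scenarios where you control the client machines; it does not get around the Flash Player sandbox for arbitrary visitors.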
I have a front-end Access 2007 application which talks to a MySQL server.
I want to have a feature where the application on the user's computer can detect that there is a new version on the network (which is not difficult) and download the latest version to the local drive and launch it.
Does anybody have any knowledge or experience of how this can be done?
Thanks
Do you actually need to find out if there is a newer version?
We have a similar setup as well, and we just copy the frontend and all related files every time someone starts the application.
Our users don't start Access or the frontend itself. They actually start a batch file which looks something like this:
@echo off
xcopy x:\soft\frontend.mde c:\app\ /Y
c:\app\frontend.mde
When we started writing our app, we thought about auto-updating as well and decided that just copying everything every time is enough.
We have enough bandwidth, so the copying doesn't create any performance problems (with about 200 users).
Plus, it makes some things easier for me as a developer when I can be sure that each time the application is started, the frontend is overwritten anyway.
I don't have to care about auto-compacting the frontend when it's closed (and users complaining that closing the app takes too long...), and I don't have to deal with corrupted frontends after crashes.
@Lumis - concerning the custom icon:
Ok, maybe I should have made this more clear. There is only one batch file, and it's in the same network folder as the frontend.
The users just have links on their desktops which all point to the same batch file in the network folder.
This means that:
future changes to the batch file are easy, because it's only one single file in one central place
we can change the icon, because what the user sees is a normal Windows link
(By the way, we did not change the icon. Our app is for internal use only, and I'm working in a manufacturing company, which means that all but very few users are absolutely non-technical and couldn't care less about the icon, as long as it's the same on all machines and they know what it looks like, so they can find it quickly on their desktop...)
Tony Toews has one: Access Auto FE Updater
It appears to be free, but I'm not 100% sure.
Lumis's option is solid. However, if you want to check the version and only copy the database when there is a new version, have a 'Version' field in a back-end table and a 'Version' constant in a front-end module. Keep these in sync with each new production release. Compare the table version against the version in the module when the main form of the front-end database opens.
If they don't match, have the database close, but have it call a batch file as the last bit of code to run as it's closing. The database should finish closing before the batch file begins its copy process. If needed, place a minor delay in the batch file just to be sure there are no file-locking issues.
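In Access this check would be VBA, but the logic can be sketched in Python; the version strings and the updater script name below are made up for the example:

```python
import subprocess
import sys

FRONTEND_VERSION = "1.4.2"  # constant compiled into the front end

def needs_update(frontend_version, backend_version):
    """Compare dotted version strings numerically, so that
    '1.10' correctly sorts after '1.9'."""
    parse = lambda v: [int(part) for part in v.split(".")]
    return parse(frontend_version) < parse(backend_version)

def check_on_startup(backend_version):
    """Run when the main form opens; `backend_version` comes from
    the back-end 'Version' table."""
    if needs_update(FRONTEND_VERSION, backend_version):
        # Launch the copy script ("update.cmd" is a hypothetical name)
        # and quit, so the front-end file is unlocked before the copy.
        subprocess.Popen(["cmd", "/c", "update.cmd"])
        sys.exit(0)
```

The numeric comparison matters: comparing the raw strings would wrongly treat "1.10" as older than "1.9".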
My app is highly data-driven and needs to be updated frequently. Currently the MySQL database is dumped to an XML file via PHP, and when the app loads it downloads this file. It then loads all the values into NSMutableArray instances inside a data manager class which can be accessed anywhere in the app.
Here is the issue: the XML file produced is about 400 KB, and this apparently takes several minutes to download on the EDGE network, and even for some people on 3G. So basically I'm looking for options on how to correctly cache or optimize my app's download process.
My current thought is to cache the entire XML file on the iPhone's disk and serve that data up as the user navigates the app, while loading the new XML file in the background. The problem is that the user then always sees the data from the previous run; it also seems wasteful to download the entire XML file every time if only one field has changed.
TLDR: My iPhone app's download of data is slow, how would one properly minimize this effect?
I've had to deal with something like this in an app I developed over the summer.
What I did to solve it was an initial download of all the data from the server, which I placed in a database on the client along with a revision number.
Then, each time the user connects, the client sends its revision number to the server. If it is lower than the server's revision number, the server sends across the new data (and only the new data); if they are the same, it does nothing.
It's fairly simple and it seems to work pretty well for me.
This method does have the drawback that your server has to do a little more processing than normal but it's practically nothing and is much better than wasted bandwidth.
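The server side of this scheme can be sketched as follows; this is a toy in-memory version in Python, where `records` stands in for the real database and maps a record id to its last-modified revision and payload:

```python
def server_changes(records, client_revision):
    """Return the server's current revision plus only the records
    newer than the client's revision.
    `records` maps id -> (revision, data)."""
    current = max((rev for rev, _ in records.values()), default=0)
    if client_revision >= current:
        return current, {}  # client is up to date: send nothing
    delta = {rid: data for rid, (rev, data) in records.items()
             if rev > client_revision}
    return current, delta
```

The "little more processing" mentioned above is just this filter plus stamping each row with the revision in which it last changed.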
My suggestion would be to cache the data to a SQLite database on the iPhone. When the application starts, you sync the SQLite database with your remote database...while letting the user know that you are loading incremental data in the background.
By doing that, you get the following:
Users can use the app immediately with stale data.
You're letting the user know new data is coming.
You're storing the data in a more appropriate format.
And once the most recent data is loaded...the user gets to see it.
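A minimal sketch of that local cache using Python's `sqlite3` module (the table and column names are just for illustration; on the iPhone you'd use the C SQLite API or an Objective-C wrapper over the same database file):

```python
import sqlite3

def open_cache(path=":memory:"):
    """Open the local cache the UI reads from immediately, even if
    the rows in it are stale."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS items
                  (id INTEGER PRIMARY KEY, payload TEXT, revision INTEGER)""")
    return db

def apply_sync(db, rows):
    """Merge freshly downloaded rows into the cache; `rows` is a list
    of (id, payload, revision) tuples from the server. Existing ids
    are overwritten, new ids are inserted."""
    db.executemany(
        "INSERT OR REPLACE INTO items (id, payload, revision) VALUES (?, ?, ?)",
        rows)
    db.commit()
```

Because `apply_sync` runs in the background and commits as a single transaction, the UI either sees the old consistent snapshot or the new one, never a half-applied mix.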