I am fairly new to Windows Phone development. We have a scenario where we allow users to upload or download files, but with authentication (OAuth, NTLM, forms, and other standard mechanisms, not limited to OAuth).
So far our R&D suggests that we have the following options:
1- Resource Intensive Agent
The constraints associated with resource-intensive agents (minimum battery level, external power, etc.) have led us to drop this option.
2- Periodic Agent
A relatively better option. However, they only run every 30 minutes, and the 10-minute duration constraint leaves us doubtful: if a user wants to upload a video of, say, 1-2 GB, completion is not guaranteed, and you can anticipate the other problems associated with this approach.
3- Background File Transfer
This would be the best option in our scenario; however, my colleague told me that it does not support basic Windows authentication and that we cannot change the user agent, etc.
4- On Application
Another option is to perform the network operation in the application itself, but we can't keep the user in the application for a long duration, and after some time the lock screen would appear. So...
Can anyone who has experienced a similar scenario, or anyone from the product team, offer guidance here? It's a common scenario; are we missing something, or is it really an API limitation?
Resource Intensive Agents will indeed not work for your use case because they require external power to work. Not to mention that if the user receives a phone call the agent terminates.
Periodic Agents have a 25-second duration limit, not 10 minutes (the 10-minute limit applies to resource intensive agents), so they are really not an option if you need to upload a gigabyte of data.
Background File Transfers have a hard limit of 100 megabytes. (It's even less on cellular internet).
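Regarding the authentication point in the question: the background transfer API does expose a Headers collection, so if your server can accept credentials in a request header (for example a bearer token or a pre-computed Basic value), something like the following sketch may be workable. The URIs and token here are placeholders, and you would have to verify that your particular auth scheme survives the transfer service; NTLM in particular is unlikely to work this way, since it relies on a challenge/response handshake rather than a single header.

    // Sketch only: queueing an authenticated upload with the WP background
    // transfer service. The URIs and the token are placeholders.
    using System;
    using Microsoft.Phone.BackgroundTransfer;

    public static class UploadQueue
    {
        public static void QueueUpload(string accessToken)
        {
            var request = new BackgroundTransferRequest(new Uri("https://example.com/api/upload"))
            {
                Method = "POST",
                // The file must already sit under /shared/transfers in isolated storage.
                UploadLocation = new Uri("shared/transfers/video.mp4", UriKind.Relative),
                TransferPreferences = TransferPreferences.AllowCellularAndBattery
            };

            // Custom headers can be added; a handful of headers are reserved by the system.
            request.Headers.Add("Authorization", "Bearer " + accessToken);

            BackgroundTransferService.Add(request);
        }
    }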
On Application is a very viable option; you can prevent the phone from going to the lock screen if that's a problem. The bigger issue here is that the user is pretty much stuck in the app for the duration of the upload. More importantly, this seems to be your only real option of the four you mentioned.
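If you do go the in-app route, the lock screen part at least is under your control. A minimal sketch (Microsoft.Phone.Shell; the page name is a placeholder, and you should restore the default once the transfer is done):

    // Minimal sketch: keep the screen on while an in-app upload runs,
    // and re-enable idle detection afterwards to avoid draining the battery.
    using Microsoft.Phone.Controls;
    using Microsoft.Phone.Shell;

    public partial class UploadPage : PhoneApplicationPage
    {
        public UploadPage()
        {
            InitializeComponent();
            PhoneApplicationService.Current.UserIdleDetectionMode = IdleDetectionMode.Disabled;
        }

        private void OnUploadCompleted()
        {
            PhoneApplicationService.Current.UserIdleDetectionMode = IdleDetectionMode.Enabled;
        }
    }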
Related
I have an embedded system which runs firmware and has USB mass storage with a size of 79 kB. When you plug the device into any computer (Mac/Windows), it appears as a 79 kB flash drive. The firmware creates files containing transaction records. The objective is to display these transactions (tables and simple graphs) to the user. I've narrowed the viewer down to a web browser, so the user (with a Mac/Windows PC) can plug in the USB mass storage device, open an HTML file on the drive, and view all the transactions in the form of tables and simple bar graphs. The tricky part comes here: the device (firmware) needs to update its clock, and this time input has to be sourced from the Mac/Windows PC. How can this be achieved?
That is the minimum requirement. Further, the user wants to write some configuration parameters through the web browser, e.g. via a text box and a submit button in the HTML page.
NOTE: The USB mass storage class and the web browser approach were selected so that there are no prerequisites for the user.
Please suggest an alternative if this can be done using another approach, e.g. a different USB device class or some other application available locally on a Mac/Windows desktop/laptop. The application should run on both Mac and Windows, i.e. the code should be the same but built into separate packages, one for Mac and the other (.exe) for Windows. Please suggest a platform for this that has the same source but can be built for both Mac and Windows. Thanks!
As far as I know, there is no way a web browser could write to a file. If such a thing was possible, it would be a huge security issue.
You have to write a piece of native software to do all the tasks you name. That can be done in pretty much any programming language, and if you're developing embedded systems I reckon you must have some experience in programming.
I'm looking at doing something similar and have an idea, though you may be better equipped to run with it than I am. Have the drive contain a directory called "SET_DATE" with files "YEAR15" through "YEAR99", "MON01" through "MON12", "DATE01" through "DATE31", "H00" through "H23", "M00" through "M59", "S00" through "S59", and "SET"; each such file should start at a different sector, though none of the sectors in question need to contain any data (they need not physically be stored anywhere). To set the date to July 4, 2020 at 12:34:56pm, read the following files in sequence:
SET_DATE/YEAR20
SET_DATE/MON07
SET_DATE/DATE04
SET_DATE/H12
SET_DATE/M34
SET_DATE/S56
SET_DATE/SET
The last access should cause the unit to set its clock. If a user might want to set the clock more than once, that could be accommodated by having a bunch of essentially identical directories under SET_DATE (so setting the date the first time would use SET_DATE/00/YEAR20, the second time SET_DATE/01/YEAR20, etc.) and/or by having the drive unmount/remount itself if necessary to clear out any caching.
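To make the scheme concrete, here is a rough host-side sketch of the read sequence (C# purely for illustration; the mount point is an assumption, and the file names follow the layout described above, so they would have to match whatever the firmware actually exposes):

    // Illustrative host-side sketch: set the device clock by reading the
    // "magic" files in order. Reads may be served from the OS cache, hence
    // the remount note above.
    using System;
    using System.IO;

    class SetDeviceClock
    {
        static void Main()
        {
            string root = @"E:\SET_DATE";   // wherever the 79 kB drive mounts
            DateTime now = DateTime.Now;

            string[] sequence =
            {
                "YEAR" + now.ToString("yy"),
                "MON"  + now.ToString("MM"),
                "DATE" + now.ToString("dd"),
                "H"    + now.ToString("HH"),
                "M"    + now.ToString("mm"),
                "S"    + now.ToString("ss"),
                "SET"                        // the final read commits the time
            };

            foreach (string name in sequence)
            {
                // Each file starts at a distinct sector, so the firmware can tell
                // from the sector being read which value is being "written".
                File.ReadAllBytes(Path.Combine(root, name));
            }
        }
    }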
I would think it unwise to have directory fetches trigger actions, since Windows or an anti-virus tool might decide to pre-cache all the directories in a drive when it is mounted. I would not expect Windows or a browser to eagerly load files, however, so I would think one could have read accesses trigger actions.
I've used the Background Transfer Service (BTS) API for Windows Phone in two apps and experienced very bad problems. It became one of the main sources of bugs in the two apps: for some reason, downloads often refuse to start whatever I set in the flags (connected to Wi-Fi, not connected, connected to a power outlet, etc.), and it was random from one user to another. That, and bad responses from the servers.
Is there a more customizable way to achieve this? Which threads or loops remain alive in my app when I navigate out to the external:// world? I should probably check with counters.
My main question remains: apart from BTS, is there something that allows a 3-4 MB file to upload even if I navigate away from my app to play an MP3 in an external:// app?
Once you exit your app, you are pretty much shut down. You can masquerade as a location-tracking background agent to remain in the background when you get deactivated, though you'll drain the battery, and I believe there can only be one of these active at a time. Generally, this is highly discouraged (and you'll probably fail certification).
A better way to do this if BTS is not to your liking is to use a ResourceIntensiveTask. This will only be triggered when the user is plugged in and has WiFi but will allow you to run whatever you want for as long as the conditions are met (for example, at night) which should be plenty of time to upload a 3-4 MB file.
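For reference, registering one is only a few lines (Microsoft.Phone.Scheduler); the task name here is a placeholder, and the actual upload code would live in the agent class (the ScheduledTaskAgent in the background agent project):

    // A minimal sketch of scheduling a ResourceIntensiveTask on Windows Phone.
    using Microsoft.Phone.Scheduler;

    public static class UploadTaskScheduler
    {
        private const string TaskName = "PendingUploadTask";   // hypothetical name

        public static void Schedule()
        {
            // Remove any previous registration before adding a new one.
            if (ScheduledActionService.Find(TaskName) != null)
            {
                ScheduledActionService.Remove(TaskName);
            }

            var task = new ResourceIntensiveTask(TaskName)
            {
                Description = "Uploads queued files while on external power and Wi-Fi."
            };

            ScheduledActionService.Add(task);
        }
    }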
I am wondering if someone knows the best method for storing data in a global DB keyed against a mobile device (iOS and Android).
I am building an app that writes/retrieves information based on a query; however, I need to know whether any of the records returned were sent from that device.
Basically the idea is that if a user submits some information (which is stored in the DB) they gain access to additional features of the app. When the app is launched, I will check the DB to see if they submitted information in the past and allow access to other areas.
I use local storage for the information they submitted, but I also store it remotely, so if the local storage becomes corrupted for any reason there is still a record of the information the user submitted.
The ID needs to be unique to the device, as there could be hundreds of users (hoping for millions), so the ID needs to be unique enough that it will never conflict with another device's. Any information submitted will be available for retrieval by all other users.
Thanks :)
There are three options as I see it:
1. User
You can create a typical username + password user scheme and use this to verify the user. A possible advantage of this method would be that the user can log in from any of their devices (for instance, under your method a user using the app from their iPhone and iPad would have two different views - which you may not want). Of course, this means forcing every user of the app to register within your system, which is not ideal.
2. App Install
You can uniquely identify an app install by having your app generate a UUID the first time that the app is run (you can use an AS3 helper library to generate the UUID). You can store this UUID locally and send it along with every request the app makes. The downside to this approach is that it doesn't uniquely identify the device - only a specific app install. For instance, if the user deletes the app and then reinstalls it at a later point, it will now count as a new unique device, even though the user is on the same device.
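To illustrate the pattern only (sketched in C# rather than AS3, since the idea is the same on any runtime): generate an ID on first run, persist it, and attach it to every request. The storage location is a placeholder; in AIR you would use a UUID helper plus a SharedObject or a file instead.

    // Hedged illustration of the "app install ID" pattern (not AIR-specific code):
    // mint a UUID on first run, persist it, and reuse it on every later launch.
    using System;
    using System.IO;

    public static class InstallId
    {
        // Placeholder location; on a real device you'd use the app's private storage.
        private static readonly string IdFile =
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "install_id.txt");

        public static string Get()
        {
            if (File.Exists(IdFile))
            {
                return File.ReadAllText(IdFile);    // returning install: reuse the ID
            }

            string id = Guid.NewGuid().ToString();   // first run: mint a new ID
            File.WriteAllText(IdFile, id);
            return id;
        }
    }

Every request the app makes would then include the value returned by InstallId.Get(), and the server can flag returned records whose stored ID matches.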
3. Device
AIR does not have a built-in way of reading device-identifying info. However, you can retrieve device info through AIR Native Extensions; for example, this one can get the MAC address and some other things. There are privacy concerns and other issues involved in reading and storing device info such as this, so you are probably best served trying to implement the OpenUDID project as an AIR Native Extension, since they have already dealt with all such issues. Unfortunately, I have never looked too far into developing ANEs, so I am not sure how complicated or feasible it will be to turn OpenUDID into an ANE.
Summary: I would recommend the app install method due to the ease of implementation. If you really need the unique device and are worried about the multiple app installs case, you will have to work out how to use native extensions to get the info you need. If you decide that you would rather identify by user rather than device, use the user method.
As of now I don't think it's possible to get the hardware device's GUID using AIR mobile. However, you do have a couple of options.
If the MAC address is good enough for you there is an ANE that will let you grab it on both iOS and Android.
http://www.adobe.com/devnet/air/native-extensions-for-air/extensions/networkinfo.html
and an example of how to use it
http://cookbooks.adobe.com/post_Getting_NetworkInfo_from_both_Android_and_iOS-19473.html
You could also write your own ANE, it should be pretty simple to wrap both Android and iOS implementations.
Objective-C: [[UIDevice currentDevice] uniqueIdentifier]
Android: TelephonyManager.getDeviceId()
If your app requires any kind of user account or login the best option would be to store this setting in the remote db.
I need an offline caching system where my app can store about 0.5 MB of data. It is preferred that no interaction is required from the user, but a small amount of user interaction might be acceptable.
Currently, Microsoft's Silverlight is being used to store data offline. The plugin is a large download, and it is not installed as standard on most machines.
I have been considering cookies - but they are far too volatile. I can imagine numerous reasons someone might clear their browser cache and lose all their data.
I am not sure about HTML 5 storage, and how volatile it is in practice.
I have been looking into Flash, which is installed on over 97% of Windows computers. It seems I can load data from a user-selected file, and save data to a user-selected file.
My questions...
How big is the Microsoft Silverlight plugin download (in MB) for Windows? (I think about 8 MB, but I did not get a clear answer from the internet.)
How can users accidentally (or deliberately without realizing the consequence) clear their HTML 5 storage on common browsers?
Is there a way to get flash to store or load data from local files without user interaction?
Is there another alternative I have not considered?
Well, you could use Flash Shared Object storage, which will allow anywhere from zero to unlimited space. Check this settings panel for details of your own settings to get a better idea of what I mean.
http://www.macromedia.com/support/documentation/en/flashplayer/help/settings_manager03.html
Of course, this does mean that the user will have to allow third-party Flash content to be stored locally, which is the default. Also, the default storage space is 100 KB, with the user being prompted to allow a larger amount unless they have previously increased the default themselves. So that's a small drawback, but still workable.
I am not sure how you would access the shared object from within a Silverlight app, as I have only used it via a Flash SWF. I will do some digging around using JavaScript and get back to you on that.
Also there is another post that may help you:
Javascript bridge to Flash to store SO "cookies" within flash
It sounds like what you need is isolated storage.
I use it with all my Silverlight apps and it couldn't be easier to use. With only a few calls you can store and retrieve data programmatically.
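For example, a minimal sketch (System.IO.IsolatedStorage; the key name and payload are placeholders):

    // Store and retrieve a small blob of data in Silverlight isolated storage.
    using System.IO.IsolatedStorage;

    public static class OfflineCache
    {
        private const string Key = "cachedData";   // hypothetical key

        public static void Save(string payload)
        {
            var settings = IsolatedStorageSettings.ApplicationSettings;
            settings[Key] = payload;    // add or overwrite
            settings.Save();            // persist to disk
        }

        public static string Load()
        {
            string payload;
            return IsolatedStorageSettings.ApplicationSettings.TryGetValue(Key, out payload)
                ? payload
                : null;
        }
    }

For roughly 0.5 MB of data this should be plenty; for larger blobs you can write streams through IsolatedStorageFile instead of the settings dictionary.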
Edit: I was thinking that your app is already programmed in Silverlight. What is your app programmed in? Is it simply HTML/CSS at the moment?
As far as I know, at the current moment (late 2011), the max-connections-per-server limit remains 6. Please correct me if I am wrong. It is unfortunate that we cannot fix this easily, as we can in Firefox. As far as I know, this value is hardcoded.
One solution is to download the Chromium sources and rebuild them. Is there an easier solution?
Is there any tricky way to hack this without creating a dozen mirror domains?
Why I'm asking: my task is to create an HTML/JavaScript slideshow that will run inside a full-screen browser on a huge monitor hanging on the wall. The JavaScript is really complicated; it preloads photos and makes a lot of AJAX calls to my web services. If the Wi-Fi connection is slow and six photos are loading, the AJAX calls fail and the application runs badly. I want a quick solution based on HTTP, the browser, an Ubuntu tweak, or something else, because rebuilding the JavaScript app will take days.
Off-topic: do you know of any other things that can be tweaked in my particular situation?
IE is even worse, with a limit of 2 connections per domain. But I wouldn't rely on fixing client browsers. Even if you have control over them, browsers like Chrome will auto-update, and a future release might behave differently than you expect. I'd focus on solving the problem within your system design.
Your choices are to:
Load the images in sequence so that only 1 or 2 XHR calls are active at a time (use the success event from the previous image to check if there are more images to download and start the next request).
Use sub-domains like serverA.myphotoserver.com and serverB.myphotoserver.com. Each sub-domain will have its own pool for connection limits, which means you could have 2 requests going to each of 5 different sub-domains if you wanted to. The downside is that the photos will be cached according to these sub-domains. BTW, these don't need to be "mirror" domains; you can just make additional DNS pointers to the exact same website/server. This means you don't have the headache of administering many servers, just one server with many DNS records.
I don't know that you can do it in Chrome outside of Windows; some Googling shows that Chrome (and therefore possibly Chromium) might respond well to a certain registry hack.
However, if you're just looking for a simple solution without modifying your code base, have you considered Firefox? In the about:config you can search for "network.http.max" and there are a few values in there that are definitely worth looking at.
Also, for a device that will not be moving (i.e. it is mounted in a fixed location) you should consider not using Wi-Fi (even a Home-Plug would be a step up as far as latency / stability / dropped connections go).
BTW, the HTTP/1.1 specification (RFC 2616) suggests no more than 2 connections per server:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. A proxy SHOULD use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion.
There doesn't appear to be an external way to hack the behaviour of the executables.
You could modify the Chrome(ium) executables as this information is obviously compiled in. That approach brings a lot of problems with support and automatic upgrades so you probably want to avoid doing that. You also need to understand how to make the changes to the binaries which is not something most people can pick up in a few days.
If you compile your own browser you are creating a support issue for yourself as you are stuck with a specific revision. If you want to get new features and bug fixes you will have to recompile. All of this involves tracking Chrome development for bugs and build breakages - not something that a web developer should have to do.
I'd follow #BenSwayne's advice for now, but it might be worth thinking about doing some of the work outside of the client (the web browser) and putting it in a background process running on the same or different machines. This process can handle many more connections and you are just responsible for getting the data back from it. Since it is local(ish) you'll get results back quickly even with minimal connections.