Someone recently directed me to the W3C spec on widgets:
http://www.w3.org/TR/widgets/
Developers can make web apps work offline via a browser's application cache. I had asked how users were supposed to know they could use certain websites offline, which is when the person brought up the widget spec. It makes sense to separate the packaging of an app from its offline storage ability. However, after googling around and reading up on widgets, I couldn't find any recent articles on the subject (most seemed to be from around 2010). Eventually I found Opera's SDK, but there was a message at the beginning indicating that they were removing the functionality:
Starting with Opera 12, Opera Widgets will be turned off for new users
and completely removed in a later release.
source: http://dev.opera.com/articles/view/creating-your-first-opera-widget/
Are W3C widgets a dead technology? And if so, is any cross-browser technology being developed for packaging web apps? I ask because offline storage is interesting, but I don't see how users would know that they can browse to a particular URL without an internet connection and have it work, unless the browser tells them which apps they have "installed" (or every site that supports offline storage explains it to them).
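For what it's worth, the application cache mentioned above works by pointing a page at a cache manifest, and the page itself can listen for appcache events and tell the user it will work offline, which is one way around the discoverability problem. A minimal sketch, assuming a manifest named offline.appcache (an illustrative name) and accessing the long-deprecated API through a cast:

```typescript
// The page opts into offline caching by referencing a manifest, e.g.
//   <html manifest="offline.appcache">
// where offline.appcache is a plain-text file beginning with the line
// "CACHE MANIFEST" followed by the URLs to be stored for offline use.

// AppCache has since been deprecated and dropped from TypeScript's DOM
// typings, so it is accessed via a cast here purely for illustration.
const appCache = (window as any).applicationCache;

if (appCache) {
  // Fired once every resource listed in the manifest has been downloaded.
  appCache.addEventListener("cached", () => {
    console.log("This app is now available offline at this URL.");
  });

  // Fired when a newer version of the manifest has been downloaded.
  appCache.addEventListener("updateready", () => {
    appCache.swapCache(); // switch to the freshly downloaded cache
    console.log("Offline copy updated; reload to get the new version.");
  });
}
```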
I am building an application for a friend's event company. The software will only be used by a handful of people who run the events.
These are the essential requirements:
The software will capture basic data input regarding the event and competitors.
The software will need to work offline - an Internet connection cannot be guaranteed in venues.
The software will locally store data which is to be synced to a remote database when an Internet connection is available (see the sketch after this list).
The software will display a second window, sent to a projector screen, showing updates to the audience.
The software will need to record data via a serial port for each event.
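The offline and sync requirements above map fairly naturally onto HTML5 local storage plus the browser's online/offline events. A rough sketch of the idea, assuming a hypothetical /api/results endpoint and ignoring conflict handling:

```typescript
// Minimal offline-first queue: records are captured into localStorage and
// pushed to the server whenever a connection becomes available.
// "/api/results" and the record shape are placeholders, not a real API.

interface EventRecord {
  competitor: string;
  score: number;
  recordedAt: string;
}

const QUEUE_KEY = "pendingRecords";

function loadQueue(): EventRecord[] {
  return JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
}

function saveRecord(record: EventRecord): void {
  const queue = loadQueue();
  queue.push(record);
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
  void flushQueue(); // try straight away; this is a no-op while offline
}

async function flushQueue(): Promise<void> {
  if (!navigator.onLine) return;
  const queue = loadQueue();
  if (queue.length === 0) return;
  const response = await fetch("/api/results", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(queue),
  });
  if (response.ok) {
    localStorage.removeItem(QUEUE_KEY); // server has the data; clear the queue
  }
}

// Retry whenever the browser regains connectivity.
window.addEventListener("online", () => void flushQueue());
```

The projector display could simply be a second browser window on the same origin; it would see new records arrive via the storage event that fires when another window writes to localStorage.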
Though this might traditionally be a desktop application, I think there are good reasons for trying to build something like this as a web app, namely:
Easier for me to build / maintain / test.
Cheaper (.NET would be my first port of call for desktop, but I heard Microsoft are abandoning VS Express for Windows 8).
Platform independent - if an onsite laptop fails, another machine can be used without installing and configuring the software, and future hardware upgrades are also easier.
As I have not yet used the offline capabilities of HTML5, I'm wondering whether there are any caveats before going down this route - is a desktop app better, or is there another solution?
(I know I'd have to create a Java Applet for the serial port communication as demonstrated here.)
Since you need to communicate with hardware, I wouldn't bother with HTML5 plus Java applets. Just go with a desktop application.
I am new to Native Client, and new to plugins/extensions too. It strikes me that plugins/extensions are "better" than NaCl in some respects:
You can compile a plugin/extension anywhere/anyhow to produce a plain old DLL or .so; NaCl needs a binary produced only by the NaCl toolchain.
A plugin/extension is portable across browsers (e.g., it is supposed to run in Firefox and more, as well as in Chrome). This is because plugins/extensions adhere to a de facto standard introduced in Netscape 3.
If that's all true, then what are the advantages of NaCl over plugins/extensions?
In a word: security. NPAPI plugins are unsandboxable. They're native code, running out-of-process and outside of the browser's sandbox, meaning that they can do anything at all on your machine.
NaCl, on the other hand, runs inside Chrome's sandbox and provides access only to a well-defined set of APIs. Clever compilation tricks ensure that code can't break out of the sandbox and start executing untrusted instructions, whether maliciously or accidentally.
http://www.chromium.org/nativeclient/getting-started/getting-started-background-and-basics is a good resource for an overview of the differences. I'd recommend at least skimming it to get an idea of what NaCl is trying to achieve.
First, you keep saying "plugins/extensions", but extensions and NPAPI plugins are completely different. NPAPI plugins are binary and (as you said) cross-browser. Extensions are per-browser; each browser has its own set of extension APIs and capabilities, but they are generally written in HTML/CSS/JS.
As for your question: in addition to the very important security aspect mentioned in another answer, there's platform portability. If you want to do drawing, event handling, sound playback, etc. in NPAPI, you need to write three completely different implementations (Windows, Mac, and Linux) and ship three separate copies of your plugin. NaCl/Pepper has platform-neutral abstractions for everything.
I am using Ruby on Rails with the Mechanize library to scrape store websites. The problem is that many times I can't crawl certain elements. However, I can see this when I 'view source' on the site.
For example, Walmart's category (in the case below, "Health") is unscrapeable. I believe this is because it is dynamically produced HTML (e.g., generated by JavaScript). In order to scrape this, I would need a browser to process the web request.
http://www.walmart.com/ip/Replacement-Sensor-Module-for-AlcoMate-Prestige-Breathalyzer/10167376
I am also using a Linux machine on Amazon EC2, and it would be tough to install a browser there for UI scraping. Is there any Rails gem/plugin that can help me?
Thanks, all!!
Your question, rephrased, is: what's an easy way to parse an HTML document's DOM the same way a web browser would, and then execute the JavaScript in the document against the parsed DOM, without running an actual web browser?
That's a little tricky.
However, all is not lost. Take a look at Capybara. Though created for acceptance testing you can also use it for general grokking of documents. To execute JavaScript you'll need to use a driver that supports it, and since you want it to be "headless" (no browser GUI) that probably means using capybara-webkit, Akephalos or capybara-envjs.
Another option might be Harmony, which I know nothing about except that it appears to do what you want but also appears not to be maintained anymore, so YMMV.
I'm developing an application and am thinking about releasing it open source.
Is it good choice to open source it, even though it's not a developer API library, but an end user app?
When is it a good time to release the source code? Should I start the project open source from the very beginning or wait until it's v1.0?
If the source code is GPL, how do you prevent someone from grabbing it and illegally releasing a proprietary closed source application? In practice, how can this violation of copyright law be spotted and is the law enforceable?
This is all inherently subjective, of course...
Yes. There are many open source end user applications. Firefox, GIMP, Inkscape, Open Office, and many many (other) GNOME and KDE apps, for example.
You definitely don't need to wait until v1.0, though it might be good to wait until you've got some early proof-of-concept code to "announce" the project. If you announce an empty code repository, you're unlikely to get contributors, and it may be hard to drum up enthusiasm later.
Spotting a GPL violation of an app is probably easier than spotting a GPL violation of a library, on average.
If the code is GPL and you have evidence (or strong suspicions) that the GPL was violated you could try contacting gpl-violations.org or the FSF.
Here are my opinions:
1 - Yes. It can be a portfolio, an example app for others, anything... IMHO, it doesn't matter if it's not a dev-focused project.
2 - Since the beginning. One great thing about these open-source repositories is that they hold the source code, and there you can put some ideas about the direction of the project, maybe even discuss it with other users / developers.
3 - That's tough. I guess you can't, but I'm not sure.
As an amateur mobile developer, I feel dismay every time I have to fix, update or add new features to an application of mine.
I'm eagerly awaiting the moment you can just develop a web application for any kind of device.
HTML5 and new APIs like the Geolocation API or the Contacts API are a step forward, but what other APIs could be useful to move current mobile developers to the web? For example, some kind of Sensor API to access mobile accelerometers or magnetometers (see the sketch below).
I am aware that future Flash and AIR mobile releases are coming, but I'd prefer web standards.
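As an illustration of what script access to such hardware looks like, the accelerometer is already reachable on some mobile browsers through the devicemotion event, alongside the Geolocation API mentioned above. A small sketch (the log messages are placeholders):

```typescript
// Geolocation: one of the device APIs mentioned in the question.
navigator.geolocation.getCurrentPosition(
  (pos) => console.log(`lat ${pos.coords.latitude}, lon ${pos.coords.longitude}`),
  (err) => console.warn(`position unavailable: ${err.message}`)
);

// Accelerometer: exposed via the devicemotion event on supporting mobile
// browsers - a concrete form of the "Sensor API" asked about above.
window.addEventListener("devicemotion", (event) => {
  const a = event.accelerationIncludingGravity;
  if (a) {
    console.log(`acceleration: x=${a.x}, y=${a.y}, z=${a.z}`);
  }
});
```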
There’s an idea to add a general devices API to HTML5.
http://www.w3.org/TR/dap-api-reqs/
To be honest, I don't think you can do this sort of thing generically (or at least it's an impractical challenge). I think it's down to the folks who make mobile operating systems (i.e. Apple, Google, and the rest) to decide whether and how to provide JavaScript access to hardware.
It’s potentially a massive security risk. Go to a hijacked website, and suddenly Russian criminals are copying every photo you take? There’s a “powerful mobile web application” for you.