Why is Google's "face recognition" feature available only in Picasa WEB and not Picasa for the PC? - language-agnostic

A friend asked me this today.
Picasa Web has a cool (and frightening :-) feature where it will recognize all the faces in your photo album.
But the PC (desktop) version doesn't have this.
Several reasons I can think of:
They just haven't gotten around to writing the PC version of the code.
They are licensing that feature and it costs a lot more (or isn't available) on the PC.
It takes a lot of processing power (this seems odd because MY PC's cycles are free to Google, but they have to pay for the cycles consumed on their own servers).
Any other thoughts?

I'm certain it'll make it out in coming releases but Google is a funny company when it comes to its own competing/complementing services. One thing is for sure, only somebody on the Picasa team could give an accurate answer.
But we could hypothesise several things...
They don't want their code reverse-engineered.
(As you say), they aren't licensed to redistribute it.
It's blocked in the dev version by other new features that aren't complete yet
They don't want to release it because they want people to use PicasaWeb as a social photo network.
I don't think processing power is an issue. If they're running it in bulk on their own servers for free, a modern desktop could probably run it without issue.

From my limited contact with face recognition software, it's probably the redistribution issue. When I dealt with it, face recognition was its own little world with extremely high per-CPU licensing costs and tremendous paranoia about code getting loose.

I'm not so sure it's not a processing issue. It took Google's massive servers 30 minutes to run through all my photos. I can only imagine that same task would have taken days on my local machine.

Actually, it's in, just with limited functionality: when you do a search, there's an icon to find only photos with faces. The experimental passport feature also works that way.
So the answer is:
Not the same code base (APIs) available or used, and not the same language, so it's not directly portable.
Not the same software and there are no stated goals to make both apps feature equivalent.
Programmers are limited and their time is too. They make choices as to what to implement now.

No idea if this is the case for Picasa, but there's another case where licensing could be the issue. If the server-side code uses code under a restrictive license (the GPL, for example), that license restricts how you can distribute modules using the code. Running that module on a web server, where the user only gets the output, is legal under such licenses. If that code were distributed, there would be many legal requirements attached, which would likely be very undesirable for commercial software companies, including Google. This is one very good reason to have some capabilities only accessible through web services.

This was also the case with Riya (who was arguably the first to market with reliable facial recognition for consumer photo collections).
The biggest reasons are likely:
Processing time (they can't control how fast your CPU is and therefore they can't control the experience). Facial recognition is very likely to be process-intensive (this was Riya's stated reason for not doing it client-side).
The recognition process requires a LARGE volume of data for processing that is only accessible on the server (in other words, the process needs to spin through millions of faces, not just the faces that you have on your hard drive).

Related

Can HTML5 communicate with peripherals like scanners and credit card readers?

My company writes software that installs on client machines to perform point-of-sale transactions. The software interfaces with a variety of external peripherals (receipt printers, bar code scanners, credit-card readers, etc). We do this with a WinForms app that we created in Visual Studio using the Microsoft OPOS library, which in turn communicates with our cloud server.
There are obvious inefficiencies in this model, primarily with updates. I'm researching other ways to communicate with these peripherals over the web, preferably via web browser. As far as I can tell, Java is one of the only technologies out there that can do what we're looking for (via applet), and I assume Adobe Flash can as well (via the AIR platform). These are viable, but not preferable, because we want to run our software on web-enabled mobile devices.
Does anybody have suggestions for other ways to communicate with external peripherals over the web?
UPDATE (Jan 16th, 2019): The Credential Management API has been announced. It's currently only supported on Chrome and Opera but it's looking promising. Google Developers wrote an article elaborating on the spec.
UPDATE (Dec 28th, 2016): Another couple years gone, and another update. This one will be more focused on two new developments than anything else. See the new "WebUSB & Web BlueTooth" section under "Full Device API". But the answer remains the same.
UPDATE (Nov 3rd, 2014): It's been just over two years since the original post was made, but the answer remains mostly the same for now. We are, however, closer to your original goal in several areas.
ORIGINAL ANSWER:
There would be a number of ways to go about this.
Background
The HTML5 specification has entered into the "Recommendation" state. This means that HTML5 is pretty much set for what it looks like. However, I will be using HTML5 in the same way that every marketing person in the world has decided is best. That is, I will not be talking about HTML. Well, I will, in so far as you will utilize it from an HTML page, but not really. What I'll actually be discussing is JavaScript (JS) and that's a horse of a different color. But for all intents and purposes, we're putting it all under the same heading as HTML5, which has been decided to mean "shiny and new" now.
Also, the items which I am discussing will vary in support. Some are very browser dependent projects (like Chromium specific implementations), and some are more standards driven projects that may not have browsers implementing or experimenting with them yet. I'll try to distinguish between the two as I go along.
Full Device API
Status: Incoming, but not ready
Being able to access devices from the browser is making slow but steady progress. Right now, many modern browsers have access to some of the more common devices like the camera or gamepads, but they are all high level APIs. Browser vendors, the standards groups, and lots of companies involved with the web are all trying to make webapps just as powerful as your local applications.
But the APIs you are looking for are still in progress and a ways off. For your particular case, and for the more general case of connecting your webapp to most devices, we're still a few years away from something we can use. If you want to see what awesome things are coming up in that field, here are just a select few items that may help you directly:
Web Near Field Communication (NFC) API
This one unfortunately may be dead in the water for now. But it looks like some folks at the W3C (mostly Intel, it seems) were originally looking at adding an NFC API to the web.
Media Capture Streams
The WebRTC group is working on programmatic access to media streams like the camera, which would allow you to integrate things like barcode scanning or other features. This has reached CR status and is available in browsers, but is less helpful on its own.
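As a rough illustration of what that API enables (a sketch of my own, not from the spec; it assumes a current browser and a page containing a <video> element):

```javascript
// Minimal sketch: grab the camera stream and show it in a <video> element.
// A barcode scanner would then draw frames to a <canvas> and decode them.
navigator.mediaDevices.getUserMedia({ video: true })
  .then((stream) => {
    document.querySelector('video').srcObject = stream;
  })
  .catch((err) => console.error('camera unavailable or denied:', err));
```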
Web Bluetooth
If you had Bluetooth-capable tools, this API would help you connect with them from computers and devices that are able to listen and connect. The primary driver for this at the moment seems to be the Chrome team, including an experimental implementation, but I wouldn't consider it anywhere near ready to use yet (See "WebUSB & Web BlueTooth" section).
WebUSB
This would allow full access to low-level USB information, including listing devices and interacting with them. As with Web Bluetooth, this seems to be a current Chrome pet project, but I also wouldn't rely on it (See "WebUSB & Web BlueTooth" section).
Network Service Discovery
If you have other devices or items on the network which broadcast and use HTTP, this API would allow you to discover and interact with these services. No browser implementation, but it is in a working draft for the W3C.
Originally, Mozilla was pushing a number of these forward because of Boot2Gecko (or Firefox OS). However, with that project officially cancelled, we aren't seeing much progress from them in these areas now.
Members of the Chrome team, however, seem to have decided to dive in and start not only working towards these, but putting them live in browsers. Which leads us to...
WebUSB & Web BlueTooth
Like sausages, it's better to not know how Web Standards are made
-Abraham Lincoln (probably)
There's been a little bit of buzz in this area, as it looks like the Chrome team snuck these in as experimental features and developed their own specification for them. Which is great! Just maybe not in the way that you were hoping for.
Each browser vendor and W3C contributor group has their own style and makes contributions towards the specs in their own way. The result is usually a fairly decent specification that the browsers have agreed upon. But getting from nothing to something is... messy. Real messy. And is quite a process a lot of times. It doesn't always result in a good spec (yeah, I'm talking about you Florian compromise...) but even when it does, it takes a while.
However, it seems like Google developed this version of the spec all on their own. And, in my experience, Google's approach to specs is always a little... well... setting my personal opinions aside, we'll say "gung-ho". They tend to just dive right into the deep end. And that seems to be what they've done here.
I highly doubt these specs or implementations will look anything like this when they become standards. And there's nothing wrong with that. That's part of the process. But I wouldn't go relying on this implementation or developing any code or products against it. This is an unprecedented feature on the web and all the browser vendors are gonna want a big say in this.
That said, this is actually good. One of the things Google often does (for better or worse) with situations like this is forces the conversation and it can push things along. And having a feature shipped in the browser, even an experimental feature, can turn up the heat on the demand for it. So we may see more progress in this area soon.
PhoneGap Apache Cordova. You know, for your phone
Status: Not fully featured and phone only
Apache Cordova, previously Adobe PhoneGap, is a way to write your program in HTML, CSS, and JS that allows you to access lower level functionality on things like phones, and compile across devices. This would be a way to implement your program, but it would be a phone application, not necessarily a desktop one. An option to consider, and something I figured I would mention.
Cordova implements a few of the above features already, but doesn't have some of the more powerful ones like NFC or BlueTooth.
The Native-App solution (for Windows 8)
Status: Possible, but OS specific and desktop app
Windows 8 offers the ability to build applications in HTML and JS. This would allow you to easily access lower level functionality on the OS via their API. From the looks of it, it is pretty extensive and you can do a lot. You mentioned cross OS support, however, and this obviously limits you to one OS.
It's so Flash-y!
Status: Dying/Dead, not possible as a web app
Flash won't have direct access to the system through the web. You could create an AIR application, but that will sort of defeat the purpose of having it web based. In addition, Flash support on mobile, and on the web it would seem, is on the decline.
NodeJS
Status: Can be a bit of a pain and only possible as a desktop app
NodeJS and JS applications have sort of been a hot topic the last couple years. I didn't discuss it in my original post because I felt it wasn't quite there yet. However, things have progressed and it is much closer to being ready for this sort of thing, and has the support and power of a growing user base. That said, for your particular case, I wouldn't recommend using it. It would have to be local on the user's machine, and because of how NodeJS (and similar engines) are at the moment, it would require a lot of extra configuration and setup that would complicate things a bit.
So you could build an app using HTML, CSS and JS with NodeJS or similar engines and have low level access to what you need, but it has to be local, and it would take more work than I'm sure you want to do every time you'd like to install it for a customer.
... Now where was I?
So where does that leave us? Well, simple: if you want a single language/set of code as your code base, HTML/CSS/JS aren't a great option... yet. But they could be some day. For now, your options are limited to what you feel is best for your customer. Java is a stable option you listed, but obviously comes with its own drawbacks. As the web develops, I think we'll see a lot of really cool things coming out of the new functionality, but we've got a ways to go still.
More reading:
Brian.IO: Beyond HTML5
HTML5 Apps on Windows 8
Wikipedia list of projects built using JS
This is possible, but it would have to be done indirectly. In theory, you could write a socket server in a low-level language, which owns the device I/O and relays it through a socket. The HTML5 page then uses WebSockets, or some equivalent, to communicate with this socket server.
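A minimal sketch of the browser side of that idea (my own illustration; the port and the message format are arbitrary assumptions, and the local relay process itself still has to be written and installed separately):

```javascript
// The local relay process owns the peripheral (serial/USB) and exposes
// a WebSocket endpoint; the page just exchanges messages with it.
const socket = new WebSocket('ws://localhost:8181');

socket.addEventListener('open', () => {
  // e.g. ask the relay to send text to a receipt printer
  socket.send(JSON.stringify({ device: 'receipt-printer', data: 'Hello' }));
});

socket.addEventListener('message', (event) => {
  // scanner reads, card swipes, etc. come back as relay messages
  console.log('from peripheral:', event.data);
});
```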
Now it can be achieved with the WebUSB API.
It is available in Chrome since version 54.
It is a W3C editor's draft so we can expect (hope) that it will be adopted by other browser vendors...
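For flavour, the flow looks roughly like this (a sketch under assumptions: the vendor ID, interface and endpoint numbers are placeholders for a hypothetical device, and requestDevice must be triggered by a user gesture such as a click):

```javascript
async function connectToDevice() {
  // prompts the user with a device chooser, filtered by vendor ID
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x1234 }]
  });
  await device.open();
  await device.selectConfiguration(1);
  await device.claimInterface(0);
  // read up to 64 bytes from IN endpoint 1
  const result = await device.transferIn(1, 64);
  console.log(new TextDecoder().decode(result.data));
}
```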
I've been thinking about this a lot lately... have a POS app mostly written in VB6, considering what to do next. HTML5 is an option and I was thinking I'd use VSPE to get the serial stuff into the JS.
http://www.eterlogic.com/Products.VSPE.html
Love this product! Works very well for getting serial traffic where you need it, so I think it would work well, at least as a proof-of-concept to get you going. You'll want to use a combination of "connector" types along with the "tcpclient" and "tcpserver".
Just for the record, a method that works well in 2016 (available since Chrome 26), but is to be withdrawn over the next two years, is to deploy your HTML5 as a Chrome App and use chrome.usb (or chrome.serial or chrome.bluetooth).
I am currently using chrome.usb and planning to migrate to a web app using the WebUSB API (see Supersharp's answer), which I hope will be adopted by the time Google discontinues Chrome Apps 🤞.
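For reference, the Chrome App variant looks roughly like this (my own sketch; the vendor/product IDs are placeholders, and it only runs inside a packaged Chrome App whose manifest requests the "usb" permission):

```javascript
chrome.usb.getDevices({ vendorId: 0x1234, productId: 0x5678 }, (devices) => {
  if (!devices || !devices.length) {
    console.log('device not found');
    return;
  }
  chrome.usb.openDevice(devices[0], (connection) => {
    // from here, chrome.usb.bulkTransfer / controlTransfer move the data
    console.log('opened device, handle:', connection.handle);
  });
});
```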

HTML5 offline authentication security issues

I'm doing a mobile WebApp using HTML5. My problem is that the "post-login" pages cached by the HTML5 application cache, from what I understand, remain unsafe. Is there a solution? What is the best way to ensure offline authentication while hiding user/pass and "post-login" pages from intruders?
I am just starting to delve into HTML5 usage of local storage via the Manifest option (http://diveintohtml5.info/offline.html) and this too is a concern for me, as much for privacy as security. Two things came to mind: ezNcrypt and the Editor's Draft on Web Storage (Privacy and Security); links to both below...
While I do not know if this will be the 'best' answer, I figured anything would be better than nothing; after all, you posted this question back on Feb 2, 2012 and no one else has offered anything.
Caveats (ezNcrypt):
It works on Linux
It's a commercial product with a 30-day trial; I honestly do not know the cost, as I am not affiliated with them. I just heard of what they do via a local meetup (LAPHP, LAMySQL or LAWebspeed) last year, and it sounded interesting enough to note for future reference. Transparent encryption will be huge.
Google ezNcrypt to get a link; I am limited to two links here.
Even if its not the 'right' solution for you or others, perhaps it will point you in a good direction with some decent search terms to find more.
If the encryption is handled "transparently" below the application / data layers, it will just work regardless of the IT knowledge of the user.
If you are willing to share some contact information with them, you get this PDF file with 4 case studies (FTP, NoSQL, SQL and something else)... it's free.
http://blog.gazzang.com/white-paper-unifying-data-encryption-liberating-transparent-encryption-for-any-purpose-/?utm_campaign=Whitepaper&utm_source=Whitepaper
I should get a commission, lol. Hey if it helps us find a solution, that is all that matters.
Whatever your decision, make sure you go through the Editor's Draft on Privacy and Security to dot your i's and cross your t's, especially sections 6 (Privacy) and 7 (Security).
http://dev.w3.org/html5/webstorage/#the-localstorage-attribute
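One application-level complement to transparent encryption (a sketch of my own, assuming a modern browser with the Web Crypto API; the storage key name and the iteration count are arbitrary): derive a key from the user's passphrase and encrypt data before it ever reaches localStorage, so cached "post-login" data is not readable by intruders.

```javascript
// Derive an AES-GCM key from a passphrase (PBKDF2) and store the
// ciphertext, salt and IV in localStorage; all three are needed to decrypt.
async function encryptForStorage(passphrase, plaintext) {
  const enc = new TextEncoder();
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const baseKey = await crypto.subtle.importKey(
    'raw', enc.encode(passphrase), { name: 'PBKDF2' }, false, ['deriveKey']);
  const key = await crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt, iterations: 100000, hash: 'SHA-256' },
    baseKey, { name: 'AES-GCM', length: 256 }, false, ['encrypt']);
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, key, enc.encode(plaintext));
  localStorage.setItem('secureBlob', JSON.stringify({
    salt: Array.from(salt),
    iv: Array.from(iv),
    data: Array.from(new Uint8Array(ciphertext))
  }));
}
```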
Just thought of another option. I did not look except to provide a URL to their checklists (cheat sheets), but my guess is OWASP would have one or two checklists that might lead you to something. Just think of your device as a little desktop/server and see if any of those apply. Too bad my Nokia N800 broke on me: a full-blown Linux computer in my hand circa 2006, and the new Linux handhelds circa 2012 are so much more powerful. Just use a Linux distro with a small footprint on a device with exchangeable storage (microSD cards would work; the Nokia N800 had two slots in 2006) and there is no limit to what you could store locally and run offline. Here is the URL to the OWASP checklists:
Sorry, limited to two links; google OWASP cheat sheets and you will find them.
If a handheld is truly 'smart' you will have root (administrator) access to the device and the underlying operating system / file system. Every operating system has methods to encrypt data on the fly, but you have to have access to utilize them. A device that does not give you this access (usually for proprietary reasons, most often to force you to buy a new device in 6 months to 1 year) is limiting your options artificially for the wrong reasons and is simply not smart. Remember that not all versions of Android (Linux) are open and rootable, so do your homework or you will end up with an expensive paperweight in the near future.
I would recommend only buying smart handhelds that allow for root/admin access.

In which domains are message oriented middleware like AMQP useful?

What problems does MOM (Message Oriented Middleware) solve? Scalability? Integration?
In which domain are they typically used and in which domains are they typically not used?
For example, say, is Google using such a solution for its main search engine or to power GMail?
What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?
Does it make any sense when you're writing a webapp where you control the server side and have a homogeneous environment (say tens of Amazon EC2 instances all running Linux + Java JVMs), and where the clients are, well, web browsers?
Does it make sense for desktop apps that need to communicate with a server?
Or is it 'only' for big enterprise stuff where you typically have a happy mix of countless different systems that need to communicate in one way or another?
I'm a bit confused as to what they're useful for and I think that with example of where they're appropriate and where they're not appropriate I could better understand their use.
This is a great question.
The main uses of messaging are: scaling, offloading work, integration, monitoring, event handling, routing, networking, push, mobility, buffering, queueing, task sharing, alerts, management, logging, batch, data delivery, pubsub, multicast, audit, scheduling, ... and more. Basically: anything where you need data but don't want to make a database request. (Caching is another, longer story).
Another way of looking at this is to notice that many applications used to be built by assuming that users (people) would perform actions that would be fulfilled by executing a transaction on a database (including reads, writes). But today, many actions are not user-initiated. Instead they are application-initiated. For example "tell me when the book that I want to buy is in stock". The best way to solve this class of problems is with messaging of some sort. Whether you call it middleware or web push or real time salad dressing does not matter. It's all messaging.
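A concrete sketch of that "tell me when the book is in stock" pattern, using Node.js and the amqplib client for RabbitMQ (my own illustration; the exchange name is made up):

```javascript
const amqp = require('amqplib');

// Publisher: fires an event; it knows nothing about who is listening.
async function publishStockEvent(bookId) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertExchange('stock_events', 'fanout', { durable: false });
  ch.publish('stock_events', '', Buffer.from(JSON.stringify({ bookId })));
  await ch.close();
  await conn.close();
}

// Subscriber: reacts whenever any such event arrives.
async function watchStockEvents() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertExchange('stock_events', 'fanout', { durable: false });
  const q = await ch.assertQueue('', { exclusive: true }); // private queue
  await ch.bindQueue(q.queue, 'stock_events', '');
  ch.consume(q.queue, (msg) => {
    console.log('book in stock:', msg.content.toString());
  }, { noAck: true });
}
```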
When you enable applications to initiate or react to events, then it is much easier to scale because your architecture can be based on loosely coupled components. It is also much easier to integrate those components if your messaging is based on a stable, scalable, serviceable tool, preferably using open standard APIs and protocols.
I hope this helps. We try to maintain a list of useful links about messaging here
Please get in touch with questions and comments on any of this, we are dead easy to find.
To address your specific questions:
In which domain are they typically used and in which domains are they typically not used?
Like databases, messaging systems crop up everywhere.
For example, say, is Google using such a solution for its main search engine or to power GMail?
Google uses a lot of home grown technology, but a lot of their open source contributions and known use cases suggest that messaging is (or should be) central to some of the main services.
What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?
Very much so.
An example use case is scaling web page requests. When the user makes a web request, the web server puts it onto a queue for background processing. This means that the web server can keep working while the request is processed. It also means that the web server does not need to know how the request is handled, making system maintenance, upgrade and rollback much simpler because the main parts are 'decoupled'.
So, anyway, the web request gets processed by a back end service, or possibly by many services, eg 'look up book titles', 'draw shopping cart', 'get advertisement', 'check user account'... Finally all the results get put onto another queue, ready for collection and user response by the web server. Typically the system will include a timeout of around 100ms so that any late requests just get thrown away. The user sees anything that got processed in the time interval. This is one reason why some large ecommerce sites have pages that appear to load in stages.
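A minimal sketch of that decoupling, again with Node.js and amqplib (my own illustration; the queue name and payload shape are assumptions):

```javascript
const amqp = require('amqplib');

// Web server side: enqueue the request and return to the user immediately.
async function enqueueWebRequest(payload) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('web_requests', { durable: true });
  ch.sendToQueue('web_requests', Buffer.from(JSON.stringify(payload)),
                 { persistent: true });
  await ch.close();
  await conn.close();
}

// Back-end worker: processes requests at its own pace.
async function startWorker() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('web_requests', { durable: true });
  ch.consume('web_requests', (msg) => {
    const request = JSON.parse(msg.content.toString());
    // ... 'look up book titles', 'draw shopping cart', etc. ...
    ch.ack(msg); // acknowledge so the broker can drop the message
  });
}
```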
There are many more use cases...
Does it make any sense when you're writing a webapp where you control the server side and have a homogeneous environment (say tens of Amazon EC2 instances all running Linux + Java JVMs), and where the clients are, well, web browsers?
Definitely. If you have an unknown, or unbounded, number of users, server side instances, and application latencies, then it makes sense to use messaging, even if just as a scalable substrate for non-blocking RPC.
Does it make sense for desktop apps that need to communicate with a server?
In lots of cases. One very common case is when the server pushes events to the desktop app, e.g. game events, tweets, price feeds in finance, system alerts...
Or is it 'only' for big enterprise stuff where you typically have a happy mix of countless different systems that need to communicate in one way or another?
Definitely not only for those 'legacy integration' cases but they are important too. At RabbitMQ, the biggest customers we have in terms of pure scale or message volume are cloud providers and big web application providers.
I will give only one answer, from prior experience: take a look at the middleware that is employed by big companies. Middleware has one purpose: to glue disconnected systems (written in disparate languages) together so that they can interact with one another and streamline the business process. Entera, which I have had experience with, creates a middle layer in which the Unix box, using processes written in C, interacts with the mainframe system (DB2, COBOL) via a front-end written in PowerBuilder (I am not naming the company!).
From the description I have given, Entera is a middleware which hosts a number of things: smooth integration of the flow of data regardless of the endian format, and the ability for different languages to talk to the middleware broker (a broker is a CORBA- or DCE-like process that conforms to The Open Group standards and listens on a particular port), which is specified by an IDL that makes a process appear to be local. If you understand the terminology used in Remoting under Microsoft's .NET Framework, you are not far off the mark! The middleware generates stubs which are linked at compile time and manages the creation of the process, hosting it off a port, and multi-threading at run time. Modern front-ends (such as .NET, Java, PowerBuilder, even the unspeakable VB6... ok, VB.NET for the purists out there) can interact by opening a connection to the specified port on a particular IP address, and using the generated stubs, can interact with it directly.
Obviously, from what was described, you can see how legacy systems can have new life breathed into them, and thus the process can scale. The major downside of this is the cost factor, which can run into thousands of dollars. Big companies who use mainframes as their back-end processing systems for billing/invoicing, and who generate huge revenue, can obviously afford such an expensive product; to them it would seem like throwing pennies into a pool of water, because middleware prolongs the business process, breathes new life into it, and can extend the business by a good number of years into the future without worrying about the 'legacy' tag attached to it.
Incidentally, I carried this out as part of my thesis for my BSc in Information Systems, which covered this commercial front-end. There was an open source version of the middleware available on SourceForge called FreeDCE, but development efforts have declined or stopped.
Edit:
@cocotwo: That is exactly what middleware does; as you said, it is a plumbing tool... message oriented middleware is not really heard of much, AFAIK, because I would imagine the processes (functions) would need to be called as if they were locally visible within the application domain of the front-end to make them easy to interact with.
Using messages may have its advantages over RPC calls in that the messages are queued in a safe holding area in the event that a network disconnection occurs; there may be some data caching going on within that aspect to allow the front-end to continue regardless... it would be useful in the instance of 'updating the status of a particular billing/invoice number': a one-way write of data to the back-end via the middleware.
Ok, big companies would have advanced systems infrastructure, in that technicians are constantly around the clock to ensure a smooth delivery of data flow, so that would have to be factored in. The company that I worked with had an IBM Global Support contract to fulfill in order to ensure a maximum uptime of 99% with six nines after the decimal point... with hot-swapping/balanced clusters/mirroring systems in place...
Whereas with RPC, if a disconnection occurs, the front-end would have to be restarted, or would have to handle the disconnection event. It really depends on whether the message-queueing middleware handles each message in real time and passes results back to the front-end immediately...
This is where each (message-queueing and RPC-related middleware) has its strengths and weaknesses... and also the cost mitigation factor, such as support, maximum uptime, development efforts and training. That's a biggie here, as middleware is really proprietary (despite following 'The Open Group' layout/standards) and complex to set up and to glue the whole thing together via scripts.
Good answers and discussion here. Our consulting team has two preferred "messaging" solutions: RabbitMQ and NXTera, a high-speed RPC middleware and the contemporary version of Entera mentioned above. My partners and I have developed several solutions using RabbitMQ; it is the best tool available in that space right now. Additionally, I happen to work for the company that makes NXTera/Entera.
From experience I can clearly say that both of these products meet the need for reliability and low maintenance as discussed above. There are situations where a messaging service, like RabbitMQ, is the right choice: where publish/subscribe, certified delivery, queuing or store-and-forward are required.
In other cases, RPCs (remote procedure calls) are the best and fastest solutions for transactional and distributed processing for enterprise or cloud-based applications. When it is right to use an RPC, but SOAP/.NET (yes, these are RPC implementations) are too slow, expensive or complex, a lightweight high-speed RPC middleware like NXTera/Entera is the right choice for us.
There is some use case overlap between RPC middleware and message oriented middleware, and where there are you can use either successfully. But both are strong and dependable choices.
The large companies I work with use both RPC and MoM side by side. As far as Internet companies go, Google (Protocol Buffers) and Facebook (Thrift) show that RPCs have a role to play in modern web and cloud-based development.

Pros and cons of building apps with proprietary database systems

I've been interested in 4D SAS' database product for a long time, though have barely touched it in eons.
In considering what tools to use for application development, especially one that will require a database component, what should be looked for when considering open-source tools like MySQL and PostgreSQL vs proprietary solutions like 4D or Pervasive SQL?
What good (and bad!) experiences has the SO community had with various DB tools like 4D, Pervasive, FilemakerPro, etc?
Any bad experiences?
Difficult to make a relevant list of Pros and Cons without a context.
My advice would be the following: when making the decision of using a proprietary database, make sure that this decision is based on strong facts and not merely a technical interest for an exotic tool. Put into the balance the benefits for using the proprietary database and the advantages of a non-proprietary solution.
The answer is different from system to system.
A prerequisite is that your system is well identified, with a clear scope, a quite predictable evolution, so that the results of your analysis will be robust. Then, if your proprietary solution brings a real benefit for your system, that you are comfortable with the support and that you can afford the overall cost, you should be a good candidate for the proprietary solution.
4D is a proprietary, Mac OS/Windows-only cross-platform database system with both stand-alone and client-server varieties. You would do well to compare it to Alphafive.com's software, which is Windows only. I've worked with it for 17 years and it has served me and my department very well. Off the top of my head...
Pros:
Interface & code are closely tied to the data engine which makes development of rich, cross-platform user interfaces very fast and easy.
Proprietary relational data engine runs natively on both platforms, along with native client interfaces (but requires licenses for multi-users). Auto-relations are helpful (but sometimes get in the way).
Can access external systems via SOAP and ODBC and SQL drivers (limited).
Can access 4D from external systems via SOAP or http requests & web pages.
Native procedural programming language based on Pascal and is EASY to learn.
Excellent tool for small to mid-sized departments.
Latest version accepts subset of SQL commands AND original data access, so it's backward compatibility record has been very good.
Security is EASY in 4D.
You can build solutions to deploy through a variety of means, and are not limited by whether or not MS Access is installed.
Cons:
Interface & code are closely tied to the data engine which can lead to limited use of abstraction and "black-box" coding unless you make it a goal of your development.
Compiles to one monolithic structure file forcing restart for single fixes.
Language is still only procedural, making it harder for object-oriented programmers to accept. Every method requires a separate "file" in 4D, so you can't include more than one function or procedure in a single routine; it takes some getting used to.
While the company appears to be in good shape, growing and developing, you simply never know, as they keep their condition to themselves.
Company has never really marketed itself--trusting in its developer base to spread the word and grow the product through site deployments and product upgrades. Web site is clearly useful only to developers who already use the product -- it simply fails to attract new users.
Product upgrades have always seemed to focus on how the tool is better for the DEVELOPERS rather than for the CUSTOMERS of those developers.
SQL lacks views, compound indexes, and other common SQL features.
When a user requests a report of specific columns of data, I often have to write yet another program just to provide that specific data -- I can't always just query the data and generate a text file.
Does not handle new OS versions with nearly the ease of web browser based applications. Older version is broken on Mac OS 10.6, and newest version requires the latest Mac OS 10.6. No version is certified yet on Windows 7.
I've spent nearly a year learning ASP.NET and a few weeks on Ruby on Rails. While SQL data stores are EASY, user interface is HARD; but it's worth it when your application still functions through OS upgrades. You can always use an older browser if the latest version breaks something.
I'd recommend you consider either of those, depending on how much funding you have available to implement the project (Rails being the cheaper of the two). Then ANY system with a web browser can access the data, and you can fix interface pages on the fly as needed, rather than taking the whole system down for a few minutes for a single, simple update. Those skills might be more marketable in the future.
I will only say one thing: watch the "actual" cost of your decision. Most proprietary database systems are Windows only, or sometimes Mac/Windows only.
This means that along with paying quite a bit of money for the database system, you must also pay a good amount of money for a server operating system to run it...
Also, compare the database system with current open source solutions. Is it really worth it? After moving from Microsoft SQL Server (which has a free edition, but anyway) to PostgreSQL, I was blown away that people pay so much for SQL Server. I mean, Postgres to me is a lot cleaner, most of it works exactly how you'd expect (unlike certain SQL Server syntaxes), and it has more features built into it (programming stored procs in Ruby, anyone?).
So basically, compare the proprietary with the open source software and decide which one to take based on total pricing (including the OS) and feature set.
Pro of zeroing in on any DB: it's got good non-portable features that help you get things done
Con of zeroing in on any DB: sometimes a different DB is appropriate (for example running your tests with in-memory SQLite instances; see the sketch after this list), but that option is now closed
Con of a proprietary commercial DB: if you need many instances, licensing costs can kill you
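To make the SQLite point concrete, here is a minimal sketch (my own illustration, using the Node.js 'sqlite3' package; the table and data are made up): an in-memory instance gives each test run a throwaway database.

```javascript
const sqlite3 = require('sqlite3');

// ':memory:' creates a private, disposable database; nothing touches disk
const db = new sqlite3.Database(':memory:');

db.serialize(() => {
  db.run('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
  db.run('INSERT INTO users (name) VALUES (?)', ['alice']);
  db.get('SELECT COUNT(*) AS n FROM users', (err, row) => {
    console.log(row.n); // 1
    db.close(); // the whole database evaporates here
  });
});
```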
Consider the following questions:
How easy (or difficult) is it to make changes in maintenance? Applications are likely to spend far more time in maintenance than they do in development, so if changes are hard, long-term pain is guaranteed.
What is the quality of support? A system that is well-documented, proprietary or otherwise, is going to be easier to work with.
How large (or small) is the user community? Systems with larger user communities mean more people to ask for assistance if and when things go wrong.
How robust are the import/export capabilities of this proprietary database system?
I found the last point particularly useful at my first full-time job. Our client was using CA-Ingres, and no one at the company knew it well enough to write queries to validate the data. So I came up with the idea of exporting the data from Ingres and importing it into MS SQL Server (which I knew from a brief stint at Sybase Professional Services) so we could write our validation queries there. If it had been really hard to export data from Ingres, my idea wouldn't have been an option at all.
From 4D's web page, I gather that we are looking at a complete development + deployment environment, not a standalone database as such. So the alternatives you could be looking at include stuff like Django, Ruby on Rails, Hibernate and others. The real question, of course, is whether the proprietary system can save you enough money over the product lifetime to justify the costs of the product. And that would depend on the type of human resources you have available.
4D is a good option for vertical applications. I have worked for a company which used 4D to build a medical records and billing application for general practitioners and specialists. The rapid design and deployment features of 4D enabled the application to quickly move with market desires and legislated changes to medical record storage. The environment itself was not cutting edge, but it was integrated, cross-platform and very productive.
If you are entering a market with high vendor lock-in and a high barrier to entry, then I think proprietary integrated development environments are a good option.
At various points in my career, I've used and gotten very good at FileMaker Pro, FoxPro, 4D, and a few other commercial products. Now I mainly use PHP/MySQL, and haven't used the latest versions of any of the products.
I've always liked FileMaker because most people who can use a computer can pick up FileMaker and design their own systems. They don't have to know programming or database design. But, you can "program" FileMaker, put a web front end on it, or do other more sophisticated setups if you need to. Many times I was "handed" a system created in FileMaker by a non-technical person that needed to be made into a full fledged data management system. The good part was that all the "specs" and data flow were already designed into a system. The prototype was already created!
4D and FoxPro I always found required a certain amount of extra programming and/or database knowledge to really do anything with. 4D & FileMaker are really complete self-contained systems, not just database systems. Although they all have the ability to hook into other backend databases systems (i.e. MySQL, Oracle), that is not their strong point.
On the downside, doing more complex, dynamic systems can be difficult in 4D and Filemaker due to everything being tightly coupled. Because of their cost, you really would want to create multiple systems with them. Which means you need to really "buy into them" to get your money's worth.
The key concept is always adherence to standards: if you plan to use 4D's custom and/or specially designed functions (but the discussion could be far more general, and cover any other free or commercial tool in the wild), well, just use them and take your advantage.
Not surprisingly, that's why huge DB systems like Oracle or IBM's DB2 were widely accepted in the past for specific business areas, such as commercial transactions.
The other main reason to adopt a very closed solution is legacy support. One of the products you cited (Pervasive SQL) acted as a no-effort port for Btrieve-based applications in the late 90s, and it gained popularity thanks to the huge Btrieve community all over the planet.
Finally, last but not least, you should evaluate the TCO (Total Cost of Ownership) not only in terms of license price (single seat, network environment, site licenses and so on), but also in terms of tech support, updates and availability for your platform. Many business units I know have been obliged to change their base OS because of DB-related problems.
Tip: add a bonus for custom solutions that are proven or supported for usage in virtualized environments, if you aren't seeking extreme performance. It will save your DB manager more than a headache.
In all other cases, rely on open source/free software DBs: MySQL and Postgres for big projects, SQLite for a single app's persistence layer. Fairly standard and very good (community) support. Good value for no price.
I don't have any experiences with the proprietary database products you listed: 4D, Pervasive, FilemakerPro.
I'd be interested in knowing what those products offer that make them more attractive to you than the open source alternatives, you listed: MySQL and PostgreSQL.
I'd be interested in what makes those more attractive to you than the much more popular proprietary alternatives: Oracle, SQL Server, DB2, etc.
Without you providing more specifics, it's hard to advise you.
I personally feel safer using a widely used open source solution than a narrowly used closed source solution. The more widely used, the more battle-tested it's likely to be. The more open, the more control over my own destiny I have in case I do encounter some bug.
I have reported bugs to open source projects and gotten a quick fix. I have reported bugs to companies that make for-profit proprietary software and have gotten nothing.

Should I code for browser or PC? (fleet management)

I have to architect a commercial vehicle fleet tracking system.
Each vehicle (a few 100, max a few 1,000) will have a GPS and satellite transmitter and will periodically report its position. Positions will be stored in a database and used to create a Google Map.
There will of course be other functionalities: security, log in, etc., and probably lots of interaction with other corporate databases (drivers' start/stop times for salary purposes, etc.).
Question: pure Google Maps is probably best implemented as a browser-based app (PHP & MySQL?), but with the additional functionality of a commercial vehicle fleet tracking system, would it be better doing something PC-based (Windows/Linux)?
Any other advice? Thanks
I think with the capabilities of modern browsers, along with various mature client-side frameworks, we are witnessing an always thinning distinction between web and desktop interfaces.
You may want to take into consideration that a web application automatically solves some important problems for you:
Distribution: No need to distribute your application. Simply provide a URL.
Updates: Upgrading and fixing problems in your software will be easier and quicker if you distribute it through a web interface.
Security: Deriving from the above, you are able to fix security vulnerabilities more promptly.
Compatibility: Your application will be able to work on any operating system that can launch a web browser.
Last but not least, remember that the Google Maps API is not free for this type of application. Article 10.9.C of the Google Maps API Terms and Conditions explicitly restricts using the standard Google Maps API for fleet management and asset tracking. You would need the Google Maps API Premier to legally use Google Maps for your application.
According to one unofficial source (dated April 2008), this would cost USD 10,000 per year, which entitles you to track 100 vehicles. If you exceed the 100 vehicles, you would need to add USD 24 per additional vehicle per year.
Implement a solution for the domain problems first. That means data storage, data transmission between vehicles and your system, and methods of data analysis, aggregation and visualisation.
These will likely sit as a headless system on a server and provide access to it remotely, in both directions: to input data and to query data.
Now, PC or Web is more related to the presentation on the client side. You can build both if you like: a web client as well as a desktop application can serve as a client to the remote data and operational server.
Don't forget that you can always host a web control in a thick client app. This is actually trivial with .Net on the Windows platform with the IE control. You can also access the browser's DOM this way and do some neat things. So just because there's a strong web component to what you're doing you're not necessarily "stuck" writing a pure web app.
One big question is what kind of hardware you'll be able to put in the vehicles. Will they be laptops or small PCs with full fledged OSs or something more mobile like CE or a pared-down Linux distro?
Google Maps is JavaScript-based, so you can do most things with it, e.g. browser-based, widgets, etc. However, due to the licensing, Google won't allow you to use it in anything other than an Internet environment unless you use their Enterprise License.
In terms of integrating it into other systems, it's really difficult to say what's best without knowing what other software you are using, what protocols they use, whether web services are available, etc. I agree with Daniel, though, in that any distributed system not implemented in a browser had better have some good reasons not to be, simply because the benefits are substantial. You'll need to weigh them up, though, with a full breakdown of all the different systems you will need to interact with, and work out what fits best.
The great thing is that with it being JavaScript based you have a lot of flexibility in what you can do with it.
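To give a feel for how little glue the JavaScript side needs, here is a sketch (my own illustration; the fetch URL and the JSON shape are made-up assumptions, and an API key/license is still required):

```javascript
function initMap() {
  const map = new google.maps.Map(document.getElementById('map'), {
    center: { lat: 51.5, lng: -0.1 },
    zoom: 8
  });
  // assume the server exposes the latest reported positions as JSON
  fetch('/api/vehicle-positions')
    .then((res) => res.json())
    .then((vehicles) => {
      vehicles.forEach((v) => {
        new google.maps.Marker({
          position: { lat: v.lat, lng: v.lng },
          map: map,
          title: v.vehicleId
        });
      });
    });
}
```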
This is more an extension to @Daniel Vassallo's answer. Although a web based application would solve most problems, there may be the small potential issue of bandwidth usage and reception for internet access. This may or may not be an issue for the fleet management, depending on how that is tackled on the hardware side of things.
An offline solution may assist with this issue, but then a clever architect could find a way to create an initial web-based solution which can be accessed by an offline application that picks up the slack and/or provides predictive reasoning until a connection is re-established.