Alternative to cgi-bin - language-agnostic

This question asks about the disadvantages of 'cgi-bin based' services. As far as I can ascertain, apart from perhaps the naming convention, nothing much has changed over the years as far as web-based client/server interaction is concerned. There is of course now the option to use AJAX clients, but ultimately they are still stateless, and code on the server, whatever language it is written in, still waits for input to be sent via 'GET' or 'POST' methods.
Having been out of the loop as far as web programming is concerned for quite a while, am I missing something obvious?
To clarify my question: the question I referred to suggests that 'cgi-bin' based systems are no longer in use; what is the new alternative?
@sarnold: Thank you for your answer. Just so I am 100% certain about this: even if a system is developed using the 'latest and greatest' server platform (I guess this would be a .NET-based system or a Linux equivalent), it is still, ultimately, just a program, or programs, running (if using FastCGI) or waiting to be started on a server, so there really hasn't been any change over the years. If that is the case, what alternative is Brian referring to in his question?

The largest changes have been in tools like mod_php, which executes the code directly in the address space of the web server, and FastCGI, which implements something very nearly identical to the CGI protocol but with a handful of long-lived processes rather than a fork(2) + execve(2) of a new interpreter for every single request.
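For concreteness, here is a minimal sketch of the classic CGI contract (PHP is used for the sketches on this page; the script itself is illustrative, not from the original answer):

    #!/usr/bin/env php
    <?php
    // Classic CGI: the web server fork(2)s + execve(2)s a fresh interpreter
    // for every request. The request arrives in environment variables
    // (QUERY_STRING, REQUEST_METHOD, ...), and the response -- headers, a
    // blank line, then the body -- is written to stdout before the process exits.
    echo "Content-Type: text/plain\r\n\r\n";
    echo "Query string was: " . getenv('QUERY_STRING') . "\n";

A mod_php or FastCGI deployment runs essentially the same application code; the difference is that the interpreter stays alive between requests instead of being started anew each time.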
Of course, both approaches have problems: executing the interpreter directly in the address space of the web server is potentially horrible for reliability and security: the server (typically) runs with the same privileges all the time, so separating users is (typically) impossible. Further, flaws in the interpreter can be quite common, so it isn't a good solution for shared hosting environments, because any user could run arbitrary code with the privileges required to access all the data of all the other users on the system.
The FastCGI approach almost keeps the same speed; it does sacrifice some speed for copying data around between processes, but this isn't a big deal for anyone except huge-volume sites. In exchange, you can run multiple FastCGI systems as different user accounts attached to different locations of the single 'web server' (e.g., http://example.com/public/ runs under account www-public and http://example.com/private/ runs under account www-private), and the FastCGI processes don't need to run with the same privileges as the web server.
Of course, there are also servlet systems, where the server calls into compiled callback code (frequently, compiled to bytecode) that is linked into the server process. Much less of a "scripting" feel.

Related

Is there a way to keep track of the calls being done in mysql server by a web app?

I'm finishing a system at work that makes calls to a MySQL server. Those calls' arguments reveal information that I need to keep private, like vote(idUser, idCandidate). There's no information in the DB that relates those two, of course, nor in "the visible part" of the back end. Even though I think this can't be done, I wanted to make sure that it is impossible to trace this sort of call, with a log or something (calls that were made, or calls being made at the moment), while the system is in production and being used, just as it is impossible in most languages unless you specifically "debug" in a certain way. I hope the question is clear enough. Thanks.
How do I log thee? Let me count the ways.
1. The MySQL query log. I can enable this per-session and send everything to a log file (a sketch of this one follows the list).
2. I can set up a slave server and have insertions sent to me by the master. This is a significant intervention and would leave a wide trace.
3. On the server, unbeknownst to both the web app and the MySQL log, I can intercept the communications between the two. I need administrative access to the machine, of course.
4. On the server, again with administrative access, I can both log the query calls and inject logging instrumentation into the SQL interface (the legitimate one is the MySQL Audit Plugin, but there are several alternatives, developed for various purposes by developers over the years).
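A minimal sketch of way 1, assuming administrative (SUPER) privileges on the MySQL server; the credentials and log path are illustrative, not from the original answer:

    <?php
    // Switch on the MySQL general query log at runtime (requires SUPER).
    // Every statement the web app sends -- literal arguments included, such
    // as the idUser/idCandidate pair -- then lands verbatim in the log file.
    $admin = new mysqli('localhost', 'root', 'secret');   // hypothetical credentials
    $admin->query("SET GLOBAL general_log_file = '/var/log/mysql/general.log'");
    $admin->query("SET GLOBAL general_log = 'ON'");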
What can you do? You can have the applications use a secure protocol, just for starters.
Then, you need to secure your machine so that administrator tricks do not work, so that even if the logs are activated nobody can read them, and so that you are advised of any new or modified file and can delete it promptly.

What's the most efficient architecture for this system? (push or pull)

All software is Windows-based, coded in Delphi.
Some guys submit some data, which I send by TCP to a database server running MySQL.
Some other guys add a pass/fail to their data and update the database.
And a third group are just looking at reports.
Now, the first group can see a history of what they submitted. When the second group adds pass/fail, I would like to update their history. My options seem to be:
1. Blindly refresh the history regularly (in Delphi, I display it on a DB grid, so I would close then reopen the query), but this seems inefficient.
2. Ask the database server regularly if anything changed in the last X minutes.
3. Never poll the database server, instead letting it inform the user's app when something changes.
Option 1 seems inefficient. Option 2 seems better. Option 3 reduces TCP traffic, but that traffic isn't much anyway: just a few bytes for each poll in option 2. However, it has the disadvantage that both sides are now both TCP client and server.
Similarly, if a member of the third group is viewing a report and a member of either of the first two groups updates data, I wish to reflect this in the report. What is the best way to do this?
I guess there are two things to consider: most importantly, reducing network traffic and, less importantly, keeping my code simple.
I am sure this is a very common pattern, but I am new to this kind of thing, so would welcome advice. Thanks in advance.
[Update] Close voters: I have googled and can't find an answer. I am hoping for the benefit of your experience. Can you help me reword this to be acceptable, or maybe give a URL which will help me? Thanks
Short answer: use notifications (option 3).
Long answer: this is a use case for a middle layer which propagates changes using message-oriented middleware. This decouples the messaging logic from database metadata (triggers / stored procedures), can use both peer-to-peer and publish/subscribe communication patterns, and more.
I have blogged a two-part article about this:
Firebird Database Events and Message-oriented Middleware (part 1)
Firebird Database Events and Message-oriented Middleware (part 2)
The article is about Firebird but the suggested solutions can be applied to any application / database.
In your scenarios, clients can also use the middleware message broker to send messages to the system even if the database or the Delphi part is down. The messages will be queued in the broker until the other parts of the system are back online. This is an advantage if there are many clients and update installations or maintenance windows are required.
"Similarly, if a member of the third group is viewing a report and a member of either of the first two groups updates data, I wish to reflect this in the report. What is the best way to do this?"
If this is a real requirement (reports are usually an immutable 'snapshot' of data, but maybe you mean a view which needs to be updated while being watched, similar to a stock ticker), it is easy to implement: a client just needs to 'subscribe' to an information channel which announces relevant data changes. This can be done very flexibly and resource-efficiently with existing message broker features like message selectors and destination wildcards. (Note that I am the author of some Delphi and Free Pascal client libraries for open source message brokers.)
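As a hedged sketch of that subscribe side, using the PECL stomp extension against any STOMP-capable broker (ActiveMQ, RabbitMQ with its STOMP plugin, and so on); the broker address, credentials, topic name, and the updateReportView() callback are assumptions, not taken from the article:

    <?php
    // Subscribe to a topic announcing relevant data changes; whenever a
    // message arrives, refresh the on-screen report.
    $client = new Stomp('tcp://broker.example.com:61613', 'user', 'pass');
    $client->subscribe('/topic/reports.data-changed');
    while ($frame = $client->readFrame()) {   // blocks until a message (or read timeout)
        updateReportView($frame->body);       // hypothetical UI-refresh callback
        $client->ack($frame);
    }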
Related questions:
Client-Server database application: how to notify clients that data was changed?
How to communicate within this system?
Each of your proposed solutions is viable in certain situations.
I've been writing software for a long time, and the comments below relate to personal experience dating back to 1981. I have no doubt others will have alternative opinions which also answer your questions.
Please allow me to weigh the positives and negatives of each approach, and the parameters around each comment.
"blindly refresh the history regularly (in Delphi, I display on a DB grid so I would close then open the query), but this seems inefficient."
Yes, this is inefficient.
It is often the quickest and simplest thing to do.
It seems like the best short-term temporary solution, giving maximum value for minimal effort.
It is good for "exploratory coding", helping derive a better software design.
It should be a good basis from which to refine and explore alternatives.
It's very important for programmers to strive to document, and/or share with the team members who could be affected by the change, when a tech-debt-inducing fix has been checked in.
If not intended as production-quality code, this is acceptable.
If usability is poor, then consider more efficient solutions, like those described below.
"ask the database server regularly if anything changed in the last X minutes."
You are talking about a "pull" or "polling" model. Consider the following API options for this model:
What's changed since the last time I called you? (The client provides the time, to avoid the service having to store and retrieve session state.)
If nothing has changed, the server can provide a time when the client should poll again. A system under excessive load is then able to back off its clients: a server application which is aware of such conditions can control the polling rate of compliant clients by instructing them to wait longer before retrying. (A sketch of such an API follows.)
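A sketch of such a pull API, written in PHP like the other examples on this page; the table, columns, credentials, and the serverUnderLoad() check are all assumptions:

    <?php
    // The client passes the timestamp of its last successful poll; the server
    // answers with everything changed since then, plus a hint for the next poll.
    $since = $_GET['since'] ?? '1970-01-01 00:00:00';

    $db   = new mysqli('localhost', 'app', 'secret', 'workflow');   // hypothetical credentials
    $stmt = $db->prepare('SELECT id, status, updated_at FROM submissions WHERE updated_at > ?');
    $stmt->bind_param('s', $since);
    $stmt->execute();
    $changes = $stmt->get_result()->fetch_all(MYSQLI_ASSOC);

    // Back-off: a server under excessive load tells compliant clients to wait longer.
    $nextPollSeconds = serverUnderLoad() ? 300 : 60;   // serverUnderLoad() is hypothetical

    header('Content-Type: application/json');
    echo json_encode(['changes' => $changes, 'next_poll' => $nextPollSeconds]);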
After considering that, ask "Is the API as simple as it can possibly be?"
"never poll the database server, instead letting it inform the user's app when something changes."
This is the "push" model you're talking about- publishing changes, ready for subscribers to act upon.
Consider what impact this has on clients waiting for a push - timeout scenarios, number of clients, etc, System resource consumption, etc.
Consider that the "pusher" has to become aware of all consuming applications. If using industry standard messaging queueing systems (RabbitMQ, MS MQ, MQ Series, etc, all naturally supporting Publish/Subscribe JMS topics or equivalent then this problem is abstracted away, but also added some complexity to your application)
consider the scenarios where clients suddenly become unavailable, hypothesize failure modes and test the robustness of you system so you have confidence that it is able to recover properly from failure and consistently remain stable.
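For the broker-backed variant, the publish side can stay tiny. A hedged sketch with the same PECL stomp extension as in the earlier answer (broker address, topic, and payload are assumptions):

    <?php
    // After a pass/fail update is committed, announce it on the topic; the
    // broker, not the pusher, keeps track of who is subscribed.
    $client = new Stomp('tcp://broker.example.com:61613', 'user', 'pass');
    $client->send('/topic/reports.data-changed',
                  json_encode(['submission_id' => 42, 'result' => 'pass']));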
So, what do you think the right approach is now?

WebSockets on PHP shared hosting

I've been doing some research on the best way to show a "users online" counter which is updated to the second, while trying to avoid continuous AJAX polling.
Obviously WebSockets seem to be the best option. Since this is an intranet, I will make it a requirement to use Chrome or Safari, so there shouldn't be compatibility issues.
I've been reading some articles about WebSockets since I'm new to it and I think I pretty much understand how it works.
What I'm not so sure about is how to implement it with PHP. Node.js seems the natural choice for this because of its "always running" nature, but that's not an option.
What I'm most confused about is the fact that PHP runs and, when it's done, it ends. If PHP ended, wouldn't the socket connection be lost? Or if the PHP script re-runs, will it look up the user by IP? (I don't see that as likely.)
Then I found this library
http://code.google.com/p/phpwebsocket/
but it seems to be a little old (it mentions that only Chrome nightly is compatible with WebSockets).
At one point it says, "From the command line, run the server.php program to listen for socket connections," which means I need SSH, something many shared hosting plans don't offer.
And my other doubt is this other line in the source of that library:
set_time_limit(0);
Does that mean that the PHP file will run continuously? Is that allowed in shared hosting? From what I know, most hosts kill PHP after a timeout of 1 or 2 minutes.
I have a MySQL table with online users, and I want to use PHP to broadcast the number of logged-in users via WebSocket to those online users. Can someone please help me, or point me somewhere with better information on how this could be achieved?
Thanks
WebSockets would require lots of things even on dedicated hosting, let alone shared hosting.
For your requirement, server-sent events (SSE) are the correct choice, since only the server will be pushing data to the client.
SSE can simply call a server script, very much like AJAX, but the client side will receive and be able to process the data part by part as it comes in. DOM events are generated whenever some data arrives.
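A hedged sketch of such an SSE endpoint for the "users online" counter; the table name and credentials are assumptions based on the question:

    <?php
    // The script stays alive and pushes a fresh count every few seconds --
    // exactly the long-running behaviour that the set_time_limit(0) line
    // from the question enables.
    set_time_limit(0);
    header('Content-Type: text/event-stream');
    header('Cache-Control: no-cache');

    $db = new mysqli('localhost', 'app', 'secret', 'intranet');   // hypothetical credentials
    while (true) {
        $row = $db->query('SELECT COUNT(*) AS c FROM online_users')->fetch_assoc();
        echo "data: {$row['c']}\n\n";   // one SSE frame: a "data:" line plus a blank line
        @ob_flush();
        flush();
        sleep(3);                       // broadcast interval
    }

On the client side, new EventSource('online.php') is enough to receive those events in Chrome or Safari.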
But IE does not support SSE, even in version 10. So for IE you have to use a fallback technique like the "forever iframe".
As far as hosting is concerned, ordinary shared hosts (at least those which are not very cheap) will allow PHP scripts to run for a long time, as long as they are not seen as a problem.

What is the best way to use Web database using Delphi?

Hi all.
I'm using DBExpress with C++ Builder (Delphi) 2007 and MySQL, Firebird, ...
I'd like to make a Win32 application which uses a database located on my web server.
I tried using DBExpress (TSQLConnection for MySQL), but it's very slow...
and I tried a local database with upload/download using Indy,
but that was not good and a little complicated.
So what is the best way to use a web-based database from a Win32 application?
Do you have any experience? Any document or comment would be gratefully received.
Thanks a lot.
Database connections via an Internet link (using a VPN or not) are slow; you are perfectly right. The main reason, IMHO, is the "ping" (round-trip) delay of every request, which is very low on a local network and much higher over the Internet. So a direct connection is not a good idea.
In the latest versions of Delphi, you have the DataSnap components, which are the new "standard" (or Embarcadero-recommended) way of doing remote access (including web access). Even if it was found a bit limited at first, the latest versions are perfectly usable, and it is becoming a key product for cross-platform application building with Delphi. But it is not available for Delphi 2007.
One much more mature product (and available for Delphi 2007) is Data Abstract:
"Data Abstract is a framework for building database-driven applications using the multi-tier data access model, for a variety of platforms."
Of course, this is not free, but this is a proven and efficient solution.
You may also take a look at our Client-Server ORM, which can connect to any DB and is able to implement a RESTful SOA architecture with Delphi 2007, even without using the ORM part; that is, you can keep your existing DBExpress-based source code and easily expose some web interfaces to the data. It is open source, and uses JSON as the communication format over a secured authentication mechanism. There is a lot of documentation included (more than 700 pages of PDF), which also tries to serve as an introduction to the SOA world.
Take a look at Datasnap: info
You need a data access library which offers these features:
1. Thread safety. In general, you will need to use a dedicated connection for each thread.
2. Connection pooling. To make connection creation (which is needed for (1)) fast, there must be a connection pool (see the small illustration after this list).
3. Fast SQL command execution, result set opening, and fetching capabilities.
4. Tracing. With any library you may run into performance issues, and you need a tool to see what is going wrong. For that you will need to see and analyze the client/server communication.
5. Result set caching, and the ability to read a cached set simultaneously from different threads. You may have a few read-only tables which you fetch once and cache in your application, but you will need a mechanism to read this data from multiple threads; a kind of in-memory table cloning.
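As a small illustration of point 2 (in PHP rather than Delphi, like the other sketches on this page): mysqli reuses an already-open connection when the host name is prefixed with "p:", which is the same pooling idea a Delphi data access library would implement internally.

    <?php
    // With the 'p:' prefix, PHP keeps the underlying MySQL connection alive
    // and hands it back instead of opening a new one each time.
    $db = new mysqli('p:localhost', 'app', 'secret', 'workflow');   // hypothetical credentials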
My answer is biased, but you may consider AnyDAC. It has all these and many other features.
PS: dbExpress should work too. Try first to find the reason for your performance issue, rather than reaching for a different library, because the same may happen with another library...
DB applications over a slow link need a different approach than those using a fast link. You have to be careful about how much data you move around, and about how many roundtrips your application perform.
Usually an approach where the needed subset is cached on the client, modified, and then applied to the database is preferable (provided, of course, that changes do not need to be seen immediately and the chances of conflicts are low).
No middleware will help you much if the application is not designed with handling a slow link in mind.

What data entry system should I choose for multiple users at multiple sites?

I've just started working on a project that will involve multiple people entering data from multiple geographic locations. I've been asked to prepare forms in Access 2003 to facilitate this data entry. Right now, copies of the DB (with my tables and forms) will be distributed to each of the sites, returned to me, and then I get to hammer them all together. I can do that, but I'm hoping that there is a better way - if not for this project, then for future projects.
We don't have any funding for real programming support, so it's up to me. I am comfortable with HTML, CSS, and SQL, have played around with Django a fair bit, and am a decently fast learner. I don't have much time to design forms, but they don't have to actually function for a few months.
I think there are some substantial benefits to web-based forms (primary keys are set centrally, I can monitor data entry, form changes are immediately and universally deployed, I don't have to do tech support for different versions of Access). But I'd love to hear from voices of experience about the actual benefits and hazards of this stuff.
This is very lightweight data entry - three forms attached to three tables, linked by person ID, certainly under 5000 total records. While this is hardly bank account-type information, I do take the security of these data seriously, so that's an additional consideration. Any specific technology recommendations?
Options that involve Access:
1. Use Jet replication. If the machines where the data editing is being done can be connected via wired LAN to the central network, synchronization would be very easy to implement (via simple Direct Synchronization, only a couple of lines of code). If not (as seems to be the case), it's an order of magnitude more complex and requires significant setup of the remote systems. For an ongoing project it can be a very good solution; for a one-off, not so much. See the Jet Replication Wiki for lots of information on Jet replication. One advantage of this solution is that it works completely offline (i.e., no Internet connection needed).
2. Use Access for the front end and SQL Server (or some other server database) for the back end. Provide a mechanism for remote users to connect to the centrally hosted database server, either over VPN (preferred) or by exposing a non-standard port to the open Internet (not recommended). For lightweight editing, this shouldn't require overmuch optimization of the Access app to get a usable application, but it isn't going to be as fast as a local connection, and how slow it is will depend on the users' Internet connections. This solution does require an Internet connection.
3. Host the Access app on a Windows Terminal Server. If the infrastructure is available and there's a budget for CALs (or the CALs are already in place), this is a very, very easy way to share an Access app. Like option 2, this requires an Internet connection, but it puts all the administration in one central location and requires no development beyond what's already been done to create the existing Access app.
For non-Access solutions, it's a matter of building a web front end. For the size of app you've outlined, that sounds pretty simple for the person who already knows how to do it, not so much for the person who doesn't!
Even though I'm an Access developer, based on what you've outlined I'd probably recommend a lightweight web-based front end, as simple as possible with no bells and whistles (a minimal sketch follows). I use PHP, but obviously any web scripting environment would be appropriate.
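A hedged sketch of what "as simple as possible" can mean here: one self-posting PHP page writing into a centrally hosted table. Table, column, and credential names are illustrative assumptions.

    <?php
    // Single-file data entry form: the POST handler and the form live together.
    $db = new mysqli('localhost', 'entry', 'secret', 'project');   // central server, hypothetical credentials

    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        // A prepared statement guards against SQL injection; the primary key
        // is assigned centrally by the database, one of the benefits noted above.
        $stmt = $db->prepare('INSERT INTO survey_data (person_id, value) VALUES (?, ?)');
        $stmt->bind_param('is', $_POST['person_id'], $_POST['value']);
        $stmt->execute();
    }
    ?>
    <form method="post">
      Person ID: <input name="person_id">
      Value: <input name="value">
      <button type="submit">Save</button>
    </form>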
I agree with David: a web-based solution sounds the most suitable.
I use CodeCharge Studio for that: it has a very Access-like interface, lots of wizards to create online forms etc. CCS offers a number of different programming languages; I use PHP, as part of a LAMP stack.