I have found only a few sparse resources on the matter. I am looking to build a Perl server as a "microservice": more specifically, a web application built on LAMP (Linux/Apache/MariaDB/PHP) with Perl backend services arranged as an SOA.
What is the best way to go about building an efficient Perl server for our backend? The web tier opens a PHP stream TCP socket to a particular Perl server for a particular high-level service. That server must service many web servers' requests asynchronously. The service then either connects directly to MySQL to fetch an answer (the simple case) or does some computational work to generate one.
My naive implementation is single-tasking:
use strict;
use warnings;
use IO::Socket::INET;
use Data::Dumper;   # handy for debugging
use JSON::XS qw(encode_json decode_json);

$| = 1;   # autoflush STDOUT

# Listening socket; connections are handled one at a time
my $socket = IO::Socket::INET->new(
    LocalHost => '0.0.0.0',
    LocalPort => '7000',
    Proto     => 'tcp',
    Listen    => 5,
    Reuse     => 1,
) or die "Cannot create listening socket: $!";

while (1) {
    my $client_socket  = $socket->accept();
    my $client_address = $client_socket->peerhost();
    my $client_port    = $client_socket->peerport();

    # Read one JSON request (up to 1024 bytes) and decode it
    my $client_json = "";
    $client_socket->recv($client_json, 1024);
    my $client_data = decode_json($client_json);

    # Build and send the JSON reply, then close the write side
    my %response   = %{ process_request($client_data) };
    my $reply_json = encode_json(\%response);
    $client_socket->send($reply_json);
    shutdown($client_socket, 1);
}
So, there are obviously problems with this, as it is a copy-paste example from the documentation. It handles a single socket/request at a time, serially.
My question is: "What are best practices in Perl to build a server that can efficiently multiplex and process many incoming requests?"
My own thought on the matter is to build a 'select' or 'epoll' main process that forks off a small pool of worker threads fed via a Thread::Queue.
Any suggestions?
I would consider using either a complete framework like Mojolicious or Dancer, or a package like Net::Server. Quoting from its perldoc:
"Net::Server" is an extensible, generic Perl server engine.
"Net::Server" attempts to be a generic server as in "Net::Daemon" and "NetServer::Generic". It includes with it the ability to run as an
inetd process ("Net::Server::INET"), a single connection server ("Net::Server" or "Net::Server::Single"), a forking server
("Net::Server::Fork"), a preforking server which maintains a constant number of preforked children ("Net::Server::PreForkSimple"), or as a
managed preforking server which maintains the number of children based on server load ("Net::Server::PreFork"). In all but the inetd type,
the server provides the ability to connect to one or to multiple server ports.
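For instance, a rough sketch of a preforking service built as a Net::Server::PreFork subclass might look like this (the line-delimited JSON protocol and the handle() stub are assumptions, just to show the shape, not tested code):

package MyService;

use strict;
use warnings;
use parent 'Net::Server::PreFork';            # preforked worker pool
use JSON::XS qw(encode_json decode_json);

# Net::Server calls this once per client connection;
# STDIN/STDOUT are tied to the client socket.
sub process_request {
    my $self = shift;
    my $line = <STDIN>;                       # one JSON request per line (assumed protocol)
    my $request  = decode_json($line);
    my $response = handle($request);          # your application logic goes here
    print encode_json($response), "\n";
}

sub handle { return { status => 'ok' } }      # placeholder

MyService->run(port => 7000, min_servers => 5, max_servers => 20);

The worker pool sizing is just illustrative; Net::Server::PreFork grows and shrinks the pool between those limits based on load.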
HTH
TL;DR: Vertical or Horizontal scaling for this system design?
I have NGINX running as a load balancer for my application. It distributes traffic across 4 EC2 instances (t2.micros, because I'm cheap), and those all currently hit one server for my MySQL database (also a t2.micro), totalling 6 separate EC2 instances for the whole system.
I'm thinking about horizontally scaling my database via a source/replica distribution, and my thought is that I should route all read queries/GET requests (the highest traffic volume I'll get) to the replicas and all write queries/POST requests to the source DB.
I know that I'll have to programmatically choose which DB my servers point to based on request method, but I'm unsure of how best to approach that or if I'm better off vertically scaling my DB at that point and investing in a larger EC2 instance.
Currently I'm connecting to the source DB using an Express server, and it's handling everything. I haven't implemented the source/replica configuration just yet, because I want to get my server side planned out first.
Here's the current static connection setup:
const mysql = require('mysql2');
const Promise = require('bluebird');

// Single static connection to the source (primary) database
const connection = mysql.createConnection({
  host: '****',
  port: 3306,
  user: '****',
  password: '*****',
  database: 'qandapi',
});

// Promisify so the rest of the app can use the *Async methods
const db = Promise.promisifyAll(connection, { multiArgs: true });

db.connectAsync().then(() =>
  console.log(`Connected to QandApi as ID ${db.threadId}`)
);

module.exports = db;
What I want to do is one of the following:
set up an Express middleware function that looks at the request method and connects to the appropriate database, by creating two configuration templates to pass into the createConnection function (I'm unsure how I would make sure it doesn't try to reconnect if a connection already exists, though)
if possible, just open two connections simultaneously and route each request method to the right database (I'm hopeful this option will work so that I can keep things simpler)
Is this feasible? Am I going to see worse performance doing this than if I just vertically scaled my EC2 to something with more vCPUs?
Please let me know if any additional info is needed.
Simultaneous MySQL Database Connection
I would be hesitant to use any client input to connect to a server, but I understand how this could be something you need to do in some scenarios. The simplest and quickest way around this issue would be to create a second database connection file. To make this dynamic, you can require the module based on conditions in your code, so it will only be loaded and promised at certain points, after certain conditions are met. This process can be risky and requires requiring modules in the middle of your code, so it isn't ideal, but it can get the job done. Example:
const dbConnection = require("../utils/dbConnection");

// inside some conditional block:
const controlledDBConnection = require("../utils/controlledDBConnection");
const [rows] = await controlledDBConnection.execute("SELECT * FROM `foo`;");
Using more files could potentially have an effect on space and could slow the code down slightly while waiting for a new promise, but the overall effect will be minimal. controlledDBConnection.js would just be close to a duplicate of dbConnection.js, with slightly different parameters depending on your needs.
Another path you can take if you want to avoid using multiple files is to export a module with a dynamically set variable from your controller file, and then import it into a standard connection file. This would allow you to change up your connection without rewriting a duplicate, but you will need diligent error checks and a default.
Info on modules in JS: https://javascript.info/import-export
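For the read/write split described in the question, a minimal sketch with two mysql2 pools and a small Express middleware might look like the following (the pool names, environment variables, and the GET-means-read rule are assumptions for illustration):

const mysql = require('mysql2/promise');

// Two pools created once at startup: writes go to the source, reads go to a replica
const sourcePool = mysql.createPool({
  host: process.env.DB_SOURCE_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: 'qandapi',
});
const replicaPool = mysql.createPool({
  host: process.env.DB_REPLICA_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: 'qandapi',
});

// Express middleware: attach the right pool based on the request method
function pickDatabase(req, res, next) {
  req.db = req.method === 'GET' ? replicaPool : sourcePool;
  next();
}

module.exports = { sourcePool, replicaPool, pickDatabase };

A route handler can then call const [rows] = await req.db.query('SELECT ...') without caring which server it hit, and because the pools are created once at startup there is no reconnect-per-request problem to worry about.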
Some other points
Use environment variables for your database information (host, etc.), since this lets you change your database configuration in one place, and it also lets you keep your .env file in .gitignore if you are using GitHub (see the sketch after these links)
Here is another great Stack Overflow question/answer that might help with setting up a dynamic connection file: How to create dynamically database connection in Node.js?
How to set up .env files : https://nodejs.dev/learn/how-to-read-environment-variables-from-nodejs
How to set up .gitignore : https://stackabuse.com/git-ignore-files-with-gitignore/
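As a small illustration of the environment-variable point above (dotenv is one common choice, not the only one), the static connection block could read its settings like this:

require('dotenv').config();   // loads DB_* variables from a local .env file
const mysql = require('mysql2');

const connection = mysql.createConnection({
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});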
If I want to live-broadcast some work in a text editor embedded in a virtual terminal on my personal computer, I can stream a video of the window containing it on the web.
But since the information consists mainly of a bunch of characters, possibly with some colors and formatting, I think video is a waste of resources, both bandwidth-wise and technology-wise.
What would you recommend for this, and is there some server implementing the solution somewhere?
The requirements are:
the stream must be almost real time (at least 1 update per second and no more than 1 second of delay)
the audience can access the stream with only a web browser (no additional software on their side), read-only (no interaction with the stream or with my terminal)
features from, say, xterm or urxvt are supported
all necessary software (both the streaming client side and any server side) is open source
Comments on technical advantages of such tool compared to video streaming are welcome.
I finally took the time to implement a complete solution, using socket.io within a simple Node.js server for broadcasting.
On the client side, serve a simple HTML page with an Xterm.js terminal
<script src='/socket.io/socket.io.js'></script>
<script src="xterm/xterm.js"></script>
...
<div class="terminal" id="terminal"></div>
and script the synchronization along the lines of
var socket = io();

// Create the Xterm.js terminal and attach it to the page
var term = new Terminal();
term.open(document.getElementById('terminal'));

// Write any broadcast data straight into the terminal
socket.on("updateTerminal", function(data) {
    term.write(data);
});
The data passed to term.write of Xterm.js can be raw terminal data. Several UNIX utilities can capture such raw data from a terminal, for instance tmux, as proposed by jerch in the comments, or script.
To pass this data to the server for broadcasting, the easiest way is to use a named pipe; so, on the server side:
mkfifo server_pipe
script -f server_pipe
(the terminal issuing that last command will be the one broadcasting; if one does not have physical access to the server, one can use an additional pipe and a tunneling connection
mkfifo local_pipe
cat local_pipe | ssh <server> 'cat > path/to/server_pipe'&
script -f local_pipe
)
Finally, the Node.js server must listen to the named pipe and broadcast any new data:
/* create the HTTP server that serves the page */
const http = require('http');
const server = http.createServer(function (request, response) {
    ...
});
server.listen(8080);   // example port

/* open the named pipe for reading */
const fs = require('fs');
const fd = fs.openSync("path/to/server_pipe", 'r+');
const termStream = fs.createReadStream(null, { fd });
termStream.setEncoding('utf8');

/* broadcast any new data with socket.io */
const iolib = require("socket.io");
const io = iolib(server);
termStream.on('data', function (data) {
    io.emit("updateTerminal", data);
});
All of this mechanism is implemented in my software Remote lecture.
As for the comparison with video broadcast, I did not take the time to actually quantify the difference, but for equivalent resolution and latency, the mechanism above should consume far fewer network and computing resources than capturing a graphical terminal's output and sharing it as video.
I have a little understanding of how server-sent events work. Say I have a Linux server (a remote server) and I need to monitor its CPU usage continuously from my local machine, via an HTML page on that local machine. Will I be able to get the CPU usage continuously from the server to the local machine using SSE? If so, I need some clarification on how to do so. Or is there any other alternative that I can go with without involving any extra software?
You'll need to write some code to run on the server, which will gather whatever data you need. (Common choices include Node.js, PHP, etc.)
That code will need to either directly serve HTTP requests, or connect to a web server.
Your code will send data in this format:
event: someevent
data: {"key": "value"}
Then, your client-side will use EventSource:
const eventSource = new EventSource('https://example.com/your-sse-path');
eventSource.addEventListener('someevent', (e) => {
console.log(JSON.parse(e.data));
});
import socket

def functions():
    print("hello")

# Create a listening TCP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('192.168.137.1', 20000)
sock.bind(server_address)
sock.listen(1)

# Accept a single client connection and greet it
conn, addr = sock.accept()
print('Connected by', addr)
conn.sendall(b"Welcome to the server")
My question is how to send a function to the client.
I know that conn.sendall(b"Welcome to the server") will send data to the client, which the client can then decode.
I would like to know how to send a function to the client, something like
conn.sendall(function()) - but this does not work.
I would also like to know what call would allow the client to receive the function I am sending.
I have looked on the Python website for a function that could do this, but I have not found one.
The functionality you are asking for is fundamentally impossible unless it is explicitly coded on the client side. If it were possible, one could write a virus which easily spreads onto any remote machine. Instead, it is the client's right and responsibility to decide how to decode incoming data.
Consider the case where the client really does want to receive code to execute. The issue is that the code must be represented in a form which, at the same time,
is detached from the server context and its specifics, and can be serialized and executed anywhere
allows secure execution in a kind of sandbox, because very few clients will allow arbitrary server code to do anything on the client side.
The latter is an extremely complex topic; just read the security history of any web browser - most of the closed vulnerabilities are issues in exactly this kind of sandboxing.
(There are environments where such execution is allowed and desired, e.g. an Erlang cookie-based peering cluster. But in such a cluster, side B is also allowed to execute anything at side A.)
You should start by searching for an execution environment (a high-level virtual machine) which meets your needs for functionality and security. For Python, you would look at the multiprocessing module: its worker-pool implementation doesn't pass the code itself, but it simplifies passing data for execution requests. Passing arbitrary Python data (without functions) is covered by the marshal and pickle modules.
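For example, a minimal sketch of the data-only approach (conn and client_sock are placeholders for the sockets on each side, and the dispatch table is an assumption; only unpickle data that comes from a peer you trust):

import pickle

# Server side: serialize a plain data structure (not code) and send it
payload = {"command": "greet", "args": ["world"]}
conn.sendall(pickle.dumps(payload))

# Client side: receive the bytes and turn them back into Python data
raw = client_sock.recv(4096)
request = pickle.loads(raw)

# The client decides what to do with the data, dispatching to its own functions
handlers = {"greet": lambda name: print("hello", name)}
handlers[request["command"]](*request["args"])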
I have a MySQL database hosted on my web site, with a table named UsrLic.
Anyone who wants to buy my software must register and enter their generated machine key (plus username, email, etc.).
So my question is:
I want to automate this process from my software; how should that process work?
Should I connect to and update my database directly from my software? That would mean saving all my database connection parameters in it (my database username, password, and server) and then using ADO or MyDAC to connect to the database. If yes, how secure is this process?
Or do you have any other suggestions?
I recommend creating an API on your web site in PHP and calling the API from Delphi.
That way, the database is only available to your web server and not to the client application, ever. In fact, you should run your database on localhost or with a private IP so that only machines on the same physical network can reach it.
I have implemented this and am implementing it again as we speak.
PHP
Create a new file named register_config.php. In this file, set up your MySQL connection information.
Create a file named register.php. In this file, put your registration functions. From this file, include 'register_config.php'. You will pass parameters to the functions you create here, and they will do the reading and writing to your database.
Create a file named register_api.php. From this file, include 'register.php'. Here, you will process POST or GET variables that are sent from your client application, call functions in register.php, and return results back to the client, all via HTTP.
You will have to research connecting to and querying a MySQL database. The W3Schools tutorials will have you doing this very quickly.
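As a rough sketch of what register.php might contain (assuming register_config.php creates a mysqli connection in $db, and guessing at the UsrLic column names):

<?php
// register.php - registration functions; $db (mysqli) comes from register_config.php
include 'register_config.php';

function registerBuyer($name, $email) {
    global $db;

    // A prepared statement keeps user-supplied values out of the SQL text
    $stmt = $db->prepare("INSERT INTO UsrLic (username, email) VALUES (?, ?)");
    $stmt->bind_param("ss", $name, $email);

    return $stmt->execute();
}
?>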
For example:
Your Delphi program calls https://mysite/register_api.php with Post() and sends the following values:
name=Marcus
email=marcus@gmail.com
Here's how the beginning of register_api.php might look:
// Our actual database and registration functions are in this library
include 'register.php';

// These are the name/value pairs sent via POST from the client
$name  = $_POST['name'];
$email = $_POST['email'];

// Sanitize and validate the input here...

// Register them in the DB by calling my function in register.php
if (registerBuyer($name, $email)) {
    // Let them know we succeeded
    echo "OK";
} else {
    // Let them know we failed
    echo "ERROR";
}
Delphi
Use Indy's TIdHTTP component and its Post() or Get() method to post data to register_api.php on the website.
You will get the response back in text from your API.
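A rough sketch of the Delphi side (the URL, the field values, and the error handling are placeholders):

var
  Http: TIdHTTP;
  Params: TStringList;
  Reply: string;
begin
  Http := TIdHTTP.Create(nil);
  Params := TStringList.Create;
  try
    // Name/value pairs are sent as a standard form POST
    Params.Add('name=Marcus');
    Params.Add('email=marcus@gmail.com');
    Reply := Http.Post('https://mysite/register_api.php', Params);
    if Reply = 'OK' then
      ShowMessage('Registered')
    else
      ShowMessage('Registration failed');
  finally
    Params.Free;
    Http.Free;
  end;
end;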
Keep it simple.
Security
All validation should be done on the server (API). The server must be the gatekeeper.
Sanitize all input to the API from the user (the client) before you call any functions, especially queries.
If you are using shared web hosting, make sure that register.php and register_config.php are not world readable.
If you are passing sensitive information, and it sounds like you are, you should call the registration API function from Delphi over HTTPS. HTTPS provides end-to-end protection so that nobody can sniff the data being sent off the wire.
Simply hook up a TIdSSLIOHandlerSocketOpenSSL component to your TIdHTTP component, and you're good to go, minus any certificate verification.
Use the SSL component's OnVerifyPeer event to write your own certificate verification method. This is important: if you don't verify the server-side certificate, other sites can impersonate you with DNS poisoning and collect the data from your users instead of you. That said, don't let this hold you up, since it requires a bit more understanding; add it in a future version.
Why don't you use e.g. share*it? They also handle the buying process (I don't see how you would do this yourself...) and let you create a reg key through a Delphi app.