Streaming a virtual terminal without video

If I want to broadcast live some work in a text editor embedded in a virtual terminal on my personal computer, I can stream a video of the window containing it over the web.
But since the information consists mainly of a bunch of characters, possibly with some colors and formatting, video seems like a waste of resources, both in bandwidth and in technology.
What would you recommend for this, and is there a server implementing the solution somewhere?
The requirements are:
the stream must be almost real time (at least 1 update per second and no more than 1 second of delay)
the audience can access the stream with only a web browser (no additional software on their side), read-only (no interaction with the stream or with my terminal)
features from, say, xterm or urxvt must be supported
all necessary software (both the streamer's client side and any server side) must be open source
Comments on technical advantages of such tool compared to video streaming are welcome.

I finally took the time to implement a complete solution, using socket.io within a simple NodeJS server for broadcasting.
On the client side, serve a simple HTML page with an Xterm.js terminal:
<script src='/socket.io/socket.io.js'></script>
<script src="xterm/xterm.js"></script>
...
<div class="terminal" id="terminal"></div>
and script the synchronization along the lines of
var socket = io();
var term = new Terminal();
term.open(document.getElementById('terminal'));
socket.on("updateTerminal", function (data) {
  term.write(data);
});
Now the data passed to term.write of Xterm.js can be raw terminal data. Several UNIX utilities can capture such raw data from a terminal, for instance tmux (as proposed by jerch in the comments) or script.
To pass this data to the server for broadcasting, the easiest way is to use a named pipe; so on the server side:
mkfifo server_pipe
script -f server_pipe
(the terminal issuing that last command will be the one broadcasting; if one does not have physical access to the server, one can use an additional pipe and a tunneled SSH connection
mkfifo local_pipe
cat local_pipe | ssh <server> 'cat > path/to/server_pipe'&
script -f local_pipe
)
Finally, the NodeJS server must listen to the named pipe and broadcast any new data:
/* create the server */
const http = require('http');
const server = http.createServer(function (request, response) {
...
});
/* open the named pipe for reading; 'r+' keeps the pipe open even when no writer is attached, so the stream does not end */
const fs = require('fs');
const fd = fs.openSync("path/to/server_pipe", 'r+');
const termStream = fs.createReadStream(null, { fd });
termStream.setEncoding('utf8');
/* broadcast any new data with socket.io */
const iolib = require("socket.io");
const io = iolib(server);
termStream.on('data', function (data) {
  io.emit("updateTerminal", data);
});
All this mechanism is implemented in my software Remote lecture.
As for the comparison with video broadcasting, I did not take the time to actually quantify the difference, but for equivalent resolution and latency, the above mechanism should use far less network and computing resources than capturing a graphical terminal's output and sharing it as video.
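A rough back-of-envelope comparison makes the intuition concrete. All figures below are assumptions for illustration: a full 80×24 screen redrawn once per second with a generous 4 bytes per cell (UTF-8 text plus ANSI color escapes), versus a modest 1 Mbit/s video stream of the same window:

```javascript
// Assumed figures (not measurements): full-screen redraw once per second,
// 4 bytes per cell to account for UTF-8 text and ANSI escape sequences.
const cols = 80, rows = 24, bytesPerCell = 4, updatesPerSec = 1;
const textBitsPerSec = cols * rows * bytesPerCell * updatesPerSec * 8;

// A modest video stream of a terminal window at ~1 Mbit/s.
const videoBitsPerSec = 1000000;

console.log(`text: ${textBitsPerSec} bit/s, video: ${videoBitsPerSec} bit/s`);
// → text: 61440 bit/s, video: 1000000 bit/s
console.log(`video uses roughly ${Math.round(videoBitsPerSec / textBitsPerSec)}x more bandwidth`);
// → video uses roughly 16x more bandwidth
```

In practice the gap is even larger, since most updates touch only a few cells rather than the whole screen, and the text stream needs no encoding or decoding on either end.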

Related

HTML server sent events

I have a basic understanding of how server-sent events work. Say I have a Linux server (a remote server) and I need to monitor its CPU usage continuously from my local machine (via an HTML page on my local machine). Will I be able to get the CPU usage continuously from the server to the local machine using SSE? If so, I need some clarification on how to do so. Or is there any alternative I can use without involving any additional software?
You'll need to write some code to run on the server, which will gather whatever data you need. (Common choices include Node.js, PHP, etc.)
That code will need to either directly serve HTTP requests, or connect to a web server.
Your code will send data in this format:
event: someevent
data: {"key": "value"}
Then, your client-side will use EventSource:
const eventSource = new EventSource('https://example.com/your-sse-path');
eventSource.addEventListener('someevent', (e) => {
console.log(JSON.parse(e.data));
});

How can I use socket io for just server client communication?

I would like to build a web page with interactive content.
I would like to use Socket.IO, but I've got a problem when I send two numbers from the client to the server.
The server adds the two numbers and sends the result to every user (but I want to send it back to just that one user). I would prefer not to store the users. So I would like to ask how I can build this example using NodeJS and Socket.IO?
If you look into the socket.io docs, you will find that you can specify the key for each message you send, for instance:
io.on('connection', function (socket) {
// specify a clientId for each client you have, you may define it at the moment the connection starts.
socket.emit(`news:${clientId}`, { hello: 'world' });
});
And, on the client side, you have:
socket.on(`news:${myClientId}`, function (data) {
console.log(data);
});
You may generate this id randomly using one of many libraries, for instance Node-Forge.
Hope it helps! Feel free to ask further.
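Note that in socket.io, `socket.emit` inside the connection handler already sends only to that one connected client (it is `io.emit` that broadcasts to everyone), so a per-client event name is not strictly required. The sketch below shows that reply-to-sender pattern; the `addNumbers`/`addResult` event names are illustrative, and a stub object stands in for a real socket.io socket so the logic can run without a network:

```javascript
// Handler logic: reply only to the socket that sent the request.
// With socket.io, `socket.emit` goes to that one client, while
// `io.emit` would broadcast to all connected clients.
function onAddNumbers(socket, payload) {
  socket.emit('addResult', { sum: payload.a + payload.b });
}

// Wiring with a real socket.io server would look like:
//   io.on('connection', (socket) => {
//     socket.on('addNumbers', (payload) => onAddNumbers(socket, payload));
//   });

// Stub socket that records emitted messages, to demonstrate the behaviour:
const sent = [];
const stubSocket = { emit: (event, data) => sent.push({ event, data }) };
onAddNumbers(stubSocket, { a: 2, b: 3 });
console.log(sent[0]); // { event: 'addResult', data: { sum: 5 } }
```

Because the reply goes out on the request's own socket, there is no need to store users or generate per-client ids for this use case.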

How to build a Perl Socket Server?

I have found a few sparse resources on the matter, but I am looking to build a Perl server as a "microservice"; more specifically, for a web application in LAMPhp/Perl/MariaDB in an SOA format.
What is the best way to go about building an efficient Perl server for our backend? The Web Tier opens a PHP stream TCP socket to a particular Perl server for a particular "service" (high-level service). That server must service many Web servers' requests asynchronously. The service then either connects directly to MySQL to fetch an answer (simple case) or must do some computational work to generate an answer.
My naive implementation is single-tasking:
use strict;
use warnings;
use IO::Socket::INET;
use Data::Dumper;
use JSON::XS qw(encode_json decode_json);

$| = 1;

my $socket = IO::Socket::INET->new(
  LocalHost => '0.0.0.0',
  LocalPort => '7000',
  Proto     => 'tcp',
  Listen    => 5,
  Reuse     => 1
);

while (1) {
  my $client_socket  = $socket->accept();
  my $client_address = $client_socket->peerhost();
  my $client_port    = $client_socket->peerport();

  my $client_json = "";
  $client_socket->recv($client_json, 1024);

  my $client_data = decode_json $client_json;
  my %response    = %{ process_request($client_data) };
  my $reply_json  = encode_json(\%response);

  $client_socket->send($reply_json);
  shutdown($client_socket, 1);
}
So, there are obviously problems with this, as it is a copy-paste example from the documentation: it handles a single socket/request at a time, serially.
My question is: "What are best practices in Perl to build a server that can efficiently multiplex and process many incoming requests?"
My own thought on the matter is to build a 'select' or 'epoll' main process that forks off to a small pool of worker threads via a Thread::Queue.
Any suggestions?
I would consider using either a complete framework like Mojolicious or Dancer, or a package like Net::Server. Quoting from its perldoc:
"Net::Server" is an extensible, generic Perl server engine.
"Net::Server" attempts to be a generic server as in "Net::Daemon" and "NetServer::Generic". It includes with it the ability to run as an inetd process ("Net::Server::INET"), a single connection server ("Net::Server" or "Net::Server::Single"), a forking server ("Net::Server::Fork"), a preforking server which maintains a constant number of preforked children ("Net::Server::PreForkSimple"), or as a managed preforking server which maintains the number of children based on server load ("Net::Server::PreFork"). In all but the inetd type, the server provides the ability to connect to one or to multiple server ports.
HTH

Python 3.4 Sockets sendall function

import socket

def functions():
    print("hello")

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('192.168.137.1', 20000)
sock.bind(server_address)
sock.listen(1)

conn, addr = sock.accept()
print('Connected by', addr)
conn.sendall(b"Welcome to the server")
My question is how to send a function to the client.
I know that conn.sendall(b"Welcome to the server") will send data to the client, which can be decoded.
I would like to know how to send a function to a client, like
conn.sendall(function()) - this does not work
I would also like to know the function that would allow the client to receive the function I am sending.
I have looked on the Python website for a function that could do this but have not found one.
The functionality you are requesting is fundamentally impossible unless it is explicitly coded on the client side. If it were possible, one could write a virus that easily spreads to any remote machine. Instead, it is entirely the client's responsibility to decode incoming data in whatever manner it chooses.
Considering the case where the client really wants to receive code to execute, the issue is that the code must be represented in a form which, at the same time,
is detached from the server context and its specifics, and can be serialized and executed anywhere, and
allows secure execution in a kind of sandbox, because very few clients will allow arbitrary server code to do just anything on the client side.
The latter is an extremely complex topic; you can read any WWW browser's security history - most of the closed vulnerabilities are issues in such sandboxing.
(There are environments where such execution is allowed and desired, e.g. an Erlang cookie-based peering cluster. But in such a cluster, side B is also allowed to execute anything at side A.)
You should start by searching for an execution environment (a high-level virtual machine) that conforms to your needs in functionality and security. For Python, you could look at the multiprocessing module: its implementation of worker pools doesn't pass the code itself, but simplifies passing data for execution requests. Also, passing arbitrary Python data (without functions) is covered by the marshal and pickle modules.

Sync indexedDB with mysql database

I am about to develop an application where employees go to service repair machines at customer premises. They need to fill up a service card using a tablet or any other mobile device.
In case of no Internet connection, I am thinking about using HTML5 offline storage, mainly IndexedDB to store the service card (web form) data locally, and do a sync at the office where Internet exists. The sync is with a MySQL database.
So the question: is it possible to sync IndexedDB with MySQL? I have never worked with IndexedDB; I am only doing research and see that it has potential.
Web SQL is deprecated; otherwise, it could have been the closest solution.
Any other alternatives in case the above is difficult or outside the standard?
Your opinions are highly appreciated.
Thanks.
This is definitely doable. I have only just started learning IndexedDB in the last couple of days, but this is how I would see it working (sorry, I don't have code to give you):
The website knows it is in offline mode somehow.
Clicking submit on the form saves the data into IndexedDB.
Later, when the laptop (or whatever) is back online or on the intranet and can talk to the main server again, it sends all IndexedDB rows to the server via an AJAX call to be stored in MySQL.
IndexedDB is cleared.
Repeat.
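The steps above can be sketched as follows. The `pending-cards` store name, the `/api/service-cards` endpoint, and the payload shape are all illustrative assumptions; the IndexedDB and fetch calls are browser APIs, so they only execute in a browser:

```javascript
// Read every locally saved row out of the IndexedDB object store.
function readAllRows(db) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction('pending-cards', 'readonly');
    const req = tx.objectStore('pending-cards').getAll();
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Pure helper: wrap the rows in the JSON payload the server expects.
function buildSyncPayload(rows) {
  return JSON.stringify({ cards: rows, syncedAt: Date.now() });
}

// POST the pending rows to the server, then clear the local store,
// but only once the server has confirmed the insert.
async function syncToServer(db) {
  const rows = await readAllRows(db);
  if (rows.length === 0) return;
  const res = await fetch('/api/service-cards', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildSyncPayload(rows),
  });
  if (res.ok) {
    const tx = db.transaction('pending-cards', 'readwrite');
    tx.objectStore('pending-cards').clear();
  }
}
```

On the server side, the receiving endpoint would decode the JSON and insert the rows into MySQL; clearing the local store only after a successful response avoids losing data if the sync fails midway.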
A little bit late, but I hope it helps.
This is possible, though I am not sure it is the best choice. I can tell you that I am building a web app with a MySQL database where the app must work offline and keep track of the data. I tried using IndexedDB directly, but it was very confusing for me, so I used Dexie.js, a minimalistic and straightforward API for communicating with IndexedDB in an easy way.
Now the app works online, and if the internet goes down it works offline until the connection comes back, then uploads the data to the MySQL database. One of the solutions I read about for saving the data was to store in a TEXT field the JSON object produced by JSON.stringify(), and once you need the data back, use JSON.parse().
This was my motivation to build the app that way, along with the fact that we couldn't change databases:
IndexedDB Tutorial
Sync IndexedDB with MySQL
Connect node to mysql
[Update for 2021]
For anyone reading this, I can recommend to check out AceBase.
AceBase is a realtime database that enables easy storage and synchronization between browser and server databases. It uses IndexedDB in the browser, and its own binary db format or SQL Server / SQLite storage on the server side. MySQL storage is also on the roadmap. Offline edits are synced upon reconnecting and clients are notified of remote database changes in realtime through a websocket (FAST!).
On top of this, AceBase has a unique feature called "live data proxies" that allows all changes to in-memory objects to be persisted and synced to local and server databases, so you can forget about database coding altogether and program as if you're only using local objects, no matter whether you're online or offline.
The following example shows how to create a local IndexedDB database in the browser, how to connect to a remote database server that syncs with the local database, and how to create a live data proxy that eliminates further database coding altogether.
const { AceBaseClient } = require('acebase-client');
const { AceBase } = require('acebase');
// Create local database with IndexedDB storage:
const cacheDb = AceBase.WithIndexedDB('mydb-local');
// Connect to server database, use local db for offline storage:
const db = new AceBaseClient({ dbname: 'mydb', host: 'db.myproject.com', port: 443, https: true, cache: { db: cacheDb } });
// Wait for remote database to be connected, or ready to use when offline:
db.ready(async () => {
  // Create live data proxy for a chat:
  const emptyChat = { title: 'New chat', messages: {} };
  const proxy = await db.ref('chats/chatid1').proxy(emptyChat); // Use emptyChat if chat node doesn't exist
  // Get object reference containing live data:
  const chat = proxy.value;
  // Update chat's properties to save to local database,
  // sync to server AND all other clients monitoring this chat in realtime:
  chat.title = `Changing the title`;
  chat.messages.push({
    from: 'ewout',
    sent: new Date(),
    text: `Sending a message that is stored in the database and synced automatically was never this easy! ` +
      `This message might have been sent while we were offline. Who knows!`
  });
  // To monitor realtime changes to the chat:
  chat.onChanged((val, prev, isRemoteChange, context) => {
    if (val.title !== prev.title) {
      console.log(`Chat title changed to ${val.title} by ${isRemoteChange ? 'someone else' : 'us'}`);
    }
  });
});
For more examples and documentation, see AceBase realtime database engine at npmjs.com