Best way to synchronize requests made while offline - html

I'm developing a web app that needs to work both online and when the connection is not available, so I would like to know the best way to synchronize my requests once the app is online again. I have seen some things about Service Workers, but I don't know if they are the best option.

You can definitely use service workers for this use case!
The particular solution will depend on your specific needs (service workers are pretty generic).
A possible approach would be a "request deferrer", like the one implemented in the ServiceWorker Cookbook. In this solution, while the user is offline, requests to the server are queued and, when the user goes back online, the queued requests are actually executed against the server.
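For illustration, here is a minimal sketch of that "request deferrer" pattern. It is not the Cookbook's exact code: it keeps the queue in memory for brevity (the Cookbook version persists it in IndexedDB so it survives the worker being terminated), and the message shapes are assumptions.

```typescript
// Minimal sketch of a "request deferrer" service worker (in-memory queue for
// brevity; a real implementation should persist the queue, e.g. in IndexedDB).

interface DeferredRequest {
  url: string;
  method: string;
  body: string | null;
}

const queue: DeferredRequest[] = [];

self.addEventListener('fetch', (event: any) => {
  const request: Request = event.request;
  // Only defer state-changing requests; let reads fail normally.
  if (request.method === 'GET') return;

  event.respondWith(
    (async () => {
      const clone = request.clone();
      try {
        return await fetch(request);
      } catch {
        // Offline (or the server is unreachable): queue the request...
        queue.push({
          url: clone.url,
          method: clone.method,
          body: await clone.text(),
        });
        // ...and answer the page with a synthetic "accepted" response.
        return new Response(JSON.stringify({ deferred: true }), { status: 202 });
      }
    })()
  );
});

// Replay the queue when connectivity returns. Background Sync ('sync' event)
// is the usual trigger where supported; a message from the page works too.
async function flushQueue(): Promise<void> {
  while (queue.length > 0) {
    const deferred = queue[0];
    await fetch(deferred.url, { method: deferred.method, body: deferred.body });
    queue.shift(); // only drop the entry once the replay succeeded
  }
}

self.addEventListener('sync', (event: any) => {
  event.waitUntil(flushQueue());
});
```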

Related

Which hook to limit the number of messages a user can send per day?

We want to use ejabberd in the context of a web application with fairly unique business rules. We'd therefore need every chat message (not protocol messages, but messages a user sends to another user) to go through our web application first, which would then deliver the message to ejabberd on behalf of the user (if our business rules allow the message to be sent).
The web application is also the one providing the contact lists (called rosters in ejabberd, if I understand correctly). We need to be and remain the single source of truth to ease maintenance.
To us, ejabberd's added value would be delivering chat messages in near real time to clients and enabling cool things such as presence indicators. Web clients will maintain a direct connection to ejabberd through WebSocket, but this connection will have to be read-only as far as chat messages are concerned, and read-write as far as presence messages are concerned.
The situation is similar with regard to audio and video calls. While this time the call per se will be managed directly by ejabberd to take advantage of the built-in STUN, TURN, etc. and will not need to go through our web app, we have custom business logic to manage who is able to call whom, when, how often, etc. (in other words, we have custom business logic to authorize the call or not, and we'd like to keep all the business logic centralized in the web app).
My question is: what are the proper hooks we'd need to look into to achieve what we are after? I spent an hour or so in the documentation, but I couldn't find what I am after, so hopefully someone can provide pointers. In an ideal world, we'd like to expose API endpoints from our web app that ejabberd hooks can hit. However, the first question is: which relevant hooks does ejabberd offer, and where are these hooks documented?
Any help would be greatly appreciated, thank you!
When a client sends a packet to ejabberd, it triggers the user_send_packet hook, providing the packet and the state of the client's session process. Several modules use that hook, for example mod_service_log.
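On the web-app side of the setup the asker describes, a custom ejabberd module attached to user_send_packet could forward each chat message to an HTTP endpoint and drop or deliver the packet based on the response. A hypothetical sketch of such an endpoint follows (Express in TypeScript; the route and payload shape are my assumptions, not an ejabberd convention):

```typescript
import express from 'express';

// Hypothetical endpoint that a custom ejabberd module (hooked into
// user_send_packet) could call before letting a chat message through.
const app = express();
app.use(express.json());

app.post('/api/chat/authorize', (req, res) => {
  const { from, to, body } = req.body; // payload shape is an assumption

  // Business rules live here: daily quotas, roster checks, etc.
  const allowed = typeof body === 'string' && body.length > 0;

  // The ejabberd module would drop the packet unless `allowed` is true.
  res.json({ allowed });
});

app.listen(8080);
```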

Is there a standard PubSub protocol over WebSocket?

I'm looking for a way to implement basic Publish / Subscribe between applications written in different languages, to exchange events with JSON payloads.
WebSocket seems like the obvious choice for the transport, but you need an (arguably small) layer on top to implement some of the plumbing:
agreeing on messages representing the pub/sub domain ("subscribe to a topic", "publish a message")
agreeing on messages for the infrastructure ("heartbeat", "authentication")
I was expecting to find an obvious standard for this, but there does not seem to be any.
WAMP is often referred to, but in my (short) experience, the server/client library implementations are not great
STOMP is often referred to, but in my (even shorter) experience, it's even worse
Phoenix Channels are nice, but they're restricted to the Phoenix/Elixir world and are not a standard (so the messages can change in any Phoenix version without notice).
So, is everyone using MQTT over WebSocket (which requires another broker component rather than a simple server)? Or gRPC?
Is everyone just re-implementing it from scratch? (It's one of those things that seems easy enough to do oneself, but I guess you just end up with a half-baked, poorly specified, broken version of the thing I'm looking for...)
Or is there something fundamentally broken with the idea of serving streams of data from a server over WS ?
There are two primary classes of WebSocket libraries: those that implement the protocol and leave the rest to the developer, and those that build on top of the protocol with the additional features commonly required by realtime messaging applications, such as restoring lost connections, pub/sub and channels, authentication, authorization, etc.
The latter variety often requires that their own libraries be used on the client-side, rather than just using the raw WebSocket API provided by the browser. As such, it becomes crucial to make sure you’re happy with how they work and what they’re offering. You may find yourself locked into your chosen solution’s way of doing things once it has been integrated into your architecture, and any issues with reliability, performance, and extensibility may come back to bite you.
ws, faye-websockets, socket.io, μWebSockets and SocketCluster are some good open-source options.
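As a sketch of how small that "layer on top" can be, here is a minimal subscribe/publish server over raw WebSockets using the ws package. The message envelope is something I made up for illustration, which is exactly the standardization gap the question is about:

```typescript
import WebSocket, { WebSocketServer } from 'ws';

// Topic name -> set of subscribed sockets.
const topics = new Map<string, Set<WebSocket>>();

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    // Made-up envelope: { type: 'subscribe' | 'publish', topic, payload? }
    const msg = JSON.parse(raw.toString());

    if (msg.type === 'subscribe') {
      if (!topics.has(msg.topic)) topics.set(msg.topic, new Set());
      topics.get(msg.topic)!.add(socket);
    } else if (msg.type === 'publish') {
      for (const subscriber of topics.get(msg.topic) ?? []) {
        subscriber.send(
          JSON.stringify({ type: 'event', topic: msg.topic, payload: msg.payload })
        );
      }
    }
  });

  // Drop the socket from every topic when it disconnects.
  socket.on('close', () => {
    for (const subscribers of topics.values()) subscribers.delete(socket);
  });
});
```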
The number of concurrent connections a server can handle is rarely the bottleneck when it comes to server load. Most decent WebSocket servers can support thousands of concurrent connections, but what’s the workload required to process and respond to messages once the WebSocket server process has handled receipt of the actual data?
Typically there will be all kinds of potential concerns, such as reading and writing to and from a database, integration with a game server, allocation and management of resources for each client, and so forth.
As soon as one machine is unable to cope with the workload, you’ll need to start adding additional servers, which means now you’ll need to start thinking about load-balancing, synchronization of messages among clients connected to different servers, generalized access to client state irrespective of connection lifespan or the specific server that the client is connected to – the list goes on and on.
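One common (though by no means the only) way to handle the "synchronization of messages among clients connected to different servers" part is to put a broker such as Redis behind the WebSocket servers. A rough sketch with the ioredis and ws packages, assuming a local Redis instance:

```typescript
import Redis from 'ioredis';
import WebSocket, { WebSocketServer } from 'ws';

// Rough sketch: every WebSocket server instance publishes incoming messages
// to Redis and relays anything it hears back to its own local clients, so a
// message reaches clients regardless of which instance they are connected to.
const pub = new Redis(); // connection details assumed (localhost defaults)
const sub = new Redis();

const wss = new WebSocketServer({ port: 8080 });

sub.subscribe('broadcast');
sub.on('message', (_channel, message) => {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(message);
  }
});

wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    pub.publish('broadcast', raw.toString());
  });
});
```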
There’s a lot involved when implementing support for the WebSocket protocol, not just in terms of client and server implementation details, but also with respect to support for other transports to ensure robust support for different client environments, as well as broader concerns, such as authentication and authorization, guaranteed message delivery, reliable message ordering, historical message retention, and so forth. A data stream network such as Ably Realtime would be a good option to use in such cases if you'd rather avoid re-inventing the wheel.
There's a nice piece on WebSockets, Pub/Sub, and all issues related to scaling that I'd recommend reading.
Full disclosure: I'm a Developer Advocate for Ably but I hope this genuinely answers your question.

Implementing a scalable multiroom chat system

I've been looking into sockjs-tornado recently and am working on a chat function for a social networking site. I'm trying to get a feel for common methods used in building scalable multiroom chat functionality. I'll outline a couple of the methods I've thought of and I'd like to get feedback. What methods are used in the real world? What are the advantages and disadvantages to these methods?
Prereqs:
running tornado
using sockjs-tornado lib
sockjs-client lib for js
Everything else is open.
Methods I've considered:
For loop
This seems like the simplest way to go. You create a user class that subscribes to certain room classes. The user sends a message class that contains a room id and the server redirects the message in the loop only to users that have subscribed to that room. This seems to me to be by far the worst because the complexity is obviously at least linear. (Imagine 500 users connected at once to 5 chat rooms each.)
Multi-tasking/multiple server instances
This also seems like a bad idea because you could have 500 server instances running at any time on... different ports? I'm really not sure on the implementation of this method.
Native support
Now granted, a lot of libraries have this built in, such as Socket.IO. However, that's not an option because it only supports Node.js (I'm on a Tornado server). SockJS in particular does not have built-in support for multiple "rooms".
Conclusion
I'm looking for resources/case studies, and industry standards. Any help would be appreciated.
I would just use a message queue server like RabbitMQ with a fanout exchange as each "chat room".
You can see an example of using a fanout exchange in Python here.
The Pika AMQP library works with Tornado, too.
The advantage with using a message queueing system is that you can have users connected to different Tornado processes on different servers while still being in the same "room", giving you high availability on the HTTP layer.
RabbitMQ also has HA capabilities (although not the greatest).
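The linked example is Python/Pika, but the fanout pattern itself is small. Here is a comparable sketch in TypeScript using the amqplib package (one fanout exchange per room, one exclusive queue per connected process); the broker URL and names are assumptions:

```typescript
import amqp from 'amqplib';

// Sketch of "one fanout exchange per chat room": every message published to
// the room's exchange is copied into each subscriber's private queue.
async function joinRoom(roomId: string, onMessage: (text: string) => void) {
  const connection = await amqp.connect('amqp://localhost'); // assumed broker URL
  const channel = await connection.createChannel();

  await channel.assertExchange(roomId, 'fanout', { durable: false });

  // Exclusive, auto-named queue: one per connected user/process.
  const { queue } = await channel.assertQueue('', { exclusive: true });
  await channel.bindQueue(queue, roomId, '');

  await channel.consume(
    queue,
    (msg) => {
      if (msg) onMessage(msg.content.toString());
    },
    { noAck: true }
  );

  return {
    send: (text: string) => channel.publish(roomId, '', Buffer.from(text)),
  };
}

// Usage: each server process would call joinRoom() and relay received
// messages to its own connected SockJS/WebSocket clients.
joinRoom('room-42', (text) => console.log('room-42:', text))
  .then((room) => room.send('hello'));
```

The answer's Pika-based Python version would follow the same exchange/queue/bind steps.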

When using WebRTC, is a peer-to-peer architecture redundant to build a video chat service like Skype?

We're playing around with WebRTC and trying to understand its benefits.
One reason Skype can serve hundreds of millions of people is because of its decentralized, peer-to-peer architecture, which keeps server costs down.
Does WebRTC allow people to build a video chat application similar to Skype in that the architecture can be decentralized (i.e., video streams are not routed from a broadcaster through a central server to listeners but rather routed directly from broadcaster to listener)?
Or, put another way, does WebRTC allow someone to essentially replicate the benefits of a P2P architecture similar to Skype's?
Or do you still need something similar to Skype's P2P architecture?
Yes, that's basically what WebRTC does. Calls using the RTCPeerConnection API don't send voice/video data through a centralized server, but rather use firewall traversal protocols like ICE, STUN and TURN to allow a direct, peer-to-peer connection. However, the initial call setup still requires a server (most likely something running a WebSocket implementation, but it could be anything that you can figure out how to get JavaScript to talk to), so that the two clients can figure out that they're both online, signal that they want to connect, and then figure out how to do it (this is where the ICE/STUN/TURN bit comes in).
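To make that concrete, the caller side looks roughly like the sketch below. The sendToSignalingServer function is a placeholder for whatever signaling channel you build yourself (e.g. a WebSocket to your server), and the STUN server URL is just an example:

```typescript
// Rough caller-side sketch. sendToSignalingServer() is a placeholder for your
// own signaling channel; WebRTC does not define how offers, answers and
// candidates travel between the two browsers.
declare function sendToSignalingServer(message: object): void;

const pc = new RTCPeerConnection({
  // STUN discovers your public address; TURN (not shown) relays media when a
  // direct path is impossible. The server URL here is just an example.
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

// ICE candidates trickle out as they are discovered; forward them to the peer.
pc.onicecandidate = (event) => {
  if (event.candidate) sendToSignalingServer({ candidate: event.candidate });
};

async function startCall() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToSignalingServer({ offer }); // the callee answers via the same channel
}

// When the callee's answer comes back over signaling:
async function onAnswer(answer: RTCSessionDescriptionInit) {
  await pc.setRemoteDescription(answer);
  // From here on, audio/video flows peer-to-peer (or via TURN as a fallback).
}
```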
However, there's more to Skype's P2P architecture than just passing voice/video data back and forth. The majority of Skype's IP isn't in the codecs or protocols (much of which they licensed from Global IP Solutions, which Google purchased two years ago and then open-sourced, and which forms the basis of Chrome's WebRTC implementation). Skype's real IP is all in the piece that WebRTC still leaves to a server: figuring out which people are online, where they are, and how to get hold of them, and doing that in a massively decentralized fashion. (See here for some rough details.) I think you could probably use the data channel portion of the RTCPeerConnection API to do that sort of thing, if you were really, really smart - but it would be complicated, and would most likely stomp on a few Skype patents. Unless you want to be really, really huge, you'd probably just want to run your own centralized presence and location servers and handle all that stuff through standard WebSockets.
I should note that Skype's network architecture has changed since it was created; it no longer (from what I hear) uses random users as supernodes to relay data from client 1 to client 2; it didn't scale well and caused rampant variability in results (and annoyed people who had non-firewalled connections and bandwidth).
You definitely can build something Skype-like with WebRTC - and more. :-)

What's the best way to notify a non-web application about a change on a web page?

Let's say I have two applications which have to work together to a certain extent.
A web application (PHP, Ruby on Rails, ...)
A desktop application (Java, C++, ...)
The desktop application has to be notified from the web application and the delay between sending and receiving the notification must be short. (< 10 seconds)
What are possible ways to do this? I can think of polling at a 10-second interval, but that would produce a lot of traffic if many desktop applications have to be notified. On a LAN I'd use a UDP broadcast, but unfortunately that's not possible here...
I appreciate any ideas you could give me.
I think the "best practice" here will depend on the number of desktop clients you expect to serve. If there's just one desktop to be notified, then polling may well be a fine approach -- yes, polling is much more overhead than an event-based notification, but it'll certainly be the easiest solution to implement.
If the overhead of polling is truly unacceptable, then I see two basic alternatives:
Keep a persistent connection open between the desktop and web-server (could be a "comet"-style web request, or a raw socket connection)
Expose a service from within the desktop app, and register the address of the service with the web-server. This way, the web-server can call out to the desktop as needed.
Be warned, though -- both alternatives are chock full of gotchas. A few highlights:
Keeping a connection open can be tricky, since you want your web-servers to be hot-swappable
Calling out to an external service (e.g., your desktop) from a web server is dangerous, because this request could hang. You'd want to move this notification onto a separate thread to avoid tying up the web server.
To mitigate some of the concerns, you might decouple the unreliable desktop from the web-server by introducing an intermediary notification server -- the web-server could post an update somewhere, and the desktop could poll/connect/register there to be notified. To avoid reinventing the wheel here, this could involve some sort of MessageQueue system... This, of course, adds the complexity of needing to maintain the new intermediary.
Again, all of these approaches are probably quite complex, so I'd say polling is probably the best bet.
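If you do go the "persistent connection" route from the first alternative, the desktop side can be as simple as a long-poll loop. A sketch follows; the /notifications/wait endpoint, its since parameter, and the 204-on-timeout behaviour are assumptions about your web app, not a given:

```typescript
// Long-poll sketch: the desktop client keeps one request open against the
// web app; the server answers as soon as it has a change (or after a
// timeout), and the client immediately re-issues the request.
// The /notifications/wait endpoint is hypothetical. Assumes a runtime with a
// global fetch (modern browsers or Node 18+).
async function watchForChanges(baseUrl: string, onChange: (event: any) => void) {
  let lastSeen = '0';
  while (true) {
    try {
      const res = await fetch(`${baseUrl}/notifications/wait?since=${lastSeen}`);
      if (res.status === 200) {
        const event = await res.json();
        lastSeen = event.id;
        onChange(event);
      }
      // A 204 (no change before the server-side timeout) just loops again.
    } catch {
      // Web server unreachable or being hot-swapped: back off, then retry.
      await new Promise((resolve) => setTimeout(resolve, 5000));
    }
  }
}
```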
I can see two ways:
Your desktop application polls the web app
Your web app notifies the desktop application
Your web app could publish an RSS feed, but your desktop app will still have to poll the feed every 10 s.
The traffic need not be huge: if you use an HTTP HEAD request, you'll get a small response carrying the date of the last modification (in the conveniently named Last-Modified header).
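A sketch of that HEAD-based polling (the feed URL is just an example, and a global fetch is assumed):

```typescript
// Poll with HEAD every 10 s and only react when the Last-Modified header
// actually changes. The feed URL is an example.
async function pollFeed(url: string, onUpdated: () => void) {
  let lastModified: string | null = null;

  setInterval(async () => {
    const res = await fetch(url, { method: 'HEAD' });
    const modified = res.headers.get('last-modified');

    if (modified && modified !== lastModified) {
      lastModified = modified;
      onUpdated(); // now do a full GET, show a notification, etc.
    }
  }, 10_000);
}

pollFeed('https://example.com/feed', () => console.log('something changed'));
```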
I don't know exactly how to achieve your task, but I can suggest creating a Windows service on the desktop application's PC.
This service checks the web application at regular intervals for new changes; if changes have occurred, it can launch the desktop application with a notification that something changed in the web application, and the web application can respond with an acknowledgment whenever a change occurs.
I hope this is useful; I haven't tried it exactly, but I am suggesting something along these lines.
A layer of syndication would help to scale out the system.
The desktop app can register itself with a "publisher" service (running on one of several/many machines). This publisher service receives the "notice" from your web app that something has changed, and immediately starts notifying all of its registered subscribers.
The number of publishers you need will increase with the number of users.
Edit: Forgot to mention that the desktop app will need to listen on a socket.
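A minimal sketch of that arrangement using Node's built-in net module; the port, the newline-delimited JSON format, and the way the web app triggers notifyAll are all assumptions for illustration:

```typescript
import net from 'net';

// Publisher-service sketch: desktop apps connect and stay registered; when
// the web app reports a change (modeled here as a local function call), the
// publisher pushes a line of JSON to every registered desktop socket.
const subscribers = new Set<net.Socket>();

const server = net.createServer((socket) => {
  subscribers.add(socket); // desktop registered
  socket.on('close', () => subscribers.delete(socket));
});
server.listen(9090); // port is an arbitrary choice

// Called when the web app reports a change (e.g. via an HTTP endpoint placed
// in front of this); fans the notice out to all registered desktops.
export function notifyAll(change: object): void {
  const line = JSON.stringify(change) + '\n';
  for (const socket of subscribers) socket.write(line);
}
```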