XMPP - What is the difference between MUC (mod_muc) and MUC Light (mod_muc_light)? - ejabberd

I have successfully implemented MUC Light in my app with a MongooseIM server, but I'm also aware of the MUC protocol on the ejabberd server.
Which client libraries support the MUC/MUC Light protocols?
Is there a way to have a shared history for a group using the MUC/MUC Light protocol?
Which is better suited to mobile devices?
Any other pros and cons?

Which client libraries support the MUC/MUC Light protocols?
MUC is supported by almost every client library, while MUC Light is supported by Smack and XMPPFramework. Also, MUC Light may be configured to use the MUC protocol (with some MUC Light-exclusive features unavailable).
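For a feel of the wire-level difference, here is a minimal sketch using xmpp.js (my choice for illustration; any library that can send raw stanzas will do). The JIDs and the muclight subdomain are placeholders, and the MUC Light namespace follows MongooseIM's open extension spec (linked further down in this answer):

```js
const { client, xml } = require("@xmpp/client");
const xmpp = client({ service: "xmpp://example.com:5222", username: "alice", password: "secret" });

(async () => {
  await xmpp.start();

  // Classic MUC (XEP-0045): you "join" a room by sending presence to room@service/nick
  await xmpp.send(xml("presence", { to: "room@muc.example.com/alice" },
    xml("x", { xmlns: "http://jabber.org/protocol/muc" })));

  // MUC Light: no presence-based joining; a room is created with a single IQ,
  // and membership is persistent until you are explicitly removed
  await xmpp.send(xml("iq", { to: "room@muclight.example.com", type: "set", id: "create1" },
    xml("query", { xmlns: "urn:xmpp:muclight:0#create" },
      xml("configuration", {}, xml("roomname", {}, "My room")))));
})();
```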
Is there a way to have a shared history for a group using the MUC/MUC Light protocol?
In MongooseIM, both MUC and MUC Light use the same extension and table for archiving, so in theory their archives should be compatible. However, this is not a requirement, so it is not tested automatically in the project.
Which is better suited to mobile devices?
MUC Light exchanges data much less frequently than MUC, and its packets attempt to carry information as efficiently as possible to reduce round-trips and unnecessary traffic.
Any other pros and cons?
Here are the high-level principles behind MUC Light, which more or less directly indicate the differences:
http://mongooseim.readthedocs.io/en/latest/open-extensions/muc_light/#2-requirements

I explained this once in a talk, and the recording is available here:
https://www.youtube.com/watch?v=4fZ3iQ752Tk

Related

Is there a standard PubSub protocol over WebSocket?

I'm looking for a way to implement basic Publish / Subscribe between applications written in different languages, to exchange events with JSON payloads.
WebSocket seems like the obvious choice for the transport, but you need an (arguably small) layer on top to implement some of the plumbing:
agreeing on messages representing the pub/sub domain ("subscribe to a topic", "publish a message")
agreeing on messages for the infra ("heartbeat", "authentication"); see the sketch just below for the kind of envelope I mean
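To make the question concrete, this is roughly what I have in mind (all the message shapes here are made up by me, which is exactly the problem):

```js
// Browser-side sketch; "type", "topic" and "payload" are my own invented names
const socket = new WebSocket("wss://example.com/pubsub");

socket.onopen = () => {
  socket.send(JSON.stringify({ type: "auth", token: "my-token" }));    // infra plumbing
  socket.send(JSON.stringify({ type: "subscribe", topic: "orders" })); // pub/sub domain
};

socket.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "event" && msg.topic === "orders") {
    console.log("received", msg.payload); // the JSON payload I want to exchange
  }
};
```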
I was expecting to find an obvious standard for this, but there does not seem to be any.
WAMP is often referred to, but in my (short) experience, the server/client library implementations are not great
STOMP is often referred to, but in my (even shorter) experience, it's even worse
Phoenix Channels are nice, but they're restricted to the Phoenix/Elixir world, and not standard (so the messages can change in any Phoenix version without notice)
So, is everyone using MQTT/WS (which requires another broker component, rather than a simple server)? Or gRPC?
Is everyone just re-implementing it from scratch? (It's one of those things that seems easy enough to do oneself, but I guess you just end up with a half-baked, poorly-specified, broken version of the thing I'm looking for...)
Or is there something fundamentally broken with the idea of serving streams of data from a server over WS?
There are two primary classes of WebSocket libraries: those that implement the protocol and leave the rest to the developer, and those that build on top of the protocol with various additional features commonly required by realtime messaging applications, such as restoring lost connections, pub/sub and channels, authentication, authorization, etc.
The latter variety often requires that their own libraries be used on the client-side, rather than just using the raw WebSocket API provided by the browser. As such, it becomes crucial to make sure you’re happy with how they work and what they’re offering. You may find yourself locked into your chosen solution’s way of doing things once it has been integrated into your architecture, and any issues with reliability, performance, and extensibility may come back to bite you.
ws, faye-websocket, socket.io, µWebSockets and SocketCluster are some good open-source options.
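As an example of the first class, a bare-bones pub/sub server with `ws` might look like this sketch (the message shape and topic bookkeeping are entirely up to you, which is the point):

```js
// Minimal pub/sub over the raw `ws` library; no auth, no reconnects, no acks
const { WebSocketServer } = require("ws");

const wss = new WebSocketServer({ port: 8080 });
const topics = new Map(); // topic name -> Set of subscribed sockets

wss.on("connection", (ws) => {
  ws.on("message", (data) => {
    const msg = JSON.parse(data);
    if (msg.type === "subscribe") {
      if (!topics.has(msg.topic)) topics.set(msg.topic, new Set());
      topics.get(msg.topic).add(ws);
    } else if (msg.type === "publish") {
      for (const sub of topics.get(msg.topic) ?? []) {
        sub.send(JSON.stringify({ type: "event", topic: msg.topic, payload: msg.payload }));
      }
    }
  });
  ws.on("close", () => {
    for (const subs of topics.values()) subs.delete(ws); // drop dead subscribers
  });
});
```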
The number of concurrent connections a server can handle is rarely the bottleneck when it comes to server load. Most decent WebSocket servers can support thousands of concurrent connections, but what’s the workload required to process and respond to messages once the WebSocket server process has handled receipt of the actual data?
Typically there will be all kinds of potential concerns, such as reading and writing to and from a database, integration with a game server, allocation and management of resources for each client, and so forth.
As soon as one machine is unable to cope with the workload, you'll need to start adding additional servers, which means you'll now need to think about load-balancing, synchronization of messages among clients connected to different servers, generalized access to client state irrespective of connection lifespan or which server the client is connected to; the list goes on and on.
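One common pattern for that cross-server message synchronization is to back every WebSocket server with a shared broker; here is a hedged sketch using Redis pub/sub via ioredis (the broker choice is my assumption, any broker works):

```js
// Each server instance rebroadcasts what it hears on the shared Redis channel
const Redis = require("ioredis");
const { WebSocketServer } = require("ws");

const pub = new Redis();
const sub = new Redis(); // a connection in subscribe mode must be dedicated

const wss = new WebSocketServer({ port: 8080 });
const localClients = new Set();

wss.on("connection", (ws) => {
  localClients.add(ws);
  ws.on("close", () => localClients.delete(ws));
  // publish to the shared channel so clients on *other* servers see it too
  ws.on("message", (data) => pub.publish("chat", data.toString()));
});

sub.subscribe("chat");
sub.on("message", (_channel, message) => {
  for (const ws of localClients) ws.send(message); // fan out to local clients
});
```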
There’s a lot involved when implementing support for the WebSocket protocol, not just in terms of client and server implementation details, but also with respect to support for other transports to ensure robust support for different client environments, as well as broader concerns, such as authentication and authorization, guaranteed message delivery, reliable message ordering, historical message retention, and so forth. A data stream network such as Ably Realtime would be a good option to use in such cases if you'd rather avoid re-inventing the wheel.
There's a nice piece on WebSockets, Pub/Sub, and all issues related to scaling that I'd recommend reading.
Full disclosure: I'm a Developer Advocate for Ably but I hope this genuinely answers your question.

Video and audio stream - server to clients only

Is there a way to stream video and audio on a website just to the clients, using a camera installed on the server - for instance, like YouTube does?
I've started reading about WebRTC, but if I use WebRTC I would have to create a STUN/TURN server and other things, which for a one-way stream I think is unnecessary (this is just my understanding of things...), because I don't need anything from the clients, literally: neither their video nor their audio.
So is there a way to achieve this using HTML5, streaming in just one direction:
server (camera) -> clients
Is there something about this out there, or should I stick with WebRTC?
I'm going to explain a possible solution for this scenario; there might be others, but I hope mine gives you a rough idea of how you could do it and a starting point to explore more of the amazing possibilities of WebRTC. Please let me know if something is unclear.
So, WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. Sweet. That said, WebRTC has quite good browser support (not in every browser though; Safari only started supporting it a month ago, with Safari 11). But in this case we want to use WebRTC on the server side. At the end of the day we can still think of it as peer-to-peer real-time communication, where one of our peers is the server.
I don't know if you are familiar with Node.js, but I recommend writing your server app with it (<3 JavaScript!):
There are a few libraries that wrap WebRTC functionality for use on the server side, like node-webrtc and node-rtc-peer-connection. But I recommend taking a look at electron-webrtc, since the others might be using deprecated methods or be incomplete. electron-webrtc runs a headless Electron client in the background to use Chromium's built-in WebRTC implementation. With it you should be able to access the camera on your server and create a stream to be served to the other peer (the browser).
All of the above covers the WebRTC-related tasks, in this case: streaming video peer (server) to peer (browser).
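As a hedged sketch of what the server-side peer could look like with node-webrtc (`wrtc`); how the offer and answer travel between browser and server is the signaling part discussed next:

```js
const { RTCPeerConnection } = require("wrtc");

// Called when a browser's SDP offer arrives over your signaling channel
async function handleOffer(offerSdp, sendToBrowser) {
  const pc = new RTCPeerConnection();
  // Attaching the server's camera track goes here; the exact mechanism
  // depends on the library (node-webrtc's nonstandard media sources,
  // or Chromium's getUserMedia when using electron-webrtc)
  await pc.setRemoteDescription({ type: "offer", sdp: offerSdp });
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  sendToBrowser({ type: "answer", sdp: pc.localDescription.sdp });
  return pc;
}
```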
Now, let's talk about the signaling process, STUN and TURN.
Signaling: imagine a peer-to-peer scenario with 2 browsers; they want to establish a direct connection and stream video and audio between each other. But they don't know each other, just as, if I don't know your home address, I can't send you a letter. So they need a service that helps them get to know each other, so that each can have the other's IP. This should be done by what is called "a signaling server". If you somehow know the other peer's IP, you wouldn't need a signaling server.
STUN/TURN: the scheme above works perfectly in a local area network where each peer has its own IP address and there are no firewalls or routers between them. Otherwise, you can have peers behind NAT or firewalls, and then your signaling server won't be able to make both peers discover each other. If you have peers behind NAT, you'll need a STUN server, and if you have peers behind firewalls you'll need a TURN server. This is a bit simplified, but I just want you to have the general picture of when you might need STUN/TURN servers.
To better understand Signaling, STUN and TURN, there is a very graphic article that explains them perfectly.
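For reference, when you do end up needing them, STUN/TURN servers are just configuration on the peer connection (the URLs and credentials below are placeholders):

```js
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.example.com:3478" }, // helps discover your public IP behind NAT
    { urls: "turn:turn.example.com:3478", username: "user", credential: "pass" } // relay of last resort
  ]
});
```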
Now, for your scenario:
I think you probably don't need STUN/TURN servers, and you probably don't need to implement the signaling process either, because the browsers that are supposed to receive the stream from the server will know that server's address, right? So they can establish a WebRTC connection with it.
EDIT: it is likely that you will need to implement some sort of handshake between the server and the clients (browsers), so this will be the signaling process. This is not part of WebRTC, and this is why you need to implement it yourself. As I said, it is the way two peers discover each other, but they also exchange information about their local media conditions, like the codecs and resolutions they can handle, etc. For your case, your signaling server could be hosted on the same server you use to stream: you can build a small Node.js app that runs there and manages the whole signaling process easily; it is not a big deal. I recommend you read this article, and especially the section "How can I build a signaling service?". In general, all the WebRTC articles on that site are very helpful.
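A signaling server really can be tiny; here is a hedged sketch of a relay with `ws` that just forwards SDP offers/answers and ICE candidates between the connected peers without understanding them (rooms, auth, etc. omitted):

```js
const { WebSocketServer } = require("ws");

const wss = new WebSocketServer({ port: 9090 });
const peers = new Set();

wss.on("connection", (ws) => {
  peers.add(ws);
  // Whatever one peer sends (offer, answer, ICE candidate) goes to the others
  ws.on("message", (data) => {
    for (const other of peers) {
      if (other !== ws && other.readyState === 1 /* OPEN */) other.send(data.toString());
    }
  });
  ws.on("close", () => peers.delete(ws));
});
```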
Does this make sense to you? I think with this you can start digging a little bit more and see whether it is enough or you need to implement more stuff. Hope it helps!

Info about which GE to use for an IoT mobility project

I would like to develop an IoT mobility project using FIWARE. My intention is to deploy a lot of sensors on taxis/buses in the city to monitor air quality.
I want to use the IDAS GEi, but I have some questions:
I must use a Linino board as the gateway for my sensors. How can I send observations or receive commands from the Linino to IDAS and vice versa? I have found this tool on the web: Figway. I have read that Figway is used as a communication gateway between a Raspberry Pi and IDAS. So I have thought about adapting Figway for the Linino. Is this the correct way to reach my goal? Are there better ways to do it?
Furthermore, I should provide discovery mechanisms and a transparent interface to control the sensors. For example, I should give the user the possibility to find the sensors that provide a given measurement in a certain place. I would like to use SWE for that. Is IDAS SWE-compliant? I have read in the documentation that IDAS uses the SWE data model, SensorML and O&M, but I have not found anything about the SOS/SAS/SPS/WNS services.
Does IDAS have discovery mechanisms, or must I use another GE for that (Configuration Manager?)
Figway is just a Python example of how you can make the queries to the Ultralight 2.0 IoT Agent.
You may port Figway to your new platform if it supports Python or, alternatively, you can inspect the HTTP POST requests and code them on any other platform/language.
It is really easy; have a look at: http://www.slideshare.net/FI-WARE/fiware-iotidasintroul20v2
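For a rough idea of what Figway does under the hood, this is the shape of an Ultralight 2.0 measurement POST (host, port, API key and device id below are placeholders; check the slides and your IDAS deployment for the real values):

```js
// Node.js sketch (Node 18+ for the global fetch); "t" and "h" are example
// device attributes for temperature and humidity in UL2.0 syntax
const body = "t|23.5|h|41";

fetch("http://idas.example.com:7896/iot/d?k=MY_API_KEY&i=taxi_sensor_01", {
  method: "POST",
  headers: { "Content-Type": "text/plain" },
  body,
}).then((res) => console.log("IoT Agent replied:", res.status));
```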
Additionally, do not forget that Ultralight 2.0/HTTP is just one of the technology options we support for IoT. If your devices are to use another standard such as MQTT/TCP or LWM2M/CoAP/UDP, you can check the other IoT Agents (which connect to the same Orion Context Broker as well):
UL2.0 and MQTT are here: https://github.com/telefonicaid/fiware-IoTAgent-Cplusplus
LWM2M is here: https://github.com/telefonicaid/lightweightm2m-iotagent
Also, if you want to use any other standard (or even your own proprietary protocol) you may build your own IoT Agent using the skeleton provided here:
https://github.com/telefonicaid/iotagent-node-lib
Thanks for using IDAS!
Cheers,

Technology behind HTML-based chat systems like Facebook and Google Talk

Can anyone give me a brief rundown of how Facebook chat and Google Talk work? Are there persistent connections involved, similar to the classic Java-based chat systems whereby a server manages the connections and directs messages to the necessary destination, or are they stateless? I'd like to create something similar to these, but I'm not sure where to start, and if I need custom services running on a server I may have to rethink my approach.
I'm not after a full-blown explanation, but I am interested to know if there's a stateless approach that doesn't require services running on a server. If HTML5 is required, that's OK.
Both use the Jabber protocol: http://www.jabber.org/
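In the browser, XMPP is carried over a long-lived transport (historically BOSH long-polling, nowadays usually WebSockets), so there is a persistent connection with a server routing the messages. Here is a hedged sketch with xmpp.js (the library choice and addresses are my own, any XMPP library works):

```js
const { client, xml } = require("@xmpp/client");

const xmpp = client({
  service: "wss://chat.example.com/xmpp-websocket", // XMPP over WebSocket
  username: "alice",
  password: "secret",
});

// The server pushes stanzas to us over the persistent connection
xmpp.on("stanza", (stanza) => {
  if (stanza.is("message")) {
    console.log(stanza.attrs.from, "says:", stanza.getChildText("body"));
  }
});

xmpp.start().then(() => {
  xmpp.send(xml("message", { to: "bob@chat.example.com", type: "chat" },
    xml("body", {}, "hello")));
});
```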

When using WebRTC, is a peer-to-peer architecture redundant to build a video chat service like Skype?

We're playing around with WebRTC and trying to understand its benefits.
One reason Skype can serve hundreds of millions of people is because of its decentralized, peer-to-peer architecture, which keeps server costs down.
Does WebRTC allow people to build a video chat application similar to Skype in that the architecture can be decentralized (i.e., video streams are not routed from a broadcaster through a central server to listeners but rather routed directly from broadcaster to listener)?
Or, put another way, does WebRTC allow someone to essentially replicate the benefits of a P2P architecture similar to Skype's?
Or do you still need something similar to Skype's P2P architecture?
Yes, that's basically what WebRTC does. Calls using the RTCPeerConnection API don't send voice/video data through a centralized server, but rather use firewall traversal protocols like ICE, STUN and TURN to allow a direct, peer-to-peer connection. However, the initial call setup still requires a server (most likely something running a WebSocket implementation, but it could be anything that you can figure out how to get JavaScript to talk to), so that the two clients can figure out that they're both online, signal that they want to connect, and then figure out how to do it (this is where the ICE/STUN/TURN bit comes in).
However, there's more to Skype's P2P architecture than just passing voice/video data back and forth. The majority of Skype's IP isn't in the codecs or protocols (much of which they licensed from Global IP Solutions, which Google purchased two years ago and then open-sourced, and which forms the basis of Chrome's WebRTC implementation). Skype's real IP is all in the piece of WebRTC which still depends on a server: figuring out which people are online, where they are, how to get hold of them, and doing that in a massively decentralized fashion. (See here for some rough details.) I think you could probably use the DataChannel portion of the RTCPeerConnection API to do that sort of thing, if you were really, really smart, but it would be complicated, and would most likely stomp on a few Skype patents. Unless you want to be really, really huge, you'd probably just want to run your own centralized presence and location servers and handle all that stuff through standard WebSockets.
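For completeness, the data-channel piece referred to above looks like this on the browser side (a sketch; the signaling channel still has to carry the offer/answer):

```js
const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.com:3478" }] });

// Once negotiated, this channel is direct peer-to-peer, with no server in the path
const channel = pc.createDataChannel("presence");
channel.onopen = () => channel.send(JSON.stringify({ type: "hello", user: "alice" }));
channel.onmessage = (event) => console.log("peer says:", JSON.parse(event.data));

// createOffer/setLocalDescription come next, then ship pc.localDescription to
// the other peer over your (centralized) signaling channel, e.g. a WebSocket
```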
I should note that Skype's network architecture has changed since it was created; it no longer (from what I hear) uses random users as supernodes to relay data from client 1 to client 2; it didn't scale well and caused rampant variability in results (and annoyed people who had non-firewalled connections and bandwidth).
You definitely can build something Skype-like with WebRTC - and more. :-)
