I'm considering RabbitMQ's usefulness for creating a multi-user chat system. People would be able to chat in various rooms, some public and some private, and privately person-to-person. Would it be possible to implement the functionality of private, invite-only rooms? For person-to-person, I might be able to use random strings for the queue/exchange names, but that wouldn't work for private rooms, where the capability needs to be revocable.
Is the functionality of rabbitmqctl available to (selected) clients, and how scalable are the ACLs? Can an ACL reference the username, for a rule matching "<user>.*"?
I think I have the start of a workable solution to this. I'll create a public exchange to which any user can send a room join request. The 'server' software (actually just another RabbitMQ client) consumes from a queue bound to this exchange, and if the user is allowed to join, it binds the room's outgoing-message fanout exchange to the user's queue. Users will get an ACL including something like ^public/.*, so they would only be able to publish to the public exchange.
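To make the idea concrete, here is a minimal sketch of what that 'server' client could look like with Pika. This is a sketch under assumptions: the public/join-requests exchange, the user_queue and room fields, and the user_is_allowed check are all hypothetical names, not part of RabbitMQ.

    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Public exchange that any user may publish join requests to.
    channel.exchange_declare(exchange="public/join-requests", exchange_type="fanout")
    channel.queue_declare(queue="join-requests")
    channel.queue_bind(queue="join-requests", exchange="public/join-requests")

    def user_is_allowed(request):
        # Placeholder for your own authorization logic (invite lists, etc.).
        return True

    def on_join_request(ch, method, properties, body):
        request = json.loads(body)
        user_queue = request["user_queue"]   # e.g. "user-queue-alice" (hypothetical)
        room_exchange = request["room"]      # the room's fanout exchange
        if user_is_allowed(request):
            # Grant the capability: room messages now flow into the user's queue.
            ch.queue_bind(queue=user_queue, exchange=room_exchange)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    # Revoking access is the mirror image:
    #     channel.queue_unbind(queue=user_queue, exchange=room_exchange)

    channel.basic_consume(queue="join-requests", on_message_callback=on_join_request)
    channel.start_consuming()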
You can configure ACLs in RabbitMQ at the user level, scoped to individual resources (queues or exchanges, matched by regex), but I don't believe this functionality is exposed through most clients.
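For example, permissions are granted per user as three regex patterns (configure, write, read) over resource names. A hedged sketch, where the vhost, user and patterns are hypothetical:

    # configure / write / read patterns, in that order
    rabbitmqctl set_permissions -p /chat alice "^alice-.*" "^(public/.*|alice-.*)" "^.*"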
If you are looking to build a chat client, you would be much better off using ejabberd (http://www.ejabberd.im/), which is built for exactly this kind of scenario:
Multi-User Chat with ejabberd
We want to use ejabberd in the context of a web application with fairly unique business rules. We'd therefore need to have every chat message (not protocol messages, but messages a user sends to another user) go through our web application first, and then have the web application deliver the message to ejabberd on behalf of the user (if our business rules allow the message to be sent).
The web application is also the one providing the contact lists (called rosters in ejabberd, if I understand correctly). We need to be, and remain, the single source of truth to ease maintenance.
To us, ejabberd's added value would be delivering chat messages in near real time to clients, and enabling cool things such as presence indicators. Web clients will maintain a direct connection to ejabberd through websockets, but this connection will have to be read-only as far as chat messages are concerned, and read-write as far as presence messages are concerned.
The situation is similar for audio and video calls. While this time the call per se will be managed directly by ejabberd, to take advantage of built-in STUN, TURN, etc., and will not need to go through our web app, we have custom business logic to manage who is able to call whom, when, how often, etc. (in other words, we have custom business logic to authorize the call or not, and we'd like to keep all the business logic centralized in the web app).
My question is: what are the proper hooks we'd need to look into to achieve what we are after? I spent an hour or so in the documentation, but I couldn't find what I'm after, so hopefully someone can provide pointers. In an ideal world, we'd like to expose API endpoints from our web app that ejabberd hooks can hit. However, the first question is: which relevant hooks does ejabberd offer, and where are these hooks documented?
Any help would be greatly appreciated, thank you!
When a client sends a packet to ejabberd, it triggers the user_send_packet hook, providing the packet and the state of the client's session process. Several modules use that hook, for example mod_service_log.
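As a sketch of the web-app side only (the Erlang hook module that would forward packets over HTTP is not shown, and the endpoint URL, payload shape, and use of Flask are all assumptions):

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/chat/authorize", methods=["POST"])
    def authorize_message():
        # Hypothetical payload a user_send_packet hook module could POST here.
        packet = request.get_json()
        sender = packet.get("from")
        recipient = packet.get("to")
        # Apply your business rules here (rosters, rate limits, ...).
        allowed = recipient in contacts_of(sender)
        return jsonify({"allow": allowed})

    def contacts_of(user):
        # Placeholder: look the roster up in your own database.
        return set()

    if __name__ == "__main__":
        app.run(port=8080)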
We wish to decouple the systems of 2 separate organizations (as an example: one could be a set of in-house applications and the other a set of 3rd-party applications). Although we could do this using REST-based APIs, we wish to achieve things like temporal decoupling, scalability, reliable and durable communication, workload decoupling (through fan-out), etc. It is for these reasons that we wish to use a message bus.
Now one could use Amazon's SNS and SQS as the message bus infrastructure, where our org would have an SNS instance which would publish to the 3rd party SQS instance. Similarly, for messages the 3rd party wished to send to us, they would post to their SNS instance, which would then publish to our SQS instance. See: Cross Account Integration with Amazon SNS
I was thinking about how one would achieve this sort of cross-organization integration using Azure Service Bus (ASB), as we are heavily invested in Azure. But ASB doesn't have the ability to publish from one instance to another instance belonging to a different organization (or even to another instance in the same organization, not yet at least). Given this limitation, the plan is that we would give the 3rd-party vendor one set of connection strings that would allow them to listen for and process messages that we posted, and a separate set of connection strings that would let them post messages to a topic which we could then subscribe to and process.
My question is: Is this a good idea? Or would this be considered an anti-pattern? My biggest concern is the fact that, while the point of using a message bus was to achieve decoupling, the infrastructure piece of ASB is making us tightly coupled, to the point that the 2 organizations need to agree not just on the endpoints, but also on how the queue/topic was set up (session or no session, duplicate detection, etc.), and the consumer is tightly coupled to how the sender sends messages (what was used as the session ID, message ID, etc.).
Is this a valid concern?
Have you done this?
What other issues might I run into?
Using Azure Service Bus connection strings with different Shared Access Policies for senders and receivers (Send and Listen) is exactly how the feature is intended to be used: senders and receivers get limited permissions, just like you intend.
My biggest concern is the fact that, while the point of using a message bus was to achieve decoupling, the infrastructure piece of ASB is making us tightly coupled, to the point that the 2 organizations need to agree not just on the endpoints, but also on how the queue/topic was set up (session or no session, duplicate detection, etc.), and the consumer is tightly coupled to how the sender sends messages (what was used as the session ID, message ID, etc.).
The coupling always exists. You're coupled to the language you're using, to the datastore technology used to persist your data, to the cloud vendor you're using. This is not the type of coupling I'd be worried about, unless you're planning to change those on a monthly basis.
Now, more specific to the communication patterns: sessions would be a business requirement, not a coupling. If you require ordered message delivery, what else would you do? On Amazon you'd also be "coupling" to a FIFO queue to achieve ordering. Message ID is by no means coupling either; it's an attribute on a message, and if the receiver chooses to ignore it, they can. Yes, you're coupling to the BrokeredMessage/Message envelope and its serialization, but how else would you send and receive messages? This is more of a contract for the parties to agree upon.
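To illustrate the Send/Listen split mentioned above, a minimal sketch with the azure-servicebus Python SDK (the topic and subscription names are hypothetical; the 3rd party would hold only the Listen connection string for your topic, and vice versa):

    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    SEND_ONLY_CONN_STR = "<connection string from a Send-only Shared Access Policy>"
    LISTEN_ONLY_CONN_STR = "<connection string from a Listen-only Shared Access Policy>"

    # Your org publishes with Send-only rights.
    with ServiceBusClient.from_connection_string(SEND_ONLY_CONN_STR) as client:
        with client.get_topic_sender(topic_name="outbound-events") as sender:
            sender.send_messages(ServiceBusMessage('{"order": 42}'))

    # The 3rd party consumes with Listen-only rights.
    with ServiceBusClient.from_connection_string(LISTEN_ONLY_CONN_STR) as client:
        with client.get_subscription_receiver(
            topic_name="outbound-events", subscription_name="vendor"
        ) as receiver:
            for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
                print(str(msg))
                receiver.complete_message(msg)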
One name for the pattern of connecting service buses between organizations is "Shovel" (that's what they are called in RabbitMQ):
Sometimes it is necessary to reliably and continually move messages from a source (e.g. a queue) in one broker to a destination in another broker (e.g. an exchange). The Shovel plugin allows you to configure a number of shovels, which do just that and start automatically when the broker starts.
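In RabbitMQ, a dynamic shovel can be declared as a runtime parameter. A hedged example, where the broker URIs and queue names are hypothetical:

    rabbitmqctl set_parameter shovel my-shovel \
      '{"src-protocol": "amqp091", "src-uri": "amqp://source-broker", "src-queue": "outbound",
        "dest-protocol": "amqp091", "dest-uri": "amqp://partner-broker", "dest-queue": "inbound"}'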
In the case of Azure, one way to achieve "shovels" is by using logic apps, as they provide the ability to connect to ASB entities in different namespaces.
See:
What are Logic Apps
Service Bus Connectors
Video: Use Azure Enterprise Integration Services to run cloud apps at scale
I've been looking into sockjs-tornado recently and am working on a chat function for a social networking site. I'm trying to get a feel for common methods used in building scalable multiroom chat functionality. I'll outline a couple of the methods I've thought of and I'd like to get feedback. What methods are used in the real world? What are the advantages and disadvantages to these methods?
Prereqs:
running tornado
using sockjs-tornado lib
sockjs-client lib for js
Everything else is open.
Methods I've considered:
For loop
This seems like the simplest way to go. You create a user class that subscribes to certain room classes. The user sends a message class that contains a room id, and the server relays the message, in a loop, only to users that have subscribed to that room. This seems to me to be by far the worst option, because the cost of every message is linear in the room's subscriber count. (Imagine 500 users connected at once, each subscribed to 5 chat rooms.)
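A sketch of that naive approach with sockjs-tornado (the message shape and the rooms dict are hypothetical):

    import json
    from sockjs.tornado import SockJSConnection

    rooms = {}  # room_id -> set of open connections

    class ChatConnection(SockJSConnection):
        def on_open(self, info):
            self.joined = set()

        def on_message(self, raw):
            msg = json.loads(raw)
            if msg.get("action") == "join":
                rooms.setdefault(msg["room_id"], set()).add(self)
                self.joined.add(msg["room_id"])
            else:
                # The for loop in question: cost is linear in the room's size.
                for conn in rooms.get(msg["room_id"], set()):
                    conn.send(msg["body"])

        def on_close(self):
            for room_id in self.joined:
                rooms[room_id].discard(self)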
Multi-tasking/multiple server instances
This also seems like a bad idea, because you could have 500 server instances running at any time on... different ports? I'm really not sure about the implementation of this method.
Native support
Now granted, a lot of libraries have this built in, such as socket.io. However, that's not an option here because it only supports node.js. (I'm on a Tornado server.) SockJS in particular does not have built-in support for multiple "rooms".
Conclusion
I'm looking for resources/case studies, and industry standards. Any help would be appreciated.
I would just use a message queue server like RabbitMQ with a fanout exchange as each "chat room".
You can see an example of using a fanout exchange in Python here.
The Pika AMQP library works with Tornado, too.
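A minimal sketch of the fanout idea with Pika (the room name and message are hypothetical; in practice you would use Pika's Tornado-friendly connection rather than the blocking one):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # One fanout exchange per chat room; every queue bound to it gets a copy.
    channel.exchange_declare(exchange="room.lobby", exchange_type="fanout")

    # Each connected user, on whichever Tornado process, gets an exclusive
    # queue bound to the rooms they are in.
    result = channel.queue_declare(queue="", exclusive=True)
    channel.queue_bind(exchange="room.lobby", queue=result.method.queue)

    # Publishing to the room; fanout exchanges ignore the routing key.
    channel.basic_publish(exchange="room.lobby", routing_key="", body="hello, lobby")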
The advantage with using a message queueing system is that you can have users connected to different Tornado processes on different servers while still being in the same "room", giving you high availability on the HTTP layer.
RabbitMQ also has HA capabilities (although not the greatest).
I'm looking at building some mobile applications. These apps will 'talk' to my server via JSON over REST (e.g. PUT, POST, etc.).
If I want to make sure a client phone app is trying to do something that requires some 'permission', how do people handle this?
For example:
Our website sells things -> TVs, cars, dresses, etc. The API will allow people to browse the shop and purchase items. To buy, you need to be 'logged in'. I need to make sure that the person who is using their mobile phone really is them.
How can this be done?
I've had a look at how Twitter does it with their OAuth, and it looks like they have a number of values in a request header. If so (and I sort of like this approach), is it possible for me to use another 3rd party as the website that stores the username / password (e.g. Twitter or Facebook as the OAuth providers), and all I do is somehow retrieve the custom header data and make sure it exists in my DB, else get them to authenticate with their OAuth provider?
Or is there another way?
PS. I really don't like the idea of having an API key; I feel that it can be too easily handed to another person to use (a risk we can't take).
Our website sells things -> TVs, cars, dresses, etc. The API will allow people to browse the shop and purchase items. To buy, you need to be 'logged in'. I need to make sure that the person who is using their mobile phone really is them.
If this really is a requirement then you need to store user identities in your system. The most popular form of identity tracking is via username and password.
I've had a look at how Twitter does it with their OAuth, and it looks like they have a number of values in a request header. If so (and I sort of like this approach), is it possible for me to use another 3rd party as the website that stores the username / password (e.g. Twitter or Facebook as the OAuth providers), and all I do is somehow retrieve the custom header data and make sure it exists in my DB, else get them to authenticate with their OAuth provider?
You are confusing two differing technologies here, OpenID and OAuth (don't feel bad, many people get tripped up on this). OpenID allows you to defer identity tracking and authentication to a provider, and then accept these identities in your application as the acceptor, or relying party. OAuth, on the other hand, allows an application (consumer) to access user data that belongs to another application or system, without compromising that other application's core security. You would stand up OAuth if you wanted third-party developers to access your API on behalf of your users (which is not something you have stated you want to do).
For your stated requirements you can definitely take a look at integrating OpenID into your application. There are many libraries available for integration, but since you asked for an agnostic answer I will not list any of them.
Or is there another way?
Of course. You can store user IDs in your system and use basic or digest authentication to secure your API. Basic authentication requires only one (easily computed) additional header on your requests:
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
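That value is just the base64 encoding of username:password. For example, in Python:

    import base64

    # The canonical example credentials "Aladdin" / "open sesame":
    token = base64.b64encode(b"Aladdin:open sesame").decode("ascii")
    print("Authorization: Basic " + token)
    # -> Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==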
If you use either basic or digest authentication, then make sure that your API endpoints are protected with SSL, as otherwise user credentials can easily be sniffed over the air. You could also forgo user identification and instead effectively authenticate the user at checkout via credit card information, but that's a judgement call.
As RESTful services use HTTP calls, you could rely on HTTP Basic Authentication for security purposes. It's simple, direct, and already supported by the protocol; and if you want additional security in transport, you can use SSL. Well-established products like IBM WebSphere Process Server use this approach.
The other way is to build your own security framework according to your application's needs. For example, if you want your service to be consumed only by certain devices, you may need to send an encoded token as a header over the wire to verify that the request comes from an authorized source. Amazon has an interesting way to do this; you can check it here.
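A rough sketch of that token idea (the header name and signing scheme here are assumptions, loosely modeled on Amazon's request signing):

    import hashlib
    import hmac

    SECRET = b"per-device-secret"  # provisioned to the device out of band

    def sign_request(method, path, body):
        # Sign the parts of the request the server will verify.
        message = ("%s\n%s\n%s" % (method, path, body)).encode("utf-8")
        return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

    # The client sends e.g.  X-App-Signature: sign_request("POST", "/buy", body)
    # The server recomputes the signature with its copy of the secret and
    # compares using hmac.compare_digest() to decide if the source is authorized.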
We need to write a .Net (C#) application that monitors all mail activity through a POP, SMTP and Exchange Server (2007 and later) and essentially grab the mail for archiving into a document management system. I realise that the way to monitor each type of server would probably be different so I'd like to know what the best (most elegant and reliable) way is to achieve this.
Thanks.
Many countries have rather narrow regulations for what such a system must do, and what it must not do, in order to be in compliance with the law. If you are developing a product for a company in SA that wants to sell it internationally, I would suggest that you need a more targeted approach.
Depending on the legal framework, your solution will have to intercept and archive all emails, or just a subset.
For instance, some countries do not allow the company to store private emails of employees, in which case the archival process needs to be configurable with rules that the employee can control.
If the intent is to archive each and every email, then the network-level approach that Jimmy Chandra suggested is better, because it is easier to deploy.
I don't think you need to worry about POP, right? It is not used for sending mail (unless you need to monitor access to emails too).
Regarding Exchange, versions 2000 onwards have journaling support (I don't know about previous ones), so a mail is copied to a mailbox as it is sent/received (there are several different options depending on the Exchange version; check it out). Then you can read that mailbox, or set a rule to forward its contents to an external SMTP server that your app listens to.
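For illustration, a hedged sketch of reading such a journaling mailbox over IMAP with Python's standard imaplib (host, credentials and mailbox name are assumptions; your .NET application would do the equivalent with its own mail libraries):

    import imaplib

    def archive(raw_email):
        # Placeholder: hand the message off to the document management system.
        pass

    # The journaling mailbox collects a copy of every sent/received mail.
    imap = imaplib.IMAP4_SSL("mail.example.com")
    imap.login("journal", "secret")
    imap.select("INBOX")

    typ, data = imap.search(None, "ALL")
    for num in data[0].split():
        typ, msg_data = imap.fetch(num, "(RFC822)")
        archive(msg_data[0][1])                 # full RFC822 message bytes
        imap.store(num, "+FLAGS", "\\Deleted")  # mark as processed

    imap.expunge()
    imap.logout()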
For other SMTP servers, a similar approach should be possible via forwarding rules, etc., and some might have custom support like Exchange does.