Which hook to limit the number of messages a user can send per day? - ejabberd

We want to use ejabberd in the context of a web application that has fairly unique business rules. We'd therefore need every chat message (not protocol messages, but messages a user sends to another user) to go through our web application first, and then have the web application deliver the message to ejabberd on behalf of the user (if our business rules allow the message to be sent).
The web application is also the one providing the contact lists (called rosters in ejabberd, if I understand correctly). We need to be, and remain, the single source of truth to ease maintenance.
To us, ejabberd's added value would be delivering chat messages in near real time to clients and enabling cool things such as presence indicators. Web clients will maintain a direct connection to ejabberd through WebSocket, but this connection will have to be read-only as far as chat messages are concerned, and read-write as far as presence messages are concerned.
The situation is similar with regard to audio and video calls. While this time the call per se will be managed directly by ejabberd to take advantage of built-in STUN, TURN, etc., and will not need to go through our web app, we have custom business logic to manage who is able to call whom, when, how often, etc. (in other words, we have custom business logic to authorize the call or not, and we'd like to keep all the business logic centralized in the web app).
My question is: what are the proper hooks we'd need to look into to achieve what we are after? I spent an hour or so in the documentation, but I couldn't find what I am after, so hopefully someone can provide me with pointers. In an ideal world, we'd like to expose API endpoints from our web app that ejabberd hooks can hit. However, the first question is: which relevant hooks does ejabberd offer, and where are these hooks documented?
Any help would be greatly appreciated, thank you!

When a client sends a packet to ejabberd, it triggers the user_send_packet hook, providing the packet and the state of the client's session process. Several modules use that hook, for example mod_service_log.
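A custom module attached to user_send_packet could call out to your web app to apply your business rules before a message is routed. Purely as a hedged sketch of the web-app side of that check (Flask and the /authorize_message route are assumptions of ours, not something ejabberd defines):

# A minimal sketch of the web-app endpoint that a custom ejabberd module hooked
# on user_send_packet could call to ask whether a message may be sent.
# Flask and the /authorize_message route are assumptions, not ejabberd APIs.
from flask import Flask, jsonify, request

app = Flask(__name__)

def business_rules_allow(sender: str, recipient: str, body: str) -> bool:
    """Placeholder for your custom business logic (quotas, roster checks, ...)."""
    return True

@app.route("/authorize_message", methods=["POST"])
def authorize_message():
    packet = request.get_json()  # e.g. {"from": "...", "to": "...", "body": "..."}
    allowed = business_rules_allow(packet["from"], packet["to"], packet["body"])
    # The ejabberd module would drop the packet when "allowed" is false.
    return jsonify({"allowed": allowed})

if __name__ == "__main__":
    app.run(port=8080)

The ejabberd side would then be a small custom Erlang module registered on user_send_packet (or filter_packet) that performs this HTTP check and drops or routes the packet accordingly.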

Related

How to keep backend session information in Polymer SPA

I'd like to log in to a RESTful back-end server written in Laravel 5, with the single-page front-end application leveraging Polymer's custom elements.
In this system, the persistence (CRUD) layer lives on the server, so authentication should be done by the server in response to the client's API request. When a request is valid, the server returns a User object in JSON format, including the user's role for access control on the client.
My question is: how can I keep the session, even when a user refreshes the front-end page? Thanks.
This is an issue beyond Polymer, or even just single page apps. The question is how you keep session information in a browser. With SPAs it is a bit easier, since you can keep authentication tokens in memory, but traditional Web apps have had this issue since the beginning.
You have two things you need to do:
Tokens: You need a user token that indicates that this user is authenticated. You want it to be something that cannot be guessed, or else someone can spoof it. So the token had better not be "jimsmith" but something more reliable. You have two choices. Either you can have a randomly generated token which the server stores, so that when it is presented on future requests the server can validate it; this is how most session managers work in app servers, like Node.js sessions or Jetty sessions. The alternative is to do something cryptographic, so that the server only needs to validate the token mathematically rather than check a store to see if it is valid. I did that for Node in http://github.com/deitch/cansecurity but there are various options for it. (A rough sketch of both token styles appears after the storage options below.)
Storage: You need some way to store the tokens client-side that does not depend on JS memory, since you expect to reload the page.
There are several ways to do client-side storage. The most common by far is cookies. Since the browser stores them without your trying too hard, and presents them whenever you access the domain that the cookie is registered for, it is pretty easy to do. Many client-side and server-side auth libraries are built around them.
An alternative is html5 local storage. Depending on your target browsers and support, you can consider using it.
There also are ways you can play with URL parameters, but then you run the risk of losing it when someone switches pages. It can work, but I tend to avoid that.
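To make the two token styles above concrete, here is a rough Python sketch; the function names and the HMAC-SHA256 scheme are illustrative assumptions, not any particular library's API.

# Rough sketch of the two token approaches described above (Python stdlib only).
# Names and the HMAC-SHA256 scheme are illustrative assumptions.
import hashlib
import hmac
import secrets

SERVER_SECRET = b"replace-with-a-real-secret"
session_store = {}  # token -> username (a database or Redis in practice)

# 1. Random token the server stores and looks up on every request.
def issue_stored_token(username: str) -> str:
    token = secrets.token_urlsafe(32)
    session_store[token] = username
    return token

# 2. Self-validating (signed) token the server can verify mathematically.
def issue_signed_token(username: str) -> str:
    signature = hmac.new(SERVER_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}.{signature}"

def verify_signed_token(token: str) -> bool:
    username, _, signature = token.rpartition(".")
    expected = hmac.new(SERVER_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)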
I have not seen any components that handle cookies directly, but it shouldn't be too hard to build one.
Here is the gist for cookie management code I use for a recent app. Feel free to wrap it to build a Web component for cookie management... as long as you share alike!
https://gist.github.com/deitch/dea1a3a752d54dc0d00a
UPDATE:
component.kitchen has a storage component here http://component.kitchen/components/TylerGarlick/core-resource-storage
The simplest way, if you use PHP, is to keep the user in a PHP session (like a normal non-SPA application).
PHP will store the user info on the server and automatically generate a cookie that the browser will send with every request. With a single server and no load balancing, the session data is local and very fast.

How do I set up Box event notifications (webhooks) at the org level?

The documentation (http://developers.box.com/webhooks/) talks about webhooks in the context of "user's account". I read that as getting notifications only about the objects to which I have access.
Let's say I want to be notified every time there is a new upload anywhere across my organization. Do I need to be an admin to accomplish this, or is the webhooks scope not subject to my user permissions?
While we're continuing to enhance webhooks, there is some administrative functionality built in. For instance, if you can get webhooks installed for all your users and have them point to one endpoint, that endpoint can track all activity in your account.
There is a way to force webhooks on all users in your domain as an administrator. However, this feature hasn't been optimized for companies that use webhooks for internal use. That's still in progress.
If you'd like to be kept in the loop, or try some workarounds with what we have, feel free to contact us at api [at] box [dot] com. With more information, we may find something that works today, based on your exact needs.
At this point, the webhooks are only provided at the user level. If you log on as an admin and setup an application that gets webhooks, you will only get the same set of notifications as you see in the "Updates" tab in the Web UI.
We are looking to expand the webhooks capabilities, and this is one area that we may explore. However, it is not currently scheduled, so I can't provide any idea of even rough dates.

How do people handle authentication for RESTful api's (technology agnostic)

I'm looking at building some mobile applications. These apps will 'talk' to my server via JSON over REST (e.g. PUT, POST, etc.).
If a client phone app is trying to do something that requires some 'permission', how do people handle this?
For example:
Our website sells things -> TVs, cars, dresses, etc. The API will allow people to browse the shop and purchase items. To buy, you need to be 'logged in'. I need to make sure that the person who is using their mobile phone is really them.
How can this be done?
I've had a look at how Twitter does it with their OAuth, and it looks like they have a number of values in a REQUEST HEADER. If so (and I sorta like this approach), is it possible for me to use another 3rd party to store the username/password (e.g. Twitter or Facebook as the OAuth providers)? All I would do is somehow retrieve the custom header data and make sure it exists in my db; otherwise, get them to authenticate with their OAuth provider.
Or is there another way?
PS. I really don't like the idea of having an API key - I feel that it can be too easily handed to another person to use (a risk we can't take).
Our website sells things -> TVs, cars, dresses, etc. The API will allow people to browse the shop and purchase items. To buy, you need to be 'logged in'. I need to make sure that the person who is using their mobile phone is really them.
If this really is a requirement then you need to store user identities in your system. The most popular form of identity tracking is via username and password.
I've had a look at how Twitter does it with their OAuth, and it looks like they have a number of values in a REQUEST HEADER. If so (and I sorta like this approach), is it possible for me to use another 3rd party to store the username/password (e.g. Twitter or Facebook as the OAuth providers)? All I would do is somehow retrieve the custom header data and make sure it exists in my db; otherwise, get them to authenticate with their OAuth provider.
You are confusing two different technologies here, OpenID and OAuth (don't feel bad, many people get tripped up on this). OpenID allows you to defer identity tracking and authentication to a provider, and then accept those identities in your application as the acceptor or relying party. OAuth, on the other hand, allows an application (the consumer) to access user data that belongs to another application or system without compromising that other application's core security. You would stand up OAuth if you wanted third-party developers to access your API on behalf of your users (which is not something you have stated you want to do).
For your stated requirements you can definitely take a look at integrating OpenID into your application. There are many libraries available for integration, but since you asked for an agnostic answer I will not list any of them.
Or is there another way?
Of course. You can store user IDs in your system and use basic or digest authentication to secure your API. Basic authentication requires only one (easily computed) additional header on your requests:
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
If you use either basic or digest authentication, make sure that your API endpoints are protected with SSL, as otherwise user credentials can easily be sniffed over the air. You could also forgo user identification and instead effectively authenticate the user at checkout via credit card information, but that's a judgement call.
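For completeness, that header value is just the base64 encoding of "username:password"; a quick Python sketch (the credentials are the classic RFC example, not real ones):

# How the Basic auth header above is computed: base64 of "username:password".
# "Aladdin" / "open sesame" is the classic RFC example, not a real credential.
import base64

def basic_auth_header(username: str, password: str) -> str:
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("Aladdin", "open sesame"))
# -> Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==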
As RESTful services use HTTP calls, you could rely on HTTP Basic Authentication for security purposes. It's simple, direct, and already supported by the protocol; and if you want additional security in transport, you can use SSL. Well-established products like IBM WebSphere Process Server use this approach.
The other way is to build your own security framework according to your application's needs. For example, if you want your service to be consumed only by certain devices, you may need to send an encoded token as a header over the wire to verify that the request comes from an authorized source. Amazon has an interesting way to do this; you can check it here.
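To illustrate that "encoded token" idea, here is a simplified request-signing sketch in the spirit of Amazon's approach; the header names and the canonical string are our own assumptions and not AWS's actual signature algorithm.

# Simplified request-signing sketch in the spirit of Amazon's approach.
# Header names and the canonical string format are illustrative assumptions;
# this is NOT the actual AWS Signature algorithm.
import hashlib
import hmac

def sign_request(method: str, path: str, body: str, access_key: str, secret_key: str) -> dict:
    canonical = f"{method}\n{path}\n{hashlib.sha256(body.encode()).hexdigest()}"
    signature = hmac.new(secret_key.encode(), canonical.encode(), hashlib.sha256).hexdigest()
    return {"X-Auth-Key": access_key, "X-Auth-Signature": signature}

# The server recomputes the signature with the secret it holds for access_key
# and rejects the request if the values do not match.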

Use of messaging like RabbitMQ in web application?

I would like to learn about the scenarios/use cases where messaging like RabbitMQ can help consumer web applications.
Are there any specific resources to learn from?
What web applications currently are making use of such messaging schemes and how?
In general, a message bus (such as RabbitMQ, but not limited to) allows for a reliable queue of job processing.
What this means to you in terms of a web application is the ability to scale your app as demand grows and to keep your UI quick and responsive.
Instead of forcing the user to wait while a job is processed, they can request a job to be processed (for example, clicking a button on a web page to begin transcoding a video file on your server). This sends a message to your bus, lets the backend service pick it up when its turn in the queue comes up, and perhaps notifies the user that work has begun or will begin. You can then return control to the UI so the user can continue working with the application.
In this situation, your web interface does zero heavy lifting, instead just giving the user visibility into the stages of the process as you see fit (for example, the job could incrementally update database records with the state of the process, which you can query and display to your user).
I would assume that any web application that experiences any kind of considerable traffic would have this type of infrastructure. While there are downsides (network glitches could potentially disrupt message delivery, more complex infrastructure, etc.) the advantages of scaling your backend become increasingly evident. If you're using cloud services this type of infrastructure makes it trivial to add additional message handlers to process your jobs by subscribing to the job queue and just picking off messages to process.
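As a concrete illustration of the "queue the job, return immediately" flow described above, here is a minimal Python sketch using the pika client; the queue name and message format are assumptions.

# Minimal sketch of the "queue a job, return immediately" pattern with RabbitMQ
# (Python + pika). Queue name and message format are assumptions.
import json
import pika

def enqueue_transcode_job(video_id: str) -> None:
    """Called by the web request handler; returns as soon as the job is queued."""
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="transcode_jobs", durable=True)
    channel.basic_publish(
        exchange="",
        routing_key="transcode_jobs",
        body=json.dumps({"video_id": video_id}),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()

def run_worker() -> None:
    """A backend worker that picks jobs off the queue, one at a time."""
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="transcode_jobs", durable=True)

    def handle(ch, method, properties, body):
        job = json.loads(body)
        # ... do the heavy lifting, update job status in the database ...
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="transcode_jobs", on_message_callback=handle)
    channel.start_consuming()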
I just did a Google search and came up with the following:
Reddit.com
Digg.com
Poppen.De
That should get you started, at least.

What's the best way to notify a non-web application about a change on a web page?

Let's say I have two applications which have to work together to a certain extent.
A web application (PHP, Ruby on Rails, ...)
A desktop application (Java, C++, ...)
The desktop application has to be notified from the web application and the delay between sending and receiving the notification must be short. (< 10 seconds)
What are possible ways to do this? I can think of polling at a 10-second interval, but that would produce a lot of traffic if many desktop applications have to be notified. On a LAN I'd use a UDP broadcast, but unfortunately that's not possible here...
I appreciate any ideas you could give me.
I think the "best practice" here will depend on the number of desktop clients you expect to serve. If there's just one desktop to be notified, then polling may well be a fine approach -- yes, polling is much more overhead than an event-based notification, but it'll certainly be the easiest solution to implement.
If the overhead of polling is truly unacceptable, then I see two basic alternatives:
Keep a persistent connection open between the desktop and web-server (could be a "comet"-style web request, or a raw socket connection)
Expose a service from within the desktop app, and register the address of the service with the web-server. This way, the web-server can call out to the desktop as needed.
Be warned, though -- both alternatives are chock full of gotchas. A few highlights:
Keeping a connection open can be tricky, since you want your web-servers to be hot-swappable
Calling out to an external service (e.g., your desktop) from a web-server is dangerous, because this request could hang. You'd want to move this notification onto a separate thread to avoid tying up the webserver.
To mitigate some of the concerns, you might decouple the unreliable desktop from the web-server by introducing an intermediary notification server -- the web-server could post an update somewhere, and the desktop could poll/connect/register there to be notified. To avoid reinventing the wheel here, this could involve some sort of MessageQueue system... This, of course, adds the complexity of needing to maintain the new intermediary.
Again, all of these approaches are probably quite complex, so I'd say polling is probably the best bet.
I can see two ways:
Your desktop application polls the web app
Your web app notifies the desktop application
Your web app could publish an RSS feed, but your desktop app will still have to poll the feed every 10 s.
The traffic need not be huge: if you use an HTTP HEAD request, you'll get a small packet with the date of the last modification (conveniently named Last-Modified).
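A minimal sketch of that HEAD-based polling loop in Python (the URL and the 10-second interval are placeholders):

# Minimal sketch of polling with HTTP HEAD and the Last-Modified header
# (Python + requests). URL and 10-second interval are placeholders.
import time
import requests

URL = "https://example.com/status"  # assumed endpoint published by the web app

def poll_for_changes() -> None:
    last_seen = None
    while True:
        resp = requests.head(URL, timeout=5)
        modified = resp.headers.get("Last-Modified")
        if modified != last_seen:
            last_seen = modified
            notify_desktop_app(modified)
        time.sleep(10)

def notify_desktop_app(modified: str) -> None:
    """Placeholder: fetch the full resource and react to the change."""
    print(f"Change detected, Last-Modified: {modified}")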
I don't know exactly how to achieve your task, but I can suggest creating a Windows service on the desktop application's PC.
This service checks the web application at a regular interval for new changes; if a change has occurred, it can launch the desktop application with a notification that something changed in the web application, and the web application can respond with an acknowledgment.
I hope this is useful. I haven't tried it exactly, but I am suggesting something along these lines.
A layer of syndication would help to scale out the system.
The desktop app can register itself with a "publisher" service (running on one of several/many machines). This publisher service receives the "notice" from your web app that something has changed, and immediately starts notifying all of its registered subscribers.
The number of publishers you need will increase with the number of users.
Edit: Forgot to mention that the desktop app will need to listen on a socket.
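Picking up that last point, here is a minimal Python sketch of the desktop side listening on a TCP socket for pushed notifications; the port and the line-based protocol are assumptions.

# Minimal sketch of the desktop app listening on a TCP socket so a publisher
# can push notifications to it. Port and the line-based protocol are assumptions.
import socket

def listen_for_notifications(port: int = 9000) -> None:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", port))
    server.listen()
    while True:
        conn, addr = server.accept()
        with conn:
            data = conn.recv(4096)
            if data:
                handle_notification(data.decode().strip())

def handle_notification(message: str) -> None:
    """Placeholder: react to the change pushed by the publisher service."""
    print(f"Notified: {message}")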