I'm currently working on a project where I want to control a microprocessor (Arduino) from a web page.
The microprocessor will not be connected physically to the computer.
What is the best protocol for communication? (My current choice is TCP).
What is the best way to serialize the objects to be sent? (My current choice is JSON).
The server side is written in Node.js.
Since I'm new to this kind of development I would very much appreciate any input on the topic!
Without details about which Arduino you plan on using and which shields you might employ to provide the interface, it is hard to make a definitive statement.
I would argue that, with the proper shield providing the Ethernet interface, TCP would be an acceptable choice.
I am inclined to say you are going to be hard pressed to fit a JSON interpreter into the memory footprint of an Arduino. Most of these devices have 32 KB of program memory and 1–2 KB of data memory. For embedded devices like this, a concise binary protocol on the wire is far more common and easier to fit within the device's limitations.
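To make the size difference concrete, here is a sketch on the Node.js side comparing a JSON payload with a hand-packed binary frame for a hypothetical sensor reading (the field names and layout are made up for illustration):

```javascript
// Hypothetical sensor reading: id, temperature (in degrees), timestamp.
const reading = { id: 7, temp: 21.5, ts: 1700000000 };

// JSON on the wire: human-readable, but comparatively large to generate and parse.
const jsonPayload = Buffer.from(JSON.stringify(reading));

// A fixed-layout binary frame: 1-byte id, 2-byte signed temp in tenths, 4-byte timestamp.
const binaryPayload = Buffer.alloc(7);
binaryPayload.writeUInt8(reading.id, 0);
binaryPayload.writeInt16BE(Math.round(reading.temp * 10), 1);
binaryPayload.writeUInt32BE(reading.ts, 3);

console.log(jsonPayload.length, binaryPayload.length); // the binary frame is a fraction of the JSON size
```

On the Arduino side, decoding the 7-byte frame is just a few fixed-offset reads, whereas JSON needs a full parser in program memory.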
There is a library called aJson which allows you to do JSON parsing in Arduino. I have used that library to parse response from a YQL call all inside the 32K program memory of Arduino :)
I would suggest using an Ethernet or WiFi shield to connect the Arduino to the internet; you can then use Arduino's Ethernet library to make HTTP calls. The response format could be JSON.
In RX you have observables and observers. So far I've dealt with those two conceptual elements being on the same machine. But what if they are separated by a LAN or even the internet? What are some options to transparently subscribe to an observable on a different machine? What technology can support this stream-subscribing behavior when there is a network in the way?
Specifically, my target platform is a WinRT app client and the server would be something running on a Windows machine.
Currently this is a very broad question and it is difficult to provide a helpful answer. I don't believe there is anything currently in the public space that allows for what you are asking, i.e., transparently exposing remote queries over an observable sequence. However, Bart and the Rx/Bing/Cortana team at Microsoft apparently have some internal software that does this.
To do this you really need to be able to serialize your queries and send them to the server. In the IEnumerable&lt;T&gt; world this translates to IQueryable&lt;T&gt;, and the canonical example is LINQ to SQL/Entity Framework. In Rx, IObservable&lt;T&gt; maps to IQbservable&lt;T&gt; (yes, really), but implementations of this type are rare. I believe this is mainly because most people struggle with the learning curve that Rx presents and don't want the extra layer of abstraction.
If you don't need the transparent subscription, but just want to stream data from one machine to another and consume it with Rx, then you can do that with SignalR, WCF, Crossbar.io/WAMP, and I am sure many others. You may just need to put a tiny wrapper around the library, probably nothing more than an Observable.Create.
EDIT:
Here is an example of using Rx with Tibco's Rendezvous (TibRv) communications stack.
https://github.com/LeeCampbell/RxCookbook/blob/master/IO/Comms/TibRv.md
and the code
https://github.com/LeeCampbell/RxCookbook/blob/master/IO/Comms/TibRvSample.linq
Could you please describe the benefits of having client-side XML/XSLT page? What are the benefits over server-side XML/XSLT, etc?
The main point, it seems to me, is to offload work from the server.
Lighter load on server
(possibly) less network traffic
A single XML HTTP resource can serve both human inspection (via a browser-hosted transform) and machine consumption.
A good example is the World of Warcraft character database. A player can view their character information in a convenient HTML format, and a game addon can consume the raw data. Both observers are reading the same XML file.
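The mechanism is a processing instruction in the XML itself; the element and stylesheet names here are made up for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="character.xsl"?>
<character>
  <name>Thrall</name>
  <level>80</level>
</character>
```

A browser applies character.xsl and renders HTML for the human reader, while a program fetching the same URL simply ignores the processing instruction and reads the raw elements.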
Currently I have a Shiny web app that can do some calculations on a 3 GB data.frame loaded in memory.
Now, instead of implementing this feature in the Shiny web app, I need to make it a RESTful service that pipes its calculations to another app in JSON format, so that people can use it by sending an HTTP request to a URL like http://my-app.com/function.
I'm trying OpenCPU right now, but I don't quite understand how I can load and keep the large data in memory, so that I can use the OpenCPU API to call package functions that just do the calculations rather than loading the large data from disk on every HTTP request.
One workaround might be to use HBase as an in-memory database and rhbase to load the data. But before I invest time in learning it, I want to know whether it is a reasonable choice for a 3 GB data.frame, since serialization and other overhead might offset its speed benefit.
What would be a better way to implement this functionality? Solutions using packages other than OpenCPU are also welcome, and free ones are preferred.
You may want to look at Plumber. You decorate your R functions with comment annotations (which can include loading your data), and it makes them available via a REST API.
You should put your data in a package and add that package to the preload list in the server config.
I'm designing the architecture for a new web application.
I think that communications between the backend (server) and the frontend should be JSON only.
Here are my arguments:
It's the client's responsibility to manipulate and present data in its own way. The server should just send the client the raw information it needs.
JSON is lightweight, and my application might be used by remote clients over poor mobile connections.
It allows multiple front-end developments (desktop devices, mobile devices) and has the potential to create an API for other developers.
I can't see any counter-argument to this approach, considering that we have internally the frontend skills to do almost everything we need from raw JSON information.
Could you provide counter-arguments to this JSON-only choice so that I can make a more informed choice?
There must be some, as a lot of backend frameworks (think of the PHP ones) still advertise HTML templating for sending HTML-formatted responses to clients.
Thanks
UPDATE: Even though I researched the topic before, I found a similar and very interesting post: Separate REST JSON API server and client?
There are many front-end frameworks already on the market which support JSON very efficiently; some of them are Backbone, Underscore, Angular, etc. On the backend, we generally use REST-based communication for this type of application. So I think this architecture already exists in the market and works very well, especially for mobile-based applications.
Although this question is dead, I think I should try to weigh in.
For all the reasons you stated and more, communication between the back-end and front-end via JSON only is perhaps the best approach available, as it gives your web application a more compartmentalized structure while drastically reducing the data sent over your users' connections.
However, some drawbacks that are a direct consequence of this are:
The need for much more JavaScript front-end development (as the HTML structure is not sent by the server and needs to be created on the client)
It shifts the pressure from the server to the client, so there is more JavaScript for the client to run (this can sometimes be a problem, especially for mobile users)
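To illustrate that shift: with server-side HTML templating the markup arrives ready-made, whereas with a JSON-only API the client must build it. A minimal sketch, with a made-up payload and render function:

```javascript
// The server now sends only raw data...
const payload = { user: 'ada', unread: 3 };

// ...so the client must carry the template logic that a server-side
// framework would otherwise have handled. (Escaping is omitted for brevity;
// a real client would sanitize values before inserting them into the DOM.)
function renderBadge(data) {
  return `<span class="badge">${data.user}: ${data.unread} unread</span>`;
}

console.log(renderBadge(payload)); // <span class="badge">ada: 3 unread</span>
```

Multiply this by every view in the application and the extra front-end work in point (1) becomes clear.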
What is a good framework to build a multiplayer game in Actionscript?
I want to create a multiplayer 2D shooter like Asteroids on the Blackberry Playbook; my main concern is latency - a shooter wouldn't be fun if the bullets are super-jerky and unexpectedly hit people.
I'm guessing that a UDP-based framework would be best. Can anyone point me in the right direction?
There are many things you can use off the shelf, but the basic setup is very simple, and you have a few options.
The most common is server push; products like Flash Media Server and LiveCycle Data Services from Adobe, or other tools like SmartFoxServer, can do this. With this setup, the server keeps a connection to everyone who connects and passes, or "pushes", application state to the connected people every time the data changes in the application.
Another option is called long polling, and this can be done with any web server, really. How this works is that the server stores the state of the application; when the client application starts, it calls the server, and whenever the server responds, the client immediately calls the server again.
There are a few other ways to do it, but these are the most common. Note that this choice is separate from the protocol: HTTP, UDP, AMF, XMPP, or whatever else. The protocol is the format in which the data is sent. These out-of-the-box servers normally support a few of them; the fastest formats are binary ones like AMF, but fastest is not always best, because each gives you different features for keeping track of things.
If you are talking about a game that takes over the world and has millions of users, then you need to think about scaling: what happens when you need two or a hundred servers, and how do they talk to each other? But for now, keep in mind that the more the server does, the slower it will get; if you are sending small amounts of data, it will be able to handle more users. Stick with making one efficient server and worry about scaling later, if you get there.
You also need to think about which server-side programming language you want to mess with, if any. Some services don't let you do anything; these normally cost money and don't do as much. Adobe likes Java, but there are servers that speak all of these protocols in nearly every language. My favorite lately has been Node.js, a super-fast way to run JavaScript on the server. Node.js has a built-in HTTP server, but it is just as easy to create a simple server that sends basic text through a Socket or XMLSocket. A server like this will easily handle many thousands of users. There are many games that use Socket.IO, and if you want to see a simple example of what I'm talking about, you can check this out.
Assuming you want to use Flash/Flex and not Java (Blackberry/Android) or the native SDKs for the Playbook -
There is a book that may serve as inspiration: http://www.packtpub.com/flash-10-multiplayer-game-essentials/book; it uses the Pulse SDK on the server side. But you could use your own sockets program on the server side. I use Perl as a TCP-sockets server (it sends gzipped XML around) in a small card game, but this wouldn't work for your shooter.
Flash does not support UDP out of the box.
But there is the peer-to-peer networking protocol RTMFP in the upcoming Flash Media Server Enterprise 4 (its price is out of reach for mere mortals).
So your best bet is to buy an Amazon service for RTMFP; then you can pay per use and stay scalable...
You could do constant POST/GET requests to the server to get data for the game, but for a multiplayer shooter I'd suggest SmartFoxServer: http://www.smartfoxserver.com/
Out of the box, Adobe AIR supports UDP through datagram packets.
http://help.adobe.com/en_US/air/reference/html/flash/net/DatagramSocket.html
I couldn't find a particular networking API for Flash, but perhaps you can build one. Lidgren is open source and you can use it for reference.
You can also look into RTMFP, though its focus is on transmitting audio/video and some messages (through TCP, I think).