How to force a multi-hopping topology with XBee ZB?

I use some XBee (S2) modules with the ZB stack for mesh-networking evaluation, so a multi-hopping environment has to be created. The problem is that the firmware handles association by itself and there is no way deeper into the stack than the API provides. To force the path of the data without disturbing the routing mechanism I was trying to measure, I had to put the modules outside each other's reach. Getting only the next hop into association isn't that easy. I used the lowest output power level, but the distance for the test setup is too large, and the RF characteristics of the environment change unpredictably.
Hence my question: does anyone have experience with this issue?
Regards, Toby

I don't think it's possible through software and coordinator/routers. You could change the Node Join Time (ATNJ) to force a new router to join through a particular router (disable Node Join on all nodes except one), but that would only affect joining. Once joined to the network, the router will discover that other nodes are within range.
You could possibly do it with sleepy end devices. You can use the ATNJ trick to force an end device to join through a single router, and it will always send its messages to that router. But you won't get that many hops -- end device sends to its parent router, which sends to the target's parent router, which sends to the target end device.
You'll likely need to physically limit the range of the radios to force hopping, as demonstrated in the video you linked of Digi's K-Node test equipment with a network of over 1000 radios. They're putting the radios in RF-shielded boxes and using wired antenna connections with software-controlled attenuators to connect the modules to each other.
If you have XBee modules with the U.FL or RP-SMA connector and don't connect an antenna, that should significantly reduce the range of the module. Otherwise, with a wire whip or integrated PCB antenna, you need to put each radio in some sort of box that attenuates the signal. Perhaps someone else can offer advice on materials that will reduce the signal's range without completely blocking it.

ZigBee nodes try to automatically form an ad-hoc network. That is why they join the network with the strongest connection (best network coverage) available at that moment. These modules are designed in such a way that you do not have to care much about establishing reliable communication. They will solve networking problems most of the time.
What you want to do is somehow force a different situation. You want to create a specific topology in order to get some multi-hopping. That will not be the normal behavior of the nodes, but you can still get what you want with some of the AT commands.
The mentioned NJ command should work for you. It locks joining after a certain time (in seconds). Consider a simple ZigBee network with three nodes: one Coordinator, one Router, and one End Device. Switch on the Coordinator with NJ set to, say, two minutes. Then quickly switch on the Router, so it can associate with the Coordinator within those two minutes. After the two minutes, the Coordinator will be locked and will not accept more joins. At that moment you can start the End Device, which will necessarily have to associate through the Router. This way, you will see that messages between the End Device and the Coordinator go through the Router, as you wanted.
You may get a bigger network by applying this idea several times, without needing to play with the modules' antennas. You can control the AT parameters remotely (e.g. from a computer connected to the Coordinator), so you can use some code to help you initialize the network.
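For illustration, here is a minimal sketch of that initialization using the community python-xbee library, assuming the local module is a Coordinator running API-mode firmware; the serial port, baud rate, and the Router's 64-bit address are placeholders:

```python
import serial
from xbee import ZigBee  # pip install xbee (python-xbee)

ser = serial.Serial('/dev/ttyUSB0', 9600)  # placeholder port/baud
coord = ZigBee(ser)                        # local Coordinator in API mode

# Open the Coordinator for joining for 120 seconds (NJ is in seconds).
coord.at(command=b'NJ', parameter=b'\x78')  # 0x78 = 120

# ...switch the Router on now, so it associates within that window...
# After the window closes, make sure the Router itself keeps accepting
# joins, so the End Device can only enter the network through it.
ROUTER_ADDR = b'\x00\x13\xA2\x00\x40\x0A\x01\x02'  # placeholder 64-bit address
coord.remote_at(dest_addr_long=ROUTER_ADDR,
                command=b'NJ', parameter=b'\xFF')  # 0xFF = always allow joins
```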


Can I connect to my car's CAN bus with an ELM327 interface?

I've been fiddling around with a Bluetooth ELM327 device I bought a few months ago and am able to get standard OBD info like VIN, RPM, speed, etc.
But as I read recently, OBD-II and CAN are not the same. I've tried to sniff my CAN bus with the AT MA command, but I get no response, so I guess the CAN network is decoupled from the OBD-II interface. Is there any chance to get access to the CAN network? Or might I need a different device to do so?
Maybe this info helps: I have a 2011 Skoda.
On many modern vehicles there are actually multiple CAN buses controlling the numerous functions needed by the car. Some of these CAN buses are high-speed for important systems like engine control, and some are low-speed for less critical functions such as climate control (or in your case, diagnostics through the OBD2 port). These multiple CAN buses are usually interconnected through a gateway device in the car that arbitrates which CAN messages can be sent between buses. This is a safety net that prevents lower priority CAN buses from interfering with the more critical CAN buses.
As an example, the CAN bus used for engine control may be able to communicate with the radio's CAN bus so that the radio volume is increased when the engine revs to higher RPMs, for comfort reasons. This would likely be a one-way connection through the gateway, though, as it would be in the interest of safety not to allow the radio's CAN bus to send signals back to the engine (this could lead to potential problems when using aftermarket radios, for example).
As a result of everything mentioned above, a connection to the OBD2 port's CAN lines most likely will not have full access to the complete CAN network on your car. One way to confirm this would be to look in the Factory Service Manual for your particular vehicle to see how the CAN bus(es) are set up (there are actually quite a few cars that operate on only a single CAN bus in order to cut costs).
Keep in mind that as an alternative to using the OBD2 port, you can always tap directly into the CAN bus that you are interested in. For example, if you remove the radio from your car to expose the radio harness, you can usually tap directly into the CAN lines for the radio bus with the correct equipment.
Hope this helps!
If your vehicle uses the CAN protocol, then you should be able to issue ATMA from the ELM327 device.
Here are the conditions I met to get an ATMA dump:
my vehicle supports protocol 6 -- ISO 15765-4, CAN 11-bit ID (500 kbaud)
ATSP6 // I am using protocol 6, not auto mode
ATSH7E0 // now I am talking to the engine ECU
ATMA // returned a page full of data before getting a buffer full message
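For reference, here is a minimal sketch of driving that same sequence from Python with pyserial; the device path and baud rate are assumptions, and a real session would want more robust error handling:

```python
import serial  # pip install pyserial

def elm_cmd(ser, cmd):
    """Send one command and read the reply up to the ELM327's '>' prompt."""
    ser.write((cmd + '\r').encode('ascii'))
    resp = b''
    while not resp.endswith(b'>'):
        chunk = ser.read(1)
        if not chunk:          # read timed out: give up on this command
            break
        resp += chunk
    return resp.decode('ascii', errors='replace').strip('> \r\n')

ser = serial.Serial('/dev/rfcomm0', 38400, timeout=5)  # assumed Bluetooth port
print(elm_cmd(ser, 'ATZ'))      # reset the adapter
print(elm_cmd(ser, 'ATSP6'))    # protocol 6: ISO 15765-4, CAN 11-bit / 500 kbaud
print(elm_cmd(ser, 'ATSH7E0'))  # set header: address the engine ECU
print(elm_cmd(ser, 'ATMA'))     # monitor all; dumps frames until BUFFER FULL
```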

Multiple instances of Google API Client?

I have activity A that instantiates GoogleApiClient, connects, and starts processing in an AsyncTask that may take seconds or minutes.
Meanwhile, the user triggers activity B, which instantiates its own GoogleApiClient with a connection.
The question is: Can an app have multiple instances of GoogleApiClient connected and working simultaneously, or should I keep an app singleton with my own semaphores?
It's perfectly fine to keep as many GoogleApiClients as you want around, and there are often good reasons for doing so (separation of fragments, different accounts, etc.). It's also not particularly inefficient. The cost of two clients is less than 1% higher than the cost of one client.
It can be confusing if all of them are trying to resolve errors, so it's probably a good idea to make the Fragment clients all ignore connection failures, and have an Activity or Application level client responsible for resolving issues.
It's possible to have multiple connected GoogleApiClients, just possibly inefficient. You do need to be careful when using GoogleApiClient with AsyncTasks that it isn't disconnected if the activity goes away.
Consider managing the GoogleApiClient within a retained fragment. See http://www.androiddesignpatterns.com/2013/04/retaining-objects-across-config-changes.html
This issue is resolved with very common OOP composition knowledge and the Factory design pattern. Quoting a figure like 1%, as @Hounshell does, is not an engineering approach.

Scaling a Server Horizontally

I have a question about scalability. Let's say I have a multiplayer game, such as Uno, where the server handles everything. (Assume this is a text-only game for simplicity). For example, to get information printed out to the user in the client, the server might send PRINT string, or CHOOSE data (to pick a card to play), etc. In this regard, the client is "dumb" and the server handles the game logic.
A quick example of how this might work on a protocol level:
Server sends: PRINT Choose a card
Server sends: CHOOSE Red 1,Blue 1 (user shown a button or something, and picks Red 1)
Client sends: Red 1
Let's say I have this architecture:
Player Class: stores the cards the user has, maybe some methods (such as tellData(String data) which would send PRINT data, sendPM() which could private message a user)
Server Class: handles authentication, allows users to create new games, shows users a list of games they can join
Game Class: handles users playing a card, handles switching to a new player for his or her turn, and calls methods on the Player class like tellData(), pickCard(), etc.
How would I scale this to run the server on multiple computers? Right now, all of the users connect to one server, and the Player, Server, and Game classes interact with each other. If someone could provide some suggestions and/or point me to some good resources/books on this, it would be greatly appreciated (no, this is not a homework assignment or something for a business; this is just a personal project and curiosity of mine). In terms of scalability, I'd like to just be able to add another server and handle the additional load of players -- the most concurrent connections would be 1000.
Also, would this become significantly more difficult of a scalability challenge if we added in more games?
Furthermore, what is the best way to store game data? In a SQL database, or by serializing objects, or what? By this I mean: let's say 3 users are in a game of Uno and want to return to it later. I don't want to keep their cards and information about the game in the Player/Server/Game classes (RAM) forever - I want to dump this somewhere, so that when the users log in, the info can be loaded back into RAM and the appropriate Player/Game objects recreated.
Finally, how can I make changes to the server without having to kill it, and restart it? Assume the server was written in Java or Python.
If anyone can provide suggestions or some resources it would be greatly appreciated - this includes changing the architecture I originally stated.
Thanks for any and all help!!
EDIT: Are there any good books or talks you all would recommend on the subject?
1. Scalability:
Involves an application architecture where session state is replicated/shared and load-balanced across multiple server instances. You could implement a message queue (RabbitMQ) or ESB (enterprise service bus) architecture for your app.
2. Ease of scaling:
Depends on deployment and the servers you choose.
3. Persistence:
A game, for one player, is his particular game state at any point in time. If you can represent the state information semantically, you can keep the data in markup save files or store the state information directly in a DB.
Otherwise, you may need to serialize objects and store them on the filesystem, or as a BLOB in a DB, in case the state space is humongous. A sketch of the first approach follows below.
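As a minimal sketch of flattening game state so it can leave RAM (the Game/Player shapes here are assumptions based on the question's architecture), the live objects can be dumped to JSON in a file or a DB text column and rebuilt when the players return:

```python
import json

class Player:
    def __init__(self, name, cards):
        self.name, self.cards = name, cards

class Game:
    def __init__(self, game_id, players, current_turn=0):
        self.game_id, self.players, self.current_turn = game_id, players, current_turn

def save_game(game, path):
    """Flatten the game state to JSON so the objects can be evicted from RAM."""
    state = {
        'game_id': game.game_id,
        'current_turn': game.current_turn,
        'players': [{'name': p.name, 'cards': p.cards} for p in game.players],
    }
    with open(path, 'w') as f:
        json.dump(state, f)

def load_game(path):
    """Rebuild the Game/Player objects from the stored state."""
    with open(path) as f:
        state = json.load(f)
    players = [Player(p['name'], p['cards']) for p in state['players']]
    return Game(state['game_id'], players, state['current_turn'])

game = Game(42, [Player('alice', ['Red 1']), Player('bob', ['Blue 1'])])
save_game(game, 'game42.json')
resumed = load_game('game42.json')
```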
4. Hot deployment:
The JVM almost always needs a restart to reload class files, so on the Java server side you will always need to restart. In Ruby/Rails, certain parts of the application can be hot deployed. If you need 100% hot deployability, perhaps Erlang is the answer.
To improve concurrency you can also use evented server/app architectures: Thin/EventMachine for Ruby, or Apache MINA and JBoss Netty for Java.
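In Python, the standard library's asyncio gives you the same evented model; a minimal sketch of a line-oriented game server (the protocol verbs are taken from the question, everything else is illustrative):

```python
import asyncio

async def handle_client(reader, writer):
    # One coroutine per connection; the event loop multiplexes many
    # connections on a single thread instead of one thread each.
    writer.write(b'PRINT Choose a card\n')
    writer.write(b'CHOOSE Red 1,Blue 1\n')
    await writer.drain()
    choice = (await reader.readline()).decode().strip()  # e.g. "Red 1"
    writer.write(f'PRINT You played {choice}\n'.encode())
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, '0.0.0.0', 5000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```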

RabbitMQ: routing on multiple criteria but hitting only one consumer

How can we distribute work via RabbitMQ such that worker pools can subscribe to work messages based on differing (but frequently overlapping) criteria from each other but such that when a message is routed that matches to multiple worker pools, only one worker will pick up the job?
Simplified example:
We have host1 and host2.
Host1 handles jobs of classA and classB; host2 handles jobs of classB and classC.
If we route a job of classA, only host1 will pick it up; if we route a job of classB, either host1 or host2 will pick it up (based on their current load / first available), but never both.
It would seem that we need to use a topic exchange, as our routing criteria are complex, and using wildcards gives us the type of flexible matching we want.
However:
If we use the same name for the worker pool queue (say “worker-jobs”) we get the desired splitting of work across arbitrary matching workers, but every worker subscribing to the named queue seems to infect the other workers with its routing criteria as it binds. That is, routing-key bindings appear to apply at the level of the central named queue, not per connection to the queue.
If we use different queue names for each worker pool connection (say “poolA-jobs” and “poolB-jobs”) to the same exchange then we get the desired behavior with the different routing criteria maintained between pools but a job coming in that can match to both poolA and poolB gets routed to both of them (albeit only to one worker in each).
Notes:
I’ve spared you the details of why but suffice to say we have an existing multi-petabyte distributed search application that needs response times < 50ms. We already achieve this with our own custom routing hub but we’d like to replace this with RabbitMQ as its performance is attractive (as is retiring homemade code that overlaps with general purpose community projects) if we can get the sophisticated routing we need.
We use Python
Disco isn’t viable for many reasons, too numerous to go into.
It doesn’t have to be RabbitMQ but the performance needs to be as good. ØMQ looks very interesting and like it might provide both the flexibility and the performance but we’re already using RabbitMQ and after wading through the first half of the colorfully written ØMQ guide I’m still not sure if it will support the routing we need but it does look like we’ll have to pretty much write a broker to do it.
We actually have the luxury of knowing which hosts are capable of serving which jobs, so we can do something like have host1 subscribe to #.host1.# and host2 to #.host2.#. Then when we route a classB message, we can give it a key of host1.host2 to indicate which backends are acceptable for service. This simplifies the routing rules but still doesn’t overcome the problem described.
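To make the second variant concrete, here is a minimal pika sketch of the setup just described (exchange and queue names are illustrative). It also reproduces the problem: a classB job published with the key host1.host2 matches both bindings, so each pool queue gets its own copy:

```python
import pika  # pip install pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.exchange_declare(exchange='jobs', exchange_type='topic')

# One queue per worker pool, each bound with that host's own criteria.
for host in ('host1', 'host2'):
    ch.queue_declare(queue=f'{host}-jobs')
    ch.queue_bind(exchange='jobs', queue=f'{host}-jobs',
                  routing_key=f'#.{host}.#')

# A classB job either host may serve: the key names both hosts, so the
# topic exchange delivers a copy to BOTH queues -- the problem above.
ch.basic_publish(exchange='jobs', routing_key='host1.host2',
                 body=b'classB job payload')
conn.close()
```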

How do download accelerators work?

We require all requests for downloads to have a valid login (non-HTTP), and we generate transaction tickets for each download. If you were to go to one of the download links and attempt to "replay" the transaction, we use HTTP codes to forward you to get a new transaction ticket. This works fine for the majority of users. There's a small subset, however, that are using download accelerators that simply try to replay the transaction ticket several times.
So, in order to determine whether we want to or even can support download accelerators or not, we are trying to understand how they work.
How does having a second, third, or even fourth concurrent connection to the web server delivering a static file speed up the download process?
What does the accelerator program do?
You'll get a more comprehensive overview of download accelerators on Wikipedia.
Acceleration is multi-faceted
First
A substantial benefit of managed/accelerated downloads is that the tool in question remembers start/stop offsets already transferred and uses Range headers (HTTP partial content) to request parts of the file instead of all of it.
This means if something dies mid-transaction (i.e. a TCP timeout) it just reconnects where it left off, and you don't have to start from scratch.
Thus, if you have an intermittent connection, the aggregate transfer time is greatly lessened.
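A minimal sketch of that resume behavior in Python with the requests library; the URL and filename are placeholders:

```python
import os
import requests

def resume_download(url, path):
    """Continue a partial download from where it left off using a Range header."""
    offset = os.path.getsize(path) if os.path.exists(path) else 0
    headers = {'Range': f'bytes={offset}-'} if offset else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as r:
        if offset and r.status_code != 206:  # server ignored the Range header
            offset = 0                       # fall back to a full re-download
        mode = 'ab' if offset else 'wb'
        with open(path, mode) as f:
            for chunk in r.iter_content(chunk_size=65536):
                f.write(chunk)

resume_download('http://example.com/big.iso', 'big.iso')
```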
Second
Download accelerators like to break a single transfer into several smaller segments of equal size, using the same start-range-stop mechanics, and perform them in parallel, which greatly improves transfer time over slow networks.
There's this annoying thing called the bandwidth-delay product, where the size of the TCP buffers at either end combines with the round-trip (ping) time to determine the actual experienced speed. In practice this means large ping times will limit your speed regardless of how many megabits/sec all the interim connections have.
However, this limitation appears to be "per connection", so multiple TCP connections to a single server can help mitigate the performance hit of the high latency ping time.
Hence, people who live nearby are not so likely to need to do a segmented transfer, but people who live in far-away locations are more likely to benefit from going crazy with their segmentation.
Third
In some cases it is possible to find multiple servers that provide the same resource, sometimes a single DNS address round-robins to several IP addresses, or a server is part of a mirror network of some kind. And download managers/accelerators can detect this and apply the segmented transfer technique across multiple servers, allowing the downloader to get more collective bandwidth delivered to them.
Support
Supporting the first kind of acceleration is what I personally suggest as a "minimum" for support, mostly because it makes a user's life easier and it reduces the amount of aggregate data transfer you have to provide, since users don't have to fetch the same content repeatedly.
To facilitate this, it's recommended you compute how much they have transferred and not expire the ticket until they look "finished" (while binding traffic to the first IP that used the ticket), or until a given "reasonable" time to download it has passed, i.e. give them a window of grace before requiring that they get a new ticket.
Supporting the second and third gives you bonus points, and users generally desire at least the second, mostly because international customers don't like being treated as second-class customers simply because of the greater ping time, and it doesn't objectively consume more bandwidth in any sense that matters. The worst that happens is that they might cause your total throughput to be undesirable for how your service operates.
It's reasonably straightforward to deliver the first kind of benefit without allowing the second, simply by restricting the number of concurrent transfers from a single ticket, as in the sketch below.
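A minimal sketch of that restriction, assuming an in-memory counter keyed by ticket (a real deployment would need this state shared across frontends):

```python
import threading

MAX_STREAMS_PER_TICKET = 1   # 1 = resume allowed, segmentation effectively off
_active = {}                 # ticket -> number of in-flight transfers
_lock = threading.Lock()

def try_begin_transfer(ticket):
    """Admit a transfer unless the ticket already has too many open streams."""
    with _lock:
        if _active.get(ticket, 0) >= MAX_STREAMS_PER_TICKET:
            return False     # caller should refuse, e.g. with HTTP 429/403
        _active[ticket] = _active.get(ticket, 0) + 1
        return True

def end_transfer(ticket):
    """Release the slot when the transfer finishes or aborts."""
    with _lock:
        _active[ticket] -= 1
        if _active[ticket] == 0:
            del _active[ticket]
```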
I believe the idea is that many servers limit or evenly distribute bandwidth across connections. By having multiple connections, you're cheating that system and getting more than your "fair" share of bandwidth.
It's all about Little's Law. Specifically each stream to the web server is seeing a certain amount of TCP latency and so will only carry so much data. Tricks like increasing the TCP window size and implementing selective acks help but are poorly implemented and generally cause more problems than they solve.
Having multiple streams means that the latency seen by each stream is less important as the global throughput increases overall.
Another key advantage of a download accelerator, even when using a single thread, is that it's generally better than the web browser's built-in download tool. For example, if the web browser decides to die, the download tool will continue. And the download tool may support functionality like pausing/resuming that the built-in browser tool doesn't.
My understanding is that one method download accelerators use is by opening many parallel TCP connections - each TCP connection can only go so fast, and is often limited on the server side.
TCP is implemented such that if a timeout occurs, the timeout period is increased. This is very effective at preventing network overloads, at the cost of speed on individual TCP connections.
Download accelerators can get around this by opening dozens of TCP connections and dropping the ones that slow to below a certain threshold, then opening new ones to replace the slow connections.
While effective for a single user, I believe it is bad etiquette in general.
You're seeing the download accelerator trying to re-authenticate using the same transaction ticket - I'd recommend ignoring these requests.
From: http://askville.amazon.com/download-accelerator-protocol-work-advantages-benefits-application-area-scope-plz-suggest-URLs/AnswerViewer.do?requestId=9337813
Quote:
The most common way of accelerating downloads is to open up parallel downloads. Many servers limit the bandwidth of one connection, so opening more in parallel increases the rate. This works by specifying an offset at which a download should start, which is supported for HTTP and FTP alike.
Of course this way of acceleration is quite "unsocial". The limitation of bandwidth is implemented to be able to serve a higher number of clients, so using this technique lowers the maximum number of peers that are able to download. That's the reason why many servers limit the number of parallel connections (recognized by IP); e.g. many FTP servers do this, so you run into problems if you download a file and then try to continue browsing using your browser. Technically those are two parallel connections.
Another technique to increase the download rate is a peer-to-peer network, where different sources, e.g. limited by asymmetric DSL on the upload side, are used for downloading.
Most download 'accelerators' really don't speed up anything at all. What they are good at doing is congesting network traffic, hammering your server, and breaking custom scripts like you've seen. Basically how it works is that instead of making one request and downloading the file from beginning to end, it makes, say, four requests... the first one downloads from 0-25%, the second from 25-50%, and so on, and it makes them all at the same time. The only case where this helps is if their ISP or firewall does some kind of traffic shaping such that an individual download speed is limited to less than their total download speed.
Personally, if it's causing you any trouble, I'd say just put a notice that download accelerators are not supported, and have the users download them normally, or only using a single thread.
They don't, generally.
To answer the substance of your question, the assumption is that the server is rate-limiting downloads on a per-connection basis, so simultaneously downloading multiple chunks will enable the user to make the most of the bandwidth available at their end.
Typically, download accelerators depend on partial content download - status code 206. Just like streaming media players, which ask the server for a small chunk of the full file and then download and play it. Now the catch is: if a server restricts partial content download, then the download accelerator won't work! It's easy to configure a server like Nginx to restrict partial content download.
How to know if a file can be downloaded via ranges/partially?
Ans: check for the Accept-Ranges: header in the response. If it is present, then you are good to go.
How to implement a feature like this in any programming language?
Ans: Well, it's pretty easy. Just spin up some threads/coroutines (choose threads/coroutines over processes in an I/O- or network-bound system) to download the N chunks in parallel, and write each partial download to the right position in the file; then you are technically done. Calculate the download speed by keeping a global variable downloaded_till_now=0 and incrementing it as each thread completes a chunk. Don't forget about a mutex, since we are writing to a shared resource from multiple threads, so wrap the update in a lock's acquire()/release(). Also keep a Unix-time counter, and do math like
speed_in_bytes_per_sec = downloaded_till_now/(current_unix_time-start_unix_time)
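Putting that recipe together, a minimal sketch in Python using requests and threads; the URL, output filename, and chunk count are illustrative, and a real tool would retry failed ranges:

```python
import threading
import time
import requests

URL, N_CHUNKS, OUT = 'http://example.com/big.iso', 4, 'big.iso'

downloaded_till_now = 0
counter_lock = threading.Lock()

def fetch_chunk(start, end):
    """Download bytes [start, end] and write them at the right file offset."""
    global downloaded_till_now
    r = requests.get(URL, headers={'Range': f'bytes={start}-{end}'}, timeout=60)
    with open(OUT, 'r+b') as f:
        f.seek(start)
        f.write(r.content)
    with counter_lock:                    # shared counter: guard the update
        downloaded_till_now += len(r.content)

head = requests.head(URL, timeout=30)
assert head.headers.get('Accept-Ranges') == 'bytes', 'server lacks range support'
size = int(head.headers['Content-Length'])

with open(OUT, 'wb') as f:                # pre-size the file so threads can seek
    f.truncate(size)

start_unix_time = time.time()
bounds = [(i * size // N_CHUNKS, (i + 1) * size // N_CHUNKS - 1)
          for i in range(N_CHUNKS)]
threads = [threading.Thread(target=fetch_chunk, args=b) for b in bounds]
for t in threads: t.start()
for t in threads: t.join()

speed = downloaded_till_now / (time.time() - start_unix_time)
print(f'average speed: {speed:.0f} bytes/sec')
```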