Wirelessly Change XBee Channel In Noisy Environment

I have an XBee radio on a device that we are trying to get to communicate with another XBee radio 5' away (attached to a PC). However, there is a lot of noise on the channel, and the XBee is receiving a lot of gibberish, only on that particular channel. My question: is it possible to program the channel of the XBee not attached to the computer using the one from the computer? Will the noise make this impossible to do wirelessly, and will I need a hardwired connection to the second XBee?

I think you may have misdiagnosed your problem. A noisy channel would cause delays in sending data, but won't produce random data. The coordinator typically checks all available channels and selects the one with the least noise when establishing a network.
It's more likely that another device joined the network and is sending data. Noise will limit the XBee modules' ability to send, but won't corrupt the data sent wirelessly.
Is it possible you have the XBee module in API mode when you're expecting Transparent Serial mode (also called AT mode)? In Transparent Serial, data on the module's serial port is passed directly to a destination device (specified in ATDH and ATDL).
If you're still interested in changing channels, you can control channel selection using ATSC (Scan Channels). It's a bitmask of channels the coordinator considers when establishing a network, and channels a router or end device will use when looking for a network to join. If you needed to avoid a specific channel, you could send a remote ATSC command removing the current channel from the bitmask, then possibly an ATNR (Network Reset) command. Then do the same on the coordinator so it creates a new network on a new channel.
If you've done everything correctly, the remote device will join the newly created network on the new channel. You might need to send an ATWR (Write) command to the remote device at that point, so it stores the new ATSC setting.
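For reference, on 802.15.4/ZigBee modules the 16-bit SC value maps bit 0 to channel 0x0B (11) up through bit 15 to channel 0x1A (26). A minimal sketch of computing a mask that excludes the current channel (the helper name is made up; Python just for illustration):

```python
def sc_mask_excluding(channel, base_mask=0xFFFF):
    """Return an ATSC bitmask with the given channel (0x0B-0x1A) removed.

    Bit 0 of SC corresponds to channel 0x0B; bit 15 to channel 0x1A.
    """
    bit = channel - 0x0B
    if not 0 <= bit <= 15:
        raise ValueError("channel must be in 0x0B..0x1A")
    return base_mask & ~(1 << bit)

# Example: the network is currently on channel 0x0F, so clear bit 4.
print(hex(sc_mask_excluding(0x0F)))  # 0xffef
```

You would send the resulting value as the parameter of the remote ATSC command, then follow up with ATNR and ATWR as described above.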

Related

STM32F4 exit from STOP on USART receive interrupt

STM32F429 Discovery board:
Is it impossible to exit from STOP mode on a USART receive interrupt, because all the clocks are stopped? As far as I have read, any EXTI line (EXTI0 - EXTI15) configured in interrupt mode can wake up the microcontroller.
I'd appreciate any advice on how to start with this.
I tried the following with STM32CubeMX: I configured PA0 as GPIO_EXTI0 and generated the code, but how do I link the USART receive pin to that EXTI input?
While you are correct that the EXTI0 - EXTI15 lines can be configured for wake-up, unfortunately this particular series of microcontroller (STM32F4) cannot keep the USART clock active while stop mode is on. This means the peripheral cannot see any data. You can, however, use an external watchdog, an RTC, or a similar wake-up source with your current microcontroller; there are workarounds for this.
You could use sleep mode instead, in which only the Cortex-M4 core clock is stopped while all the peripherals are left running. However, with all the peripheral clocks enabled you will draw more current.
If you are interested in USART clock functionality in stop mode, check out the STM32L0 or STM32L4. Both have that feature, it works phenomenally well, and I would highly recommend these two series for a low-power application, as that is what they are designed for.
It can be done in software, but not with STM32CubeMX
GPIO inputs and EXTI (if configured) are active even if the pin is configured as alternate function. Configure the UART RX pin as you would for UART receive, then select that pin as EXTI source in the appropriate SYSCFG->EXTICR* register, and configure EXTI registers accordingly. You'll probably want interrupt on falling edge, as the line idle state is high.
Keep in mind that it takes some time for the MCU to resume operation, so some data received on the UART port will inevitably be lost.
PA0 cannot be configured as a UART RX pin; use the EXTI line corresponding to the RX pin of the UART you are using.
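To make the EXTICR step above concrete: on the STM32F4, pin N of any port maps to EXTI line N, and SYSCFG->EXTICR[N / 4] selects the port in the 4-bit field at bit position (N % 4) * 4 (port value 0 for PA, 1 for PB, and so on). A small sketch of that arithmetic (Python purely for illustration; USART1 RX on PA10 is just an assumed example pinout):

```python
def exticr_field(port, pin):
    """Return (EXTICR register index, bit shift, field value) for a GPIO pin.

    port: letter 'A'..'K'; pin: 0..15. On the STM32F4, pin N of any port is
    routed to EXTI line N; SYSCFG->EXTICR[N // 4] selects the port in the
    4-bit field at bit position (N % 4) * 4.
    """
    if not 0 <= pin <= 15:
        raise ValueError("pin must be 0..15")
    return pin // 4, (pin % 4) * 4, ord(port.upper()) - ord('A')

# Example: USART1 RX on PA10 -> EXTI line 10.
reg, shift, value = exticr_field('A', 10)
print(f"SYSCFG->EXTICR[{reg}] |= {value} << {shift}")
```

With that mapping set, you would also set the corresponding bit (1 << pin) in EXTI->IMR and EXTI->FTSR for a falling-edge interrupt, since the UART line idles high.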

Fiware: Data loss prevention

I'm working with version 0.27.0 of the context broker. I'm using the Cygnus generic enabler, and I have set up an MQTT agent that connects external devices to the context broker.
My major concern right now is how to prevent data loss. I have set up the context broker and Cygnus MongoDB databases as replica sets, but that won't ensure that all data is persisted into the databases. I have seen that Cygnus uses Apache Flume. Looking at its configuration, the re-injection retries can be configured:
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = -1
Is it a good idea to set the retries value to -1? I have read about events being re-injected into the channel forever.
What can be done to ensure that all the data will be persisted?
Is there any functionality in the FIWARE ecosystem oriented to that purpose?
Regarding Cygnus, the TTL is indeed the way to control persistence retries after an error. A retry means the data is re-injected into the internal channel connecting the source (which receives Orion notifications) and the sink (which persists the data in the final storage) for future persistence attempts.
Possible values for this TTL are:
TTL = 0: there are no retries, i.e. if notified data cannot be persisted in the final storage on the first attempt (because of a network failure, a storage error, whatever), the data is dropped.
TTL > 0: there are as many retries as the configured TTL. Once the TTL is exhausted, the data is dropped.
TTL = -1: infinite retries, i.e. the data is re-injected into the channel forever until it is persisted or the channel gets full.
As commented, a TTL of -1 may consume the channel capacity if the final storage never recovers, preventing newly received data from being put into the channel. Nevertheless, if the final storage never recovers, such a drawback does not matter, right? :)
Thus, we could say the rules for choosing a TTL are:
If you don't want retries, simply configure 0.
If you want retries but you don't mind losing data after a certain number of retries, then configure a positive value.
If you want retries but you don't want to lose data, then configure -1 and a large channel capacity, since the final storage may be down for an unknown time.
In any case, the TTL feature is changing during this sprint. The behaviour will be the same, but instead of being applied to single events, it will be applied to batches of events (a batch may contain a single event, of course). You'll see this change in the next release of Cygnus (0.13.0), which will be available at the end of February 2016 (at the moment of writing this, next week :)). My recommendation is to wait for that release if you want to use the TTL feature intensively.
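The three TTL rules above can be sketched as a toy retry loop (this illustrates the semantics only; it is not Cygnus code):

```python
def persist_with_ttl(persist, event, ttl):
    """Try to persist an event, honouring Cygnus-style TTL semantics.

    ttl == 0  -> one attempt, no retries
    ttl > 0   -> 1 initial attempt plus ttl retries, then the event is dropped
    ttl == -1 -> re-inject forever until persist() succeeds
    Returns True if persisted, False if dropped.
    """
    attempts = 0
    while True:
        if persist(event):
            return True
        attempts += 1
        if ttl != -1 and attempts > ttl:
            return False  # TTL exhausted: event dropped

# A sink that fails twice and then succeeds, with TTL = 3:
failures = iter([False, False, True])
print(persist_with_ttl(lambda e: next(failures), "event-1", ttl=3))  # True
```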

What is the difference between a message channel and the message queue itself?

They're different things. The queue actually holds messages which will be processed (pushed to the listener) in FIFO manner.
A channel is a medium through which messages are transmitted.
What does that mean exactly? In a book "Enterprise Integration Patterns" it says:
Connect the applications using a Message Channel, where one application writes information to the channel and the other one reads that information from the channel.
Does this mean that this message channel actually abstracts the queue away from the producer and consumer of the message? But it really doesn't, right? When a producer has to place a message into a queue, it actually specifies the queue manager and queue names it wants to connect to.
There's also the concept of different protocols and different data formats in channels, where you might have a separate channel for each protocol you're using, and maybe a separate channel for each data format (XML, JSON, etc.).
This would let different queues pick up from different channels. But why not directly call different queues for different data formats? What exactly is the role of the channel? Is it just a connection?
I'm completely new at MQM. I've just been assigned to this project, which involves producing and consuming messages, and I'm trying to wrap my mind around this.
To expand a bit on Shashi's answer, please keep in mind that the EIP book referenced talks about high level messaging patterns. In that context the authors needed a generic term for the medium by which messages are transferred between two points and chose the word "channel".
For the purposes of the book, a channel connects any two endpoints that move messages, for any message transport vendor. In this case a channel has attributes that are classes of service and support the various patterns. It may be 1-1, 1-many, many-1, many-many, etc.
So for example if it is ZeroMQ, the endpoints are two peer-to-peer nodes and there's no messaging engine between them. For IBM MQ one endpoint is always the queue manager (a type of messaging engine) and the other is an application or another queue manager.
Based on this example, it should be obvious that channel as used in the book and channel as defined by any messaging transport are at different levels of abstraction. As used by MQ, a channel is a specific set of configuration parameters that define a communication path and includes such attributes as CONNAME, MAXMSGL, tuning parameters, SSL parameters, etc. Once an MQ channel is successfully started, you can see a running instance of it by displaying the channel status. In the case of CLUSRCVR, SVRCONN, and (less commonly) RCVR or RQSTR channels, you may see multiple instances of the same channel active simultaneously.
If you are still with me, you may have noticed that the MQ usage of the term channel always describes one or more point-to-point network connections whereas the EIP book's usage of the term channel is roughly translated as "the thing that moves messages between application endpoints." Consider that two applications connected directly to the queue manager using shared memory would be using a channel as defined in EIP but not as the term is defined by IBM MQ.
Based on that example, it should be clear that the EIP version of the term channel includes the queue manager and any MQ connections between the queue manager and application endpoints.
To sum up:
The EIP book's channel is all messaging infrastructure that isn't one of the application endpoints, and in an MQ context it includes the queue manager and any MQ channels.
The IBM MQ channel is a specific configuration defining network connectivity between the queue manager and another queue manager or a client application.
I hope this clarifies the terminology rather than confusing things further. I will update based on any comments if needed.
A message queue stores messages sent by producers so that they can be delivered to consumers.
A channel is the media or communication link for transmitting messages from
producer to queue,
queue to consumer,
or one queue in a queue manager to another queue in another queue manager.
There are two types of channels:
1) A Message channel is a unidirectional communications link between two queue managers.
Message channels are used to transfer messages between the two queue managers.
2) A MQI channel connects an application (producer or consumer) to a queue manager on a server machine.
MQI channels are required to transfer MQ API calls and responses between MQ client applications and queue managers.
So,
in simple terms,
channel is a communication media between a client application and a queue manager (or between queue managers) for sending and/or receiving messages.
MQ uses a proprietary protocol to transmit messages from client applications to queue managers and between queue managers.
The format of the data contained in the message does not matter,
it can be anything including bytes, XML, or JSON.
Any type of data can be sent over the same channel.
Hope this helped.
WebSphere MQ queues
A queue is a container for messages. Business applications that are connected to the queue manager that hosts the queue can retrieve messages from the queue or can put messages on the queue. A queue has a limited capacity in terms of both the maximum number of messages that it can hold and the maximum length of those messages.
Reference
Channels
WebSphere® MQ uses two different types of channels:
A message channel, which is a unidirectional communications link between two queue managers. WebSphere MQ uses message channels to transfer messages between the queue managers. To send messages in both directions, you must define a channel for each direction.
An MQI (Message Queue Interface) channel, which is bidirectional and connects an application (MQI client) to a queue manager on a server machine. WebSphere MQ uses MQI channels to transfer MQI calls and responses between MQI clients and queue managers.
Reference

Different Retry policies for different messages in MSMQ

We have a requirement for an API, allowing asynchronous updates via an MSMQ message queue, which will let the developer consuming the API specify different retry policies per message. For example, a high-priority client system (e.g. for sales) will submit all messages with 5 delivery attempts (retries) and 15 minutes between each attempt, whereas a low-priority client system (e.g. a back-end mail-shot system that lets users update their marketing preferences) will submit messages with 3 retries and an hour between each attempt.
Is there a way in the System.Messaging MSMQ (version 3 or 4) implementation to specify number of retries, retry delay and things like whether messages are sent to a dead letter queue or just deleted? (and if so, how?)
I would be open to using other messaging frameworks if they fulfilled this requirement.
Is there a way in the System.Messaging MSMQ (version 3 or 4) implementation to specify number of retries
Depending on which operating system/MSMQ version you're using, WCF gives you fairly sophisticated control over retry semantics. The following is for Windows 2008 and MSMQ 4 using a transactional queue.
The main setting on the binding is called MaxRetryCycles. One retry cycle is an attempt to successfully read a message from a queue and process it inside the handling method. This "attempt" can actually be made up of multiple attempts, as defined by the msmq binding property ReceiveRetryCount. ReceiveRetryCount is the number of times an application will try to read the message and process it before rolling back the de-queue transaction. This marks the end of one retry cycle.
You can also introduce a delay in between cycles with the RetryCycleDelay property.
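If I have those semantics right, the total number of delivery attempts works out to (ReceiveRetryCount + 1) × (MaxRetryCycles + 1), with RetryCycleDelay elapsing between cycles. A quick sketch of that arithmetic (Python purely for illustration; the default values plugged in below are what I believe WCF ships with, so verify them against your binding):

```python
from datetime import timedelta

def msmq_retry_budget(receive_retry_count, max_retry_cycles, retry_cycle_delay):
    """Total delivery attempts, and the minimum wall-clock delay before a
    message is finally rejected, assuming WCF NetMsmqBinding semantics:
    each cycle is 1 attempt plus receive_retry_count immediate retries,
    and retry_cycle_delay elapses between consecutive cycles."""
    attempts = (receive_retry_count + 1) * (max_retry_cycles + 1)
    delay = retry_cycle_delay * max_retry_cycles
    return attempts, delay

# Assumed WCF defaults: ReceiveRetryCount=5, MaxRetryCycles=2, 30 min delay.
attempts, delay = msmq_retry_budget(5, 2, timedelta(minutes=30))
print(attempts, delay)  # 18 attempts, with at least 1:00:00 of cycle delays
```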
A more complicated consideration is what to do with the messages which fail even after multiple retry cycles.
allow the developer consuming the API to specify different retry policies per message
I am not sure how you could do this with MSMQ - as far as I'm aware it's only possible to set retry semantics on a per-endpoint basis. If you're using transactions then you can't even allow API users to set the priority of the messages being sent (transactional queues guarantee delivery in order).
The only thing you could do is host another instance of your API as high priority and one as low priority. These could be hosted on different environments, which has the added benefit that low-priority messages won't compete for system resources with high-priority messages.
Hope this helps.

How can I model this usage scenario?

I want to create a fairly simple mathematical model that describes usage patterns and performance trade-offs in a system.
The system behaves as follows:
clients periodically issue multi-cast packets to a network of hosts
any host that receives the packet, responds with a unicast answer directly
the initiating host caches the responses for some given time period, then discards them
if the cache is full the next time a request is required, data is pulled from the cache not the network
packets are of a fixed size and always contain the same information
hosts are symmetric - any host can issue a request and respond to requests
I want to produce some simple mathematical models (and graphs) that describe the trade-offs available given some changes to the above system:
What happens when you vary the amount of time a host caches responses? How much data does this save? How many calls to the network do you avoid? (clearly this depends on activity)
Suppose responses are also multi-cast, and any host that overhears another client's request can cache all the responses it hears - thereby saving itself potentially making a network request - how would this affect the overall state of the system?
Now, this one gets a bit more complicated - each request-response cycle alters the state of one other host in the network, so the more activity there is, the more quickly caches become invalid. How do I model the trade-off between the number of hosts, the rate of activity, the "dirtiness" of the caches (assuming hosts listen in to others' responses), and how this changes with the cache validity period? Not sure where to begin.
I don't really know what sort of mathematical model I need, or how I construct it. Clearly it's easier to just vary two parameters, but particularly with the last one, I've got maybe four variables changing that I want to explore.
Help and advice appreciated.
Investigate tokenised Petri nets. These seem to be an appropriate tool as they:
provide a graphical representation of the models
provide substantial mathematical analysis
have a large body of prior work and underlying analysis
are (relatively) simple mathematical models
seem to be directly tied to your problem in that they deal with constraint dependent networks that pass tokens only under specified conditions
I found a number of references (quality not assessed) via a search on "token Petri net".
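As a complement, the first trade-off (cache lifetime vs. network calls) has a simple closed form before you reach for Petri nets: if requests arrive as a Poisson process with rate λ and each miss caches the response for T seconds, then each miss is followed by an expected λT hits, so the miss ratio is 1 / (1 + λT). A sketch of that model (the parameter values are arbitrary examples):

```python
def miss_ratio(rate, ttl):
    """Expected fraction of requests that go to the network for a TTL cache
    fed by Poisson arrivals: each miss opens a ttl-long window in which an
    expected rate*ttl further requests are served as hits."""
    return 1.0 / (1.0 + rate * ttl)

def network_requests_per_hour(rate_per_s, ttl_s):
    """Requests per hour that actually reach the network."""
    return rate_per_s * 3600 * miss_ratio(rate_per_s, ttl_s)

# Example: 2 requests/s with a 30 s cache -> miss ratio 1/61, i.e. only
# ~1.6% of requests hit the network (~118 per hour instead of 7200).
print(miss_ratio(2.0, 30.0))
print(network_requests_per_hour(2.0, 30.0))
```

Sweeping ttl and rate over a grid gives the first two graphs directly; the cache-invalidation question then amounts to replacing the fixed T with the expected time until another host's activity dirties the entry.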