FIWARE: Can we use Cygnus on a Raspberry Pi?

Can we install Cygnus on a Raspberry Pi?
Thinking of using it with Cepheus to add persistence at the gateway level.
Thanks in advance for your help!

Never tried it; nevertheless, Cygnus is a Java application, so having a JVM and the proper libraries should be enough to make it work. Memory is most probably a problem, so you would likely not be able to handle a high throughput of notifications. In any case, as said, it is a matter of trying.
The key point for using it with Cepheus is whether Cepheus notifies in the same format as Orion Context Broker. If not, Cygnus will not understand the notifications.
Another important thing is the storage intended to be used for persistence. I don't know whether any of the storages supported by Cygnus, for instance MySQL or MongoDB, can run within a Raspberry Pi! In that case, the best option may be to install the storage on a remote machine.

Related

Hosting an HTML file over mosquitto

I just found that mosquitto has received a websockets upgrade which allows it to host HTTP services.
I tried hosting an HTML file using the websockets feature on port 8080.
The mosquitto broker seems to start fine and the MQTT services on the other ports seem to function properly, but when I try to access the HTML file over localhost I get a response saying no data was sent by the server.
I am not sure where my mistake lies. Any ideas?
Mosquitto is not an HTTP server; it cannot serve generic files.
The HTTP listener is only there to facilitate an upgrade to the websocket protocol in order to run MQTT over a websocket connection.
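As an illustration of what that websocket listener is actually for, here is a minimal sketch of an MQTT client talking to mosquitto over websockets; it assumes the websockets listener is on port 8080 and that the Eclipse Paho Java client (1.1.0 or later, which accepts ws:// URIs) is on the classpath:

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
    import org.eclipse.paho.client.mqttv3.MqttMessage;
    import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

    public class MqttOverWebsockets {
        public static void main(String[] args) throws Exception {
            // ws:// scheme, not http:// -- the listener on 8080 only upgrades
            // the connection to MQTT over websockets, it does not serve files.
            MqttClient client = new MqttClient(
                    "ws://localhost:8080", "ws-test-client", new MemoryPersistence());

            MqttConnectOptions options = new MqttConnectOptions();
            options.setCleanSession(true);
            client.connect(options);

            client.publish("test/topic", new MqttMessage("hello over websockets".getBytes()));
            client.disconnect();
        }
    }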
You might want to look for a different broker that is flexible enough to do what you're looking for. I don't know of any MQTT broker that allows you to do that out of the box, but many are fairly extensible. One I can speak for is VerneMQ, as I am one of the core developers. Developing a simple VerneMQ plugin that serves some static files over HTTP is a matter of a few lines of code, as the plugin only requires setting up some configuration for the internal webserver.
However, unfortunately we haven't documented this feature yet. But feel free to drop us a line if such an approach sounds interesting to you.
Cheers,
Andre

Issue when trying to connect to the cluster after updating the version of Java SDK

We are experiencing an issue when trying to connect to the cluster after updating the version of the Java SDK.
The setup of the system is as follows:
We have a web application that uses the Java SDK, and a Couchbase cluster. In between we have a VIP (Virtual IP Address). We realise that isn't ideal, but we're not able to change it immediately since the VIP was mandated by Tech Ops. The VIP is basically only there to reroute the initial request on application startup. That way we can make modifications on the cluster and ensure that when the application starts it can find the cluster regardless of the actual nodes in the cluster and their IPs.
Prior to the issue we used Java SDK version 1.4.4. Our application would start and the Java SDK would initiate a request on port 8091 to the VIP. Please note that port 8091 is the only port open on the VIP. The VIP would reroute the request to one of the nodes currently in the cluster, and the cluster would respond to the Java SDK. At that point the Java SDK would discover all the nodes in the cluster and the application would run fine. During uptime, if we added or removed a node from the cluster, the Java SDK would update automatically and everything would keep running without issue.
In the last sprint we updated the Java SDK to version 2.1.3. Our application starts and the Java SDK initiates a request on port 11210 to the VIP. Since this port is not open, the request fails and the Java SDK throws an exception:
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:93)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:108)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:99)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:89)
No further request would be made on any port.
It appears the order in which ports are used has changed between versions. Could somebody please confirm, or dispute, that the order in which ports are used for cluster discovery has changed between versions? Also, could somebody please provide some advice on how we could resolve the issue? We are trying to understand the client's behaviour: if we opened all those ports on the VIP, would the client then function correctly and at full performance?
The issue is happening in our production environment, which we cannot use for testing potential solutions since that would interfere with our products.
In v2.x of the Java SDK, it defaults to port 11210 to get the cluster map used to bootstrap the application. This is actually a huge improvement, as the map now comes from the managed cache rather than the cluster manager (port 8091). The SDK should fall back to 8091 if it cannot get the map on 11210, though. Regardless, you really want to get that map from 11210, trust me; it cleans up a lot of problems.
To resolve this long term and follow Couchbase best practices, upgrade to the Java 2.2.x SDK, get rid of the VIP entirely and go with a DNS SRV record instead. That gives you one DNS entry for the SDK connection object and you just manage the node list in DNS. It works great. I say SDK 2.2 because the DNS SRV record solution is fully supported there; in 2.1 it is experimental. VIPs are specifically recommended against by Couchbase these days. In older versions of the SDKs it was fine to do this, and it helped limit the number of connections from the app to the DB nodes, but that is no longer necessary and can actually be a bad thing.
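As a rough sketch of what that can look like with the 2.2.x Java SDK (the hostname and bucket name below are placeholders; the hostname needs a matching _couchbase._tcp SRV record listing the cluster nodes):

    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.Cluster;
    import com.couchbase.client.java.CouchbaseCluster;
    import com.couchbase.client.java.env.CouchbaseEnvironment;
    import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

    public class DnsSrvBootstrap {
        public static void main(String[] args) {
            // Let the SDK resolve the node list from DNS instead of a VIP.
            CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                    .dnsSrvEnabled(true)
                    .build();

            // With DNS SRV enabled, pass exactly one hostname; the SDK looks up
            // _couchbase._tcp.couchbase.example.com to find the actual nodes.
            Cluster cluster = CouchbaseCluster.create(env, "couchbase.example.com");
            Bucket bucket = cluster.openBucket("default");

            // ... use the bucket ...

            cluster.disconnect();
        }
    }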
In addition to Kirk's long-term answer (which I also advise you to follow), a shorter-term solution may be to deactivate the 11210 bootstrapping (carrier bootstrap) through the CouchbaseEnvironment by calling bootstrapCarrierEnabled(false) on the builder.
I don't guarantee that it will work with a VIP even after that, but it may be worth a try if you're in a hurry.
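A minimal sketch of that shorter-term workaround (the VIP hostname and bucket name below are placeholders, and this assumes port 8091 stays reachable through the VIP):

    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.Cluster;
    import com.couchbase.client.java.CouchbaseCluster;
    import com.couchbase.client.java.env.CouchbaseEnvironment;
    import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

    public class HttpBootstrapOnly {
        public static void main(String[] args) {
            // Disable carrier (port 11210) bootstrap so the SDK falls back to
            // the HTTP config provider on port 8091, the only port open on the VIP.
            CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                    .bootstrapCarrierEnabled(false)
                    .build();

            Cluster cluster = CouchbaseCluster.create(env, "vip.example.internal");
            Bucket bucket = cluster.openBucket("default");

            // ... use the bucket ...

            cluster.disconnect();
        }
    }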

Installation of IDAS to Use with Orion CB

I have an Orion CB working on a virtual machine just fine.
Now I have a gateway that communicates over MQTT, so I want to use IDAS as an IoT Agent to make the link between my gateway and the Orion CB.
My question is: how do I install IDAS?
I have this: https://github.com/telefonicaid/fiware-IoTAgent-Cplusplus/
It is really not clear at all what steps to take in order to install and use it. Can anyone explain?
Or is there any kind of virtual machine with IDAS already installed on it, like there is for the Orion CB?
Thanks
I think I ended up asking the same question here.
The easiest way to install IDAS on a CentOS machine is through its RPMs, available on its catalogue page.
As of now I'm having other issues with that installation, but maybe you won't have them.
I hope this helps even though it comes so late.

How to connect to my MQTT broker in OpenShift

Following these two tutorials (https://www.anavi.org/article/182/ and http://wei-meilin.blogspot.tw/2014/05/red-hat-openshift-xpaas-simple-mqtt.html) I have installed an MQTT broker using JBoss Fuse.
Although my mqtt-container disappears after a while (I don't know why), I can set up port forwarding and test the broker.
But I would like to know how to connect directly to the broker. Do you know how to do it?
I have tried this tutorial (http://training.runcloudrun.com/advanced/16-Network-and-Protocols.md.html - AMQ Example) but I don't have access to "/var/lib/openshift/.httpd.d/sniproxy.cfg"
I am the author of the first tutorial that you pointed out. If you want to use MQTT without local port forwarding, please have a look at the remark in the article on my blog and at the AMQ cartridge that demonstrates the SNI features:
Port forwarding is not convenient for real-life cases, especially if the MQTT clients are running on embedded devices such as microcontrollers; it is recommended to use an SNI proxy as explained here: http://training.runcloudrun.com/advanced/16-Network-and-Protocols.md.html
I was using OpenShift Online and that feature is only available in the Enterprise edition.
Why doesn't OpenShift have this feature (completely) in Online mode?
One way to work around this is to use the MQTT-over-websocket feature with a DIY cartridge. See the SO question "How can I access socket through Openshift" for some pointers to further details about how to run websockets on OpenShift.
Mosquitto seems to have implemented the websocket feature, though I have not verified this by testing it out.

Best way to deploy java application on AWS using Netbeans?

I have a publicly accessible database on RDS that works like a charm from Netbeans. I would like to deploy my Java application on AWS. What is the simplest way to do this? I will only use the application for some very basic tasks, getting used to cloud computing by working on a small scale. Is EC2 my best bet, and is it possible to upload apps as easily as with the Google App Engine plugin? Can I use the same JDBC driver as I use locally, and can I use JPA against the database? I would rather not use Eclipse for now as I am in a bit of a hurry and need to get this working as soon as possible.
This is a lot of questions for one question, but I'll see if I can help you out.
1. Simplest Way to deploy to AWS
If this application is as simple as you say it is, the most cost-effective solution while you're getting used to AWS will be to deploy to a micro instance and take advantage of the free tier. From Amazon:
AWS Free Tier includes 750 hours of Linux and Windows Micro Instances each month for one year. To stay within the Free Tier, use only EC2 Micro instances.
The simplest way to deploy directly from Netbeans is to use the integrated Elastic Beanstalk support. This saves you from having to configure things yourself.
Another option is to launch a Ubuntu AMI and install Tomcat. Create a WAR file from your application and place it where Tomcat can find it. I suggest using the first method.
2. Is EC2 my best bet?
This is a little open-ended. For a nice learning experience as you get accustomed to AWS, the free tier for EC2 is a nice platform to learn with. If your application eventually needs to scale, Elastic Beanstalk is a pretty simple way to manage an application. My answer is an opinion because "best bet" depends solely on the requirements of your application, but I say yes.
3. Is it possible to upload apps as easily as with the Google App Engine plugin?
For simple applications I think so. I think it's even easier if you switch to Eclipse and use the toolkit for AWS. Whether Google App Engine or AWS is easier for you will once again depend on personal preference, the application, and your requirements.
4. Can I use the same JDBC driver as I use locally?
If you're using MySQL Connector/J then yes. Read this to understand how it works with RDS.
5. Can I use JPA against the database?
Yes. You'll change the endpoint from localhost to the endpoint of your RDS instance.
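For example, with plain JDBC (the RDS endpoint, database name, and credentials below are placeholders), the only change from a local setup is the host in the connection URL; the same URL goes into your JPA data source configuration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RdsConnectionCheck {
        public static void main(String[] args) throws Exception {
            // Same MySQL Connector/J driver as used locally; only the host changes
            // from localhost to the RDS instance endpoint.
            String url = "jdbc:mysql://mydbinstance.abc123xyz.us-east-1.rds.amazonaws.com:3306/mydb";
            try (Connection conn = DriverManager.getConnection(url, "dbuser", "dbpassword");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("Connected to RDS, got: " + rs.getInt(1));
            }
        }
    }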
6. I would rather not use Eclipse for now...
Another personal preference, but the AWS toolkit for Eclipse is very easy to use and can speed the process up a bit.