FIWARE stack (IoT Agent, Orion Context Broker) compatibility with AWS DocumentDB instead of MongoDB

Can anybody confirm AWS DocumentDB compatibility with FIWARE GEs like the IoT Agents and the Orion Context Broker?
Is the FIWARE stack fully working with DocumentDB? I am looking for suggestions and things to consider before attempting it.
Thanks,
ONR

Since AWS DocumentDB only supports a subset of MongoDB, there is definitely going to be some incompatibility.
Take geoqueries, for example:
curl -G -X GET \
'http://localhost:1026/v2/entities' \
-d 'type=Store' \
-d 'georel=near;maxDistance:1500' \
-d 'geometry=point' \
-d 'coords=52.5162,13.3777'
Internally these rely on the MongoDB geospatial operations which are not currently supported: https://docs.aws.amazon.com/documentdb/latest/developerguide/mongo-apis.html#mongo-apis-geospatial
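To make the incompatibility concrete, here is a sketch (in pymongo-style Python, no database connection needed) of the kind of geospatial filter such a query translates to internally. The field name location.coords is an assumption for illustration; the point is the operators, since $nearSphere, $geometry and $maxDistance are among those DocumentDB does not support:

```python
# Sketch of the MongoDB filter a "georel=near;maxDistance:1500" geoquery
# boils down to. DocumentDB rejects these geospatial operators.
near_filter = {
    "location.coords": {  # hypothetical field name, for illustration only
        "$nearSphere": {
            "$geometry": {"type": "Point", "coordinates": [13.3777, 52.5162]},
            "$maxDistance": 1500,  # metres
        }
    }
}
print(near_filter["location.coords"]["$nearSphere"]["$maxDistance"])  # 1500
```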
To my understanding, the IoT Agents just use MongoDB as a simple memory store for mapping data, so they should not have any issues.

Related

How to parse a JSON payload while sending it from a producer to a consumer topic?

I want to send a payload from a producer topic to a consumer topic. I've created the channels locally and tried sending a payload on the producer topic, but the payload is not received on the consumer side.
I think this could be an error in the JSON formatting; I've tried online JSON beautifiers, but that is not helping.
Although it's a very slight chance, there is a possibility that there's something wrong with the code and the producer topic is not able to receive the payload, but I'm not able to confirm this.
You'll need to show code to solve your specific problem, but here is a simple example using kcat and jq.
Producing
$ kcat -P -b localhost:9092 -t example
{"hello":"world"}
{"hello":"test data"}
Consume and parse
$ kcat -b localhost:9092 -C -t example -u | jq -r .hello
world
test data
The Kafka broker will not validate your JSON, but the serialization library in your client might. So your issue could be any one of the following:
Your serializer failed, and you aren't catching and logging that exception.
You are not sending enough data for the producer buffer to clear, so you should call the .flush() method on the producer at some point.
You have some Kafka authorization enabled on your cluster and your producer is failing to connect/produce.
Some other connection setting is wrong in your code.
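To make the serializer and flush points above concrete, here is a minimal sketch using the kafka-python client. The broker address and topic name are placeholders; the producer lines are commented out because they assume a reachable broker:

```python
import json

def serialize(payload):
    """JSON-encode a payload to bytes; raises TypeError if it is not serializable."""
    return json.dumps(payload).encode("utf-8")

# Producer usage (kafka-python), assuming a broker on localhost:9092:
#
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="localhost:9092",
#                          value_serializer=serialize)
# future = producer.send("example", {"hello": "world"})
# producer.flush()                    # force buffered records out
# metadata = future.get(timeout=10)   # raises on connection/auth failures
```

Catching TypeError around send() and checking future.get() surfaces both serializer failures and broker-side errors instead of silently losing messages.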

Autodesk Forge Tutorial

I have been working through the Autodesk Forge Sample App tutorial.
When I click the button to connect with my account, I get this error:
{"developerMessage":"The required parameter(s) redirect_uri not present in the request","errorCode":"AUTH-008","more info":"https://forge.autodesk.com/en/docs/oauth/v2/developers_guide/error_handling/"}
If you get this error, you are probably trying a 3-legged OAuth flow, and it means that you did not provide the callback URL in the request. Since you did not say which tutorial you have been using, let me point you to two sources: the Forge documentation tutorial here, or the Learn Forge tutorial here.
In both cases, it is important to have the callback URL defined on the application page in the Forge portal. If you are using your local machine, it should be something like http://localhost:3000/mycallback. The Learn Forge material tells you to define it as (see here):
http://localhost:3000/api/forge/callback/oauth
where the documentation tutorial says to use
http://sampleapp.com/oauth/callback
but here they assume you own the domain sampleapp.com, which is probably not true. You need to replace sampleapp.com with your own domain, or with localhost:port when developing your webserver on your local machine. Note that it is important to use your true domain rather than localhost when you run the code on your server, and to update both your application page and your code to use the same definition. I usually set up 3 applications (dev: with localhost:3001, staging: with myapp-staging.autod3sk.net, and production: with myapp.autod3sk.net) to avoid having to edit keys all the time; it makes application deployment a lot easier.
Now that your application is set up, you need to use that URL in your request as documented in the OAuth API. However, all parameters should be URL-encoded, otherwise the / characters will be misinterpreted by the server. Failing to pass the correct, encoded URL parameter in the request will result in the error you are seeing.
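As a quick sanity check, Python's urllib.parse.quote produces the encoded form used below. Passing safe="" forces every reserved character, including / and :, to be encoded (percent-encoding is case-insensitive, so %3A and %3a are equivalent):

```python
from urllib.parse import quote

callback = "http://localhost:3000/api/forge/callback/oauth"
encoded = quote(callback, safe="")  # encode every reserved character
print(encoded)
# -> http%3A%2F%2Flocalhost%3A3000%2Fapi%2Fforge%2Fcallback%2Foauth
```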
Here is an example:
https://developer.api.autodesk.com/authentication/v1/authorize \
?response_type=code \
&client_id=MYCLIENT_ID \
&redirect_uri=MY_ENCODE_CALLBACKURL \
&scope=REQUIRED_SCOPES
After replacing the placeholders, it should look like this:
https://developer.api.autodesk.com/authentication/v1/authorize\
?response_type=code\
&client_id=oz9f...k2d\
&redirect_uri=http%3a%2f%2flocalhost%3a3000%2fapi%2fforge%2fcallback%2foauth\
&scope=data%3aread
Copy this into your browser, and after logging in and accepting the consent page, the service should return to your browser with a URL like this:
http://localhost:3000/api/forge/callback/oauth?code=wroM1vFA4E-Aj241-quh_LVjm7UldawnNgYEHQ8I
Because we do not have a server yet, the browser will show an error, but you can clearly see the URL returned to you with a code. You now need to copy that code into another request to get the final token. Here we will use curl, but ideally both this request and the callback should be handled by your server code.
curl 'https://developer.api.autodesk.com/authentication/v1/gettoken' \
-X 'POST' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'client_id=oz9f...k2d' \
-d 'client_secret=eUr...Q1e' \
-d 'grant_type=authorization_code' \
-d 'code=wroM1vFA4E-Aj241-quh_LVjm7UldawnNgYEHQ8I' \
-d 'redirect_uri=http://localhost:3000/api/forge/callback/oauth'
Ideally, all of this should be done in your server code, as the Learn Forge tutorial teaches you to do.

How to capture the BitTorrent infohash ID on a network using tcpdump or any other open source tool?

I am working on a project where we need to collect the BitTorrent infohash IDs seen in our small ISP network. Using port mirroring we can pass all WAN traffic to a server and run tcpdump or any other tool to find the infohash IDs downloaded by BitTorrent clients. For example:
tcpflow -p -c -i eth1 tcp | grep -oE '(GET) .* HTTP/1.[01].*'
This command shows results like this:
GET /announce?info_hash=N%a1%94%17%2c%11%aa%90%9c%0a%1a0%9d%b2%cfy%08A%03%16&peer_id=-BT7950-%f1%a2%d8%8fO%d7%f9%bc%f1%28%15%26&port=19211&uploaded=55918592&downloaded=0&left=0&corrupt=0&key=21594C0B&numwant=200&compact=1&no_peer_id=1 HTTP/1.1
Now we need to capture only the infohash and store it in a log or a MySQL database.
Can you please tell me which tool can do something like this?
Depending on how rigorous you want to be, you'll have to decode the following protocol layers:
TCP: assemble the packets of a flow. You're already doing that with tcpflow; tshark, Wireshark's CLI, could do that too.
HTTP: extract the request path from the GET line. A simple regex will do the job here.
URI: extract the query string.
application/x-www-form-urlencoded: extract the info_hash key-value pair and handle the percent-encoding.
For the last two steps I would look for tools or libraries in your programming language of choice to handle them.
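As a sketch of those last two steps in Python (standard library only), take the GET line that tcpflow already produces and return the infohash as hex. Note that info_hash is 20 raw bytes, percent-encoded, so it must be decoded to bytes rather than text:

```python
import re
from urllib.parse import urlsplit, unquote_to_bytes

def extract_infohash(request_line):
    """Return the info_hash of a tracker announce GET line as 40 hex chars."""
    m = re.match(r"GET (\S+) HTTP/1\.[01]", request_line)
    if not m:
        raise ValueError("not a GET request line")
    query = urlsplit(m.group(1)).query
    for pair in query.split("&"):
        key, _, value = pair.partition("=")
        if key == "info_hash":
            # The value is 20 raw bytes, percent-encoded; decode to bytes first.
            return unquote_to_bytes(value).hex()
    raise ValueError("no info_hash in query string")

line = ("GET /announce?info_hash=N%a1%94%17%2c%11%aa%90%9c%0a%1a0%9d%b2%cfy"
        "%08A%03%16&peer_id=-BT7950-&port=19211 HTTP/1.1")
print(extract_infohash(line))  # 4ea194172c11aa909c0a1a309db2cf7908410316
```

From here, the hex string can be appended to a log file or inserted into MySQL with any client library.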

Bug in GCE Developer Console 'equivalent command line'

When attempting to create an instance in a project that includes local SSDs, I am given the following (redacted) command line equivalent:
gcloud compute --project "PROJECTS" instances create "INSTANCE" --zone "us-central1-f" \
--machine-type "n1-standard-2" --network "default" --maintenance-policy "MIGRATE" \
--scopes [...] --tags "http-server" --local-ssd-count "2" \
--image "ubuntu-1404-trusty-v20150316" --boot-disk-type "pd-standard" \
--boot-disk-device-name "INSTANCEDEVICE"
This fails with:
ERROR: (gcloud) unrecognized arguments: --local-ssd-count 2
Indeed, I find no mention of --local-ssd-count in the current docs: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
Changing this to --local-ssd --local-ssd works, as then the defaults are used.
This is using Google Cloud SDK 0.9.54, the most recent after gcloud components update.
If you've found a bug in GCE or have a feature you'd like to propose/request, the best way to report it is to use the 'Public Issue Tracker' that Google has made available.
Visit this Issue Tracker to report your feedback/bug/feature request. It does not require any support package at all.
I highly encourage you to do so, as they have staff actively monitoring and working on those reports (note that this is different from what they seem to do on Stack Overflow: the tracker is for bugs and feature requests, while SO is for questions). It is likely the best way to get your feedback to their engineers, though we do know that they have staff answering questions on Stack Overflow as well. Questions go here; bug reports go to the tracker, as far as I understand.

Create Couchbase documents via REST API

I am new to Couchbase and I want to know how CRUD can be achieved in it.
I have successfully created a bucket, and I've tried to insert documents into it using curl.
Creating a bucket with curl succeeded, like the following:
curl -X POST -u admin:citrus -d name=test-bucket -d ramQuotaMB=100 -d authType=none -d replicaNumber=2 -d proxyPort=11216 http://example.com:8091/pools/default/buckets
Now how can I create sample documents in this bucket?
How can I achieve this using a REST API? Please help me.
Couchbase isn't designed to use a REST API for data creation or mutation; perhaps you are thinking of CouchDB, which does offer this and shares some similarities with Couchbase, although they are distinct technologies.
You need to use one of the SDKs to interact with your bucket. There is a multitude of SDKs available in all the major languages: Java, Ruby, Python, C, etc. Check out the list of them here; they also contain getting-started guides which cover basic operations such as get/set, as well as more complex examples of views and topics such as locking.
http://docs.couchbase.com/
I think you should use Couchbase's Sync Gateway. The Sync Gateway provides a REST API which allows you to Create, Read, Update and Delete (CRUD) documents.
For example, if you have a Couchbase Server running on port 8091 with a bucket called test-bucket, you can set up your Sync Gateway with the following content in your sync_gateway.json configuration file:
{
  "log": ["HTTP+"],
  "adminInterface": "127.0.0.1:4985",
  "interface": "0.0.0.0:4984",
  "databases": {
    "test-db": {
      "server": "http://localhost:8091",
      "bucket": "test-bucket",
      "username": "test-bucket",
      "password": "test-bucket-password",
      "users": {
        "GUEST": {"disabled": false, "admin_channels": ["*"]}
      }
    }
  }
}
Then, after starting the Sync Gateway, you can create a document like the following:
curl -X PUT -H 'Content-Type: application/json' http://localhost:4984/test-db/myNewDocId -d @document.file
Here, document.file is a file with the JSON content of the document you'd like to create, and myNewDocId is the ID of the new document.
You can find all supported REST API methods and details in the official documentation: http://developer.couchbase.com/documentation/mobile/1.1.0/develop/references/sync-gateway/rest-api/document/index.html
Couchbase now supports adding documents via a curl call.
You can do:
curl localhost:8093/query/service -u uname:paaswd -d 'statement=INSERT INTO `bucketName` (KEY, VALUE) VALUES ( "my_doc_id", {"Price":"price"} );'
Note that we are using port 8093, which is the query service port. For this to run, you have to have the query service (N1QL) running.