Store data coming from Orion through Cygnus in Cosmos - FIWARE

I have a doubt about how to persist data in an architecture where Cygnus is subscribed to Orion Context Broker and then Cygnus must persist the data in Cosmos. Is it necessary to implement a custom WebHDFS client for persisting the data from Cygnus to Cosmos, or can it be stored automatically if we configure Cosmos via the CLI? After reading some documentation I don't know whether this last step can be done through configuration using the CLI or whether a custom client is needed. When would a custom WebHDFS client not be necessary?

As said, Cygnus subscribes to Orion in order to receive notifications about certain desired entities whenever any of their attributes changes.
What happens then? Cygnus uses the WebHDFS REST API to write the data into Cosmos HDFS, typically one file per notified entity. If the file does not exist yet, the "create" operation of the REST API is used; if it already exists, the "append" operation is used.
Where are the above files created? Cygnus builds the HDFS file paths as follows:
/user/<your_cosmos_username>/<notified_fiware_service>/<notified_fiware_servicePath>/<built_destination>/<built_destination>.txt
The notified_fiware_service and notified_fiware_servicePath are HTTP headers sent by Orion in the notification; they determine how the data is organized. The built_destination is usually the result of concatenating the notified entityId and entityType.
Finally, your_cosmos_username is your Linux and HDFS username in the FIWARE Lab Cosmos deployment. It is obtained by logging in with your FIWARE Lab credentials at http://cosmos.lab.fi-ware.org/cosmos-gui/. You only have to do this once; it is, let's say, a provisioning step that creates the Unix username and your HDFS userspace.
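
For reference, here is a minimal sketch (Python, using the requests library) of the kind of WebHDFS/HttpFS calls Cygnus makes under the hood; you do not need to write this yourself if Cygnus is doing the persistence. The host, port, username, file path and sample line are illustrative assumptions, and the single-step data=true form is the HttpFS variant (plain WebHDFS on the namenode uses a two-step redirect instead).

# Sketch of the WebHDFS/HttpFS create/append logic Cygnus performs.
# Host, port, username, path and the sample line are assumptions.
import requests

HOST = "cosmos.lab.fi-ware.org"   # assumed HttpFS endpoint
PORT = 14000                      # default HttpFS port (assumption)
USER = "your_cosmos_username"
PATH = f"/user/{USER}/myservice/myservicepath/room1_Room/room1_Room.txt"
BASE = f"http://{HOST}:{PORT}/webhdfs/v1{PATH}"
HEADERS = {"Content-Type": "application/octet-stream"}

def persist(line):
    # Check whether the file already exists...
    status = requests.get(f"{BASE}?op=GETFILESTATUS&user.name={USER}")
    if status.status_code == 404:
        # ...if not, "create" it with the first line of data...
        resp = requests.put(f"{BASE}?op=CREATE&user.name={USER}&data=true",
                            data=line, headers=HEADERS)
    else:
        # ...otherwise "append" the new line to it.
        resp = requests.post(f"{BASE}?op=APPEND&user.name={USER}&data=true",
                             data=line, headers=HEADERS)
    resp.raise_for_status()

persist('{"recvTime": "2016-01-01T00:00:00Z", "temperature": "26.5"}\n')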

Related

MySQL Trigger to RabbitMQ Communication

I have an application on SQL Server that sends change data to a target database using SQL Service Broker. I capture the data from a trigger and push it into a Service Broker queue. Now I want to make my application compatible with MySQL. The problem is how to achieve exactly the same implementation in MySQL, because it is not supported. If I use an external message broker like RabbitMQ, how can a MySQL table trigger communicate directly with RabbitMQ?
Thanks in advance.
If you can use Kafka as an alternative to RabbitMQ there are some tutorials on how to accomplish a similar goal:
Using Kafka Connect
Using GCP services
Change data capture (CDC) is the term used for this pattern of reacting to data changes and then delivering those changes in real time to a downstream process.
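
As a rough illustration of the Kafka Connect route, the sketch below registers a Debezium MySQL source connector through the Kafka Connect REST API, so row changes are captured from the binlog instead of from triggers. Hosts, credentials, database/table names and the exact config keys (which vary between Debezium versions) are assumptions.

# Sketch: registering a Debezium MySQL source connector via the Kafka Connect
# REST API. All hosts, credentials and names below are illustrative assumptions.
import json
import requests

connector = {
    "name": "mysql-cdc-source",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql.example.com",
        "database.port": "3306",
        "database.user": "cdc_user",
        "database.password": "cdc_password",
        "database.server.id": "184054",
        "topic.prefix": "appdb",   # Debezium 2.x key; 1.x uses database.server.name
        "table.include.list": "appdb.orders",
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-changes.appdb",
    },
}

resp = requests.post("http://kafka-connect.example.com:8083/connectors",
                     headers={"Content-Type": "application/json"},
                     data=json.dumps(connector))
resp.raise_for_status()
print("Connector registered:", resp.json()["name"])

A downstream consumer (or a bridge into RabbitMQ, if you must keep it) can then read the change events from the Kafka topics Debezium produces, e.g. appdb.appdb.orders.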

How to publish AWS SNS data to a MySQL database

I am new to AWS and databases.
Since I am a complete beginner at this, any suggestions will be appreciated.
Currently the project plan is that data from an AWS database will be pushed to an external MySQL database using SNS HTTP fanout.
NOTE:
1. The data will be pushed by the client using AWS SNS.
2. We have no access to the AWS account, nor are we planning to have an AWS account.
3. The external MySQL database is a private database running on a Linux server.
I have gone through the official documentation of AWS SNS, and also some websites. This is all I found:
Use external applications like Zapier to map the data.
Develop some application to map the data.
Is it like using a servlet application on the receiver side to update the table, or are there other methods?
AWS DB -----> SNS -----> _________ -----> External MySql DB
Thanks
If you cannot have an AWS account, you can have your own web server consume the SNS messages. SNS can deliver messages to an HTTP/HTTPS endpoint in a predefined structure. Read more details here. You can enable such an endpoint on your own server and share your server URL with the AWS account owner. They can then create a subscription from their SNS topic to your endpoint.
For setting up this endpoint, there are many options. ExpressJS is one such popular framework to quickly implement HTTP APIs.
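
To make the idea concrete, here is a minimal sketch of such an endpoint in Python/Flask (the answer mentions ExpressJS, but any HTTP stack follows the same pattern): confirm the subscription once, then insert each notification into MySQL. Database credentials, table and column names are assumptions.

# Sketch of an HTTP endpoint that consumes SNS messages and stores them in
# MySQL. Credentials, table and column names are illustrative assumptions.
import json
import urllib.request

import pymysql
from flask import Flask, request

app = Flask(__name__)
db = pymysql.connect(host="localhost", user="app", password="secret", database="appdb")

@app.route("/sns", methods=["POST"])
def sns_endpoint():
    msg = json.loads(request.data)
    msg_type = request.headers.get("x-amz-sns-message-type")

    if msg_type == "SubscriptionConfirmation":
        # Confirm the subscription by visiting the URL SNS provides.
        urllib.request.urlopen(msg["SubscribeURL"])
    elif msg_type == "Notification":
        with db.cursor() as cur:
            cur.execute("INSERT INTO sns_events (message_id, payload) VALUES (%s, %s)",
                        (msg["MessageId"], msg["Message"]))
        db.commit()
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)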
Probably, option two would be more suited, or at least the first to be considered. For that option you would have to develop a Lambda function which receives data from SNS, reformats it if needed, and uploads it to MySQL. So your architecture would look like:
Data--->SNS--->Lambda function---> MySQL
Depending on the amount of data coming into SNS, you may also add SQS queues to the mix, to buffer the records and enable a fan-out architecture. For example:
             /---> SQS queue 1 ---> Lambda function 1 ---> MySQL
Data --> SNS
             \---> SQS queue 2 ---> Lambda function 2, EC2 instance, Container ---> Other destination
Other solutions are possible. But I would first consider the above, before looking into other ways.
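
For illustration, a minimal sketch of the Lambda function in the Data ---> SNS ---> Lambda ---> MySQL flow above might look like this; the environment variables, table name and message format are assumptions, and pymysql would have to be packaged with the function or supplied as a layer.

# Sketch of a Lambda handler that writes SNS notifications into MySQL.
# Environment variables, table and message format are illustrative assumptions.
import json
import os

import pymysql  # must be bundled with the deployment package or a layer

conn = pymysql.connect(host=os.environ["DB_HOST"],
                       user=os.environ["DB_USER"],
                       password=os.environ["DB_PASSWORD"],
                       database=os.environ["DB_NAME"])

def handler(event, context):
    # Each SNS notification arrives wrapped in event["Records"].
    with conn.cursor() as cur:
        for record in event["Records"]:
            payload = json.loads(record["Sns"]["Message"])
            cur.execute("INSERT INTO incoming_data (source_id, body) VALUES (%s, %s)",
                        (payload.get("id"), json.dumps(payload)))
    conn.commit()
    return {"inserted": len(event["Records"])}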

Sync RDBMS with Apache Directory Ldap

Currently, I have a requirement to sync data from Apache Directory LDAP to one of the RDBMS databases (MySQL, PostgreSQL). The directory holds approximately a few million records for now and may grow in the future. The LDAP directory is the primary data source for now, but the goal is to have real-time data in both LDAP and the RDBMS, since we plan to use the RDBMS for real-time analytics.
Option 1:
Thinking of using Spring Cloud Data Flow. A source Spring Boot app reads the LDAP data that has changed since the last sync run and pushes it to a queue (RabbitMQ for now). The sink would be another Spring Boot app that collects the data from the queue and persists it into the RDBMS. We would be able to better track and manage the sync jobs using the Spring Cloud Data Flow dashboard.
Option 2:
Spring's LdapTemplate lets us talk to the LDAP directory from our application. One approach would be to intercept the LdapTemplate calls wherever applicable and push the data to a queue; an intermediate app then reads the data from the queue (RabbitMQ) and converts the LDAP response into the format required to update the RDBMS.
I am new to LDAP and Spring Cloud Data Flow. So far I have come up with only these two approaches, considering my project's existing technology and system landscape. Any other suggestions/approaches are really appreciated. Thanks in advance.
Another approach, if the LDAP server is a Microsoft AD server: create a Windows service in C# that connects to your LDAP server, fetches the data every day, and sends it to your RDBMS over a socket connection, which is reliable and consistent.
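
Whatever option you pick, the core step is an incremental "read changed LDAP entries, upsert into the RDBMS" pass. Below is a rough sketch of that step in Python with the ldap3 and pymysql libraries; server URLs, bind/base DNs, attribute names and the table schema are assumptions, and in Option 1 this logic would be split between the source and sink apps.

# Sketch of one incremental LDAP -> RDBMS sync pass. URLs, bind DN, base DN,
# attributes and the table schema are illustrative assumptions.
from datetime import datetime, timezone

import pymysql
from ldap3 import Connection, Server, SUBTREE

def sync_since(last_run):
    ts = last_run.strftime("%Y%m%d%H%M%SZ")  # LDAP generalized-time format
    ldap = Connection(Server("ldap://ldap.example.com"),
                      user="uid=sync,ou=system", password="secret", auto_bind=True)
    ldap.search(search_base="ou=people,dc=example,dc=com",
                search_filter="(modifyTimestamp>=%s)" % ts,
                search_scope=SUBTREE,
                attributes=["uid", "cn", "mail"])

    db = pymysql.connect(host="localhost", user="app", password="secret", database="analytics")
    with db.cursor() as cur:
        for entry in ldap.entries:
            cur.execute("""INSERT INTO people (uid, cn, mail) VALUES (%s, %s, %s)
                           ON DUPLICATE KEY UPDATE cn = VALUES(cn), mail = VALUES(mail)""",
                        (str(entry.uid), str(entry.cn), str(entry.mail)))
    db.commit()

sync_since(datetime(2024, 1, 1, tzinfo=timezone.utc))

With a few million entries you would also want paged LDAP searches and batched writes on the RDBMS side, but the shape of the step stays the same.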

AWS authentication to Vault

We're using Vault to store our application secrets and config. When our app (Java) starts, a script does all the magic of getting the secrets and config from Vault and storing them locally for the application to read. The script authenticates to Vault using an AWS IAM role.
Now we're getting to a situation where the application needs to read secrets from Vault on the go, not just on startup. For that purpose, I need it to be able to authenticate on pretty much every request. It's worth mentioning that the app might also run on a developer machine, so whatever authentication is used, it needs to work on the EC2 instance as well as in the local development environment.
I'm currently leaning towards creating a username and password, store them in Vault for the application to get when starting up. Then the application could use that username/password to authenticate to Vault when it needs.
I'm also considering AppRole, but can't really see any real advantage to it over a simple username/password setup.
What's the best solution for this use case? Any advice would be highly appreciated!
Thanks,
Yosi
The AWS recommendation for storing secrets is to use AWS Systems Manager Parameter Store.
Software running on an Amazon EC2 instance with an assigned Role can use those credentials to access the Parameter Store to retrieve application secrets.
The Parameter Store can also be used outside of EC2, but some AWS credentials will still be needed to authenticate to the Parameter Store.
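
A minimal sketch of that lookup with boto3 (the parameter name and region are assumptions):

# Sketch: reading an application secret from the SSM Parameter Store.
# On EC2 the instance role supplies credentials automatically; on a developer
# machine boto3 falls back to the usual credential chain (env vars, ~/.aws).
import boto3

ssm = boto3.client("ssm", region_name="eu-west-1")
resp = ssm.get_parameter(Name="/myapp/prod/db_password", WithDecryption=True)
db_password = resp["Parameter"]["Value"]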

How to connect with mongoOrion database?

I want to connect to MongoDB with a GUI manager, e.g. 3T MongoChef, MongoDB Compass, Robomongo, or MongoBooster.
I use Windows.
How can I connect to the MongoDB database used by Orion with a GUI manager?
As far as I understand (although I don't know 3T MongoChef, so I may be wrong), MongoChef expects the URL of a MongoDB endpoint. However, you have provided the URL of the Orion Context Broker endpoint (i.e. the NGSI REST API).
You should use the endpoint URL corresponding to the MongoDB instance used by Orion. By default, Orion uses the MongoDB instance running at localhost (i.e. the same host where Orion runs) on the default mongo port (27017). That default can be overridden using the -dbhost CLI parameter.
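
As a quick sanity check that you are pointing at the right endpoint, the following Python/pymongo sketch connects with the same host/port a GUI manager would need; localhost:27017 is an assumption, and "orion" is the database name Orion uses by default.

# Sketch: connecting to the MongoDB instance behind Orion (not to the NGSI API).
# Host, port and database name are the defaults / assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
print(client.list_database_names())   # by default Orion stores its data in a db named "orion"
for doc in client["orion"]["entities"].find().limit(5):
    print(doc)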