Getting historical data in FI-WARE using Cosmos

I'm trying to get all the historical information about a sensor in FI-WARE.
I've seen that Orion uses Cygnus to store historical data in Cosmos. Is that information accessible, or is it only possible to get it through IDAS?
Where could I find more information about this?

You can consume the data in several ways, listed here in order of increasing learning curve:
Working with the raw data, either "locally" (i.e. logging into the Head Node of the cluster) by using the Hadoop commands, or "remotely" by using the WebHDFS/HttpFS REST API (see the sketch after this list). Note that with this approach you have to implement whatever analysis logic you need, since Cosmos only allows you to manage, as said, raw data.
Working with Hive in order to query the data in an SQL-like way. Again, you can do it locally by invoking the Hive CLI, or remotely by implementing your own Hive client in Java (other languages are possible as well) using the Hive libraries.
Working with MapReduce (MR) in order to implement heavier analyses. To do this, you'll have to create your own MR-based application (typically in Java) and run it locally. Once you are done with the local run of the MR app, you can move to Oozie, which allows you to run such MR apps remotely.
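As an illustration of the "remote raw data" option, here is a minimal Python sketch that lists a directory and reads a file through the WebHDFS/HttpFS REST API. The hostname, port, user and HDFS paths are placeholders; adapt them to your Cosmos account.

    import requests

    # Placeholders: adjust to your Cosmos/Hadoop cluster and account.
    COSMOS_HOST = "cosmos.lab.fiware.org"   # hypothetical hostname
    HTTPFS_PORT = 14000                     # HttpFS default port (WebHDFS typically uses 50070)
    HDFS_DIR = "/user/myuser/mysensor"
    HDFS_FILE = "/user/myuser/mysensor/mysensor.txt"
    USER = "myuser"

    base = f"http://{COSMOS_HOST}:{HTTPFS_PORT}/webhdfs/v1"

    # List the directory that Cygnus writes the historical data into.
    listing = requests.get(base + HDFS_DIR, params={"op": "LISTSTATUS", "user.name": USER})
    print(listing.json())

    # Open (download) one of the raw data files; HttpFS answers directly,
    # while plain WebHDFS may redirect to a datanode (requests follows it).
    data = requests.get(base + HDFS_FILE, params={"op": "OPEN", "user.name": USER})
    print(data.text[:500])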
My advice is to start with Hive (step 1 is easy but does not provide any analysis capabilities), first locally by executing some Hive queries, then remotely by implementing your own client. If this kind of analysis is not enough for you, then move to MapReduce and Oozie.
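The answer mentions writing the remote Hive client in Java; as a language-neutral illustration, the following sketch does the same from Python with the PyHive library (an assumption, any Hive client works), against a HiveServer2 host, port, user and table name that are all placeholders.

    from pyhive import hive  # assumes PyHive is installed and HiveServer2 is reachable

    # Placeholders: host, port, user and table name depend on your Cosmos account.
    conn = hive.Connection(host="cosmos.lab.fiware.org", port=10000, username="myuser")
    cursor = conn.cursor()

    # A simple SQL-like query over the data Cygnus persisted for a sensor.
    cursor.execute("SELECT * FROM myuser_mysensor LIMIT 10")
    for row in cursor.fetchall():
        print(row)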
All the documentation regarding Cosmos can be found in the FI-WARE Catalogue of enablers. Within this documentation, I would highlight:
Quick Start for Programmers.
User and Programmer Guide (functionality described in sections 2.1 and 2.2 is not currently available in FI-LAB).

Related

Coding, data migration and deployment process using Jive

I am new to Jive and am currently going through the documentation provided at https://docs.jivesoftware.com/.
What I am looking for is:
Any specific editor to write code in Jive-x
How to migrate data into Jive
The deployment process followed by Jive, i.e. where to develop, test and deploy.
If anyone has worked with Jive, could you provide some links/tips?
There is no specific editor for writing Jive code. This is a personal preference, which may also depend on whether you are writing a plugin in Java or an add-on in JS. I prefer to use IntelliJ in general.
The best option to migrate data into Jive is to use the REST API. It is important to rate-limit the requests so you don't overload the instance, but the API should be able to handle a considerable number of requests, depending on the underlying infrastructure. You could in theory also use the DB to migrate data into Jive, but that would require deep knowledge of the Jive architecture, and the chances of breaking something are high.
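As a rough sketch of what that REST-based migration loop with rate limiting could look like in Python: the instance URL, credentials, endpoint path and payload shape below are illustrative assumptions, so check the Jive REST API documentation for your version.

    import time
    import requests

    JIVE_URL = "https://your-instance.jiveon.com"  # placeholder instance URL
    AUTH = ("migration_user", "secret")            # basic auth, as an assumption

    # Records exported from the legacy system (illustrative shape).
    legacy_docs = [
        {"subject": "Old wiki page 1", "body": "<p>content 1</p>"},
        {"subject": "Old wiki page 2", "body": "<p>content 2</p>"},
    ]

    for doc in legacy_docs:
        payload = {
            "type": "document",
            "subject": doc["subject"],
            "content": {"type": "text/html", "text": doc["body"]},
        }
        # /api/core/v3/contents is the usual v3 endpoint for creating content;
        # verify it against your instance's API version.
        resp = requests.post(f"{JIVE_URL}/api/core/v3/contents", json=payload, auth=AUTH)
        resp.raise_for_status()
        time.sleep(0.5)  # crude rate limiting so the instance is not overloaded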
For development and early testing, the best option is a local instance, which you can set up by following these steps. For full end-to-end testing, the best option is a UAT environment that replicates the production instance/infrastructure as closely as possible.

Unable to find all issues through SonarQube WS API

Goal: Export all SonarQube issues for a project to JSON/CSV.
Approach 1: Mine the Sonar MySQL database
Approach 2: Use the SonarQube WS API
At first I was inclined to go for approach 1, but after discussing it with the SonarQube core developer community I got the impression that the database should not be touched under any circumstances.
Thus I proceeded with approach 2 and developed scripts to get issues. However, I later found that through the WS API I can only get up to 10,000 issues, which does not meet my goal.
Now I am convinced that approach 1, i.e. mining the database, is best for me. Looking at the "issues" table in the Sonar DB, I have the following question.
Question: What is the format/encoding of the "location" field, and how can I decode it from Python/Java?
Extracting data from the database is not recommended at all. The schema and content change frequently, and each upgrade may break your SQL queries. Moreover, the table contains binary data (the issue location) which can't be parsed as-is.
The only way to get data is through the web services. If api/issues/search has a limitation that you consider critical, then you should explain your functional need to the SonarQube Google group.
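For reference, a minimal Python sketch of exporting issues through api/issues/search, paging until the 10,000-result ceiling mentioned above is reached. The server URL, project key and token are placeholders, and parameter names may vary slightly between SonarQube versions.

    import csv
    import requests

    SONAR_URL = "https://sonarqube.example.com"  # placeholder server
    PROJECT_KEY = "my:project"                   # placeholder project key
    TOKEN = "my-user-token"                      # placeholder user token
    PAGE_SIZE = 500                              # maximum page size of the web service

    issues, page = [], 1
    while True:
        resp = requests.get(
            f"{SONAR_URL}/api/issues/search",
            params={"componentKeys": PROJECT_KEY, "ps": PAGE_SIZE, "p": page},
            auth=(TOKEN, ""),  # token as username, empty password
        )
        resp.raise_for_status()
        data = resp.json()
        issues.extend(data["issues"])
        # The web service refuses to page past 10,000 results (p * ps <= 10000).
        if page * PAGE_SIZE >= min(data["total"], 10000):
            break
        page += 1

    with open("issues.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["key", "rule", "severity", "component", "line", "message"])
        for i in issues:
            writer.writerow([i["key"], i["rule"], i["severity"],
                             i["component"], i.get("line"), i["message"]])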

NetSuite Migrations

Has anyone had much experience with data migration into and out of NetSuite? I have to export DB2 tables into MySQL, manipulate the data, and then export it to a CSV file. Then I take a CSV file of accounts and manipulate the data again so that accounts from our old system match up with the new one. Has anyone tried to do this in MySQL?
A couple of options:
Invest in a data transformation tool that connects to NetSuite and DB2 or MySQL. Look at Dell Boomi, IBM Cast Iron, etc. These tools allow you to connect to both systems, define the data to be extracted, perform data transformation functions and mappings and do all the inserts/updates or whatever you need to do.
For MySQL to NetSuite, PHP scripts can be written to access MySQL and NetSuite. On the NetSuite side, you can either use SOAP web services, or you can write custom REST APIs within NetSuite. SOAP is probably a bit slower than REST, but with REST you have to write the API yourself (server-side JavaScript - it's not hard, but there's a learning curve).
Hope this helps.
I'm an IBM i programmer; try CPYTOIMPF to create a fairly generic CSV file. It will go to a stream file - if you have NetServer running you can map a network drive to the IFS directory, or you can use FTP to get the CSV file from the IFS to another machine on your network.
Try Adeptia's NetSuite integration tool to perform ETL. You can also try Pentaho ETL for this (as far as I know, Celigo's NetSuite connector is built upon Pentaho). Jitterbit also has an extension for NetSuite.
We primarily have two options to pump data into NS:
i) SuiteTalk ---> SOAP-based web services. There are two versions of SuiteTalk: synchronous and asynchronous.
Typical tools like Boomi/Mule/Jitterbit use synchronous SuiteTalk to pump data into NS. They also have decent editors to help you do the mapping.
ii) RESTlets ---> REST-based services exposed by NS. These can also be used, but you may have to write external brokers to communicate with them.
Depending on your needs you can use either, but in most cases you will be using SuiteTalk to bring data into NetSuite.
Hope this helps ...
We just got done doing this. We used an iPaaS platform called Jitterbit (similar to Dell Boomi). It can connect to MySQL and to NetSuite, and you can do transformations in the tool. I have been really impressed with the platform overall so far.
There are different approaches; I like the following for processing a batch job:
To import data into NetSuite:
Export a CSV from the old system and place it in a NetSuite File Cabinet folder (use a RESTlet or web services for this; see the sketch after this list).
Run a scheduled script to load the files in the folder and update the records.
Don't forget to handle errors. Ways to handle errors: send an email, create a custom record, log to a file or write to the record.
Once the file has been processed, move it to another folder or delete it.
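A rough Python sketch of the first step: pushing the exported CSV to a RESTlet that writes it into the File Cabinet. The RESTlet URL, script/deploy IDs and token-based authentication (TBA) credentials are placeholders, and the RESTlet itself (SuiteScript that creates the file, not shown) is assumed to already exist.

    import requests
    from requests_oauthlib import OAuth1  # token-based auth (TBA) is assumed here

    # Placeholders: account-specific RESTlet URL and TBA credentials.
    RESTLET_URL = (
        "https://123456.restlets.api.netsuite.com/app/site/hosting/restlet.nl"
        "?script=customscript_csv_upload&deploy=customdeploy_csv_upload"
    )
    auth = OAuth1(
        "consumer_key", "consumer_secret",
        "token_id", "token_secret",
        signature_method="HMAC-SHA256",
        realm="123456",  # NetSuite account ID
    )

    # The RESTlet is assumed to take a file name plus CSV text and save them
    # into a File Cabinet folder for the scheduled script to pick up.
    with open("accounts_export.csv") as f:
        payload = {"fileName": "accounts_export.csv", "contents": f.read()}

    resp = requests.post(RESTLET_URL, json=payload, auth=auth)
    resp.raise_for_status()
    print(resp.json())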
To export data out of NetSuite:
Gather the data and export it to a CSV (you can use a saved search or similar).
Place the CSV in a File Cabinet folder.
From an external server, call web services or a RESTlet to grab the new CSV files in the folder (see the sketch after this list).
Process the file.
Handle errors.
Call web services or a RESTlet to move or delete the CSV file.
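And a matching sketch for the export direction: the external server asks a hypothetical RESTlet for the pending CSV files, processes them, and then tells the RESTlet to archive each one. Again, the URL, parameters, payload shape and auth are placeholders.

    import csv
    import io
    import requests
    from requests_oauthlib import OAuth1  # same assumed token-based auth as in the import sketch

    RESTLET_URL = (
        "https://123456.restlets.api.netsuite.com/app/site/hosting/restlet.nl"
        "?script=customscript_csv_export&deploy=customdeploy_csv_export"
    )
    auth = OAuth1("consumer_key", "consumer_secret", "token_id", "token_secret",
                  signature_method="HMAC-SHA256", realm="123456")

    # The RESTlet is assumed to return a JSON list such as
    # [{"id": "123", "name": "export1.csv", "contents": "col1,col2\n..."}].
    pending = requests.get(RESTLET_URL, params={"action": "list"}, auth=auth)
    pending.raise_for_status()

    for f in pending.json():
        # Process the CSV rows (replace with your real handling and error logging).
        for row in csv.DictReader(io.StringIO(f["contents"])):
            print(f["name"], row)
        # Ask the RESTlet to move the processed file to an archive folder.
        requests.post(RESTLET_URL, json={"action": "archive", "fileId": f["id"]}, auth=auth)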
You can also use Pentaho Data Integration; it's free and the learning curve is not that steep. I took this course and I was able to play around with the tool within a couple of hours.

Wrap SQL Server Objects Quickly?

Back in the MSSQL 2000 timeline, there was an IIS integration layer that allowed HTTP GET commands to run SELECT statements, and there were other SqlXml niceties that worked (not that fast or well, but they worked) out of the box. It gave me a way to expose database functionality fairly quickly.
What is the comparable technology for MSSQL 2008/2012? I saw SlashDB (http://www.slashdb.com/) and it seems to do that, but I am trying to understand the other options out there. Just SQL Server CRUD and sproc access.
Thanks.
Yes, SlashDB does exactly that and more. Full disclosure: I am the founder and CEO.
Once SlashDB is installed, you would use its web interface to connect it with your database. Depending on which database login and database schema you use for that connection, the tables and views from that schema are turned into URL endpoints.
Those URLs can be followed in the browser, but they are also API endpoints in JSON, XML or CSV. This works for reading and writing (you can control that in the user configuration).
In addition, you can define a set of parameterized SQL queries. Each query is given a name and instantly becomes an API endpoint too.
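To give a sense of what consuming those endpoints could look like, here is a small Python sketch. The host, API key, database, table and query names are placeholders, and the exact URL layout and auth mechanism should be checked against the SlashDB documentation.

    import requests

    SLASHDB = "https://slashdb.example.com"  # placeholder SlashDB host
    API_KEY = {"apikey": "my-api-key"}       # auth mechanism assumed; see the SlashDB docs

    # A table exposed as a URL endpoint, returned as JSON.
    customers = requests.get(f"{SLASHDB}/db/mydb/customer.json", params=API_KEY)
    print(customers.json())

    # A named, parameterized SQL query exposed as its own endpoint.
    orders = requests.get(f"{SLASHDB}/query/orders-by-year/year/2012.json", params=API_KEY)
    print(orders.json())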
To help you get started easily, SlashDB is available on the AWS and Azure marketplaces, as a Docker container from Docker Hub, as pre-built virtual machines, or as .rpm and .deb packages for installation directly on Linux.
For more technical info please visit: https://docs.slashdb.com
The nearest equivalent may be SOAP/HTTP endpoints; however, Microsoft has deprecated them for various reasons and recommends WCF or ASP.NET instead. The simplest way to get a quick CRUD setup is probably to use a framework or ORM that generates it for you, like LINQ to SQL or whatever else suits your needs.
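The answer points at LINQ to SQL on the .NET side; purely as an illustration of the same idea (an ORM that reflects the existing schema and gives you CRUD without hand-written SQL), here is a hedged Python sketch using SQLAlchemy against SQL Server. The driver, connection string and table name are assumptions, not part of the original answer.

    from sqlalchemy import create_engine
    from sqlalchemy.ext.automap import automap_base
    from sqlalchemy.orm import Session

    # Placeholder connection string; assumes the pyodbc driver is installed.
    engine = create_engine(
        "mssql+pyodbc://user:password@server/MyDb?driver=ODBC+Driver+17+for+SQL+Server"
    )

    # Reflect the existing tables and generate mapped classes for them.
    Base = automap_base()
    Base.prepare(autoload_with=engine)
    Customer = Base.classes.Customer  # the "Customer" table name is an assumption

    with Session(engine) as session:
        # Basic CRUD without writing any SQL by hand.
        session.add(Customer(Name="Acme Corp"))
        session.commit()
        for c in session.query(Customer).limit(10):
            print(c.Name)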

Migrating subsets of production data back to dev

In our Rails app we sometimes have DB entries created by users that we'd like to make part of our dev environment, without exporting the whole table. So we'd like to be able to produce a special 'dev and testing' dump.
Any recommended best practices? mysqldump seems pretty cumbersome, and we'd like to pull in Rails associations as well, so maybe a rake task would make more sense.
Ideas?
You could use an ETL tool like Pentaho Kettle. Once you have the initial transformation set up the way you want, you can easily run it with different parameters in the future. This way you can also keep all your associations. I wrote a little blurb about Pentaho for another question here.
If you provide a rough schema I could probably help you get started on what your transformation would look like.
I had a similar need, and I ended up creating a plugin for that. It was developed for Rails 2.x and worked fine for me, but I haven't had much use for it lately.
The documentation is lacking, but it's pretty simple. You basically install the plugin and then have a to_sql method available on all your models. The options are explained in the README.
You can try it out and let me know if you have any issues, I'll try to help.
I'd go after it using a Rails runner script. That will allow your code to access the same things your Rails app would, including the database initialization. ActiveRecord will be able to take advantage of the model relationships you've defined.
Create some "transfer" tables in your production database and copy the desired data into them using the runner script. From there you could serialize the data, or use a dump tool, since you'll be dealing with a reduced number of records (see the sketch below). Reverse the process in the development environment to move the data into the database.
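For the "dump tool" part, a small Python sketch of dumping just the transfer tables and loading them into dev. The table names, hosts and credentials are placeholders, and the transfer tables are assumed to have already been populated by the runner script.

    import subprocess

    TRANSFER_TABLES = ["transfer_users", "transfer_orders"]  # placeholder table names

    # Dump only the transfer tables from production.
    subprocess.run(
        ["mysqldump", "-h", "prod-db.example.com", "-u", "deploy",
         "--password=secret", "--result-file=transfer_dump.sql",
         "myapp_production", *TRANSFER_TABLES],
        check=True,
    )

    # Load the dump into the development database.
    with open("transfer_dump.sql", "rb") as dump:
        subprocess.run(
            ["mysql", "-h", "localhost", "-u", "root", "myapp_development"],
            stdin=dump,
            check=True,
        )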
I had a need to populate the database in one of my apps from remote web logs, and I wrote a runner script that fired off periodically via cron, FTPed the data from my site, and inserted it.