Unable to find all issues through SonarQube WS API - mysql

Goal: Export all SonarQube issues for a project to JSON/CSV.
Approach 1: Mine the sonar mysql database
Approach 2: Use the SonarQube WS API
Initially I was inclined to go with approach 1, but after discussing it with the SonarQube core developer community I got the clear impression that the database should not be touched under any circumstances.
So I proceeded with approach 2 and developed scripts to fetch issues. However, I later found that the WS API returns at most 10,000 issues, which does not meet my goal.
Now I am convinced that approach 1, i.e. mining the database, is the best fit for me. Looking at the "issues" table in the sonar database, I have the following question.
Question: what is the format/encoding of the "location" field, and how can I decode it from Python/Java?

Extracting data from the database is not recommended at all. The schema and content change frequently, and each upgrade may break your SQL queries. Moreover, the database contains binary data (the issue locations) which can't be parsed as-is.
The only supported way to get data is through the web services. If api/issues/search has a limitation that you consider critical, then you should explain your functional need on the SonarQube Google group.
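Just to illustrate the web-service route, below is a minimal sketch in Python using the requests library (the server URL, project key, and credentials are placeholders). Keep in mind that api/issues/search still caps results at 10,000 per query, so a very large project has to be partitioned with additional filters such as createdAfter/createdBefore:

```python
import csv
import requests

SONAR_URL = "http://localhost:9000"   # placeholder server URL
PROJECT_KEY = "my:project"            # placeholder project key
AUTH = ("admin", "admin")             # placeholder credentials or token

def fetch_issues(project_key):
    """Page through api/issues/search; the API stops at 10,000 results."""
    issues, page, page_size = [], 1, 500
    while True:
        resp = requests.get(
            f"{SONAR_URL}/api/issues/search",
            params={"componentKeys": project_key, "p": page, "ps": page_size},
            auth=AUTH,
        )
        resp.raise_for_status()
        data = resp.json()
        issues.extend(data["issues"])
        if page * page_size >= min(data["total"], 10000):
            break
        page += 1
    return issues

if __name__ == "__main__":
    rows = fetch_issues(PROJECT_KEY)
    with open("issues.csv", "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["key", "rule", "severity", "component", "line", "message"]
        )
        writer.writeheader()
        for issue in rows:
            writer.writerow({k: issue.get(k, "") for k in writer.fieldnames})
```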

Related

Migrate from MySQL to PostgreSQL in Google Cloud SQL

Does anyone know how to migrate from a MySQL database to a PostgreSQL database in Google Cloud SQL?
I tried browsing the web but I can't really find any instructions on how to accomplish this.
The Database Migration Service only lets you upgrade the major version within the same database engine, not switch to a different engine.
According to the documentation, DMS currently supports only homogeneous database migrations [1]; here is a link to the best practices [2].
There are currently no other Google Cloud tools that do the MySQL to PostgreSQL migration you are looking for.
Nevertheless, in order to migrate from MySQL to PostgreSQL, a conversion is necessary, as the two databases are not entirely alike.
There is a possible workaround in this Stack Overflow link, which shares multiple solutions for doing the conversion; please keep in mind that this information is community-supported, meaning Google Cloud Platform cannot vouch for it.
With the aforementioned in mind, you have two options for doing the migration. For the first one, you would need to follow these steps:
1. Export your data in a supported format (dump file or CSV), as the documentation describes [4].
2. Convert the data into the right format for PostgreSQL [3].
3. Import the data, as the documentation describes [5].
The second option is to use the third-party tool pgloader [6][7], which may help you with the migration.
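As an illustration of the second option, here is a minimal sketch that drives pgloader from Python (pgloader must be installed on the machine running the script, and the connection strings are placeholders, e.g. pointing at the Cloud SQL instances through the Cloud SQL Auth proxy):

```python
import subprocess

# Placeholder connection strings; replace with your actual Cloud SQL instances.
MYSQL_SOURCE = "mysql://user:password@127.0.0.1:3306/mydb"
POSTGRES_TARGET = "postgresql://user:password@127.0.0.1:5432/mydb"

# pgloader reads the source schema and data and recreates them on the target,
# converting MySQL types to PostgreSQL types along the way.
subprocess.run(["pgloader", MYSQL_SOURCE, POSTGRES_TARGET], check=True)
```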

Taiga to JIRA migration options

Purpose
My company is looking at migrating to JIRA from our current story tracker Taiga.
Minimal Goal
At a minimum we are looking to migrate the following items:
Issues
User Stories
Epics
Sub Tasks
We are not interested in migrating the project name, settings, users, etc.; all of this is easy to set up in JIRA.
Looking For
We have looked for hours trying to find a migration tool or plugin, to no avail. Ideally we are looking for something, even a script, that would make moving all of the above easy, as there is a large data set.
Looking over the JIRA documentation around importing, there is no native support for Taiga, and the data that Taiga exports is difficult to get into the format (CSV or JSON) that JIRA requires.
Hoping to find a hidden gem that this community is aware of for the migration.
Both Taiga and JIRA have a REST API that allows you to script this. If you're familiar with REST APIs, this should be reasonably simple.
Taiga's REST API documentation is available here.
For JIRA you could use one of these REST resources:
Create issue: POST /rest/api/2/issue
Create issues in bulk: POST /rest/api/2/issue/bulk
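As a rough sketch of that approach (Python with the requests library; the Taiga project id, JIRA URL, project key, issue type, and credentials are all placeholders), you could read user stories from Taiga and create JIRA issues one by one:

```python
import requests

TAIGA_API = "https://api.taiga.io/api/v1"                   # placeholder Taiga API root
JIRA_API = "https://yourcompany.atlassian.net/rest/api/2"   # placeholder JIRA base URL
JIRA_AUTH = ("jira-user", "api-token-or-password")          # placeholder credentials

# Fetch user stories from a Taiga project (project id and token are placeholders).
stories = requests.get(
    f"{TAIGA_API}/userstories",
    params={"project": 12345},
    headers={"Authorization": "Bearer <taiga-token>"},
).json()

# Create one JIRA issue per Taiga user story via POST /rest/api/2/issue.
for story in stories:
    payload = {
        "fields": {
            "project": {"key": "PROJ"},            # placeholder JIRA project key
            "summary": story["subject"],
            "description": story.get("description", ""),
            "issuetype": {"name": "Story"},
        }
    }
    resp = requests.post(f"{JIRA_API}/issue", json=payload, auth=JIRA_AUTH)
    resp.raise_for_status()
```

For a large data set, the bulk endpoint (POST /rest/api/2/issue/bulk) accepts an issueUpdates array of the same field payloads, so you can create issues in batches instead of one request per item.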

ETL between a MySQL primary Data Store and a MongoDB secondary Data Store

We have a Rails app with a MySQL backend; each client has its own database and the schema is identical. We use a custom gem to switch databases based on the URL of the request (this is legacy code that we are trying to move away from).
We need to capture some changes from those MySQL databases (changes in inventory, some order information, etc.), transform them, and store them in a single MongoDB database (a multi-tenant data store). This data will be used for analytics at first, but our idea is to eventually move everything there.
There was something in place to do this, using ActiveRecord callbacks and RabbitMQ, but to be honest it wasn't working correctly, and it looked like it was more trouble to fix than to start over with a fresh approach.
We did some research and found some tools to do ETL but they are overkill for our needs.
Does anyone have experience with a similar problem?
Any recommendations on how to architect and implement this simple ETL?
Pentaho provides a change-data-capture (CDC) option which can solve data synchronization problems like this.
If by overkill you mean setup and configuration, then yes, that is a common problem with ETL tools, and Pentaho is the easiest among them.
If you can provide more details, I'll be glad to give a more elaborate answer.
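If Pentaho still feels like too much, a hand-rolled version of this ETL can stay quite small. Here is a minimal sketch in Python (the database names, table, and columns are made up, and it assumes an updated_at column for picking up changed rows, plus the PyMySQL and pymongo packages):

```python
from datetime import datetime

import pymysql.cursors
from pymongo import MongoClient, UpdateOne

TENANT_DBS = ["client_a", "client_b"]              # placeholder: one MySQL db per client
mongo = MongoClient("mongodb://localhost:27017")   # placeholder connection string
target = mongo["analytics"]["inventory"]           # single multi-tenant collection

def sync_tenant(db_name, since):
    """Pull rows changed since `since` from one tenant db and upsert them into MongoDB."""
    conn = pymysql.connect(host="localhost", user="etl", password="secret",
                           database=db_name, cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            # Placeholder table/columns; relies on updated_at to detect changed rows.
            cur.execute("SELECT id, sku, quantity, updated_at FROM inventory "
                        "WHERE updated_at > %s", (since,))
            ops = []
            for row in cur.fetchall():
                doc = dict(row, tenant=db_name)     # tag each document with its tenant
                ops.append(UpdateOne({"tenant": db_name, "id": row["id"]},
                                     {"$set": doc}, upsert=True))
        if ops:
            target.bulk_write(ops)
    finally:
        conn.close()

if __name__ == "__main__":
    watermark = datetime(2020, 1, 1)  # in practice, persist the last successful run time
    for db in TENANT_DBS:
        sync_tenant(db, watermark)
```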

Getting historical data in FI-WARE using Cosmos

I'm trying to get all the historical information about a FI-WARE sensor.
I've seen that Orion uses Cygnus to store historical data in Cosmos. Is that information accessible, or is it only possible to get it through IDAS?
Where could I get more information about this?
You can consume the data in the following ways, ordered incrementally from the learning-curve point of view:
Working with the raw data, either "locally" (i.e. logging into the head node of the cluster) by using the Hadoop commands, or "remotely" by using the WebHDFS/HttpFS REST API (see the sketch below). Note that with this approach you have to implement whatever analysis logic you need, since Cosmos only lets you manage, as said, raw data.
Working with Hive in order to query the data in a SQL-like way. Again, you can do this locally by invoking the Hive CLI, or remotely by implementing your own Hive client in Java (other languages are available) using the Hive libraries.
Working with MapReduce (MR) in order to implement deeper analysis. For this, you'll have to create your own MR-based application (typically in Java) and run it locally. Once you are done with local runs of the MR app, you can move to Oozie, which allows you to run such MR apps remotely.
My advice is to start with Hive (step 1 is easy but does not provide any analysis capabilities), first by executing some Hive queries locally, then by implementing your own client remotely. If this kind of analysis is not enough for you, move on to MapReduce and Oozie.
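For the remote raw-data route (WebHDFS/HttpFS), here is a small sketch in Python with the requests library; the host, port, user, and path are placeholders, and your Cosmos instance may expose HttpFS on a different port or require its own authentication token:

```python
import requests

HOST = "cosmos.lab.fiware.org"                # placeholder HttpFS/head-node host
PORT = 14000                                  # HttpFS port; plain WebHDFS typically uses 50070
USER = "myuser"                               # placeholder HDFS user
PATH = "/user/myuser/mysensor/mysensor.txt"   # placeholder HDFS path written by Cygnus

BASE = f"http://{HOST}:{PORT}/webhdfs/v1"

# List the user's directory (op=LISTSTATUS) ...
listing = requests.get(f"{BASE}/user/{USER}",
                       params={"op": "LISTSTATUS", "user.name": USER})
print(listing.json())

# ... and read a file (op=OPEN); requests follows the redirect automatically.
data = requests.get(f"{BASE}{PATH}", params={"op": "OPEN", "user.name": USER})
print(data.text)
```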
All the documentation regarding Cosmos can be found in the FI-WARE Catalogue of enablers. Within this documentation, I would highlight:
Quick Start for Programmers.
User and Programmer Guide (functionality described in sections 2.1 and 2.2 is not currently available in FI-LAB).

Write Breezejs data source provider with Nodejs for MySQL

I would like to use Breeze with Node.js and MySQL. Unfortunately, so far I have found no examples of this. I have seen that there is an example for Node.js + MongoDB, so I am now trying to analyze the MongoDB provider (mongoSaveHandler.js from [npm install breeze-mongodb]?) in order to write my own provider for MySQL. Unfortunately I could not find any documentation on how such a provider should be built.
The provider should be able to deal with complex data and navigation properties (one-to-many, etc.) and also save/delete/update them properly in the MySQL database.
The following is an example of what the database structure might look like:
[image: example database structure]
My questions are now:
Is there already an example of Breeze (+ Node.js) with MySQL that I could use?
Is there documentation or a sample showing how to write your own data source provider?
If I'm on my own, what should I look out for when creating my provider?
There are plenty of Web API+EF+SQL samples, the Node+MongoDB sample you've already seen, a Ruby+SQL sample, and even a NoDB (and 3rd party data) sample ... but no Node+SQL sample yet.
These docs aren't spot on for your use case, but they will likely point you in the right direction:
ToDo Server
The docs are for Web API+EF+SQL, but they give good detail on how everything is wired together.
MongoDB
The MongoDB docs, as well as the Zza! sample, are pretty good at showing how they configured the Node server (to talk to MongoDB, sure, but you can see the process all the same).