Coding, data migration and deployment process using Jive

I am new to Jive, currently going through the documentation provided on https://docs.jivesoftware.com/.
What I am looking for is:
Is there a specific editor for writing code in Jive-x?
How do I migrate data into Jive?
What deployment process does Jive follow, i.e. where to develop, test, and deploy?
So if anyone has worked with Jive, could you share some links/tips?

There is no specific editor for writing code for Jive. This is a personal preference, which might also depend on whether you are writing a plugin in Java or an add-on in JS. I prefer to use IntelliJ in general.
The best option to migrate data into Jive is to use the REST API. It is important to rate-limit the requests so as not to overload the instance, but the API should be able to handle a considerable number of requests, depending on the underlying infrastructure. You could in theory also use the DB to migrate data into Jive, but that would require deep knowledge of the Jive architecture, and the chances of breaking something are high.
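As a rough illustration of the REST approach, here is a minimal sketch in Python that pushes documents through the v3 REST API with a simple throttle. The host, credentials, endpoint path, and payload fields are assumptions to verify against your instance's API documentation.

```python
# Minimal sketch of a REST-based migration with a simple throttle.
# The base URL, credentials, endpoint and payload fields below are
# assumptions -- verify them against your instance's REST API docs.
import time
import requests

JIVE_BASE_URL = "https://your-jive-instance.example.com"  # hypothetical host
AUTH = ("migration_user", "secret")                        # basic auth, assumed enabled

def create_document(subject, html_body):
    payload = {
        "type": "document",
        "subject": subject,
        "content": {"type": "text/html", "text": html_body},
    }
    resp = requests.post(
        f"{JIVE_BASE_URL}/api/core/v3/contents",  # assumed v3 contents endpoint
        json=payload,
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def migrate(records, requests_per_second=2):
    # Throttle so the instance is not overloaded during the migration.
    delay = 1.0 / requests_per_second
    for record in records:
        create_document(record["title"], record["body"])
        time.sleep(delay)
```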
For development and early testing, the best option is a local instance, which you can set up by following these steps. For full end-to-end testing, the best option is a UAT environment that replicates the production instance/infrastructure as closely as possible.

Google Cloud Platform - I need a webhook to get JSON data. How to approach it?

I am fairly new to the google-cloud-platform world and I am struggling with my first steps there.
So, what I want to know is how to make a webhook app that will run 24/7 and "catch" data sent from another 3rd-party service (later I will try to do something with this data, manipulate it and push it into a DB, but that's another question).
I have set up a Linux-based instance on GCP, but what's next?
I am familiar with PHP, but I want to do it this time in Python (I'm learning it nowadays).
Which service in GCP should I use, and how do I set up the server to catch all the data the 3rd-party service is sending?
This sounds like a perfect fit for Google App Engine. As long as the 3rd-party service makes HTTP requests, App Engine is a great fit. You can write your application in Python, PHP, Java, or just about anything else, then GAE takes care of the rest. No need to manage Linux, instances, firewall rules, or anything else.
If your load is minimal, you may even fit into the free tier and pay nothing to run your app.
Check out the GAE Python docs at https://cloud.google.com/appengine/docs/python/.
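To make that concrete, here is a minimal sketch of a webhook handler using Flask, a common way to serve HTTP on App Engine's Python runtime. The /webhook route and the payload handling are placeholders, not anything prescribed by GAE.

```python
# app.py -- minimal webhook sketch for App Engine's Python runtime.
# The /webhook route and the payload handling are placeholders; adapt
# them to whatever the 3rd-party service actually sends.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    data = request.get_json(silent=True) or {}
    # TODO: validate `data` and push it into your database.
    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    # Local testing only; on App Engine the runtime serves the app itself.
    app.run(host="127.0.0.1", port=8080, debug=True)
```

With an app.yaml declaring a Python runtime, `gcloud app deploy` publishes the app, and the 3rd-party service can then POST to your appspot URL around the clock without you managing any servers.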
If you want to run your Web Hook continuously, then you can run it as a Cron job. Here is a guide on how to run Python scripts as cron jobs on Google App Engine: Scheduling Tasks With Cron for Python

Getting historical data in FI-WARE using Cosmos

I'm trying to get all the historical information about a sensor in FI-WARE.
I've seen that Orion uses Cygnus to store historical data in Cosmos. Is that information accessible, or is it only possible to get it through IDAS?
Where could I get more info about this?
There are several ways to consume the data, listed here in an incremental approach from the learning-curve point of view:
1. Working with the raw data, either "locally" (i.e. logging into the Head Node of the cluster) by using the Hadoop commands, or "remotely" by using the WebHDFS/HttpFS REST API. Please note that within this approach you have to implement whatever analysis logic you need yourself, since Cosmos only allows you to manage, as said, raw data.
2. Working with Hive in order to query the data in a SQL-like fashion. Again, you can do it locally by invoking the Hive CLI, or remotely by implementing your own Hive client in Java (other languages are available as well) using the Hive libraries.
3. Working with MapReduce (MR) in order to implement more powerful analysis. For this, you'll have to create your own MR-based application (typically in Java) and run it locally. Once you are done with the local run of the MR app, you can go with Oozie, which allows you to run such MR apps remotely.
My advice is to start with Hive (step 1 is easy but does not provide any analysis capabilities): first try to execute some Hive queries locally, then implement your own remote client. If this kind of analysis is not enough for you, then move to MapReduce and Oozie.
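For reference, here is a minimal sketch of the "remote" raw-data path from option 1, reading what Cygnus has written through the WebHDFS/HttpFS REST API with Python. Host, port, user, and HDFS paths are placeholders for whatever your Cosmos account actually provides.

```python
# Minimal sketch of option 1's "remote" path: reading raw Cygnus output
# through the WebHDFS/HttpFS REST API. Host, port, user and HDFS paths are
# placeholders -- use whatever your Cosmos account actually provides.
import requests

COSMOS_HOST = "cosmos.example.org"   # hypothetical HttpFS/Head Node host
HTTPFS_PORT = 14000                  # common HttpFS port; confirm for your deployment
HDFS_USER = "your_cosmos_user"

def list_directory(path):
    url = f"http://{COSMOS_HOST}:{HTTPFS_PORT}/webhdfs/v1{path}"
    resp = requests.get(url, params={"op": "LISTSTATUS", "user.name": HDFS_USER})
    resp.raise_for_status()
    return [f["pathSuffix"] for f in resp.json()["FileStatuses"]["FileStatus"]]

def read_file(path):
    url = f"http://{COSMOS_HOST}:{HTTPFS_PORT}/webhdfs/v1{path}"
    resp = requests.get(url, params={"op": "OPEN", "user.name": HDFS_USER})
    resp.raise_for_status()
    return resp.text

# e.g. inspect the files Cygnus has written for a given sensor:
# print(list_directory("/user/your_cosmos_user/your_dataset"))
```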
All the documentation regarding Cosmos can be found in the FI-WARE Catalogue of enablers. Within this documentation, I would highlight:
Quick Start for Programmers.
User and Programmer Guide (functionality described in sections 2.1 and 2.2 is not currently available in FI-LAB).

Migrating subsets of production data back to dev

In our Rails app we sometimes have DB entries created by users that we'd like to make part of our dev environment, without exporting the whole table. So, we'd like to be able to have a special 'dev and testing' dump.
Any recommended best practices? mysqldump seems pretty cumbersome, and we'd like to pull in Rails associations as well, so maybe a rake task would make more sense.
Ideas?
You could use an ETL tool like Pentaho Kettle. Once you have the initial transformation set up the way you want, you can easily run it with different parameters in the future. This way you can also keep all your associations. I wrote a little blurb about Pentaho for another question here.
If you provide a rough schema I could probably help you get started on what your transformation would look like.
I had a similar need and ended up creating a plugin for that. It was developed for Rails 2.x and worked fine for me, but I haven't had much use for it lately.
The documentation is lacking, but it's pretty simple. You basically install the plugin and then have a to_sql method available on all your models. The options are explained in the README.
You can try it out and let me know if you have any issues, I'll try to help.
I'd go after it using a Rails runner script. That will allow your code to access the same things your Rails app would, including the database initializations. ActiveRecord will be able to take advantage of the model relationships you've defined.
Create some "transfer" tables in your production database and copy the desired data into those using the "runner" script. From there you could serialize the data, or use a dump tool, since you'll be dealing with a reduced amount of records. Reverse the process in the development environment to move the data into the database.
I had a need to populate the database in one of my apps from remote web logs, so I wrote a runner script that fires off periodically via cron, FTPs the data from my site, and inserts it into the database.

Automated ETL / Database Migration Solution

I'm looking for an ETL solution that we can create and configure by hand and then deploy to run autonomously. This is basic transformation; it need not be feature-heavy. Key points would be free or open-source software that could be tailored to suit specific needs.
In fact, this could be reduced to a simple DB migration tool that will run on a Linux server. Essentially the same as the above but we probably won't need to validate / transform the data at all besides renaming columns.
I forgot to mention that this is going to have to be very cross-platform. I'd like to be able to deploy it to a server, as well as test it on OS X and Windows.
Try Pentaho or Talend. Pentaho has a nice job-scheduling paradigm as well as the ETL workbench (Kettle). I haven't used Talend, but I've heard good things and I imagine it carries similar functionality.
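If the job really is as small as the question suggests (copy tables, rename a few columns), here is a hedged sketch of what a hand-rolled pass could look like in Python with mysql-connector-python; all connection details, table and column names are made-up placeholders. For anything richer, the Kettle/Talend route above will scale much better.

```python
# Minimal sketch of the "just rename some columns" case. All connection
# details, table and column names are hypothetical placeholders, and the
# table identifiers are assumed to be trusted (no user input).
import mysql.connector

COLUMN_MAP = {"old_name": "new_name", "created": "created_at"}  # old -> new

def migrate_table(src_cfg, dst_cfg, src_table, dst_table):
    src = mysql.connector.connect(**src_cfg)
    dst = mysql.connector.connect(**dst_cfg)
    read = src.cursor(dictionary=True)
    write = dst.cursor()

    read.execute(f"SELECT * FROM {src_table}")
    for row in read:
        # Rename columns on the fly; anything not in COLUMN_MAP keeps its name.
        renamed = {COLUMN_MAP.get(col, col): val for col, val in row.items()}
        columns = ", ".join(renamed)
        placeholders = ", ".join(["%s"] * len(renamed))
        write.execute(
            f"INSERT INTO {dst_table} ({columns}) VALUES ({placeholders})",
            list(renamed.values()),
        )
    dst.commit()
    src.close()
    dst.close()

if __name__ == "__main__":
    migrate_table(
        {"host": "src-host", "user": "u", "password": "p", "database": "legacy"},
        {"host": "dst-host", "user": "u", "password": "p", "database": "target"},
        "customers_old",
        "customers",
    )
```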

Continuous integration with MySQL

My entire environment (Java, JS, and PHP) is set up with our continuous integration server (Hudson).
But how do I get our database into the mix?
I would like to deploy fresh MySQL databases for unit testing, development, and QA.
And then I'd like to diff development against production and have an update script that would be used for releasing.
I would look at Liquibase (http://www.liquibase.org/). It's an open-source, Java-based DB migration tool that can be integrated into your build script and can handle DB diffing. I've used it to manage DB updates on a project before, with a lot of success.
You could write a script in Ant to do all that stuff and execute it during the build.
Perhaps investigate database migrations such as migrate4j.
Write a script that sets up your test database, and run it from your build tool, whatever that is, before your tests run. I do this manually and it works pretty well; I'm still integrating it into Maven, but it shouldn't be too much trouble.
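The idea is build-tool-agnostic; as a rough sketch (connection details and schema.sql are placeholders, and any scripting language would do), such a script might simply drop and recreate the test database and reload the schema:

```python
# Minimal sketch of a "fresh test database" script that a build tool could
# call before the test suite runs. Connection details and schema.sql are
# placeholders for your own setup.
import subprocess

DB_NAME = "app_test"
MYSQL = ["mysql", "--user=test_user", "--password=test_pass", "--host=127.0.0.1"]

def run_sql(sql):
    subprocess.run(MYSQL, input=sql.encode(), check=True)

def reset_test_database(schema_file="schema.sql"):
    # Start from a clean slate every build.
    run_sql(f"DROP DATABASE IF EXISTS {DB_NAME}; CREATE DATABASE {DB_NAME};")
    # Load the current schema (and any seed data) into the fresh database.
    with open(schema_file, "rb") as schema:
        subprocess.run(MYSQL + [DB_NAME], stdin=schema, check=True)

if __name__ == "__main__":
    reset_test_database()
```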
Isn't the HyperSQL in-memory DB (http://hsqldb.org/) better for running your tests?
For managing changes to your database schema between releases, you could do a lot worse than to use Scala Migrations:
http://opensource.imageworks.com/?p=scalamigrations
It's an open-source tool that I've found integrates well into a Java development ecosystem, and it has extra appeal if anyone on your team has been looking at ways to introduce Scala.
It should also be able to build you a database from scratch, for testing purposes.
Give http://code.google.com/p/mysql-php-migrations/ a try!
Very PHP-oriented, but it seems to work well for most purposes.