Adding my own package into Murano package repository in FIWARE - fiware

I am trying to use Murano as my application deployment component. I have been reading the OpenStack documentation, but I do not know how to add my own package to the Murano package repository in FIWARE.

You can use the murano CLI to upload your package into the Murano catalogue. You can find more information at http://murano.readthedocs.org/en/latest/articles/client.html#importing-packages-in-murano. To use it, you just need to export your FIWARE Lab credentials and run murano package-import:
export OS_USERNAME=(your_FIWARE_LAB_username)
export OS_TENANT_NAME=(your_FIWARE_LAB_tenant)
export OS_PASSWORD=(your_FIWARE_LAB_password)
export OS_AUTH_URL=http://cloud.lab.fiware.org:4731/v2.0
murano package-import /path/to/package.zip
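Once the import finishes, you can check that the package shows up in the catalogue with the package-list command of the same CLI:
murano package-list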
Your package (package.zip) must follow a specific structure. You can find information about how to create Murano packages at http://murano.readthedocs.org/en/latest/articles/app_pkg.html#app-pkg.
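As a rough sketch of what those docs describe (the manifest is the required piece; the class and resource names below are just illustrative placeholders), the contents of the zip usually look like:
manifest.yaml                      # required package metadata
Classes/MyApp.yaml                 # MuranoPL class definition (example name)
UI/ui.yaml                         # dynamic UI definition
Resources/DeployMyApp.template     # resources used during deployment (example name)
logo.png                           # optional application icon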

Related

Automate SSIS package (File System Deployment) using DevOps (Source control: GIT)

Currently we are using Git as source control for SSIS packages. We use the package deployment model, where each package has a corresponding .dtsConfig file, like the one below.
For example,
Package1.dtsx
Config1.dtsConfig
We have many environments such as Dev, Dev-1, STG, STG-1, and so on. We currently do the SSIS package deployments (File System Deployment) for all environments manually, with DBA assistance.
We plan to automate SSIS package deployments through DevOps. In order to do that, we have to create a generic folder structure like the one below.
Packages
    Package1.dtsx
Dev
    Config1.dtsConfig
Dev-1
    Config1.dtsConfig
STG
    Config1.dtsConfig
STG-1
    Config1.dtsConfig
....
....
The Package1.dtsx file is common to all environments, so we place it in the Packages folder, and we created a separate folder for each environment to hold its config files, since the configuration details change per environment.
I am new to SSIS deployments, so I am not sure whether this is feasible. Is there a way we can do these deployments through DevOps? What would be the best way to implement this process, or is there a better approach?
Yes, you can set up what you are describing and keep the same config file per package. You can update the configuration values while deploying to the respective environment using PowerShell.
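A minimal sketch of that substitution step (shown here in Python rather than PowerShell, purely to illustrate the idea; the element and path names are assumptions based on the standard .dtsConfig layout) could look like this:
import xml.etree.ElementTree as ET

def update_dtsconfig(config_path, new_values):
    # Replace each ConfiguredValue whose Path attribute matches a key in new_values.
    tree = ET.parse(config_path)
    for conf in tree.getroot().iter("Configuration"):
        path = conf.get("Path")
        if path in new_values:
            conf.find("ConfiguredValue").text = new_values[path]
    tree.write(config_path, xml_declaration=True, encoding="utf-8")

# Hypothetical example: point the connection at the STG server before copying the files out.
update_dtsconfig("STG/Config1.dtsConfig", {
    r"\Package.Connections[SourceDB].Properties[ConnectionString]":
        "Data Source=STG-SQL01;Initial Catalog=SourceDB;Integrated Security=SSPI;",
})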
Alternatively, you can consider migrating the SSIS solution to the project deployment model and deploying to SSISDB (I believe it is available from SQL Server 2012 onwards), which is the preferred approach.

Installation of openzeppelin/contracts Library

I have created a Node.js project, within which I have created a Truffle directory and initialised a Truffle project. I have installed the OpenZeppelin library (npm install @openzeppelin/contracts) in this Truffle project directory, but nothing appears to have been installed, although I did not receive any error during the install process. The import statement in my project produces the following error:
import "@openzeppelin/contracts/token/ERC721/ERC721Full.sol";
Source "@openzeppelin/contracts/token/ERC721/ERC721Full.sol" not found: File import callback not supported
It looks like they changed the name: the contract is now ERC721.sol rather than ERC721Full.sol, so try
import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
Try this:
import "github.com/openzeppelin/contracts/token/ERC721/ERC721Full.sol"

Trying to use multi-rake with Google Cloud Functions

I am trying to use this library here: multi-rake
However, as stated in the docs, we have to run this before installing multi-rake:
CFLAGS="-Wno-narrowing" pip install cld2-cffi
So I cannot simply put cld2-cffi and multi-rake in requirements.txt because cld2-cffi needs to be installed like this beforehand. How could I overcome this problem?
According to the official documentation, you have to package it as a local dependency:
You can also package and deploy dependencies alongside your function. This approach is useful if your dependency is not available via the pip package manager or if your Cloud Functions environment's internet access is restricted. For example, you might use a directory structure such as the following:
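A layout along those lines (the file names here are just placeholders, following the docs' localpackage example) might be:
main.py
requirements.txt
localpackage/
    __init__.py
    module.py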
You can then use code as usual from the included local dependency, localpackage. You can use this approach to bundle any Python packages with your deployment.
Note: You can still use a requirements.txt file to specify additional dependencies you haven't packaged alongside your function.
Specifying dependencies in Python
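One way to apply that to cld2-cffi (an untested sketch; it assumes you build on a Linux machine compatible with the Cloud Functions runtime, since cld2-cffi compiles native code) is to vendor it into the function's source directory with pip's --target option, keep multi-rake in requirements.txt, and deploy the whole directory:
CFLAGS="-Wno-narrowing" pip install --target=. cld2-cffi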

Python/OpenShift application using NLTK resources

I hosted a Python web service application on OpenShift which uses the RSLP Stemmer module of NLTK, but the service log reported:
[...] Resource 'stemmers/rslp/step0.pt' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download()
Searched in:
- '/var/lib/openshift/539a61ab5973caa2410000bf/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data' [...]
I concluded that the resource is not installed properly. Does anyone know how to install NLTK resources in an OpenShift/Python application?
PS: the Portuguese stopwords module also fails with an error like this.
You can use the NLTK package on OpenShift. The reason it is not working for you is that NLTK by default expects its corpora in the user's home directory. On OpenShift, you cannot write to the user home directory; you have to use $OPENSHIFT_DATA_DIR for storing data. To solve this problem, do the following:
Create an environment variable called NLTK_DATA with the value of $OPENSHIFT_DATA_DIR. After creating the environment variable, restart the app using the rhc app-restart command.
SSH into your application gear using the rhc ssh command.
Activate the virtual environment and download the corpora using the commands shown below.
. $VIRTUAL_ENV/bin/activate
curl https://raw.github.com/sloria/TextBlob/master/download_corpora.py | python
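If you prefer to set this up from application code instead of the shell, a minimal sketch of the same idea (assuming the standard OPENSHIFT_DATA_DIR environment variable is visible to the app, which it is by default on OpenShift) would be:
import os
import nltk

# Keep NLTK data inside the writable OpenShift data directory.
nltk_data_dir = os.path.join(os.environ["OPENSHIFT_DATA_DIR"], "nltk_data")
if not os.path.isdir(nltk_data_dir):
    os.makedirs(nltk_data_dir)

# Tell NLTK where to look for resources, then fetch the ones this app needs
# (the RSLP stemmer and the stopwords corpus).
nltk.data.path.append(nltk_data_dir)
nltk.download("rslp", download_dir=nltk_data_dir)
nltk.download("stopwords", download_dir=nltk_data_dir)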
I have written a blog post on the TextBlob package, which uses NLTK underneath: https://www.openshift.com/blogs/day-9-textblob-finding-sentiments-in-text

SSIS: Why generate a new package ID when upgrading?

SSIS Package Upgrade Wizard has an option to generate a new package ID during upgrade. What is the benefit?
You generate a new package ID when you upgrade the package so that it is treated as a new package. That way you can use the old package and the new package side by side without conflict; the GUID identifies the new package as a different package from the old one.
A package is identified by its ID (a GUID). Whenever a log entry is written, the package ID is recorded in it. If you had two different versions of an SSIS package that both reported the same identity, you would have a hard time making sense of the log file. That is why it is best practice to change the package ID even when you are only creating a copy of a package, not just when upgrading it. Here is a link with more information about packages and their IDs:
http://msdn.microsoft.com/en-us/library/ms141134.aspx