Azure SDK for Python vs. Calling Azure CLI using subprocess

I need to build a series of automated tests in Python. The tests need to query a series of Azure resources, get the results (probably in JSON format), and check a few conditions.
I can think of two options:
Call the Azure CLI from my Python code using Python's subprocess module and parse the JSON output
Use the Azure SDK for Python
Which of these options is easier for querying Azure resources? Or is there an even easier option?

I think using the Azure CLI from Python is easier: you can get the resources with a one-line command and simply use the --query parameter to filter the command output, e.g. filter on a condition, get a specific property, etc.
With the Python SDK, you normally need to use different packages for different resources and define different clients to call their methods, which is not as convenient.
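For comparison, here is a minimal sketch of both approaches, assuming a logged-in az CLI on the PATH, the azure-identity and azure-mgmt-resource packages, and placeholder values for the subscription ID, resource group, and region:

    import json
    import subprocess

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
    RESOURCE_GROUP = "MyResourceGroup"      # placeholder

    # Option 1: shell out to the Azure CLI and let --query (JMESPath) do the filtering.
    cli_result = subprocess.run(
        ["az", "resource", "list",
         "--resource-group", RESOURCE_GROUP,
         "--query", "[?location=='eastus'].{name:name, type:type}",
         "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    resources_from_cli = json.loads(cli_result.stdout)

    # Option 2: use the Azure SDK for Python and filter in Python code.
    credential = DefaultAzureCredential()
    client = ResourceManagementClient(credential, SUBSCRIPTION_ID)
    resources_from_sdk = [
        {"name": r.name, "type": r.type}
        for r in client.resources.list_by_resource_group(RESOURCE_GROUP)
        if r.location == "eastus"
    ]

    # Both lists should describe the same resources.
    assert {r["name"] for r in resources_from_cli} == {r["name"] for r in resources_from_sdk}

Either route works for assertions in a test suite; the CLI route keeps the filtering in JMESPath, while the SDK route keeps everything in Python objects.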

Related

How to make two json file configurations for different Firebase projects?

We have two Firebase projects, one for development and one for production. We use Cloud Functions. In one Cloud Function we need to use service-account-credentials.json. The problem is: how can I make this function take its data from service-account-credentials-dev.json when it runs against the development project, and from service-account-credentials-prod.json when it runs against production?
I know about the environment configuration, but as I understand it, this feature does not allow you to load the JSON file for a particular project.
I found the answer to my question here. Doug Stevenson wrote: "There is not. You could write your own program to read the json and convert that into a series of firebase CLI commands that get the job done."
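That suggestion could look something like the sketch below. This is only an illustration in Python, not anything from the Firebase docs: the file names, the project aliases, and the serviceaccount.* config namespace are all assumptions.

    import json
    import subprocess
    import sys

    def push_credentials(env: str) -> None:
        # Hypothetical project aliases; substitute your real project IDs.
        project = {"dev": "my-project-dev", "prod": "my-project-prod"}[env]
        with open(f"service-account-credentials-{env}.json") as f:
            creds = json.load(f)

        # functions:config keys must be lowercase; each value is passed as key=value.
        pairs = [f"serviceaccount.{key.lower()}={value}" for key, value in creds.items()]
        subprocess.run(
            ["firebase", "functions:config:set", *pairs, "--project", project],
            check=True,
        )

    if __name__ == "__main__":
        push_credentials(sys.argv[1])   # e.g. python push_creds.py dev

The function could then read the values back via functions.config().serviceaccount at runtime instead of bundling a specific JSON file.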

Bulk loading data into MarkLogic via external RESTful services

I have a series of documents that I need to migrate into MarkLogic. The documents are available to me via RESTful services in JSON. What I want to know is: is there any way, such as through MLCP or Query Console, to call those RESTful services and pull in the data? Otherwise I will have to create a small Java app, dump the files to a share, and then pick them up from MarkLogic.
mlcp is designed to source data from the file system or a MarkLogic database. Take a look at the Java Client API to perform ingestion from other sources. For example, you can fire up your favorite HTTP client in Java and add the results to a DocumentWriteSet. The write set acts like a buffer, allowing you to batch requests to MarkLogic for efficiency. You can then send that DocumentWriteSet to MarkLogic with one of the DocumentManager.write() methods. Take a look at the documentation for many more details or the "Bulk Writes" section of the getting started cookbook.
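The answer above recommends the Java Client API and a DocumentWriteSet for efficient batched writes. Since the rest of this page is Python-flavored, here is a hedged sketch of the same pull-and-ingest flow using MarkLogic's REST API (/v1/documents) instead; the source URL, MarkLogic host, port, and credentials are placeholders, and unlike a DocumentWriteSet this writes one document per request:

    import requests
    from requests.auth import HTTPDigestAuth

    SOURCE_URL = "https://example.com/api/documents"    # external RESTful service (placeholder)
    ML_DOCS_URL = "http://localhost:8000/v1/documents"  # MarkLogic REST API instance (placeholder)
    ML_AUTH = HTTPDigestAuth("admin", "admin")          # MarkLogic typically uses digest auth

    # Pull the JSON documents from the external service...
    docs = requests.get(SOURCE_URL, timeout=30).json()

    # ...and write each one into MarkLogic via PUT /v1/documents?uri=...
    for i, doc in enumerate(docs):
        resp = requests.put(
            ML_DOCS_URL,
            params={"uri": f"/migrated/doc-{i}.json"},
            json=doc,
            auth=ML_AUTH,
            timeout=30,
        )
        resp.raise_for_status()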

Getting historical data in FI-WARE using Cosmos

I'm trying to get all the historical information about a FI-WARE sensor.
I've seen that Orion uses Cygnus to store historical data in Cosmos. Is that information accessible, or is it only possible to use IDAS to get it?
Where could I get more info about this?
There are several ways to consume the data, in increasing order of learning curve:
Working with the raw data, either "locally" (i.e. logging into the Head Node of the cluster) by using the Hadoop commands, or "remotely" by using the WebHDFS/HttpFS REST API (see the sketch below). Note that with this approach you have to implement whatever analysis logic you need, since Cosmos only allows you to manage, as said, raw data.
Working with Hive in order to query the data in a SQL-like fashion. Again, you can do it locally by invoking the Hive CLI, or remotely by implementing your own Hive client in Java (other languages are available too) using the Hive libraries.
Working with MapReduce (MR) in order to implement more powerful analysis. For this, you'll have to create your own MR-based application (typically in Java) and run it locally. Once you are done with the local run of the MR app, you can go with Oozie, which allows you to run such MR apps remotely.
My advice is to start with Hive (step 1 is easy but does not provide any analysis capabilities): first locally, trying to execute some Hive queries, then remotely, implementing your own client. If this kind of analysis is not enough for you, then move to MapReduce and Oozie.
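For instance, a minimal sketch of the "remote raw data" option via the WebHDFS/HttpFS REST API, assuming placeholder values for the HttpFS endpoint, username, and the HDFS folder Cygnus has written to:

    import requests

    HTTPFS = "http://cosmos.example.org:14000/webhdfs/v1"   # HttpFS endpoint (placeholder)
    USER = "myuser"                                         # Cosmos username (placeholder)
    FOLDER = f"/user/{USER}/mysensor"                       # folder written by Cygnus (placeholder)

    # List the files stored for this sensor.
    listing = requests.get(f"{HTTPFS}{FOLDER}",
                           params={"op": "LISTSTATUS", "user.name": USER},
                           timeout=30)
    files = [f["pathSuffix"] for f in listing.json()["FileStatuses"]["FileStatus"]]

    # Download the raw contents of the first file.
    data = requests.get(f"{HTTPFS}{FOLDER}/{files[0]}",
                        params={"op": "OPEN", "user.name": USER},
                        timeout=30)
    print(data.text)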
All the documentation regarding Cosmos can be found in the FI-WARE Catalogue of enablers. Within this documentation, I would highlight:
Quick Start for Programmers.
User and Programmer Guide (functionality described in sections 2.1 and 2.2 is not currently available in FI-LAB).

Meteor reporting of data in an existing MySQL db. How?

I'm trying to make some reports using Meteor and Raphael JS. I have to report data from an existing MySQL database. I do not wish to write to that database. I need only the "R" from CRUD.
I have thought of various manual approaches: exporting .csv files from the MySQL db via the application itself (LimeSurvey), using mongoimport to populate a MongoDB collection, and then doing my CollectionName.find() etc. in Meteor.
Or perhaps there is some way of exposing RESTful endpoints just to consume data, and using the http package for Meteor.
Is there a good clean solution for using existing SQL data in a Meteor JS application?
How can one use pre-existing SQL data?
(I have no problem with duplication in MongoDB, mind you, however it has to be...)
Thank You
You can do it without any duplication completely from inside Meteor, but you will have to jump through a couple of hoops.
Firstly, use the mysql npm package to query the SQL database. Though Meteor provides Npm to require Node packages, I find that using meteor-npm is easier. Then, to do the "R"eading from MySQL, create a Meteor.method on your server which queries MySQL directly.
The second problem is that the mysql package is completely asynchronous: the SQL query returns its value in a callback, and by that point your Meteor.method call would already have returned, leaving the client with undefined. To fix that issue, we can use a Future.
There are a couple of ways of smoothing over this step:
Using meteor-sync-methods
Spinning your own version, based on advice from the issue requesting this natively
Using this easy-to-implement one-time pattern: "fence has already activated -- too late to add writes"
Hope that helps.

NetSuite Migrations

Has anyone had much experience with data migration into and out of NetSuite? I have to export DB2 tables into MySQL, manipulate the data, and then export it in a CSV file. Then take a CSV file of accounts and manipulate the data again so that accounts from our old system match up with the new one. Has anyone tried to do this in MySQL?
A couple of options:
Invest in a data transformation tool that connects to NetSuite and DB2 or MySQL. Look at Dell Boomi, IBM Cast Iron, etc. These tools allow you to connect to both systems, define the data to be extracted, perform data transformation functions and mappings and do all the inserts/updates or whatever you need to do.
For MySQL to NetSuite, PHP scripts can be written to access MySQL and NetSuite. On the NetSuite side, you can either use SOAP web services or write custom REST APIs (RESTlets) within NetSuite. SOAP is probably a bit slower than REST, but with REST you have to write the API yourself (server-side JavaScript; it's not hard, but there's a learning curve). A sketch of the MySQL-to-CSV side follows this answer.
Hope this helps.
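As a hedged illustration of the script-based route: the answer suggests PHP, but the MySQL extraction side can be sketched just as well in Python with the pymysql package. The connection details, table, and columns below are placeholders.

    import csv

    import pymysql  # placeholder choice of MySQL driver

    # Connection details, table, and columns are placeholders.
    conn = pymysql.connect(host="localhost", user="etl", password="secret", database="staging")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT account_id, account_name, balance FROM accounts")
            rows = cur.fetchall()
            headers = [col[0] for col in cur.description]
    finally:
        conn.close()

    # Manipulate the rows as needed, then dump them to CSV for the NetSuite import.
    with open("accounts_for_netsuite.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(headers)
        writer.writerows(rows)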
I'm an IBM i programmer; try CPYTOIMPF to create a pretty generic CSV file. It will go to a stream file; if you have NetServer running you can map a network drive to the IFS directory, or you can use FTP to get the CSV file from the IFS to another machine in your network.
Try Adeptia's NetSuite integration tool to perform ETL. You can also try Pentaho ETL for this (as far as I know, Celigo's NetSuite connector is built upon Pentaho). Jitterbit also has an extension for NetSuite.
We primarily have two options to pump data into NS:
i) SuiteTalk: SOAP-based web services. There are two versions of SuiteTalk, synchronous and asynchronous.
Typical tools like Boomi/Mule/Jitterbit use synchronous SuiteTalk to pump data into NS. They also have decent editors to help you do the mapping.
ii) RESTlets: NetSuite's REST-based architecture. These can also be used, but you may have to write external brokers to communicate with them.
Depending on your needs you can use either; in most cases you will be using SuiteTalk to bring data into NetSuite.
Hope this helps ...
We just got done doing this. We used an iPaaS platform called Jitterbit (similar to Dell Boomi). It can connect to MySQL and to NetSuite, and you can do transformations in the tool. I have been really impressed with the platform overall so far.
There are different approaches; I like the following for processing a batch job:
To import data into NetSuite:
Export a CSV from the old system and place it in a NetSuite File Cabinet folder (use a RESTlet or web services for this).
Run a scheduled script to load the files in the folder and update the records.
Don't forget to handle errors. Ways to handle errors: send an email, create a custom record, log to a file, or write to the record.
Once a file has been processed, move it to another folder or delete it.
To export data out of NetSuite:
Gather the data and export it to a CSV (you can use a saved search or similar).
Place the CSV in a File Cabinet folder.
From an external server, call web services or a RESTlet to grab the new CSV files in the folder (a sketch of this call follows the list).
Process the file.
Handle errors.
Call web services or a RESTlet to move or delete the CSV file.
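A minimal sketch of the "call a RESTlet from an external server" step in Python, assuming token-based authentication; the account ID, consumer/token credentials, script and deployment IDs, and the RESTlet's JSON response shape are all placeholders and assumptions:

    import requests
    from requests_oauthlib import OAuth1

    ACCOUNT = "1234567"   # NetSuite account ID (placeholder)
    RESTLET_URL = f"https://{ACCOUNT}.restlets.api.netsuite.com/app/site/hosting/restlet.nl"

    # NetSuite token-based authentication is OAuth 1.0 with HMAC-SHA256 signatures.
    auth = OAuth1(
        client_key="CONSUMER_KEY",
        client_secret="CONSUMER_SECRET",
        resource_owner_key="TOKEN_ID",
        resource_owner_secret="TOKEN_SECRET",
        realm=ACCOUNT,
        signature_method="HMAC-SHA256",
    )

    # Ask a hypothetical RESTlet for the new CSV files waiting in the File Cabinet folder.
    resp = requests.get(
        RESTLET_URL,
        params={"script": "123", "deploy": "1"},   # script and deployment IDs (placeholders)
        auth=auth,
        headers={"Content-Type": "application/json"},
        timeout=60,
    )
    resp.raise_for_status()

    # Assumed response shape: a JSON list of {"name": ..., "contents": ...} objects.
    for f in resp.json():
        with open(f["name"], "w") as out:
            out.write(f["contents"])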
You can also use Pentaho Data Integration; it's free and the learning curve is not that steep. I took this course and was able to play around with the tool within a couple of hours.