We have two Firebase projects, one for development and one for production. We use Cloud Functions, and one of our functions needs to use service-account-credentials.json. The problem is: how can I make this function take its data from service-account-credentials-dev.json when it is deployed to the development project, and from service-account-credentials-prod.json when it is deployed to production?
I know about environment configuration, but as I understand it, that feature does not allow you to load a JSON file for a particular project.
I found the answer to my question here. Doug Stevenson wrote: "There is not. You could write your own program to read the json and convert that into a series of firebase CLI commands that get the job done."
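For the record, a minimal sketch of the kind of program Doug describes could look like the Python script below. The serviceaccount.* config namespace and the dev/prod project aliases are my own assumptions, not anything Firebase prescribes; the deployed function would then read the values back from functions.config().serviceaccount instead of requiring the JSON file directly.

```python
#!/usr/bin/env python3
"""Read a service account JSON file and push its fields into
Firebase functions config for the matching project.

Sketch only: the 'serviceaccount' config namespace and the
'dev'/'prod' project aliases are assumptions, not Firebase conventions.
"""
import json
import subprocess
import sys

# Map each Firebase project alias to its credentials file (from the question).
CREDENTIALS = {
    "dev": "service-account-credentials-dev.json",
    "prod": "service-account-credentials-prod.json",
}

def push_config(alias: str) -> None:
    with open(CREDENTIALS[alias]) as f:
        creds = json.load(f)

    # One functions:config:set call with all key=value pairs.
    # Keys are lower-cased since functions config keys are expected to be lower case.
    pairs = [f"serviceaccount.{k.lower()}={v}" for k, v in creds.items()]
    subprocess.run(
        ["firebase", "functions:config:set", *pairs, "--project", alias],
        check=True,
    )

if __name__ == "__main__":
    push_config(sys.argv[1])  # e.g. python push_config.py dev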
With a colleague of mine, we're building an app written in Dart with Flutter in Android Studio. We've arrived at the point where we need to start integrating a database to collect and send user-filled data, so we chose MongoDB, which will run in Docker so that our app is ready to work across multiple devices. Since we will have many users and each of them will be entering their own data, we have a lot of parameters to take into account, so we're creating a JSON skeleton to map out the structure of what data goes where. The obstacle is that we have no clue what the best way is to approach MongoDB-Docker integration with our Android Studio code, as it is our first time using MongoDB and Docker. Any good tips or resources that could put us on the right track? Thank you.
Hi, your real question should be: what will you use in between those two? You should (I guess) create an API to simplify and secure your user-DB interactions.
If you're not familiar with these principles, a quick search should help.
If you want to continue without any API, you should put a lot of effort into keeping your code VERY clean, as your app will have a lot of information inside its source code, and you should write documentation for it. You could also use an ORM.
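If it helps, a very small sketch of such an API layer is below, using Flask and PyMongo purely as an example stack; the collection name, fields, port, and the "mongo" hostname (the MongoDB service name in a docker-compose setup) are all assumptions. The Flutter app would call these HTTP endpoints instead of connecting to MongoDB directly.

```python
# Minimal API layer between the mobile app and MongoDB.
# Sketch only: the stack (Flask + PyMongo), collection name and fields are assumptions.
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)

# "mongo" would be the MongoDB service name in a docker-compose setup.
client = MongoClient("mongodb://mongo:27017")
db = client["appdb"]

@app.route("/users/<user_id>/entries", methods=["POST"])
def add_entry(user_id):
    """Store one user-submitted record; the app never touches the DB directly."""
    entry = request.get_json()
    entry["user_id"] = user_id
    db.entries.insert_one(entry)
    return jsonify({"status": "ok"}), 201

@app.route("/users/<user_id>/entries", methods=["GET"])
def list_entries(user_id):
    docs = db.entries.find({"user_id": user_id}, {"_id": 0})
    return jsonify(list(docs))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

This keeps the database credentials out of the app binary and gives you a single place to add authentication and validation later.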
I am deploying a Node.js app on Heroku through GitHub, and I want to write to/update a JSON file that is in the GitHub repository. Is that possible, and if so, how do I do it?
No, and it's not really a recommended pattern. What are you trying to achieve? Generally it's bad practice to mix your plumbing (the code in your repo) with what runs through it (the data your app uses and creates).
A database is generally used to store and retrieve data.
I'm trying to get all the historical information about a FI-WARE sensor.
I've seen that Orion uses Cygnus to store historical data in Cosmos. Is that information accessible, or is it only possible to use IDAS to get it?
Where could I get more info about this?
There are several ways you can consume the data, listed here in order of increasing learning curve:
Working with the raw data, either "locally" (i.e. logging into the Head Node of the cluster) by using the Hadoop commands, or "remotely" by using the WebHDFS/HttpFS REST API (a small WebHDFS sketch follows this list). Please note that with this approach you have to implement whatever analysis logic you need, since Cosmos only allows you to manage, as said, raw data.
Working with Hive in order to query the data in a SQL-like way. Again, you can do it locally by invoking the Hive CLI, or remotely by implementing your own Hive client in Java (other languages are supported as well) using the Hive libraries.
Working with MapReduce (MR) in order to implement heavier analysis. For this, you'll have to create your own MR-based application (typically in Java) and run it locally. Once you are done with the local runs of the MR app, you can move on to Oozie, which allows you to run such MR apps remotely.
My advice is to start with Hive (step 1 is easy but does not provide any analysis capabilities): first try executing some Hive queries locally, then implement your own remote client. If this kind of analysis is not enough for you, then move to MapReduce and Oozie.
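To make the first (raw data) option concrete, a remote WebHDFS/HttpFS call from Python could look like the sketch below; the host, port, user name, and paths are placeholders you would replace with your own Cosmos account details.

```python
# List and read raw files in Cosmos over the WebHDFS/HttpFS REST API.
# Sketch only: host, port, user and paths below are placeholders.
import requests

COSMOS = "http://cosmos.example.org:14000/webhdfs/v1"   # HttpFS endpoint (assumed)
USER = "myuser"
PATH = f"/user/{USER}/mysensor/data"

# LISTSTATUS returns the files Cygnus has written under the path.
resp = requests.get(f"{COSMOS}{PATH}", params={"op": "LISTSTATUS", "user.name": USER})
resp.raise_for_status()
for f in resp.json()["FileStatuses"]["FileStatus"]:
    print(f["pathSuffix"], f["length"])

# OPEN downloads one file's raw content for local processing ("file.txt" is a placeholder).
resp = requests.get(f"{COSMOS}{PATH}/file.txt",
                    params={"op": "OPEN", "user.name": USER},
                    allow_redirects=True)
print(resp.text[:500])
```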
All the documentation regarding Cosmos can be found in the FI-WARE Catalogue of enablers. Within this documentation, I would highlight:
Quick Start for Programmers.
User and Programmer Guide (functionality described in sections 2.1 and 2.2 is not currently available in FI-LAB).
Up front, my question is: are there any standard/common methods for implementing a software package that maintains and updates a MySQL database?
I'm an undergraduate research assistant and I've been tasked with creating a cron job that updates one of our university's in-house bioinformatics databases.
Instead of building one monolithic binary that does every aspect of the work, I've divided the problem into subtasks and written a few Python/C++ modules to handle the different tasks, as listed in the pipeline below:
1. Query the remote database for a list of updated files and return the result for the given time interval (monthly / weekly / daily). Implemented in Python; the URLs of the updated files are written to stdout.
2. Read in the relative URLs of the updated files and download them to a local directory. Implemented in Python.
3. Unzip each archive of new files. Implemented as a bash script.
4. Parse the files into CSV format. Implemented in C++.
5. Run a MySQL query to insert the CSV files into the database. Obviously just a bash script.
I'm not sure how to go about combining these modules into one package that can be easily moved to another machine, say if our current servers run out of space and the DB needs to be copied to another file system (it's already happened once before).
My first thought is to create a bash script that pipes all of these modules together, given that they all operate with stdin/stdout anyway, but this seems like an odd way of doing things.
Alternatively, I could write my C++ code as a Python extension, package all of these scripts together, and just write one Python file that does this work for me.
Should I be using a package manager so that my code is easily installed on different machines? Does a simple zip archive of the entire updater with an included makefile suffice?
I'm extremely new to database management, and I don't have a ton of experience with distributing software, but I want to do a good job with this project. Thanks for the help in advance.
Inter-process communication (IPC) is a standard mechanism for composing many disparate programs into a complex application. IPC includes piping the output of one program to the input of another, using sockets (e.g. issuing HTTP requests from one application to another or sending data via TCP streams), using named FIFOs, and other mechanisms. In any event, using a Bash script to combine these disparate elements (or, similarly, writing a Python script that accomplishes the same thing with the subprocess module) would be completely reasonable. The only thing I would point out with this approach is that, since you are reading from and writing to a database, you really do need to consider security/authentication (e.g. can anyone who can invoke this application write to the database? How do you verify that the caller has the appropriate access?).
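To make the subprocess idea concrete, a driver along the lines of the sketch below would chain your stages through their stdin/stdout while keeping each one a separate program; the stage file names and flags are placeholders for the modules you've actually written.

```python
# Driver that chains the pipeline stages via their stdin/stdout.
# Sketch only: the stage commands below are placeholders for your actual modules.
import subprocess
import sys

def run_stage(cmd, input_bytes=None):
    """Run one stage, feeding it the previous stage's output; abort on failure."""
    result = subprocess.run(cmd, input=input_bytes, capture_output=True)
    if result.returncode != 0:
        sys.stderr.write(f"stage {cmd[0]} failed:\n{result.stderr.decode()}\n")
        sys.exit(result.returncode)
    return result.stdout

urls = run_stage(["python3", "query_remote.py", "--interval", "weekly"])
run_stage(["python3", "download_files.py", "--dest", "downloads/"], urls)
run_stage(["bash", "unzip_all.sh", "downloads/"])
run_stage(["./parse_to_csv", "downloads/", "csv/"])
run_stage(["bash", "load_mysql.sh", "csv/"])
```

The advantage over a pure bash pipeline is that you get explicit error handling and a single place to add logging, while each stage stays independently testable.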
Regarding distribution, I would say that the most important thing is to ensure that, for any given release (and prior releases), you can find a snapshot of all components and their dependencies at the versions they were at the time of that release. You should set up a code repository (e.g. on GitHub or some other service that you trust) and create a release branch at the time of each release that contains a snapshot of all the tools as of that release. That way if, God forbid, the one and only machine on which you have installed the tools fails, you will still be able to instantly grab a copy of the code and install it on a new machine (and if something breaks, you will be able to go back to an earlier release and binary search until you find where the breakage happened).
In terms of installation, it really depends on how many steps are involved. If it is as simple as unzipping a folder and ensuring that the folder is in your PATH environment variable, then it is probably not worth the hassle to create any special distribution mechanism (unless you are able to do so easily). What I would recommend, though, is clearly documenting the installation steps in the INSTALL or README documentation in the repository (so that the instructions are snapshotted) as well as on the website for your repository. If the number of steps is small and easy to accomplish, then I wouldn't bother with much more. If there are many steps involved (like downloading and installing a large number of dependencies), then I would recommend writing a script that can automate the installation process. That being said, it's really about what the University wants in this case.
Has anyone had much experience with data migration into and out of NetSuite? I have to export DB2 tables into MySQL, manipulate the data, and then export it in a CSV file. Then I have to take a CSV file of accounts and manipulate the data again so that accounts from our old system match up with those in the new one. Has anyone tried to do this in MySQL?
A couple of options:
Invest in a data transformation tool that connects to NetSuite and DB2 or MySQL. Look at Dell Boomi, IBM Cast Iron, etc. These tools allow you to connect to both systems, define the data to be extracted, perform data transformation functions and mappings and do all the inserts/updates or whatever you need to do.
For MySQL to NetSuite, PHP scripts can be written to access both MySQL and NetSuite. On the NetSuite side, you can either use SOAP web services or write custom REST APIs (RESTlets) within NetSuite. SOAP is probably a bit slower than REST, but with REST you have to write the API yourself (server-side JavaScript; it's not hard, but there's a learning curve).
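The MySQL half of that approach doesn't have to be PHP, by the way; a rough Python sketch of dumping transformed account rows to a CSV for the NetSuite import is below (the connection details, table, and column names are invented).

```python
# Dump transformed account rows from MySQL to a CSV for import into NetSuite.
# Sketch only: connection details, table and column names are invented.
import csv
import pymysql

conn = pymysql.connect(host="localhost", user="etl", password="secret", database="legacy")
with conn.cursor() as cur, open("accounts.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["External ID", "Name", "Old Account Number"])
    cur.execute("SELECT id, name, acct_no FROM accounts")
    for row_id, name, acct_no in cur.fetchall():
        # Whatever mapping/cleanup the old-to-new account matching needs goes here.
        writer.writerow([f"LEGACY-{row_id}", name.strip(), acct_no])
conn.close()
```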
Hope this helps.
I'm an IBM i programmer; try CPYTOIMPF to create a pretty generic CSV file. It will go to a stream file; if you have NetServer running, you can map a network drive to the IFS directory, or you can use FTP to get the CSV file from the IFS to another machine on your network.
Try Adeptia's NetSuite integration tool to perform ETL. You can also try Pentaho ETL for this (as far as I know, Celigo's NetSuite connector is built upon Pentaho). Jitterbit also has an extension for NetSuite.
We primarily have two options to pump data into NetSuite (NS):
i) SuiteTalk: with this we can do SOAP-based integrations. There are two versions of SuiteTalk, synchronous and asynchronous.
Typical tools like Boomi/Mule/Jitterbit use synchronous SuiteTalk to pump data into NS. They also have decent editors to help you do the mapping.
ii) RESTlets: NetSuite's typical REST-based architecture; these can also be used, but you may have to write external brokers to communicate with them.
Pick whichever fits your needs; in most cases you will be using SuiteTalk to bring data into NetSuite.
Hope this helps ...
We just got done doing this. We used an iPaaS platform called Jitterbit (similar to Dell Boomi). It can connect to MySQL and to NetSuite, and you can do transformations in the tool. I have been really impressed with the platform overall so far.
There are different approaches; I like the following for processing a batch job:
To import data into NetSuite:
1. Export a CSV from the old system and place it in a NetSuite File Cabinet folder (use a RESTlet or web services for this).
2. Run a scheduled script to load the files in the folder and update the records.
3. Don't forget to handle errors. Ways to handle errors: send an email, create a custom record, log to a file, or write to a record.
4. Once a file has been processed, move it to another folder or delete it.
To export data out of NetSuite:
1. Gather the data and export it to a CSV (you can use a saved search or similar).
2. Place the CSV in a File Cabinet folder.
3. From an external server, call web services or a RESTlet to grab the new CSV files in the folder (see the sketch after these steps).
4. Process the file.
5. Handle errors.
6. Call web services or a RESTlet to move or delete the CSV file.
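For the "call a RESTlet from an external server" step above, the external side could be as simple as the Python sketch below; the RESTlet URL, script/deploy IDs, auth header, and response shape are entirely hypothetical and depend on the RESTlet you write.

```python
# External-server side of the export flow: ask a RESTlet for new CSV files
# in the File Cabinet folder, save them locally, then ask it to archive them.
# Sketch only: URL, parameters, auth header and response shape are hypothetical.
import requests

RESTLET_URL = "https://ACCOUNT.restlets.api.netsuite.com/app/site/hosting/restlet.nl"
PARAMS = {"script": "customscript_csv_export", "deploy": "1"}   # hypothetical IDs
HEADERS = {"Authorization": "<token-based auth header goes here>",
           "Content-Type": "application/json"}

# 1. Ask the RESTlet which CSV files are waiting in the folder.
files = requests.get(RESTLET_URL, params=PARAMS, headers=HEADERS).json()

for f in files:
    # 2. Download each file's contents (assumes the RESTlet returns them on request).
    body = requests.get(RESTLET_URL, params={**PARAMS, "fileId": f["id"]},
                        headers=HEADERS).json()
    with open(f["name"], "w") as out:
        out.write(body["contents"])

    # 3. Tell the RESTlet to move (or delete) the processed file.
    requests.post(RESTLET_URL, params=PARAMS, headers=HEADERS,
                  json={"action": "archive", "fileId": f["id"]})
```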
You can also use Pentaho Data Integration; it's free and the learning curve is not that steep. I took this course and was able to play around with the tool within a couple of hours.