NoFlo and Loading JSON Graphs

I am reviewing the possibility of using NoFlo as an Orchestration Engine.
To keep a "Separation Of Concerns", and using NodeJS, I will basically create a RESTful API, using Express, that will have a series of POST and GET requests. This RESTFful API will interact with the Orchestrations, (i.e. NoFlo Graphs and Runtime) by starting and stopping graphs in the runtime. From a behavior point of view, a POST requests will start/stop an Orchestration and a GET requests will get information about the Orchestration (i.e. Status, Errors...). From a state point of view, a POST will create an Orchestration and a GET will enumerate the Orchestration.
Based on what I read in various Stack Overflow posts (e.g. Starting out with noflo, running it from nodejs), it appears possible, but I still have a few questions. Here is one of them.
Is it possible to load a JSON graph from memory into the NoFlo runtime, instead of having a persisted file and loading it into the NoFlo network from that file? I would like to load the graph as a JSON object.
I am trying to do two things with this:
- Load and Save Graphs to a Database.
- Have a UI manage these Graphs in the Database.
Any thoughts on this question and topic would be greatly appreciated.

Yes, it is possible to make NoFlo run a JSON (or .fbp) graph definition from memory, from a file, wherever.
This happens in two steps:
1. Load the graph string/object into an instance of noflo.Graph:
var noflo = require('noflo');

noflo.graph.loadJSON(graphDefinition, function (err, graph) {
  if (err) {
    // Handle graph loading errors
    return;
  }
  // Now you have a graph object; you can create a network from it
});
2. Instantiate a NoFlo network based on the graph definition:
// Inside the loadJSON callback above, where `graph` is available:
noflo.createNetwork(graph, function (err, network) {
  if (err) {
    // Handle network setup/starting errors
    return;
  }
  // Now you have a running network object. You can use its
  // lifecycle events and methods to interact with it
});
Additionally, the graph object loaded above is "live", so if you make changes to it, the network will react accordingly. This means you can add/change nodes and connections at runtime.
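For example, a minimal sketch of a runtime modification (the node id, port names, and the core/Output component below are illustrative assumptions, not from the question):

// Add a new node and wire it to an existing one while the network runs.
// The live network reacts to these graph events and connects the new
// node on the fly.
graph.addNode('Logger', 'core/Output');
graph.addEdge('SomeExistingNode', 'out', 'Logger', 'in');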

Related

Caching API response from simple ruby script

I'm working on a simple ruby script with cli that will allow me to browse certain statistics inside the terminal.
I'm using the API from the following website: https://worldcup.sfg.io/matches
require 'httparty'
url = "https://worldcup.sfg.io/matches"
response = HTTParty.get(url)
I have two goals in mind. First is to somehow save the JSON response (I'm not using a database) so I can avoid unnecessary requests. Second is to check if new data is available, and if it is, to overwrite the previously saved response.
What's the best way to go about this?
... with cli ...
So caching in memory is likely not available to you: a CLI script exits when it finishes, taking any in-memory cache with it. In this case you can save the response to a file on disk.
Second is to check if new data is available, and if it is, to overwrite the previously saved response.
The thing is, how can you check if new data is available without doing a request for the data? Not possible (given the information you provided). So you can simply keep fetching data every 5 minutes or so and updating your local file.
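A rough sketch of that approach, in the question's Ruby (the cache filename and the 5-minute window are arbitrary choices, not requirements):

require 'httparty'
require 'json'

CACHE_FILE = 'matches_cache.json' # arbitrary cache path
MAX_AGE = 300                     # seconds; refresh after 5 minutes

def matches
  # Serve from the cache file while it is fresh enough
  if File.exist?(CACHE_FILE) && (Time.now - File.mtime(CACHE_FILE)) < MAX_AGE
    return JSON.parse(File.read(CACHE_FILE))
  end
  # Otherwise fetch again and overwrite the previously saved response
  response = HTTParty.get('https://worldcup.sfg.io/matches')
  File.write(CACHE_FILE, response.body)
  JSON.parse(response.body)
end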

nodejs control a bidirectional playback of geo-coded json data from mongodb or file

I have a js frontend, to a backend WAMP router, that has a nodejs app that publishes JSON encoded geo-coded data in real time to a WAMP topic. The frontend subscribes to that WAMP topic and visualizes the moving geo-coded targets on a map. That data is also stored on the backend within MongoDB.
I need to add a function whereby the user can request, play, stop, rewind, and fast-forward through archived data. I have a backend nodejs function that receives the begin/end times for the query from the frontend, makes the MongoDB query and uses the MongoDB db.find.stream() interface to publish the data to a WAMP topic in realtime. The stream.on("data", ...) handler pauses/resumes the stream to play back the data in time-step. This works fine for playback and stop, but I don't know how to handle rewind and fast-forward. Based on what I've seen, you can't rewind or iterate backwards on a stream.
My brute force approach would be to load the entire huge query result into an in-memory array and have methods the frontend calls to control the increment/decrement of the array pointer, depending on whether they are playing or rewinding, so that they can see the targets moving on the map and find what they are looking for.
Is there an API or library that would allow me to accomplish rewind without loading the array into memory? Possibly storing the result into a file and moving back and forth through it?
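For reference, the pause/resume pacing described above looks roughly like this (the collection, the ts field, and the WAMP topic are assumptions; note it only moves forward through the cursor, which is exactly the limitation in question):

// Stream the query results and pace them by the recorded time gaps
var stream = collection.find({ ts: { $gte: begin, $lte: end } }).stream();
var lastTs = null;

stream.on('data', function (doc) {
  stream.pause();
  var delay = lastTs ? doc.ts - lastTs : 0; // gap between samples (ms)
  lastTs = doc.ts;
  setTimeout(function () {
    session.publish('com.example.playback', [doc]); // publish to the WAMP topic
    stream.resume();
  }, delay);
});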

webRequest API not capturing all page requests from application

I am trying to download JSON data from a web application. The URL/API is static and I can use it to call the webpage that returns the data. There is a session variable parameter, created when you launch the application, that needs to be added to the URL/API call to connect to the server and download the JSON data; it times out if the application is not actively used. My current process is to open the developer tools, launch the web application and, when the specific JSON request is made, copy the parameter value and add it to a script that mimics the page request and downloads the JSON data.
I am trying to avoid manually copying and pasting this session variable parameter. I want to automatically capture the web request, parse out the value that I need, set a cookie on my machine, and then have a PHP script pick up the cookie to initiate the JSON data download with the valid session value.
I have looked into creating an extension in Chrome using chrome.webRequest listeners (onResponseStarted, onCompleted) with the following code:
chrome.webRequest.onCompleted.addListener(function (details) {
  console.log(details);
  chrome.cookies.set(
    { url: "http://localhost/MySite/", name: "MyCookie", value: "Tested" }
  );
}, { urls: ["<all_urls>"] });
This code works for the main web requests but it doesn't pick up all the JSON data requests that are made by the application. The application is in SWF format, which is most likely the problem, but I can see the requests in the Network panel of the Developer Tools, and they are captured by chrome://net-internals, which leads me to believe that I should be able to capture them somehow.
I have looked into chrome.devtools.network but I cannot seem to figure out how that is supposed to work. Any advice or direction would be greatly appreciated.

How to preserve data through AngularJS routing?

I'm new to AngularJS and am trying to build myself a simple little app. I have JSON data for the app being fetched with $resource, and this data should be the same across multiple views/routes. However, when I go to a new route, the JSON data (stored as $scope.data) is no longer available to the new view. What can I do to pass this data to the new view and not require another fetch? (The tutorial phone-catalog app re-fetches this data every time from what I can tell.)
From what I understand, $rootScope can do this but it seems to be generally frowned upon. I apologize if this doesn't make much sense; I'm very much diving in the deep end here.
Use a service to store the data. Inject that service into each controller that needs access to this data. Each time a controller is created and executes (because you switch to another view/route), it can ask the service for the data. If the service doesn't yet have the data, it can make a request to the server and return a promise to the controller (see below for how to do that). If the service has the data, it can return it to the controller immediately.
See also Processing $http response in service
Note that services are singletons, unlike controllers.
Another variation: when the service is created, it can go fetch the data itself, and then store it for later use. Controllers can then $watch properties or functions on the service. For an example of this approach see How do I store a current user context in Angular?
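A minimal sketch of the service approach from the first paragraph (the service name, URL, and controller are illustrative, not from the question):

app.factory('DataService', function ($http, $q) {
  var cached = null;
  return {
    get: function () {
      // Already fetched once: resolve immediately with the stored data
      if (cached) return $q.when(cached);
      // First call: fetch from the server and remember the result
      return $http.get('/api/data').then(function (response) {
        cached = response.data;
        return cached;
      });
    }
  };
});

app.controller('ViewCtrl', function ($scope, DataService) {
  // Runs on every route change, but only the first call hits the server
  DataService.get().then(function (data) {
    $scope.data = data;
  });
});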

wso2 API Manager and BAM - How to control API invocation?

How can I retrieve the number of API invocations? I know the data has to be somewhere because WSO2 BAM shows pie charts with similar data...
I would like to get that number in a mediation sequence; is that possible? Might this be achieved via a DB lookup?
API usage monitoring in WSO2 API Manager works as follows: there is an API handler (org.wso2.carbon.apimgt.usage.publisher.APIUsageHandler) that gets invoked for each request and response passing through the API gateway. In this handler, all pertinent information with regard to API usage is published to the WSO2 BAM server. The WSO2 BAM server persists this data in the Cassandra database that is shipped with it. Then there is a BAM Toolbox, packaged with the required analytic scripts written in Apache Hive, that can be installed on the BAM server. These scripts summarize the data periodically and persist the summarized data to an SQL database. The graphs and charts shown in the API Publisher web application are created from the summarized data in that SQL database.
Now, if what you require is extractable from these summarized SQL tables, then the process is very straightforward: you could use the DBLookup mediator for this. But if some dimension of the data you need has been lost in the summarizing, then you will have a little more work to do.
You have two options.
The easiest approach, which involves no Java coding at all, would be to write a custom Hive script that suits your requirement and summarizes the data to an SQL table. Then, as before, use a DBLookup mediator to read the data. You can look at the existing Hive scripts shipped with the product to get an idea of how they are written.
If you don't want BAM in the picture, you can still do it with minimal coding, as follows. The implementation class that performs the publishing is org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher. This class implements the interface org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataPublisher. The interface has three instance methods:
public void init();
public void publishEvent(RequestPublisherDTO requestPublisherDTO);
public void publishEvent(ResponsePublisherDTO responsePublisherDTO);
The init() method runs just once, during server startup. This is where you can add any logic needed to bootstrap the class.
publishEvent(RequestPublisherDTO) is where you publish request events and publishEvent(ResponsePublisherDTO) is where you publish response events. The DTO objects are encapsulated representations of the request and response data, respectively.
What you will have to do is write a new implementation of this interface and configure it as the value of the DataPublisherImpl property in api-manager.xml. To make things easier, you can simply extend the existing org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher, add your logic to persist usage data to an SQL database inside init(), publishEvent(RequestPublisherDTO) and publishEvent(ResponsePublisherDTO), and at the end of each method call the respective superclass method.
E.g. the overriding init() will call super.init(). This way you are only adding the code necessary for your requirement, and leaving the BAM stat collection to the superclass.