Finding the draw_if_interactive() function in pyplot.py

There are multiple draw_if_interactive() calls in the pyplot module, but I can't find this function's definition anywhere in the module.
From intuition and reading around, it's an easy guess that the function enables on-demand plotting, but where can I read its definition? Thanks.

The function is actually defined in the backend code, so the implementation depends on your backend. For example, with the TkAgg backend the function is in backend_tkagg.py:
def draw_if_interactive():
    if matplotlib.is_interactive():
        figManager = Gcf.get_active()
        if figManager is not None:
            figManager.show()
Other backends seem to have the same kind of function: they use matplotlib.is_interactive() to determine whether this is an interactive session, and then use the backend-specific drawing commands to draw the image.
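To see the effect in practice, here is a minimal sketch (assuming an interactive-capable backend such as TkAgg) that toggles the flag draw_if_interactive() checks:

import matplotlib
import matplotlib.pyplot as plt

plt.ion()                           # turn interactive mode on
print(matplotlib.is_interactive())  # True

# in interactive mode, pyplot commands end by calling draw_if_interactive(),
# so this figure is drawn immediately, without an explicit plt.show()
plt.plot([1, 2, 3])

plt.ioff()                          # turn interactive mode off
plt.plot([3, 2, 1])
plt.show()                          # now an explicit show() is needed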

Related

Accessing regmap RegFields

I am trying to find a clean way to access the regmap that is used with *RegisterNode for creating documentation and testing files. The TLRegisterNode has methods for generating the JSON through some Annotations; these are added in the regmap method via the ElaborationArtefacts object. Other protocols don't seem to have these annotations.
Is there any way to iterate over the "regmap" Register Fields post-elaboration, or during it?
I cannot just access the regmap, as it's not really a val/var but a method. I can't quite figure out where this information is being stored; in fact, I don't believe it's really "storing" any information so much as simply creating the hardware that attaches the specified logic to the RegisterNode-based logic.
The JSON output is actually fine for me, as I could just write a post-processing script to convert JSON to my required formats, but I'm wondering if I can access this information OR if I could add a custom function call at the end. I cannot extend the case class *RegisterNode, but I'm not sure if it's possible to add custom functions to run at the end of the regmap method.
Here is something I threw together quickly:
// in *RegisterRouter.scala
// the callback takes a Seq[RegField.Map] because Scala function types
// cannot use varargs
def customregmap(customFunc: Seq[RegField.Map] => Unit, mapping: RegField.Map*) = {
  regmap(mapping: _*)
  customFunc(mapping)
}
def regmap(mapping: RegField.Map*) = {
  // normal stuff
}
A user could then create a custom function and pass it to the regmap call or to the RegisterRouter:
def myFunc(mapping: Seq[RegField.Map]): Unit = {
  println("I'm doing my custom function for regmap!")
}
// ...
node.customregmap(myFunc,
  0x0 -> coreControlRegFields,
  0x4 -> fdControlRegFields,
  0x8 -> fdControl2RegFields,
)
This is just a quick example. I believe it would be better, if something like this were possible, to have a Seq of functions that could be added to the RegisterNode and run at the end of the regmap method, similar to how TLRegisterNode currently works. That way a user could add an arbitrary number of them and still use the regmap call.
Background (not directly part of question):
I have a unified register script that I have built over the years, in which I describe the registers for a particular IP. It works very similarly to RegField/node.regmap, except that it obviously doesn't know about diplomacy and the like. It generates the Verilog, but also a variety of files for DV (basic `defines for simple Verilog simulations, and more complex uvm_reg_block defines, with the ability to describe multiples of the IP for a subsystem, all the way up to an SoC level). It also prints out C header files for SW and Sphinx reStructuredText for documentation.
Diplomacy actually solves one of the main issues I've been dealing with, so I'm obviously trying to move most of my newer designs to Chisel/Diplomacy.
I ended up solving this by creating my own RegisterNode, which is the same as the rocket-chip RegisterNodes except that I use a different ElaborationArtefacts entry to grab the info and store it for later.

Where exactly does the trainable_variables method belong in TensorFlow?

I'm a newbie in both deep learning and TensorFlow, and I'm trying to learn how to implement deep learning code based on the function API (not Keras) by following example code.
Inside the code I'm looking at, I found sources saying gradients = tape.gradient(loss, model.trainable_variables).
I intuitively got what trainable variables means, but in order to understand clearly, I tried to search the TensorFlow documentation (which module or class the method belongs to, what its key arguments are, etc.), but I wasn't able to find the information I want. (trainable_variables was not in their documentation index, and I'm wondering why.)
So can anyone please tell me the module/class that trainable_variables belongs to, which arguments it takes, and also how it is able to judge and collect all the trainable variables from the model?
The reason you did not find this method is that trainable_variables is not a method but an attribute/property. The Model class has a trainable_variables attribute, which is not documented officially. It is inherited from the base class Layer, and, to put it shortly, the list (of trainable variables) gets populated as new layers are added, since all layers have an init parameter trainable (this comes from the base class Layer too). You can check the source code if you want to: "the source of the property", "adding new weights to a layer appends to the list".
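To illustrate, here is a minimal sketch (the small Keras model is made up for the example; the question's tape.gradient line is included at the end):

import tensorflow as tf

# trainable_variables is a property, so there are no parentheses and no arguments
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# the list was populated as the two Dense layers above were added
for v in model.trainable_variables:
    print(v.name, v.shape)  # kernel and bias of each Dense layer

# variables of layers marked trainable=False drop out of the list
model.layers[0].trainable = False
print(len(model.trainable_variables))  # 2: only the second layer's kernel and bias

# typical use with a gradient tape, as in the code the question quotes
x = tf.random.normal((2, 8))
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x)))
gradients = tape.gradient(loss, model.trainable_variables)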

Lambda function calling another Lambda function

I want to create a Lambda function that runs through S3 files and if needed triggers other Lambda functions to parse the files in parallel.
Is this possible?
Yes, it's possible. You would use the AWS SDK (which is included in the Lambda runtime environment for you) to invoke other Lambda functions, just as you would from code running anywhere else.
You'll have to specify which language you are writing the Lambda function in if you want a more detailed answer.
If I understand your problem correctly, you want one lambda that goes through a list of files in an S3 bucket. Some condition will decide whether a file should be parsed or not, and for the files that should be parsed you want another 'file-parsing' lambda to parse those files.
To do this you will need two lambdas - one 'S3 reader' and one 'S3 file parser'.
For triggering the 'S3 file parser' lambda you have a few different options. Here are two:
Trigger it using an SNS topic. (Here is an article on how to do that.) If you have a very long list of files this might be an issue, as you will most likely surpass the number of instances of a lambda that can run in parallel.
Trigger it by invoking it with the AWS SDK. (See the article 'Leon' posted as a comment for how to do that; a minimal sketch also follows this list.) What you need to consider here is that a long list of files might cause the 'S3 reader' lambda that controls the invocations to time out, since there is a 5 min runtime limit for a lambda.
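Here is a minimal sketch of the SDK route, assuming the lambdas are written in Python with boto3; the bucket name, parser function name, and payload shape are all hypothetical:

import json
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

def handler(event, context):
    # the 'S3 reader': list the files and fan out to the parser lambda
    response = s3.list_objects_v2(Bucket="my-bucket")  # hypothetical bucket
    for obj in response.get("Contents", []):
        # InvocationType="Event" is an asynchronous invocation, which is
        # what lets the 's3-file-parser' lambdas run in parallel
        lambda_client.invoke(
            FunctionName="s3-file-parser",  # hypothetical function name
            InvocationType="Event",
            Payload=json.dumps({"bucket": "my-bucket", "key": obj["Key"]}),
        )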
Depending on the actual use case, another potential solution is to have just one lambda that gets triggered when a file is uploaded to the S3 bucket, and to let it decide whether the file should be parsed, parsing it if needed. More info about how to do that can be found in this article and this tutorial.
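A rough sketch of that single-lambda variant, again assuming Python; the .csv check and parse_file are hypothetical stand-ins for your own condition and parsing logic:

def parse_file(bucket, key):
    # hypothetical parsing routine; replace with your real logic
    print(f"parsing s3://{bucket}/{key}")

def handler(event, context):
    # S3 upload notifications deliver the bucket and key in event["Records"]
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if key.endswith(".csv"):  # hypothetical 'should this be parsed?' condition
            parse_file(bucket, key)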

Automatically generate R source code to build a package

I wrote bindings to an API and put everything into an R package, including tests, vignettes, etc., but the API keeps changing. This brings up some issues:
updating my package is error-prone; maybe I miss a new function or forget to mark an old one as deprecated
submitting the package to CRAN is not a good idea, since the package changes frequently and packages are reviewed by hand
I have a hard time keeping this software up to date, since the API changes irregularly and therefore I may miss some changes
I came up with the idea of generating the bindings automatically. The API itself provides everything required for that via an online JSON documentation, and these docs always reflect the current definition of the API.
Writing some code which converts the JSON docs to R functions is not the problem. But if I do so, I still need to update the package on CRAN. The best solution would be to create a package that (on load) looks up the API definition and creates the required functions. Ideally these functions should be unit tested.
I am thankful for any hint on that.
Best
Edit: The API is the firebrowse API with an example of what the input would be.
This is really challenging, and thus there's no obvious way to do it. The whole idea behind WSDL was to be able to do this easily using a standardized XML description. That was never really implemented in R, and it never really took off more broadly (because of the emergence of RESTful services and JSON).
You can definitely generate functions dynamically by creating a so-called "function factory" (Hadley discussed these a bit here). In short, you write a function that takes JSON as input and returns a function that does whatever is described in the JSON. (Creating such a factory that does this dynamically whenever the package is loaded seems risky, but I suppose it's possible. I'd probably just keep the factory to myself and use it to create and update the package.)
I'm not going to attempt to deal with your API specifically, but to see how this would work:
# create factory with arguments to control the returned function
factory <- function(action, endpoint, content = TRUE, parsed = FALSE) {
  if (content) {
    if (parsed) {
      out <- function() httr::content(httr::VERB(action, endpoint))
    } else {
      out <- function() httr::content(httr::VERB(action, endpoint), "text")
    }
  } else {
    out <- function() httr::VERB(action, endpoint)
  }
  return(out)
}
# use factory to create different functions
(a <- factory("GET", "http://example.com", content = TRUE, parsed = FALSE))
## function() httr::content(httr::VERB(action, endpoint), "text")
(b <- factory("GET", "http://example.com", content = TRUE, parsed = TRUE))
## function() httr::content(httr::VERB(action, endpoint))
(c <- factory("GET", "http://example.com", content = FALSE))
## function() httr::VERB(action, endpoint)
# evaluate each function
a() # returns a character string
b() # returns parsed HTML
c() # returns an httr response object
The best solution would be to create a package that (on load) looks up the API definition and creates the required functions. Ideally these functions should be unit tested.
This is a very well-known problem. Reacting to server changes without breaking the clients is a pain, not just in your situation but also for mobile applications (which need to be resubmitted every time the API changes).
While your approach may work (generating the client on the fly), the best result can be reached if the server collaborates.
You have to decouple the client from the API implementation. How? By using REST (for real), thus introducing the concept of state and transitions.
This is not the right place to explain how it works, but a great introduction can be found in this presentation by Glenn Block, and you can continue reading from there.
This won't solve your particular problem, but it is, in my opinion, the right way to approach the problem.
You may want to have a look at this video as well, from the 15:24 mark.

Angular - building a "public" function (newbie)

After several days of learning AngularJS by converting my standard JS app to an ng one, I was wondering about this simple scenario:
I have a global function called fb_connect(); it can be used from any page (or any controller, if you like) to perform a Facebook-based login.
This function makes a simple HTTP call and receives a JSON object containing the data needed to move on (display a pop-up, log in, etc.).
I read that I can define a factory or a service for my app and use it in any controller, which works fine. So I created a fb_connect factory function.
The problem is that now, in every page (every controller), I have to declare fb_connect in the constructor of every controller - for example:
function welcome($scope, fb_connect) {}
What is the proper way to do this kind of thing in Angular without having to declare these functions each and every time in every controller?
Thanks
Setting up factories and services is all part of Angular's dependency injection system. Using that system is great when you need to create things that depend on other injected things; it's a big tree of dependencies. It's also nice for creating singletons, so that everywhere in your code you end up using the same instance of some object.
It sounds to me like neither of these benefits apply in your case. I'd suggest just not using Angular's DI for it. You have some function defined globally, just call it directly and skip the DI. There's nothing wrong with that.
Of course, you say it makes an Ajax call, so doesn't it depend on the Angular $http service?
Your two options are:
Declare the function on the $rootScope
Inject it as a service
My advice is to go with making it a service. The whole purpose of services is explained in the Angular.js docs, as in this quote:
Angular services are singletons that carry out specific tasks common to web apps... To use an Angular service, you identify it as a dependency for the dependent (a controller, or another service) that depends on the service.
As you mentioned in your question, you'd prefer not to define the service in every controller you wish to use it in. But with $rootScope you'll also be injecting that into every controller. So really it's a question of which you prefer, although to answer your question, the proper way to use a factory or service is to inject it into the controller you wish to use it in.
You can always put it on the $rootScope:
myApp.run(function($rootScope, fb_connect) {
    $rootScope.welcome = function() {
    };
});