const TelegramBot = require('./telegram-bot') // It's currently only local.
const bot = new TelegramBot()
console.log(bot)
This does not print a complete JSON representation of the object. It misses things like constructor, methods, prototype, and super_.
Is there some way, or an npm module, that prints a JSON-compatible dump of the object?
My only workaround so far is console-logging it out like this, repeatedly checking the log, and then printing out another piece. I think it would be a lot easier to have a JSON export and use an online JSON viewer that displays it like a directory tree.
console.log(`
TelegramBot:
> ${Object.getOwnPropertyNames(TelegramBot)}
TelegramBot.prototype:
> ${Object.getOwnPropertyNames(TelegramBot.prototype)}
TelegramBot.prototype.constructor:
> ${Object.getOwnPropertyNames(TelegramBot.prototype.constructor)}
TelegramBot.prototype.constructor.super_:
> ${Object.getOwnPropertyNames(TelegramBot.prototype.constructor.super_)}
`)
I'm aware functions can't be serialized with JSON.stringify(). I don't mind if they appear as a string like "AnonymousFunction()" or "FunctionWithAName()", or something along those lines.
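For what it's worth, here is a minimal sketch of that idea: walk an object's own property names plus its prototype chain, and render functions as "name()" strings so the whole tree survives JSON.stringify. toJsonTree is a made-up helper name, and the depth cap is only there to keep cyclic structures from running away.

// Hedged sketch: own properties plus the prototype chain, functions as strings.
function toJsonTree(obj, depth = 0) {
  if (obj === null || typeof obj !== 'object') return obj;
  if (depth > 4) return '<max depth>';
  const out = {};
  for (const key of Object.getOwnPropertyNames(obj)) {
    let value;
    try { value = obj[key]; } catch (e) { out[key] = `<threw ${e.name}>`; continue; }
    if (typeof value === 'function') {
      out[key] = `${value.name || 'AnonymousFunction'}()`;
    } else {
      out[key] = toJsonTree(value, depth + 1);
    }
  }
  const proto = Object.getPrototypeOf(obj);
  if (proto) out['(prototype)'] = toJsonTree(proto, depth + 1);
  return out;
}

console.log(JSON.stringify(toJsonTree(bot), null, 2));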
I'm doing this because I'm having another go at learning prototypes, and I've used util.inherits(TelegramBot, EventEmitter) in the TelegramBot object.
To avoid name clashes between the TelegramBot methods I've written and the names in the EventEmitter super class, I'd like to keep a clear view of the whole object structure. Or do I not have to worry, since they use this variable-shadowing thing? If I'm correct, lookup checks the object's instance first, then its prototype. I'm not sure whether EventEmitter's prototype or TelegramBot's is checked first.
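The prototype chain itself can be inspected directly. Assuming the util.inherits wiring described above, the chain runs instance, then TelegramBot.prototype, then EventEmitter.prototype:

const EventEmitter = require('events');

// util.inherits points TelegramBot.prototype's internal [[Prototype]]
// at EventEmitter.prototype, so TelegramBot's own methods shadow
// EventEmitter methods of the same name.
console.log(Object.getPrototypeOf(bot) === TelegramBot.prototype);                    // true
console.log(Object.getPrototypeOf(TelegramBot.prototype) === EventEmitter.prototype); // true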
I am trying to find a clean way to access the regmap that is used with *RegisterNode for creating documentation and testing files. TLRegisterNode has methods for generating the JSON through some Annotations; these are added in the regmap method via the ElaborationArtefacts object. Other protocols don't seem to have these annotations.
Is there any way to iterate over the "regmap" Register Fields during or after elaboration?
I cannot just access the regmap, as it's not really a val/var; it's a method. I can't quite figure out where this information is being stored. I don't really believe it's actually "storing" any information so much as simply creating the hardware to attach the specified logic to the RegisterNode-based logic.
The JSON output is actually fine for me, as I could just write a post-processing script to convert JSON to my required formats, but I'm wondering if I can access this information, OR if I could add a custom function call at the end. I cannot extend the case class *RegisterNode, but I'm not sure whether it's possible to add custom functions to run at the end of the regmap method.
Here is something I threw together quickly:
// in *RegisterRouter.scala
def customregmap(customFunc: Seq[RegField.Map] => Unit, mapping: RegField.Map*) = {
  regmap(mapping: _*)
  customFunc(mapping)
}

def regmap(mapping: RegField.Map*) = {
  // normal stuff
}
A user could then create a custom function to run and pass it to the regmap or to the RegisterRouter:
def myFunc(mapping: Seq[RegField.Map]): Unit = {
  println("I'm doing my custom function for regmap!")
}

// ...

node.customregmap(myFunc,
  0x0 -> coreControlRegFields,
  0x4 -> fdControlRegFields,
  0x8 -> fdControl2RegFields
)
This is just a quick example. I believe what would be better, if something like this were possible, would be to have a Seq of functions on the RegisterNode that are run at the end of the regmap method, similar to how TLRegisterNode currently works. A user could then add an arbitrary number of them and still use the plain regmap call. A rough sketch of that idea follows.
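To illustrate (names like regmapHooks and addRegmapHook are hypothetical, not rocket-chip API):

// Hypothetical sketch: user-registered callbacks run after regmap elaborates.
import scala.collection.mutable.ArrayBuffer

val regmapHooks = ArrayBuffer.empty[Seq[RegField.Map] => Unit]

def addRegmapHook(f: Seq[RegField.Map] => Unit): Unit = regmapHooks += f

def regmap(mapping: RegField.Map*) = {
  // ... normal elaboration stuff ...
  regmapHooks.foreach(hook => hook(mapping))
}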
Background (not directly part of question):
I have a unified register script that I have built over the years, in which I describe the registers for a particular IP. It works very similarly to RegField/node.regmap, except it obviously doesn't know about diplomacy and the like. It generates the Verilog, but also a variety of files for DV (basic `defines for simple Verilog simulations, and more complex uvm_reg_block defines, with the ability to describe multiples of the IP for a subsystem, all the way up to the SoC level). It also prints out C header files for SW and Sphinx reStructuredText for documentation.
Diplomacy actually solves one of the main issues I've been dealing with, so I'm obviously trying to push most of my newer designs to Chisel/Diplomacy.
I ended up solving this by creating my own RegisterNode, which is the same as the rocket-chip RegisterNodes except that I use a different ElaborationArtefact to grab the info and store it for later.
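Roughly, the custom node's regmap can emit its own artefact. A hedged sketch (the artefact name and the serialization are placeholders; ElaborationArtefacts is the rocket-chip utility mentioned above, and a RegField.Map is a (base offset, Seq[RegField]) pair):

import freechips.rocketchip.util.ElaborationArtefacts

def regmap(mapping: RegField.Map*) = {
  // ... same elaboration as the stock RegisterNode ...

  // Stash the register info in a custom artefact for later post-processing.
  ElaborationArtefacts.add(
    "myblock.regmap.json", // placeholder file name
    mapping.map { case (offset, fields) =>
      s"""{"offset": $offset, "numFields": ${fields.length}}"""
    }.mkString("[", ",", "]")
  )
}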
In ASP.NET Core, the JsonConfigurationProvider will load configuration from appsettings.json, and then will read in the environment version, appsettings.{Environment}.json, based on what IHostingEnvironment.EnvironmentName is. The environment version can override the values of the base appsettings.json.
Is there any reasonable way to preview what the resulting overridden configuration looks like?
Obviously, you could write unit tests that explicitly check that elements are overridden to your expectations, but that would be a very laborious workaround, with upkeep required every time you change a setting. It's not a good solution if you just want to validate that you didn't misplace a bracket or misspell an element name.
Back in ASP.NET's web.config transforms, you could simply right-click on a transform in Visual Studio and choose "Preview Transform". There are also many other ways to preview an XSLT transform outside of Visual Studio. Even for web.config parameterization with Parameters.xml, you could at least execute Web Deploy and review the resulting web.config to make sure it came out right.
There does not seem to be any built-in way to preview appsettings.{Environment}.json's effects on the base file in Visual Studio. I haven't been able to find anything outside of VS to help with this either. JSON overriding doesn't appear to be all that commonplace, even though it is now an integral part of ASP.NET Core.
I've figured out you can achieve a preview with Json.NET's Merge function after loading the appsettings files into JObjects.
Here's a simple console app demonstrating this. Provide it the path to where your appsettings files are and it will emit previews of how they'll look in each environment.
using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

static void Main(string[] args)
{
    string targetPath = @"C:\path\to\my\app";

    // Parse appsettings.json
    var baseConfig = ParseAppSettings($@"{targetPath}\appsettings.json");

    // Find all appsettings.{env}.json's
    var regex = new Regex(@"appsettings\..+\.json");
    var environmentConfigs = Directory.GetFiles(targetPath, "*.json")
        .Where(path => regex.IsMatch(path));

    foreach (var env in environmentConfigs)
    {
        // Parse appsettings.{env}.json
        var transform = ParseAppSettings(env);

        // Clone baseConfig since Merge mutates the target in place
        var result = (JObject)baseConfig.DeepClone();

        // Merge the two, making sure to overwrite arrays
        result.Merge(transform, new JsonMergeSettings
        {
            MergeArrayHandling = MergeArrayHandling.Replace
        });

        // Write the preview to file
        string dest = $@"{targetPath}\preview-{Path.GetFileName(env)}";
        File.WriteAllText(dest, result.ToString());
    }
}

private static JObject ParseAppSettings(string path)
    => JObject.Load(new JsonTextReader(new StreamReader(path)));
While this is no guarantee that some other config source won't override these once deployed, it will at least let you validate that the interactions between these two files are handled correctly.
There's not really a way to do that, but I think a bit of background on how this actually works will help you understand why.
With config transforms, there was literal file modification, so it's easy enough to "preview" that, showing the resulting file. The config system in ASP.NET Core is completely different.
It's basically just a dictionary. During startup, each registered configuration provider is run in the order it was registered. The provider reads its configuration source, whether that be a JSON file, system environment variables, command line arguments, etc. and builds key-value pairs, which are then added to the main configuration "dictionary". An "override", such as appsettings.{environment}.json, is really just another JSON provider registered after the appsettings.json provider, which obviously uses a different source (JSON file). Since it's registered after, when an existing key is encountered, its value is overwritten, as is typical for anything being added to a dictionary.
In other words, the "preview" would be the completed configuration object (dictionary), which is composed of a number of different sources, not just these JSON files. Things like environment variables or command line arguments will override even the environment-specific JSON (since they're registered after it), so you still wouldn't technically know whether the environment-specific JSON applied or not, because the value could be coming from another source that overrode it.
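To make the ordering concrete, this is roughly what the default host wires up (standard Microsoft.Extensions.Configuration calls; env and args are assumed to be in scope):

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false)
    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
    .AddEnvironmentVariables() // registered later, so it overrides both JSON files
    .AddCommandLine(args)      // registered last, so it overrides everything
    .Build();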
You can use the GetDebugView extension method on the IConfigurationRoot, with something like:
app.UseEndpoints(endpoints =>
{
    if (env.IsDevelopment())
    {
        endpoints.MapGet("/config", ctx =>
        {
            var config = (Configuration as IConfigurationRoot).GetDebugView();
            return ctx.Response.WriteAsync(config);
        });
    }
});
However, doing this can pose security risks, as it exposes all your configuration (including connection strings), so you should enable it only in development.
You can refer to this article by Andrew Lock to understand how it works: https://andrewlock.net/debugging-configuration-values-in-aspnetcore/
I'm experiencing a discrepancy between the first compilation of a Grails app and the compilation that happens when a file changes while the app is running.
Background:
My app creates some spring beans from Spring LDAP (docs) using conf/spring/resources.groovy.
I have an LdapUser.groovy class in src/groovy (I'm using it similarly to a domain class, except it isn't in grails-app/domain as it doesn't map to a database table).
In BootStrap.groovy I register a JSON marshaller for LdapUser (using JSON.registerObjectMarshaller; a sketch follows this list).
I have a controller with an index method that responds a list of LdapUser objects. This renders correctly in JSON (according to the marshaller).
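For reference, the registration looks roughly like this (the uid and cn properties are hypothetical stand-ins for LdapUser's actual fields):

// BootStrap.groovy
import grails.converters.JSON

class BootStrap {
    def init = { servletContext ->
        // Render LdapUser as a plain map of its properties
        JSON.registerObjectMarshaller(LdapUser) { LdapUser user ->
            [uid: user.uid, cn: user.cn]
        }
    }
}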
With that background, here are the pieces of the problem:
When the show method, which responds a single LdapUser, gets called, I get an exception that LdapUser cannot be converted to grails.converters.JSON. (fair enough)
But, if I save the LdapUser.groovy file, thus invoking a recompile on the file while the app is running, the JSON marshaller suddenly works fine.
Before saving the LdapUser.groovy, my controller has a reference to an LdapUserRepo (a class instantiated via an @EnableLdapRepositories annotation on the controller), but this reference becomes null after I save LdapUser.groovy. I'm not sure how this relates to the problem, as I was also able to reproduce the problem in a controller lacking an injected LdapUserRepo (but with the annotated controllers still in the app).
I also at one point was setting an asType method on the LdapUser class, which was called as expected before the save-invoked recompile. After the recompile, however, my asType method was no longer called and the JSON marshaller was taking over. (I was doing exception-worthy things in the asType that were throwing before the recompile and not throwing after...)
My understanding of the problem is therefore:
Somehow the asType method of the LdapUser.groovy class is not being automatically generated on first compile when running the app, but is being generated on subsequent compiles.
The LdapUser class is tied to the LdapUserRepo in more ways than merely being a type the Repo uses, and the recompile is not reflecting that connection correctly.
Methods rendering lists of objects are somehow unaffected by the asType method. This leads me to believe that the JSON marshaller gets called directly on list elements (instead of via asType) when the list's asType has been called (whether or not the "as" operation is implicit...).
My question then is:
What is the Grails compiler doing differently on run-app versus recompiling while the app is running that could cause this behavior?
How can I restructure things to ensure it works properly out of the box?
If I need to RTFM, which FM section should I read? (My google-fu is sadly quite weak.)
Note: this question is vaguely similar, but its answer isn't meaningful here:
Grails: Defining a JSON custom marshaller as static method in domain
I have a project in Apps Script that uses several libraries. The project needed a more complex logger (logging levels, color coding), so I wrote one that outputs to Google Docs. All is fine and dandy if I immediately print the output to the Google Doc, having imported the logger into each of the libraries separately. However, I noticed that a lot of logging takes much longer than none at all, so I am looking for a way to write all of the output in a single go at the end, when the main script finishes.
This would require either:
Being able to define the logging library once (in the main file) and somehow accessing it from the attached libs. I can't seem to find a way to get the main project's closure from within the libraries, though.
Some sort of singleton logger object. Not sure if this is possible from within a library; I have trouble figuring it out either way.
Extending the built-in Logger to suit my needs. Not sure, though...
My project looks as follows:
Main Project
Library 1
Library 2
Library 3
Library 4
This is how I use my current logger:
var logger = new BetterLogger(/* logging level */);
logger.warn('this is a warning');
Thanks!
Instead of writing to the file at each logged message (which is the source of your slowdown), you could write your log messages to the logger library's ScriptDB instance and add a .write() method to your logger that outputs the messages in one go. Your logger constructor can take a messageGroup parameter, which can serve as a unique identifier for the lines you would like to write. This would also allow you to use different files for logging output.
As you build your messages into proper output to write to the file (don't write each line individually; batch operations are your friend), you might want to remove each message from ScriptDB. However, it might also be a nice place to pull back old logs.
Your message object might look something like this:
{
  message: "My message",
  color: "red",
  messageGroup: "groupName",
  level: 25,
  timeStamp: new Date().getTime(), // ScriptDB won't take Date objects natively
  loggingFile: "Document Key"
}
The query would look like:
var db = ScriptDb.getMyDb();
var results = db.query({messageGroup: "groupName"}).sortBy("timeStamp",db.NUMERIC);
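Putting that together, a hedged sketch of the batched .write() (this.messageGroup and this.loggingFile are assumed to be set by the BetterLogger constructor; the field names match the message object above):

// Flush all queued messages for this group in one batch.
BetterLogger.prototype.write = function() {
  var db = ScriptDb.getMyDb();
  var results = db.query({messageGroup: this.messageGroup})
                  .sortBy("timeStamp", db.NUMERIC);
  var lines = [];
  while (results.hasNext()) {
    var item = results.next();
    lines.push(item.message);
    db.remove(item); // drop the message once it has been written out
  }
  // One document write instead of one per logged message.
  DocumentApp.openById(this.loggingFile)
             .getBody()
             .appendParagraph(lines.join('\n'));
};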
I'm using Flash Builder to create some ActionScript code that uses SharedObjects.
First question: how can I delete my local SharedObject in Flash Builder? I am debugging my program and the SharedObject seems to persist between runs. I want to start fresh and clean, with no SharedObject storing my data. How do I get rid of it?
Also, in my SharedObject, I used mySharedObject.data["mykey"] to store a Dictionary. This Dictionary has String keys and values of type MyCustomClass. The problem is that when I later try to loop over the values of this Dictionary, I get error #1034: cannot convert Object to type MyCustomClass. It seems like I can put an item of type MyCustomClass into this Dictionary, but I can't get the item back out as anything other than an Object.
Any idea what is going wrong?
Those are essentially two questions, so they should have been asked as two questions. Anyway, I'll answer them here, but I'd still prefer that you break them up into two parts (possibly leaving a link to the other one here for reference's sake):
Local shared objects are useful exactly for persistence across runs. And then there's SharedObject.clear() to clear the state as required.
For your second issue: SharedObjects serialize your object into AMF so that it can be written to disk or sent over the network using RTMP. Now, your custom class can't really be serialized in AMF. What actually happens is that the public properties (and dynamic ones, if the class is declared dynamic) are serialized into the structure. So the public data is stored... but it's essentially a generic Object.
To work around that, you can have a public static readFrom(object:Object):MyCustomClass type function, which reads the properties from the passed object to construct a new MyCustomClass representing that information.
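A sketch of that helper (the name and count properties are made up; copy whichever public fields your class actually has):

public class MyCustomClass {
    public var name:String;
    public var count:int;

    // Rebuild a typed instance from the plain Object read out of the SharedObject.
    public static function readFrom(object:Object):MyCustomClass {
        var result:MyCustomClass = new MyCustomClass();
        result.name = object.name;
        result.count = object.count;
        return result;
    }
}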
There are ways to register your class with the player so it can be stored in a SharedObject (see here)... but you need to make sure that the code that deserializes the data is aware of the class as well.
To make a class available for conversion, call registerClassAlias() during your global initialization, with the fully qualified name and MyCustomClass as parameters. The manual. Say your custom class is foo.bar.TheClass; you write:
registerClassAlias('foo.bar.TheClass',foo.bar.TheClass);
To drop the old SO, use the delete operator against so.data["mykey"] and call so.flush(). Edit: SharedObject.clear() is way better.
1/ Persistence is one of the defining characteristics of a SharedObject. To clean up all of its content, you need to call the clear() method:
var shareObject:SharedObject = SharedObject.getLocal('justatest');
shareObject.data.test = 'test';
trace(shareObject.data.test);
shareObject.clear();
trace(shareObject.data.test);
output
test
undefined
2/ To store complex data types in a SO, you need to use flash.net.registerClassAlias (example here).
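A minimal round trip, assuming the alias from the earlier example:

import flash.net.registerClassAlias;
import flash.net.SharedObject;

// Register the alias before any read or write of the shared object.
registerClassAlias('foo.bar.TheClass', foo.bar.TheClass);

var so:SharedObject = SharedObject.getLocal('justatest');
so.data.stored = new foo.bar.TheClass();
so.flush();
// On a later run (with the alias registered first), so.data.stored
// deserializes as foo.bar.TheClass instead of a plain Object.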