Please correct me where I'm wrong (I'm still learning Gulp, streams, etc.). I'd like to create a custom reporter for my gulp-jscs results. For example, let's say I have 3 files in my gulp.src() stream. To my knowledge, each is piped one at a time through jscs, which attaches a .jscs object onto the file with its results; one variable in that object is .errorCount.
What I'd like to do is have a variable I create, e.g. maxErrors, which I set to, say, 5. Since we're processing 3 files, let's say the first file passes with 0 errors, but the next has 3 errors. I don't want to stop processing prematurely, since the maxErrors tally has not been reached (3/5 at that point), so processing should continue to the next file, which let's say also has 3 errors, putting us over our max. At that point I want to interrupt jscs from processing more files and fail out instead, then let my custom reporter function access the files that have been processed so I can look at their .jscs objects and customize some output.
My problem here is that I don't understand the docs when they say .pipe(jscs.reporter('name-of-reporter')). How does a string value invoke my reporter (which currently exists as a function I've imported, called libs.reporters.myJSCSReporter)? I know pipe() expects stream objects, so I can't just put a function in the .pipe() call.
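For reference, here's roughly what I'm imagining, written as an object-mode stream using the through2 package (a rough sketch; maxErrorsReporter is my own name, and I'm only relying on the file.jscs.errorCount property described above):

var through = require('through2');

function maxErrorsReporter(maxErrors) {
  var totalErrors = 0;
  return through.obj(function (file, enc, cb) {
    if (file.jscs) {
      totalErrors += file.jscs.errorCount;
      // per-file custom output could be built here from file.jscs
    }
    if (totalErrors > maxErrors) {
      // emitting an error should stop the stream
      return cb(new Error('More than ' + maxErrors + ' JSCS errors, aborting'));
    }
    cb(null, file);
  });
}

// intended usage:
// gulp.src('src/**/*.js')
//   .pipe(jscs())
//   .pipe(maxErrorsReporter(5));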
I hope I've explained myself well enough (please ask for clarifications otherwise).
With stable-baselines3, given an agent, we can call "action = agent.predict(obs)". And then with Gym, this would be "new_obs, reward, done, info = env.step(action)" (more or less; maybe I missed an input or an output).
We also have "agent.learn(10_000)" as an example, yet here we're less involved in the process and don't call the environment ourselves.
I'm looking for a way to train the agent while still calling "env.step". If you wonder why: I'm trying to implement self-play (the agent and a previous version of it) playing in one environment (for example, turn-based play such as chess).
WKR, Oren.
But why do you need it? If you take a look at the implementation of any learn method, you will see it is nothing more than an iteration over time steps calling collect_rollouts and train, with some additional logging and setup at the beginning for, e.g., later saving the agent. Your env.step is called inside collect_rollouts.
I'd suggest instead writing a callback based on CheckpointCallback, which saves your agent (model) every N training steps, and attaching this callback to your learn call. In your environment you could then, every N steps, instantiate a "new previous" version of your model by calling ModelClass.load(file) on the file saved by the callback, so that you can finally select the other player's actions via self-play in your environment.
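A minimal sketch of that setup, assuming PPO and a hypothetical SelfPlayEnv environment class (names, paths, and frequencies are illustrative):

from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback

# save the model every N steps under ./checkpoints/
checkpoint_callback = CheckpointCallback(
    save_freq=10_000,
    save_path="./checkpoints/",
    name_prefix="selfplay",
)

env = SelfPlayEnv()  # your self-play environment (hypothetical name)
model = PPO("MlpPolicy", env)
model.learn(total_timesteps=100_000, callback=checkpoint_callback)

# Inside SelfPlayEnv.step() you could periodically reload the opponent
# from the latest checkpoint and use it for the other player's moves:
#   self.opponent = PPO.load("./checkpoints/selfplay_10000_steps")
#   opponent_action, _ = self.opponent.predict(obs, deterministic=True)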
I have used many Discord API wrappers, but even as an experienced Python developer, I somehow still do not understand how a command gets called!
@client.command()
async def demo(ctx):
    channel = ctx.channel
    await channel.send(f'Demonstration')
Above, a command (function) has been created, placed after its decorator @client.command().
To my understanding, the decorator is, in a way, a "check" performed before running the function (demo), but I do not understand how the discord.py library seemingly "calls" the demo function. Is there some form of short/long polling system in the imported discord.py library which polls the Discord API, receives a list of jobs/messages, and checks these against the functions the user has created?
I would love to know how this works, as I don't understand what "calls" the functions that the user makes; this would allow me to make my own wrapper for another similar social media platform! Many thanks in advance.
I am trying to work out how functions created by the user are seemingly "called" by the discord.py library. I have worked with the discord.py wrapper and other API wrappers before.
(See source code attached at the bottom of the answer)
The @bot.command() decorator adds a command to the internal lists/mappings of commands stored in the Bot instance.
Whenever a message is received (messages arrive as events over Discord's websocket gateway; the library does not poll), it runs through Bot.process_commands. The bot can then look through every stored command to check whether the message starts with one of them (the prefix is checked beforehand). If it finds a match, it can invoke it (the underlying callback is stored in the Command instance).
If you've ever overridden an on_message event and your commands stopped working, then this is why: that method is no longer being called, so it no longer tries to look through your commands to find a match.
This uses a dictionary to make it far more efficient: instead of having to iterate over every single command and alias available, it only has to check whether the first word of the message matches anything at all.
The commands.Command() decorator used in Cogs works slightly differently. This turns your function into a Command instance, and when adding a cog (using Bot.add_cog()) the library checks every attribute to see if any of them are Command instances.
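To make the flow concrete, here is a heavily simplified sketch of the registration-and-dispatch mechanism described above (illustrative only; the real discord.py implementation is linked below):

# name -> callback mapping, playing the role of the Bot's command storage
commands = {}

def command(name):
    def decorator(func):
        commands[name] = func  # registration happens when the module is imported
        return func
    return decorator

@command("demo")
async def demo(ctx):
    await ctx.send("Demonstration")

async def process_commands(message):
    # discord.py runs this for every received message (via on_message)
    prefix = "!"
    if not message.content.startswith(prefix):
        return
    parts = message.content[len(prefix):].split()
    if parts and parts[0] in commands:
        await commands[parts[0]](message)  # invoke the stored callback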
References to source code
GroupMixin.command() (called when you use @client.command()): https://github.com/Rapptz/discord.py/blob/24bdb44d54686448a336ea6d72b1bf8600ef7220/discord/ext/commands/core.py#L1493
As you can see, it calls add_command() internally to add it to the list of commands.
Adding commands (GroupMixin.add_command()): https://github.com/Rapptz/discord.py/blob/24bdb44d54686448a336ea6d72b1bf8600ef7220/discord/ext/commands/core.py#L1315
Bot.process_commands(): https://github.com/Rapptz/discord.py/blob/master/discord/ext/commands/bot.py#L1360
You'll have to follow the chain - most of the processing actually happens in get_context which tries to create a Context instance out of the message: https://github.com/Rapptz/discord.py/blob/24bdb44d54686448a336ea6d72b1bf8600ef7220/discord/ext/commands/bot.py#L1231
commands.Command(): https://github.com/Rapptz/discord.py/blob/master/discord/ext/commands/core.py#L1745
I'm trying to deploy an app to production and getting a little confused by environment and application variables and what is happening at compile time vs runtime.
In my app, I have a genserver process that requires a token to operate. So I use config/releases.exs to set the token variable at runtime:
# config/releases.exs
import Config
config :my_app, :my_token, System.fetch_env!("MY_TOKEN")
Then I have a bit of code that looks like this:
defmodule MyApp.SomeService do
  use SomeBehaviour, token: Application.get_env(:my_app, :my_token),
                     other_config: :stuff
  ...
end
In production, the GenServer process (which does some HTTP stuff) gives me 403 errors, suggesting the token isn't there. So, can I clarify: is the use keyword evaluated at compile time (in which case the application environment doesn't exist yet)?
If so, what is the correct way of getting runtime environment variables into a service like this? Is it more correct to define the config in application.ex when starting the process? E.g.
children = [
  {MyApp.SomeService, [
    token: Application.get_env(:my_app, :my_token),
    other_config: :stuff
  ]}
  ...
]

Supervisor.start_link(children, opts)
I may have answered my own question here, but it would be helpful to have someone who knows what they're doing confirm this and point me the right way. Thanks.
Elixir has two stages: compilation and runtime, both handled by Elixir itself. To clearly understand what happens when, one should realize that nearly everything is a macro and that Elixir, during the compilation stage, expands these macros until everything is expanded. The resulting AST is what reaches runtime.
In your example, use SomeBehaviour, foo: :bar implicitly calls the SomeBehaviour.__using__/1 macro. To expand the AST, the argument (a keyword list) must be expanded as well. Hence, the Application.get_env(:my_app, :my_token) call happens at compile time.
There are many possibilities for moving it to runtime. If you are the owner of SomeBehaviour, make it accept the pair {:my_app, :my_token} and call Application.get_env/2 from somewhere inside it at runtime.
Or, as you suggested, pass it as a parameter to children; this code belongs to a function body, meaning it won't be expanded during the compilation stage but will instead be passed as AST to the resulting BEAM, to be executed at runtime.
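For illustration, a minimal sketch of reading the token at runtime inside init/1 (this assumes a plain GenServer rather than your SomeBehaviour, whose internals I don't know):

defmodule MyApp.SomeService do
  use GenServer

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(opts) do
    # Application.get_env/2 runs here at runtime, when the supervisor
    # starts the process, not while the module is being compiled.
    token = Application.get_env(:my_app, :my_token)
    {:ok, opts |> Map.new() |> Map.put(:token, token)}
  end
end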
I want to create a Lambda function that runs through S3 files and if needed triggers other Lambda functions to parse the files in parallel.
Is this possible?
Yes, it's possible. You would use the AWS SDK (which is included in the Lambda runtime environment for you) to invoke other Lambda functions, just as you would in code running anywhere else.
You'll have to specify which language you are writing the Lambda function in if you want a more detailed answer.
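For illustration, here is a minimal sketch in Python with boto3 (the bucket name, function name, and the "should parse" condition are all hypothetical):

import json
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

def handler(event, context):
    resp = s3.list_objects_v2(Bucket="my-bucket")  # hypothetical bucket
    for obj in resp.get("Contents", []):
        if obj["Key"].endswith(".csv"):  # example condition for parsing
            lambda_client.invoke(
                FunctionName="file-parser",   # hypothetical parser Lambda
                InvocationType="Event",       # asynchronous invocation
                Payload=json.dumps({"key": obj["Key"]}),
            )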
If I understand your problem correctly, you want one Lambda that goes through a list of files in an S3 bucket. Some condition decides whether a file should be parsed or not. For the files that should be parsed, you want another 'file-parsing' Lambda to parse those files.
To do this you will need two lambdas - one 'S3 reader' and one 'S3 file parser'.
For triggering the 'S3 file parser' Lambda you have a few different options. Here are two:
Trigger it using an SNS topic. (Here is an article on how to do that.) If you have a very long list of files this might be an issue, as you will most likely exceed the number of Lambda instances that can run in parallel.
Trigger it by invoking it with the AWS SDK. (See the article 'Leon' posted as a comment to see how to do that.) What you need to consider here is that a long list of files might cause the 'S3 reader' Lambda that controls the invocations to time out, since there is a 5 min runtime limit for a Lambda.
Depending on the actual use case, another potential solution is to have just one Lambda that gets triggered when a file is uploaded to the S3 bucket and let it decide whether the file should be parsed, then parse it if needed. More info about how to do that can be found in this article and this tutorial.
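A rough sketch of that single-Lambda variant in Python (the event shape is the standard S3 notification payload; parse is a hypothetical helper):

def parse(key):
    # hypothetical parsing logic for one file
    print("parsing", key)

def handler(event, context):
    # invoked by the S3 ObjectCreated notification configured on the bucket
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        if key.endswith(".csv"):  # example "should parse" condition
            parse(key)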
I have a project in Apps Script that uses several libraries. The project needed a more complex logger (logging levels, color coding), so I wrote one that outputs to Google Docs. All is fine and dandy if I immediately print the output to the Google Doc, importing the logger in each of the libraries separately. However, I noticed that doing a lot of logging takes much longer than without, so I am looking for a way to write all of the output in a single go at the end, when the main script finishes.
This would require either:
Being able to define the logging library once (in the main file) and somehow accessing it in the attached libs. I can't seem to find a way to get the main project's closure from within the libraries, though.
Some sort of singleton logger object. Not sure if this is possible from within a library; I have trouble figuring it out either way.
Extending the built-in Logger to suit my needs; not sure about that one either...
My project looks as follows:
Main Project
Library 1
Library 2
Library 3
Library 4
This is how I use my current logger:
var logger = new BetterLogger(/* logging level */);
logger.warn('this is a warning');
Thanks!
Instead of writing to the file at each logged message (which is the source of your slowdown), you could write your log messages to the logger library's ScriptDb instance and add a .write() method to your logger that outputs the messages in one go. Your logger constructor can take a messageGroup parameter, which can serve as a unique identifier for the lines you would like to write. This would also allow you to use different files for logging output.
As you build your messages into proper output to write to the file (don't write each line individually; batch operations are your friend), you might want to remove the messages from ScriptDb. However, it might also be a nice place to pull back old logs from.
Your message object might look something like this:
{
  message: "My message",
  color: "red",
  messageGroup: "groupName",
  level: 25,
  timeStamp: new Date().getTime(), // ScriptDb won't take Date objects natively
  loggingFile: "Document Key"
}
The query would look like:
var db = ScriptDb.getMyDb();
var results = db.query({messageGroup: "groupName"}).sortBy("timeStamp",db.NUMERIC);
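Putting it together, a rough sketch of what the suggested logger could look like (method and property names are illustrative; ScriptDb and DocumentApp calls are the standard services):

function BetterLogger(level, messageGroup, loggingFile) {
  this.level = level;
  this.messageGroup = messageGroup;
  this.loggingFile = loggingFile;
  this.db = ScriptDb.getMyDb();
}

BetterLogger.prototype.warn = function (message) {
  // store the message instead of writing to the document immediately
  this.db.save({
    message: message,
    level: 25,
    messageGroup: this.messageGroup,
    timeStamp: new Date().getTime(),
    loggingFile: this.loggingFile
  });
};

BetterLogger.prototype.write = function () {
  // pull back everything for this group and write it in one batch
  var results = this.db.query({messageGroup: this.messageGroup})
                       .sortBy('timeStamp', this.db.NUMERIC);
  var lines = [];
  while (results.hasNext()) {
    var item = results.next();
    lines.push(item.message);
    this.db.remove(item); // optional cleanup, as noted above
  }
  DocumentApp.openById(this.loggingFile)
             .getBody()
             .appendParagraph(lines.join('\n'));
};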