Are the chisel3.util.Queue `full` and `maybe_full` signals inaccessible outside the queue module?

I tried accessing `maybe_full` and `full` directly, but the error is "'Bool (Reg in Queue)' is not visible from the current module", so I assume these signals are internal to Queue? Will I need to customize the Queue util if I want to expose them as outputs of my module?
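Those registers are indeed private to the Queue module, but for the default Queue (pipe = false, flow = false) equivalent status signals can be derived from the public handshake ports, so no customization is needed. A minimal sketch, assuming a wrapper around a standard chisel3.util.Queue:

import chisel3._
import chisel3.util._

class QueueWithStatus extends Module {
  val io = IO(new Bundle {
    val enq   = Flipped(Decoupled(UInt(8.W)))
    val deq   = Decoupled(UInt(8.W))
    val full  = Output(Bool())
    val empty = Output(Bool())
  })
  val q = Module(new Queue(UInt(8.W), entries = 4))
  q.io.enq <> io.enq
  io.deq <> q.io.deq
  // Derived from the public ready/valid handshake; the internal
  // full/maybe_full registers themselves stay inaccessible.
  // q.io.count is also available if you need the occupancy.
  io.full  := !q.io.enq.ready
  io.empty := !q.io.deq.valid
}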

How can I externalize IScheduledExecutorService to run tasks in an external Hazelcast cluster (Hazelcast 5.2) without using UserCodeDeployment?

I am working on externalizing our IScheduledExecutorService so I can run tasks on an external cluster. I am able to write a test and get the Runnable to actually run ONLY if I turn on user code deployment. If I change this task at all and run the tests again, I get the below in my external cluster member's logs:
java.lang.IllegalStateException: Class com.mycompany.task.ScheduledTask is already in local cache and has conflicting byte code representation
I want to be able to change the task and redeploy, and have Hazelcast just handle it. I do this kind of thing with our external maps now: they can handle different versions of our objects using compact serialization.
Am I stuck using user code deployment for these functional objects? If I need to make a change, I have to change the class name and redeploy to production. I'm hoping to get this task right the first time and never have to do that, but I have a way of handling it if I do.
The cluster is already running in production and I'll have to add the following to each member
HZ_USERCODEDEPLOYMENT_ENABLED=true
and the appropriate client code (listed below) to enable this.
What I've done...
Added the following to my local Docker file:
HZ_USERCODEDEPLOYMENT_ENABLED=true
and also added the following to the code that creates a Hazelcast client connecting to my external cluster:
ClientConfig clientConfig = new ClientConfig();
ClientUserCodeDeploymentConfig clientUserCodeDeploymentConfig = new ClientUserCodeDeploymentConfig();
clientUserCodeDeploymentConfig.addClass("com.mycompany.task.ScheduledTask");
clientUserCodeDeploymentConfig.setEnabled(true);
clientConfig.setUserCodeDeploymentConfig(clientUserCodeDeploymentConfig);
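For context, the task is then submitted through the scheduled executor, roughly like this (a sketch; clientConfig and ScheduledTask are the objects described above, and the executor name is the one appearing in the log further down):

import java.util.concurrent.TimeUnit;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.scheduledexecutor.IScheduledExecutorService;

// Connect with the user-code-deployment-enabled config and schedule the task
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
IScheduledExecutorService scheduler =
        client.getScheduledExecutorService("myScheduledExecutorService");
scheduler.schedule(new ScheduledTask(), 5, TimeUnit.SECONDS);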
However, if I remove those two pieces, I get the following exception and a failing test; the cluster doesn't know about my class at all.
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.mycompany.task.ScheduledTask
Side Note:
We are already using compact serialization for several maps, and when I try to configure this Runnable task via compact serialization I get the below error. I don't think that's the right approach either.
[Scheduler: myScheduledExecutorService][Partition: 121][Task: 7afe68d5-3185-475f-b375-5a82a7088de3] Exception occurred during run
java.lang.ClassCastException: class com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord cannot be cast to class java.lang.Runnable (com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord is in unnamed module of loader 'app'; java.lang.Runnable is in module java.base of loader 'bootstrap')
at com.hazelcast.scheduledexecutor.impl.ScheduledRunnableAdapter.call(ScheduledRunnableAdapter.java:49) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.scheduledexecutor.impl.TaskRunner.call(TaskRunner.java:78) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.internal.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:64) ~[hazelcast-5.2.0.jar:5.2.0]

Discord.py assistance - How does the library forcibly call an asynchronous command function?

I have used many Discord API wrappers, but even as an experienced Python developer, I somehow still do not understand how a command gets called!
@client.command()
async def demo(ctx):
    channel = ctx.channel
    await channel.send('Demonstration')
Above, a command (a function) has been created and placed under its decorator @client.command().
To my understanding, the decorator is, in a way, a "check" performed before running the function (demo), but I do not understand how the discord.py library seemingly "calls" the demo function. Is there some form of short/long polling system in the imported discord.py library which polls the Discord API, receives a list of jobs/messages, and checks these against the functions the user has created?
I would love to know how this works, as I don't understand what "calls" the functions that the user makes; understanding this would allow me to make my own wrapper for another, similar social media platform! Many thanks in advance.
I am trying to work out how functions created by the user are seemingly "called" by the discord.py library. I have worked with the discord.py wrapper and other API wrappers before.
(See source code attached at the bottom of the answer)
The @bot.command() decorator adds a command to the internal lists/mappings of commands stored in the Bot instance.
Whenever a message is received, it runs through Bot.process_commands, which can then look through every stored command to check whether the message starts with one of them (the prefix is checked beforehand). If it finds a match, it invokes it (the underlying callback is stored in the Command instance).
If you've ever overridden an on_message event and your commands stopped working, then this is why: that method is no longer being called, so it no longer tries to look through your commands to find a match.
This uses a dictionary to make it far more efficient - instead of having to iterate over every single command & alias available, it only has to check if the first letters of the message match anything at all.
The commands.Command() decorator used in Cogs works slightly differently. It turns your function into a Command instance, and when a cog is added (using Bot.add_cog()), the library checks every attribute to see if any of them are Command instances.
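To make the registration-and-dispatch mechanism concrete, here is a heavily stripped-down sketch of the pattern (illustrative only, not discord.py's actual code):

import asyncio

class MiniBot:
    """Toy model of discord.py's command registration/dispatch."""
    def __init__(self, prefix):
        self.prefix = prefix
        self.commands = {}  # command name -> coroutine function

    def command(self, name=None):
        # The decorator just stores the coroutine in the mapping
        def decorator(func):
            self.commands[name or func.__name__] = func
            return func
        return decorator

    async def process_commands(self, message):
        # Called for every incoming message; looks up and awaits a match
        if not message.startswith(self.prefix):
            return
        name, *args = message[len(self.prefix):].split()
        callback = self.commands.get(name)
        if callback is not None:
            await callback(*args)

bot = MiniBot("!")

@bot.command()
async def demo():
    print("Demonstration")

asyncio.run(bot.process_commands("!demo"))  # prints "Demonstration"

In the real library nothing polls: the gateway websocket pushes events to the client, a message event triggers process_commands, and that in turn awaits your stored coroutine.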
References to source code
GroupMixin.command() (called when you use @client.command()): https://github.com/Rapptz/discord.py/blob/24bdb44d54686448a336ea6d72b1bf8600ef7220/discord/ext/commands/core.py#L1493
As you can see, it calls add_command() internally to add it to the list of commands.
Adding commands (GroupMixin.add_command()): https://github.com/Rapptz/discord.py/blob/24bdb44d54686448a336ea6d72b1bf8600ef7220/discord/ext/commands/core.py#L1315
Bot.process_commands(): https://github.com/Rapptz/discord.py/blob/master/discord/ext/commands/bot.py#L1360
You'll have to follow the chain - most of the processing actually happens in get_context which tries to create a Context instance out of the message: https://github.com/Rapptz/discord.py/blob/24bdb44d54686448a336ea6d72b1bf8600ef7220/discord/ext/commands/bot.py#L1231
commands.Command(): https://github.com/Rapptz/discord.py/blob/master/discord/ext/commands/core.py#L1745

How to use an EXTERNAL variable in a COBOL LINKAGE SECTION, pass its values into a new module, and write them to a new output file

Could someone please tell me why a variable is declared as EXTERNAL in a module, how to use it in other modules through the LINKAGE SECTION, and how to pass its values into new fields so I can write them to a new file?
EXTERNAL items are commonly found in WORKING-STORAGE. These are normally not passed from one program to another via CALL and LINKAGE but shared directly via the COBOL runtime.
Declaring an item as EXTERNAL behaves like "runtime-named global storage": you assign a name and a length to a global piece of memory and can access it anywhere in the same run unit (no direct CALL needed), even in cases like the following:
MAIN
-> CALL B
B: somevar EXTERNAL
-> MOVE 'TEST' TO somevar
-> CANCEL B
-> CALL C
C: somevar EXTERNAL -> now contains 'TEST'
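The same trace in minimal COBOL (program and item names are illustrative); each program declares its own copy of the EXTERNAL item, and the runtime maps both declarations to the same storage:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. B.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  SOMEVAR  PIC X(4) EXTERNAL.
       PROCEDURE DIVISION.
           MOVE 'TEST' TO SOMEVAR
           GOBACK.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. C.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  SOMEVAR  PIC X(4) EXTERNAL.
       PROCEDURE DIVISION.
      *> SOMEVAR already contains 'TEST', set earlier by program B
           DISPLAY SOMEVAR
           GOBACK.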
On an IBM Z mainframe running z/OS, the runtime routines for all High Level Languages (HLLs) are collectively called Language Environment (LE). Decades ago, each HLL had its own runtime, which caused problems when they were mixed into the same run unit; starting in the early 1990s, IBM switched all HLLs to LE for their runtime.
LE has the concept of an enclave. Part of the text at that link says an enclave is the equivalent of a run unit in COBOL.
Your question is tagged CICS, and sometimes behavior is different when running in that environment. Quoting from that link...
Under CICS the execution of a CICS LINK command creates what Language Environment calls a Child Enclave. A new environment is initialized and the child enclave gets its runtime options. These runtime options are independent of those options that existed in the creating enclave.
[...]
Something similar happens when a CICS XCTL command is executed. In this case we do not get a child enclave, but the existing enclave is terminated and then reinitialized with the runtime options determined for the new program. The same performance considerations apply.
So, as @SimonSobich noted, if you use CALLs to invoke your subroutines when running in CICS, EXTERNAL data is global to the run unit. But if you use EXEC CICS XCTL to invoke your subroutines, you may see different behavior and have to design your application differently.
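In other words (a sketch; the program name is illustrative), the invocation style decides whether the enclave, and with it the EXTERNAL storage, survives:

      *> Plain COBOL CALL: the callee runs in the same run unit,
      *> so EXTERNAL items are shared
           CALL 'PROGB'
      *> EXEC CICS XCTL: the current enclave is terminated and
      *> reinitialized, so do not rely on EXTERNAL data surviving
           EXEC CICS XCTL PROGRAM('PROGB') END-EXEC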

Pass proxy to loaded module in a PureMVC MultiCore app?

I'm creating a flash campaign which will be loaded into a client's framework, which I have no control over. The framework will already have loaded a few things such as locale, fonts and copy, and will pass these things to my swf upon initialization.
Since the size of my swf (let's call it the shell) is restricted it will in turn display a campaign-specific preloader and then load another swf (let's call this the campaign) with the rest of the site.
The shell and the campaign will both be PureMVC modules. The shell will create a few proxies and populate these with data passed from the framework (locale constants, fonts etc), before loading in the campaign.
When the campaign is loaded, it too will need locale and fonts etc., so my question is: what is the best way to pass this data along to the campaign module from the shell module?
I could create the same proxies in the campaign module and load the data again, which will be cached, but this obviously feels like the wrong way to go.
I've investigated the use of the Pipes utility, but this seems like overkill in my case, since the communication will be one-way and will happen just once, during the initialization of the campaign.
Would it be "ok" from a design pattern point of view to pass the proxies to an init method of the campaign module and then register these proxies in the campaign module startup command? This seems wrong since these proxies have references to my shell application facade through notification names. Would it be ok if I move the notification names to some "NotificationConstants" class which both modules can use?
I could create similar proxies in the campaign module but this time populate them with the data objects from my old proxies passed to the previously mentioned init method? Spontaneously this feels like the best way to do it since the data objects don't have any references to my shell module but the "old" proxies do..
The solution I usually use is to create an interface:
interface Campaign {
    function set campaignDetails(value:CampaignDetails):void;
    //...
}
The campaign module should implement this interface. In the implementation, I recommend using a different proxy in the module, so that you avoid duplicated notifications and references.
When the shell is ready with the loading of the module it just has to:
if (module is Campaign)
{
    (module as Campaign).campaignDetails = ...;
}
I'm sure I'm telling you nothing new. You just need to make sure to keep the acquaintance between the shell and the module at an interface level only. Then you just pass the data and leave the module's MVC core to deal with it independently of the shell.

MEF: "Unable to load one or more of the requested types. Retrieve the LoaderExceptions for more information"

Scenario: I am using the Managed Extensibility Framework to load plugins (exports) at runtime, based on an interface contract defined in a separate DLL. In my Visual Studio solution, I have 3 different projects: the host application, a class library defining the interface ("IPlugin"), and another class library implementing the interface (the export, "MyPlugin.dll").
The host looks for exports in its own root directory, so during testing, I build the whole solution and copy MyPlugin.dll from the plugin class library's bin/release folder to the host's debug directory, so that the host's DirectoryCatalog will find it and add it to the CompositionContainer. MyPlugin.dll is not automatically copied after each rebuild, so I do that manually each time I've made changes to the contract/implementation.
However, a couple of times I've run the host application without having copied (an updated) MyPlugin.dll first, and it has thrown an exception during composition:
Unable to load one or more of the requested types. Retrieve the LoaderExceptions for more information
This is of course because the MyPlugin.dll it's trying to import from implements a different version of IPlugin, where the property/method signatures don't match. Although it's easy to avoid this in a controlled and monitored environment, simply by keeping obsolete IPlugin implementations out of the plugin folder, I cannot rely on such assumptions in the production environment, where legacy plugins could be encountered.
The problem is that this exception effectively botches the whole Compose action, and no exports are imported. I would prefer that the mismatching IPlugin implementations simply be ignored, so that other exports in the catalog(s), implementing the correct version of IPlugin, are still imported.
Is there a way to accomplish this? I'm thinking of several potential options:
There is a flag to set on the CompositionContainer ("ignore failing imports") prior to or when calling Compose
There is a similar flag to specify on the <ImportMany()> attribute
There is a way to "hook" on to the iteration process underlying Compose(), and be able to deal with each (failed) import individually
Using strong name signing to somehow only look for imports implementing the current version of IPlugin
Ideas?
I have also run into a similar problem.
If you are sure that you want to ignore such "bad" assemblies, then the solution is to call AssemblyCatalog.Parts.ToArray() right after creating each assembly catalog. This will trigger the ReflectionTypeLoadException which you mention. You then have a chance to catch the exception and ignore the bad assembly.
When you have created AssemblyCatalog objects for all the "good" assemblies, you can aggregate them in an AggregateCatalog and pass that to the CompositionContainer constructor.
This issue can be caused by several factors (any exception in the loaded assemblies); as the message says, look at the LoaderExceptions property to (hopefully) get some idea of what went wrong.
Another problem/solution I found: when using DirectoryCatalog, if you don't specify the second parameter, searchPattern, MEF will load ALL the DLLs in that folder (including third-party ones) and start looking for export types, which can also cause this issue. A solution is to adopt a naming convention for all the assemblies that export types and to specify that convention in the DirectoryCatalog constructor. I use *_Plugin.dll, so that MEF only loads assemblies that contain exported types.
In my case, MEF was loading an NHibernate DLL and reporting an assembly version error in the LoaderExceptions (this error can happen with any of the DLLs in the directory); this approach solved the problem.
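A sketch of that convention-based filtering (the folder path here is illustrative):

using System.ComponentModel.Composition.Hosting;

// Only assemblies matching the naming convention are inspected;
// third-party DLLs in the same folder are ignored.
var catalog = new DirectoryCatalog(@"C:\MyApp\Plugins", "*_Plugin.dll");
var container = new CompositionContainer(catalog);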
Here is an example of the above-mentioned methods:
var di = new DirectoryInfo(Server.MapPath("../../bin/"));
if (!di.Exists) throw new Exception("Folder does not exist: " + di.FullName);
var dlls = di.GetFileSystemInfos("*.dll");
AggregateCatalog agc = new AggregateCatalog();
foreach (var fi in dlls)
{
    try
    {
        // Force the catalog to enumerate its parts so a bad assembly
        // fails here, where it can be caught, instead of during Compose
        var ac = new AssemblyCatalog(Assembly.LoadFile(fi.FullName));
        var parts = ac.Parts.ToArray(); // throws ReflectionTypeLoadException
        agc.Catalogs.Add(ac);
    }
    catch (ReflectionTypeLoadException ex)
    {
        // Log and skip the offending assembly
        Elmah.ErrorSignal.FromCurrentContext().Raise(ex);
    }
}
CompositionContainer cc = new CompositionContainer(agc);
_providers = cc.GetExports<IDataExchangeProvider>();