I am implementing MUnit for a flow that involves a Mule Requester, which picks up a file.
When I run the Java class as a JUnit test, it throws an exception: "Cannot perform the operation on the FileConnector as it is stopped."
The resource expression used in the Mule Requester is:
file://${path}?connector=FileConnector
I have also defined a global file connector.
Please let me know how to resolve this issue.
Thank you.
All connectors and inbound endpoints are disabled by default in MUnit. This is to prevent flows from accidentally processing or generating real data, and for the same reason the File connector is also disabled.
To enable connectors, override the following method in your MUnit suite:
@Override
protected boolean haveToMockMuleConnectors() {
    // Returning false lets the real connectors start instead of being mocked.
    return false;
}
For XML MUnit suites, the MUnit documentation describes the equivalent setting to enable connectors.
Note: this will enable and start all the connectors used in the Mule configs under test. If you have an SMTP connector, DB connector, MQ connector, etc., they will all be started during the test, so use this with caution.
Check whether the file connector is defined in the files you loaded for MUnit:
<spring:beans>
    <spring:import resource="classpath:api.xml"/>
</spring:beans>
You may also try mocking the Mule Requester, as sketched below.
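A rough sketch using the MUnit 1.x Java API; the processor name "request", the namespace "mulerequester", the flow name "myFileFlow", and the payload are all assumptions, so check your config and MUnit version for the exact names:

// Inside a suite extending FunctionalMunitSuite -- illustrative only.
@Test
public void processesFileWithoutRealConnector() throws Exception {
    // Mock the <mulerequester:request> call so no real FileConnector is needed (assumed names).
    whenMessageProcessor("request")
        .ofNamespace("mulerequester")
        .thenReturn(muleMessageWithPayload("dummy file content"));

    MuleEvent result = runFlow("myFileFlow", testEvent("ignored"));
    // assert on result.getMessage() as needed
}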
I am working on externalizing our IScheduledExecutorService so I can run tasks on an external cluster. I am able to write a test and get the Runnable to actually run ONLY if I turn on user code deployment. If I want to change this task at all and run the tests again, I get the following in my external cluster member's logs:
java.lang.IllegalStateException: Class com.mycompany.task.ScheduledTask is already in local cache and has conflicting byte code representation
I want to be able to change the task and redeploy, and have Hazelcast just handle it. I do this kind of thing with our external maps now; they can handle different versions of our objects using compact serialization.
Am I stuck using user code deployment for these functional objects? If I need to make a change to it, I need to change the class name and redeploy to production. I'm hoping to get this task right the first time and never have to do that, but I have a way of handling it if I do.
The cluster is already running in production and I'll have to add the following to each member
HZ_USERCODEDEPLOYMENT_ENABLED=true
and the appropriate client code (listed below) to enable this.
What I've done...
Added the following to my local docker file
HZ_USERCODEDEPLOYMENT_ENABLED=true
and also in the code that creates a hazelcast client connecting to my external cluster with
ClientConfig clientConfig = new ClientConfig();
ClientUserCodeDeploymentConfig clientUserCodeDeploymentConfig = new ClientUserCodeDeploymentConfig();
clientUserCodeDeploymentConfig.addClass("com.mycompany.task.ScheduledTask");
clientUserCodeDeploymentConfig.setEnabled(true);
clientConfig.setUserCodeDeploymentConfig(clientUserCodeDeploymentConfig);
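Roughly how the client is then created and the task scheduled (a sketch continuing from the clientConfig above; the 30-second delay is arbitrary, and ScheduledTask implements Runnable and Serializable):

// Sketch only -- continues from the clientConfig built above.
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
IScheduledExecutorService scheduler = client.getScheduledExecutorService("myScheduledExecutorService");
scheduler.schedule(new ScheduledTask(), 30, TimeUnit.SECONDS);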
However, if I remove those two pieces, I get the following exception with a failing test. It doesn't know about my class at all.
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.mycompany.task.ScheduledTask
Side Note:
We are using compact serialization for several maps already, and when I try to configure this Runnable task via compact serialization I get the error below. I don't think that's the right approach either.
[Scheduler: myScheduledExecutorService][Partition: 121][Task: 7afe68d5-3185-475f-b375-5a82a7088de3] Exception occurred during run
java.lang.ClassCastException: class com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord cannot be cast to class java.lang.Runnable (com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord is in unnamed module of loader 'app'; java.lang.Runnable is in module java.base of loader 'bootstrap')
at com.hazelcast.scheduledexecutor.impl.ScheduledRunnableAdapter.call(ScheduledRunnableAdapter.java:49) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.scheduledexecutor.impl.TaskRunner.call(TaskRunner.java:78) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.internal.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:64) ~[hazelcast-5.2.0.jar:5.2.0]
In Quarkus 2.9.2.Final I am trying to configure Flyway placeholders in src/test/resources/application.properties with
quarkus.flyway.placeholders.a=a
quarkus.flyway.placeholders.b=b
These placeholders are used in the Flyway script like this:
create user ${a} with password '${b}';
However, when I run the test, which starts up the database and executes Flyway, I get this error:
Caused by: org.flywaydb.core.api.FlywayException: No value provided for placeholder: ${a}. Check your configuration!
at org.flywaydb.core.internal.parser.PlaceholderReplacingReader.read(PlaceholderReplacingReader.java:165)
at java.base/java.io.FilterReader.read(FilterReader.java:65)
at org.flywaydb.core.internal.parser.PositionTrackingReader.read(PositionTrackingReader.java:33)
at java.base/java.io.FilterReader.read(FilterReader.java:65)
at org.flywaydb.core.internal.parser.RecordingReader.read(RecordingReader.java:33)
at java.base/java.io.FilterReader.read(FilterReader.java:65)
at org.flywaydb.core.internal.parser.PeekingReader.refillPeekBuffer(PeekingReader.java:73)
at org.flywaydb.core.internal.parser.PeekingReader.peek(PeekingReader.java:183)
at org.flywaydb.core.internal.parser.PeekingReader.peek(PeekingReader.java:165)
at org.flywaydb.core.internal.parser.Parser.readToken(Parser.java:478)
at org.flywaydb.core.internal.parser.Parser.getNextStatement(Parser.java:173)
... 73 more
I checked that my config is valid by defining a config mapping
import java.util.Map;

import io.smallrye.config.ConfigMapping;

@ConfigMapping(prefix = "quarkus.flyway")
interface MyConfig {
    Map<String, String> placeholders();
}
and injecting it into a service, where the Map contained exactly these two entries (see the sketch below).
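Roughly what that check looked like (the service name and method are just for illustration; Quarkus 2.x still uses the javax namespace):

import java.util.Map;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class PlaceholderCheck {

    @Inject
    MyConfig config;

    void dump() {
        Map<String, String> placeholders = config.placeholders();
        // With the test application.properties above this prints {a=a, b=b}.
        System.out.println(placeholders);
    }
}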
So now I really wonder why the Quarkus Flyway extension cannot handle it, or which trick I am missing.
Please help, I need this simple thing to work.
Oh no, I found it. A small but (IMO) nasty one that was driving me crazy.
In my project, Flyway is only to be used with my named datasource 'myds'.
So the config had to be changed to:
quarkus.flyway.myds.placeholders.a=a
quarkus.flyway.myds.placeholders.b=b
It would be nice to have an error message that gives a clue. But maybe it's only my fault, so I guess it's not worth asking for an improvement in Flyway.
I am trying to figure out whether logback is potentially losing messages.
Quoting from the log4j2 page:
"Like Logback, Log4j 2 can automatically reload its configuration upon modification. Unlike Logback, it will do so without losing log events while reconfiguration is taking place"
So, can anyone comment on logback losing log events? Does it really happen?
(I have seen that event loss can occur with async appenders, which can be addressed by setting discardingThreshold to 0, but the log4j2 statement is about configuration reload.)
I am trying to understand whether log4j2 is really more reliable, or whether we should just use logback...
Thanks.
When logback is reconfigured, it removes all the appender references and level settings from the loggers. It then reads the new configuration and applies it to the loggers. While this is happening, logging continues.
Log4j 2 separates the loggers from their configuration. Once a new configuration is created the loggers are pointed at the LoggerConfig of the new configuration. So for a brief time you will have some loggers pointing at the old configuration and some pointing at the new one, but they will never be unconfigured.
I'm currently investigating this very question and, looking at the code, it indeed seems that log messages can be lost during reconfiguration. The following code is called by ReconfigureOnChangeFilter's ReconfiguringThread:
private void performXMLConfiguration(LoggerContext lc) {
    JoranConfigurator jc = new JoranConfigurator();
    jc.setContext(context);
    StatusUtil statusUtil = new StatusUtil(context);
    List<SaxEvent> eventList = jc.recallSafeConfiguration();
    URL mainURL = ConfigurationWatchListUtil.getMainWatchURL(context);
    lc.reset();
    long threshold = System.currentTimeMillis();
    try {
        jc.doConfigure(mainConfigurationURL);
        if (statusUtil.hasXMLParsingErrors(threshold)) {
            fallbackConfiguration(lc, eventList, mainURL);
        }
    } catch (JoranException e) {
        fallbackConfiguration(lc, eventList, mainURL);
    }
}
In lc.reset(), all appenders are removed (along with other configuration attributes), then the logger context is reconfigured. There is no apparent synchronization going on.
A quick test verified that messages are lost during reconfiguration.
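For reference, a rough sketch of the kind of test I mean (paths, sizes, and names are made up; it assumes logback.xml is configured with scan="true", a short scanPeriod, and a file appender that writes one line per event):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;
import java.util.stream.Stream;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReconfigLossTest {

    private static final Logger LOG = LoggerFactory.getLogger(ReconfigLossTest.class);

    public static void main(String[] args) throws Exception {
        Path config = Paths.get("src/main/resources/logback.xml"); // hypothetical location
        Path output = Paths.get("target/reconfig-test.log");       // must match the file appender

        int total = 500_000;
        for (int i = 0; i < total; i++) {
            LOG.info("msg {}", i);
            if (i == total / 2) {
                // Touch the config so the next scan triggers a reconfiguration mid-stream.
                Files.setLastModifiedTime(config, FileTime.fromMillis(System.currentTimeMillis()));
            }
        }

        Thread.sleep(2_000); // give any pending scan and flush a chance to finish
        long written;
        try (Stream<String> lines = Files.lines(output)) {
            written = lines.count();
        }
        System.out.printf("logged %d, found %d in the file%n", total, written);
    }
}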
It seems like a very common issue with SSIS packages is releasing a package to production that ends up running with the wrong connection-string parameters. This can happen through any one of many mistakes or omissions. As a result, I find it helpful to dump all ConnectionString values to a log file; this helps me understand which connection strings were actually applied to the package at run time.
Now I am considering having my packages check whether every connection object in the package had its connection string overridden by an entry in the config file, and if not, return a warning or even fail the package. This is to allow easier configuration by extracting all environment-specific values to a config file. If a connection string is never overridden, there is a risk that a package run in production uses development settings, or that a package run in a non-production setting during testing is accidentally run against production.
I'd like to borrow from anyone who may have tried to do this. I'd also be interested in suggestions on how to accomplish this with minimal work.
Thx
Technical question 1 - what are my connection strings?
This is an easy question to answer. In your package, add a Script Task and enumerate through the Connections collection. I fire the OnInformation event, and if I had this scheduled, I'd be sure to have the /Rep IEW option in my dtexec call to ensure I record Information, Errors and Warnings.
namespace TurnDownForWhat
{
    using System;
    using System.Data;
    using Microsoft.SqlServer.Dts.Runtime;
    using System.Windows.Forms;

    /// <summary>
    /// ScriptMain is the entry point class of the script. Do not change the name, attributes,
    /// or parent of this class.
    /// </summary>
    [Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            bool fireAgain = false;
            foreach (var item in Dts.Connections)
            {
                Dts.Events.FireInformation(0, "SCR Enumerate Connections", string.Format("{0}->{1}", item.Name, item.ConnectionString), string.Empty, 0, ref fireAgain);
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }

        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
    }
}
Running that on my package, I can see I had two Connection managers, CM_FF and CM_OLE along with their connection strings.
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_FF->C:\ssisdata\dba_72929.csv
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_OLE->Data Source=localhost\dev2012;Initial Catalog=tempdb;Provider=SQLNCLI11;Integrated Security=SSPI;
Add that to ... your OnPreExecute event for all the packages, and no one sees it but everything reports back.
Technical question 2 - Missed configurations
I'm not aware of anything that will allow a package to know it's under configuration. I'm sure there's an event, as you will see in your Information/Warning messages that a package attempted to apply a configuration, didn't find one, and is going to retain its design-time value: Information - I'm configuring X via Y; Warning - tried to configure X but didn't find Y. But how to have a package inspect itself to find that out, I have no idea.
That said, I've seen reference to a property that fails a package on a missed configuration. I'm not seeing it now, but I'm certain it exists in some crevice. You can supply the /w parameter to dtexec, which treats warnings as errors; and really, warnings are just errors that haven't grown up yet.
Unspoken issue 1 - Permissions
I had a friend who botched an XML config file as part of their production deploy. Their production server started consuming data from a dev server. Bad things happened. It sounds like you have had a similar situation. The resolution is easy, insulate your environments. Are you using the same service account for your production class SQL Server boxes and dev/test/uat/qa/load/etc? STOP. Make a new one. Don't allow prod to talk to any boxes that aren't in their tier of service. Someone bones a package and doesn't set a configuration? First of all, you'll catch it when it goes from dev to something-before-production because that tier wouldn't be able to talk to anything else that's not that level. But if you're in the ultra cheap shop and you've only got dev and prod, so be it. Non-configured package goes to prod. Prod SQL Agent fires off the package. Package uses default connection manager and fails validation because it can't talk to the dev sales database.
Unspoken issue 2 - template
What's your process when you have a new package to build? Does your team really start from scratch? There are so many ways to solve this problem, but the core concept is to define your best practices for configuration, logging, package protection level, transaction levels, etc., in some easily consumable form. Maybe that's three starter packages: one for raw acquisition, one that stages and conforms the data, and a last one that moves data from conformed into the final destination. Teammates then simply have to pick one to start from and fill in the spots that need it. If they choose to do their own thing, that's the stick you beat them with when their package fails to run in production because they didn't follow the standard path.
There are other approaches here. If you're a strong .NET crew, you can generate your template packages that way. At this point, I create my templates with Biml and use that to drive basic package creation.
If I am understanding you correctly, the solution below should work.
My suggestion is to set the ProtectionLevel property at the top level of the package to DontSaveSensitive.
This will require you to use package configurations for every connection; otherwise the package will not have the credentials to make a connection.
Scenario: I am using Managed Extensibility Framework to load plugins (exports) at runtime based on an interface contract defined in a separate dll. In my Visual Studio solution, I have 3 different projects: The host application, a class library (defining the interface - "IPlugin") and another class library implementing the interface (the export - "MyPlugin.dll").
The host looks for exports in its own root directory, so during testing, I build the whole solution and copy Plugin.dll from the Plugin class library bin/release folder to the host's debug directory so that the host's DirectoryCatalog will find it and be able to add it to the CompositionContainer. Plugin.dll is not automatically copied after each rebuild, so I do that manually each time I've made changes to the contract/implementation.
However, a couple of times I've run the host application without having copied (an updated) Plugin.dll first, and it has thrown an exception during composition:
Unable to load one or more of the requested types. Retrieve the LoaderExceptions for more information
This is of course due to the fact that the Plugin.dll it's trying to import from implements a different version of IPlugin, where the property/method signatures don't match. Although it's easy to avoid this in a controlled and monitored environment, by simply avoiding (duh) obsolete IPlugin implementations in the plugin folder, I cannot rely on such assumptions in the production environment, where legacy plugins could be encountered.
The problem is that this exception effectively botches the whole Compose action and no exports are imported. I would have preferred that the mismatching IPlugin implementations are simply ignored, so that other exports in the catalog(s), implementing the correct version of IPlugin, are still imported.
Is there a way to accomplish this? I'm thinking either of several potential options:
There is a flag to set on the CompositionContainer ("ignore failing imports") prior to or when calling Compose
There is a similar flag to specify on the <ImportMany()> attribute
There is a way to "hook" on to the iteration process underlying Compose(), and be able to deal with each (failed) import individually
Using strong name signing to somehow only look for imports implementing the current version of IPlugin
Ideas?
I have also run into a similar problem.
If you are sure that you want to ignore such "bad" assemblies, then the solution is to call AssemblyCatalog.Parts.ToArray() right after creating each assembly catalog. This will trigger the ReflectionTypeLoadException which you mention. You then have a chance to catch the exception and ignore the bad assembly.
When you have created AssemblyCatalog objects for all the "good" assemblies, you can aggregate them in an AggregateCatalog and pass that to the CompositionContainer constructor.
This issue can be caused by several factors (any exception in the loaded assemblies). As the exception says, look at the LoaderExceptions property to (hopefully) get some idea of what went wrong.
Another problem/solution I found: when using DirectoryCatalog, if you don't specify the second parameter, searchPattern, MEF will load ALL the DLLs in that folder (including third-party ones) and start looking for export types, which can also cause this issue. A solution is to use a naming convention for all the assemblies that export types and pass it to the DirectoryCatalog constructor. I use *_Plugin.dll, so MEF will only load assemblies that contain exported types.
In my case, MEF was loading an NHibernate DLL and throwing an assembly version error in the LoaderExceptions (this can happen with any of the DLLs in the directory); this approach solved the problem.
Here is an example of the above-mentioned methods:
var di = new DirectoryInfo(Server.MapPath("../../bin/"));
if (!di.Exists) throw new Exception("Folder does not exist: " + di.FullName);

var dlls = di.GetFileSystemInfos("*.dll");
AggregateCatalog agc = new AggregateCatalog();

foreach (var fi in dlls)
{
    try
    {
        var ac = new AssemblyCatalog(Assembly.LoadFile(fi.FullName));
        var parts = ac.Parts.ToArray(); // throws ReflectionTypeLoadException for a bad assembly
        agc.Catalogs.Add(ac);
    }
    catch (ReflectionTypeLoadException ex)
    {
        // Log and skip the offending assembly instead of failing the whole composition.
        Elmah.ErrorSignal.FromCurrentContext().Raise(ex);
    }
}

CompositionContainer cc = new CompositionContainer(agc);
_providers = cc.GetExports<IDataExchangeProvider>();