Akka target nodes for remote routing - configuration

I created a remote environment to deploy routees, following the "Routers with Remote Destinations" documentation, with this configuration:
deployment {
  /router1 {
    router = round-robin-pool
    nr-of-instances = 7
    cluster {
      enabled = on
      allow-local-routees = off
      max-nr-of-instances-per-node = 3
      use-roles = ["backend"]
      target {
        nodes = ["akka.tcp://ClusterSystem@127.0.0.1:2560", "akka.tcp://ClusterSystem@127.0.0.1:2570"]
      }
    }
  }
}
This doesn't work. In the end, any new node that joins the cluster has routees deployed to it. I thought this configuration meant that routees would be deployed "only" on the target nodes, but instead they are deployed on "any" new node.
Is this how it works? How can I make the routees deploy only on specific nodes? Something must be wrong, otherwise adding the "target" configuration does absolutely nothing.

As mentioned in the Akka documentation:
akka.actor.deployment {
  /parent/remotePool {
    router = round-robin-pool
    nr-of-instances = 10
    target.nodes = ["akka.tcp://app@10.0.0.2:2552", "akka.tcp://app@10.0.0.3:2552"]
  }
}
The above configuration will clone the actor defined in the Props of the remote pool 10 times and deploy it evenly distributed across the two given target nodes.
Apply this configuration instead:
deployment {
  /router1 {
    router = round-robin-pool
    nr-of-instances = 7
    target {
      nodes = ["akka.tcp://ClusterSystem@127.0.0.1:2560", "akka.tcp://ClusterSystem@127.0.0.1:2570"]
    }
  }
}
Make sure that the ClusterSystem at 127.0.0.1:2560 and ClusterSystem at 127.0.0.1:2570 are running.
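For completeness, a pool router defined this way is created from the deployment configuration with FromConfig. A minimal sketch on the JVM with classic actors (the Worker routee class is hypothetical; the actor name must match the /router1 deployment path):

import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.FromConfig

// Hypothetical routee; the real Props come from your application.
class Worker extends Actor {
  def receive = { case msg => sender() ! msg }
}

object Main extends App {
  val system = ActorSystem("ClusterSystem")
  // "router1" matches the /router1 deployment section, so target.nodes is applied.
  val router = system.actorOf(FromConfig.props(Props[Worker]()), "router1")
}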

Related

terraform GCP cloud function without having to deploy via terraform in CI or breaking past deployments when running locally?

I have some working terraform definitions as part of a larger project:
resource "google_storage_bucket" "owlee_functions_bucket" {
name = "owlee_functions_bucket"
location = "europe-west2"
project = "owlee-software"
}
resource "google_storage_bucket_object" "archive" {
name = "index.zip"
bucket = google_storage_bucket.owlee_functions_bucket.name
source = "../apps/backend/dist/index.zip"
}
resource "google_cloudfunctions_function" "backend_function" {
name = "backend_function"
runtime = "nodejs16"
project = "owlee-software"
region = "europe-west2"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.owlee_functions_bucket.name
source_archive_object = google_storage_bucket_object.archive.name
trigger_http = true
entry_point = "OWLEE"
}
Then I'm trying to deploy via CI. For now, I'm just running terraform apply after zipping up the new version of the function to handle deployment.
It's not great, and ideally I'd like to change that to a non-terraform process, but that doesn't seem to be documented or possible anywhere, which makes me think I have the wrong approach here.
The second issue, which is more urgent to solve --
I want to continue managing my infrastructure locally for now and do not want to have to zip up and deploy a new version of the function every time I run terraform apply locally.
Is there a way -- after its creation -- to avoid overwriting/uploading the function via terraform?
I'm guessing this would be somewhat necessary for the CI deployment to work anyway.
I've looked at a handful of other SO threads but they were looking at specifics around cloud-build and the artifacts registry.
I recommend that you deploy the cloud function with terraform, but have the CI of the cloud function handled by a cloud build (also created by terraform). I think this is the most logical solution, since terraform manages the infrastructure, not the implementation of the cloud function.
Instead of using a fixed object name as you are, use a random string or, depending on your needs, the commit hash, for example. This can be prefixed with other things to make it even more unique.
resource "random_string" "function" {
length = 8
special = false
keepers = {
commit_hash = var.commit_hash,
environment = var.environment,
}
}
resource "google_storage_bucket_object" "archive" {
name = "index.zip"
bucket = google_storage_bucket.owlee_functions_bucket.name
source = "../apps/backend/dist/${random_string.function.result}.zip"
}
resource "google_cloudfunctions_function" "backend_function" {
name = "backend_function"
runtime = "nodejs16"
project = "owlee-software"
region = "europe-west2"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.owlee_functions_bucket.name
source_archive_object = google_storage_bucket_object.archive.name
trigger_http = true
entry_point = "OWLEE"
}
This way, if you provide an environment such as prod and the same commit hash every time, it will keep the same zip object name.
If you provide a new environment, say "local", it will generate a new one. You can then create multiple instances of the function, or make further changes to the google_cloudfunctions_function so that it can be used with workspaces.
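A minimal sketch of how the assumed variables could be declared and passed in (var.commit_hash and var.environment are the names used above; the defaults here are only illustrative):

variable "commit_hash" {
  type    = string
  default = "local-dev"
}

variable "environment" {
  type    = string
  default = "local"
}

In CI you would then run something like:

terraform apply -var="environment=prod" -var="commit_hash=$(git rev-parse --short HEAD)"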

MassTransit Service Bus: optimize consumer

I am trying to use MassTransit for Request/Response communication through an Azure Service Bus queue. The sender is an Azure Web App; the consumer is a Windows service installed on an on-premises machine.
Everything works fine with small volumes of messages. However, as soon as I start sending more than ~20 msg/sec, I see severe (1-2 sec) delays in responses from the consumer. My telemetry tells me that the delay happens at the point where the consumer needs to grab messages from the queue.
One strange, but I think important, part of the behavior: with the current load, the number of unread messages in the queue is on average constant at 25. If I send 2x more messages, I see on average 50 messages in the queue. With delays on the consumption side I would expect the queue to GROW, but it is constant, so it is definitely something inside the code that throttles the connection.
Quick info:
There are no problems with hardware on the machine. CPU/Mem not high.
I tried playing with the UseConcurrencyLimit, MaxConcurrentCalls, and PrefetchCount configs on the consumer side. It did not help.
The sender and consumer code in my solution is close to the classic examples.
Consumer: .NET Framework 4.7.2 and MassTransit.Azure.ServiceBus.Core 5.5.2
Here's my listener class with all business logic removed:
public class QueueListener
{
    private IBusControl Bus { get; set; }

    public QueueListener()
    {
        Bus = MassTransit.Bus.Factory.CreateUsingAzureServiceBus(serviceBusFactoryConfigurator =>
        {
            var host = serviceBusFactoryConfigurator.Host(SettingsHelper.AzureServiceBusConnectionString,
                (config) =>
                {
                    config.OperationTimeout = TimeSpan.FromSeconds(60);
                    config.TransportType = TransportType.AmqpWebSockets;
                });

            serviceBusFactoryConfigurator.ReceiveEndpoint(host, SettingsHelper.CouponQueryQueueName, e =>
            {
                e.Handler<JToken>(HandleMessage);
                e.UseConcurrencyLimit(16);
                e.MaxConcurrentCalls = 16;
                e.PrefetchCount = 32;
            });

            serviceBusFactoryConfigurator.EnableBatchedOperations = true;
            serviceBusFactoryConfigurator.DefaultMessageTimeToLive = TimeSpan.FromSeconds(60);
        });
    }

    private async Task HandleMessage(ConsumeContext context)
    {
        await Task.Delay(800);
        if (context.ExpirationTime > SystemDateTime.Now)
        {
            await context.RespondAsync(new CouponUsedList { CouponsUsed = new List<CouponCurrentUsed>() });
        }
    }

    public Task LaunchAsync()
    {
        return Bus.StartAsync();
    }

    public Task StopAsync()
    {
        return Bus.StopAsync();
    }
}
It seems that here, once again, it was just one missing config. All the code that you write inside ReceiveEndpoint configures the consumer's listener queue, while the settings you provide in CreateUsingAzureServiceBus apply to the consumer's response queue.
All I needed was to add one line inside the receive endpoint configuration. Without this setting, all prefetched messages are handled gradually:
e.EnableBatchedOperations = true;
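In context, this goes alongside the other receive endpoint settings shown above (a sketch of just the relevant part of the configuration):

serviceBusFactoryConfigurator.ReceiveEndpoint(host, SettingsHelper.CouponQueryQueueName, e =>
{
    e.Handler<JToken>(HandleMessage);
    e.UseConcurrencyLimit(16);
    e.MaxConcurrentCalls = 16;
    e.PrefetchCount = 32;
    e.EnableBatchedOperations = true; // batch transport operations for the receive endpoint as well
});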

Child process of .net core application is using parent's configuration file

I have 2 .NET Core 2.0 console applications. The first application calls the second one via System.Diagnostics.Process.Start(). Somehow the second app is inheriting the development configuration information located in the appsettings.development.json of the first app.
I execute the first app by running either dotnet run in the root of the project or dotnet firstapp.dll in the folder where the DLL exists. This is started from PowerShell.
Both apps are in separate directories. I'm not sure how this is happening.
UPDATE WITH CODE
The apps reside in
C:\Projects\ParentConsoleApp
C:\Projects\ChildConsoleApp
This is how I call the child app from the parent application:
System.Diagnostics.Process.Start("dotnet", "C:\\projects\\ChildConsoleApp\\bin\\Debug\\netcoreapp2.0\\publish\\ChildConsoleApp.dll" + $" -dt {DateTime.Now.Date.ToString("yyyy-MM-dd")}");
This is how I load the configuration from JSON (this is same in both apps):
class Program
{
    private static ILogger<Program> _logger;
    public static IConfigurationRoot _configuration;
    public static IServiceProvider Container { get; private set; }

    static void Main(string[] args)
    {
        RegisterServices();
        _logger = Container.GetRequiredService<ILogger<Program>>();
        _logger.LogInformation("Starting GICMON Count Scheduler Service");
        Configure();

        // At this point DBContext has value from parent! :(
        var repo = Container.GetService<ICountRepository>();
        var results = repo.Count(_configuration.GetConnectionString("DBContext"), args[0]);
    }

    private static void Configure()
    {
        string envvar = "DOTNET_ENVIRONMENT";
        string env = Environment.GetEnvironmentVariable(envvar);
        if (String.IsNullOrWhiteSpace(env))
            throw new ArgumentNullException("DOTNET_ENVIRONMENT", "Environment variable not found.");
        _logger.LogInformation($"DOTNET_ENVIRONMENT environment variable value is: {env}.");

        var builder = new ConfigurationBuilder().SetBasePath(Directory.GetCurrentDirectory()).AddJsonFile("appsettings.json");
        if (!String.IsNullOrWhiteSpace(env)) // environment == "Development"
        {
            builder.AddJsonFile($"appsettings.{env}.json", optional: true);
        }
        _configuration = builder.Build();
    }

    private static void RegisterServices()
    {
        var services = new ServiceCollection();
        services.AddSingleton<ILoggerFactory, LoggerFactory>();
        services.AddSingleton(typeof(ILogger<>), typeof(Logger<>));
        services.AddLogging((builder) => builder.SetMinimumLevel(LogLevel.Trace));

        var serviceProvider = services.BuildServiceProvider();
        var loggerFactory = serviceProvider.GetRequiredService<ILoggerFactory>();
        loggerFactory.AddNLog(new NLogProviderOptions { CaptureMessageTemplates = true, CaptureMessageProperties = true });
        loggerFactory.ConfigureNLog("nlog.config");
        Container = serviceProvider;
    }
}
The problem is caused by the fact that you set the base path for the configuration builder to the current working directory:
var builder = new ConfigurationBuilder().SetBasePath(Directory.GetCurrentDirectory()).AddJsonFile("appsettings.json");
When you create a child process, it inherits the current directory from the parent process (unless you set the current directory explicitly).
So the child process basically uses the JSON configs from the parent process's directory.
There are several possible fixes:
Do not set the base path to the current directory.
When the application is launched, you don't know for sure that the current directory will match the directory where the application binaries are placed.
If you have an exe file at c:\test\SomeApp.exe and launch it from the command line while the current directory is c:\, then the current directory of your application will be c:\. In this case, if you set the base path of the configuration builder to the current directory, it will not be able to load the configuration files.
By default, the configuration builder loads config files from AppContext.BaseDirectory, which is the directory where the application binaries are placed. This should be the desired behavior in most cases.
So just remove the SetBasePath() call:
var builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
If for some reason you want to set the base path of the configuration builder to the current directory, then you should set the correct current directory for the launched child process:
var childDllPath = "C:\\projects\\ChildConsoleApp\\bin\\Debug\\netcoreapp2.0\\publish\\ChildConsoleApp.dll";
var startInfo = new ProcessStartInfo("dotnet", childDllPath + $" -dt {DateTime.Now.Date.ToString("yyyy-MM-dd")}")
{
    WorkingDirectory = Path.GetDirectoryName(childDllPath),
};
Process.Start(startInfo);
As @CodeFuller explained, the reason is that both apps read the same appsettings.{env}.json file. For simplicity, you may just rename the config file (and the corresponding name in .AddJsonFile) for the second app to prevent any possible overrides.
When you register a JSON file as a configuration source with .AddJsonFile, the configuration API allows you to use whatever file name you need, and you are not forced to use the same $"appsettings.{env}.json" pattern for both applications.
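A minimal sketch of that rename, assuming the child app's config files are called childapp.settings.json and childapp.settings.{env}.json (hypothetical names):

var builder = new ConfigurationBuilder()
    .AddJsonFile("childapp.settings.json")
    .AddJsonFile($"childapp.settings.{env}.json", optional: true);
_configuration = builder.Build();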

Passing JSON Config into multiple instances of the same bundled React Application

I have several instances of a react-slick carousel. Each of them requires a different set of config options.
Currently, I have the carousel component bundled up via webpack and then deployed to multiple locations. Unfortunately, this means that the bundle is slightly different in each case, as the config file changes the overall bundle!
I feel like I can think of the following solutions:
1) Load the config file asynchronously. Seems like a lazy solution, because making an extra round trip is overkill.
2) Try to use require.ensure to split the config file out into its own chunk.
What's the right approach for this solution?
Thanks!
Regarding point 1, I've managed to accomplish runtime loading of the config this way:
import xhr from 'xhr'

class Config {
  load_external_config = (cb) => {
    xhr.get("config.json", {
      sync: true,
      timeout: 3000
    }, (error, response, body) => {
      if (response.statusCode == 200) {
        try {
          const conf = JSON.parse(body);
          for (var i in conf) {
            this[i] = conf[i];
          }
        } catch (e) {
          /* Manage error */
        }
      } else {
        /* Manage error */
      }
    })
  }
}

export let config = new Config();
The class above does two basic things. On the one hand it is a "singleton", so every time you import it in a file of your project the instance remains the same and is not duplicated. On the other hand, through an XHR package it loads (synchronously) an external JSON file and puts every config entry on the instance as a first-level attribute. Later, you will be able to do this:
import { config } from './config'
config.load_external_config();
config.MY_VAR
For point 2 I would like to see some examples myself, and I will stay tuned to this post for someone more skilled than me.
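For what it's worth, a minimal sketch of point 2 using webpack code splitting, assuming a config.json next to the entry point and a hypothetical renderCarousel function (the config lands in its own chunk, so the main bundle stays identical across deployments):

// webpack emits config.json as a separate chunk and loads it at runtime
import("./config.json").then((module) => {
  const config = module.default;
  renderCarousel(config); // hypothetical: pass the per-deployment options to the carousel
});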

Configure different implementations to be resolvable based on Web.config parameter in Castle Windsor

I need to configure different implementations of an interface, and make the right one resolve based on a Web.config setting.
So, for the IExternalService interface I would like to have a TestExternalService and an ExternalService implementation. And I have a "TestMode" app setting in Web.config.
How can I register TestExternalService and ExternalService in Castle Windsor, so that when, for example, TestMode is 0, ExternalService is resolved, and when TestMode is 1, TestExternalService is resolved?
Use a Handler Selector.
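For reference, a handler selector is a small class implementing Castle Windsor's IHandlerSelector that the kernel consults on each resolution. A minimal sketch, assuming both implementations are registered with the container (the class name and its registration are illustrative, not from the original answer):

using System;
using System.Linq;
using System.Web.Configuration;
using Castle.MicroKernel;

public class TestModeHandlerSelector : IHandlerSelector
{
    public bool HasOpinionAbout(string key, Type service) => service == typeof(IExternalService);

    public IHandler SelectHandler(string key, Type service, IHandler[] handlers)
    {
        // Pick the implementation based on the Web.config setting.
        var testMode = WebConfigurationManager.AppSettings["TestMode"] == "1";
        var wanted = testMode ? typeof(TestExternalService) : typeof(ExternalService);
        return handlers.First(h => h.ComponentModel.Implementation == wanted);
    }
}

// container.Kernel.AddHandlerSelector(new TestModeHandlerSelector());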
I would recommend taking one of two approaches. If you have a large number of services that need to change based on this setting, then I would implement two versions of the IWindsorInstaller interface and load the correct one based on the web.config setting.
var container = new WindsorContainer();
var testMode = WebConfigurationManager.AppSettings["TestMode"];
if (testMode == "1")
{
    container.Install(new[] { new TestServiceInstaller() });
}
else
{
    container.Install(new[] { new ServiceInstaller() });
}
If you only have one or two services that need to change, you can write one implementation of IWindsorInstaller and put the logic for registering the component inside the installer.
var testMode = WebConfigurationManager.AppSettings["TestMode"];
if (testMode == "1")
{
    container.Register(Component.For<IExternalService>().ImplementedBy<TestExternalService>());
}
else
{
    container.Register(Component.For<IExternalService>().ImplementedBy<ExternalService>());
}
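A sketch of the same logic placed inside a single installer class, as the second approach suggests (the installer name is illustrative):

using System.Web.Configuration;
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

public class ExternalServiceInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        var testMode = WebConfigurationManager.AppSettings["TestMode"];
        container.Register(testMode == "1"
            ? Component.For<IExternalService>().ImplementedBy<TestExternalService>()
            : Component.For<IExternalService>().ImplementedBy<ExternalService>());
    }
}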