Akka :: dispatcher [%name%] not configured, using default-dispatcher - configuration

I created the following application.conf:
akka {
  actor {
    prio-dispatcher {
      type = "Dispatcher"
      mailbox-type = "my.package.PrioritizedMailbox"
    }
  }
}
When dumping the configuration with
val actorSystem = ActorSystem.create()
println(actorSystem.settings)
I get the following output:
# application.conf: 5
"prio-dispatcher" : {
    # application.conf: 7
    "mailbox-type" : "my.package.PrioritizedMailbox",
    # application.conf: 6
    "type" : "Dispatcher"
},
and later on
[WARN] [08/30/2012 22:44:54.362] [default-akka.actor.default-dispatcher-3] [Dispatchers] Dispatcher [prio-dispatcher] not configured, using default-dispatcher
What am I missing here?
UPDATE: Found the solution (see the link below); I had to use the name "akka.actor.prio-dispatcher".

The configuration above means that the name of the dispatcher is akka.actor.prio-dispatcher, i.e. its full path in the configuration tree.
Description of the problem: http://groups.google.com/group/akka-user/browse_thread/thread/678f2ae1c068e0fa
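For illustration, here is a minimal sketch (not from the original post) of referencing the dispatcher by its fully qualified name; MyActor is a placeholder actor introduced only for this example:

import akka.actor.{Actor, ActorSystem, Props}

// Placeholder actor, only here to show the dispatcher assignment.
class MyActor extends Actor {
  def receive: Receive = { case msg => println(msg) }
}

val system = ActorSystem.create()

// The dispatcher is looked up by its full configuration path,
// "akka.actor.prio-dispatcher", not by the leaf name "prio-dispatcher".
val prioritizedActor = system.actorOf(
  Props[MyActor]().withDispatcher("akka.actor.prio-dispatcher"),
  "prioritized"
)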

Related

Play Framework WebSocket ArrayIndexOutOfBounds Error

I have a WebSocket endpoint exposed by the Play Framework like this:
def socket: WebSocket = WebSocket.acceptOrResult[JsValue, JsValue] { request =>
  Future.successful(
    if (acceptedSubProtocol(request.headers, appConfig.ocppServerCfg.supportedProtocols)) { // TODO: Get the Protocol via AppConfig
      Right(ActorFlow.actorRef { actorRef =>
        Props(new ProvisioningActor(actorRef))
      })
    } else {
      logger.warn(s"Supported Protocol is not one of ${appConfig.ocppServerCfg.supportedProtocols.mkString(",")} " +
        "in the Sec-WebSocket-Protocol header")
      Left(Forbidden)
    }
  )
}
The implicit conversion (Reads) for the incoming JSON looks like this:
implicit val ocppCallRequestReads2: Reads[OCPPCallRequest] = Reads { jsValue =>
  val messageTypeIdE = (jsValue \ 0).toEither
  val messageIdE = (jsValue \ 1).toEither
  val actionNameE = (jsValue \ 2).toEither
  val payloadE = (jsValue \ 3).toEither
  val yielded = for {
    messageTypeId <- messageTypeIdE
    messageId <- messageIdE
    actionName <- actionNameE
    payload <- payloadE
  } yield {
    OCPPCallRequest( // Here I know all 4 exist, so it is safe to call head
      messageTypeId.head.as[Int],
      messageId.head.as[String],
      actionName.head.as[String],
      payload
    )
  }
  yielded match {
    case Right(ocppCallRequest) => JsSuccess(ocppCallRequest)
    case Left(errors) =>
      println("****************+ ERRORS")
      errors.messages.foreach(println)
      println("****************+ ERRORS")
      JsError(errors)
  }
}
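For reference, here is a minimal usage sketch (not from the original post) that feeds a frame through this Reads; the 4-element array is shaped to match what the Reads above expects (indices 0 through 3), and it assumes the implicit ocppCallRequestReads2 is in scope:

import play.api.libs.json.{JsError, JsSuccess, Json}

// A well-formed frame: [messageTypeId, messageId, actionName, payload]
val frame = """[2, "19223201", "BootNotification", {"reason": "PowerUp"}]"""

Json.parse(frame).validate[OCPPCallRequest] match {
  case JsSuccess(request, _) => println(s"Parsed call request: $request")
  case JsError(errors)       => println(Json.prettyPrint(JsError.toJson(errors)))
}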
And the actual JSON:
[
  "19223201",
  "BootNotification",
  {
    "reason": "PowerUp",
    "chargingStation": {
      "serialNumber": "12345",
      "model": "",
      "vendorName": "",
      "firmwareVersion": "",
      "modem": {
        "iccid": "",
        "imsi": ""
      }
    }
  }
]
What I'm trying to do is validate the incoming JSON and propagate any errors to the client. When I run my example, I'm not able to pass the error back as a JSON response; instead, the WebSocket endpoint closes with an internal server error, as seen below:
[info] a.a.CoordinatedShutdown - Running CoordinatedShutdown with reason [ApplicationStoppedReason]
[info] a.e.s.Slf4jLogger - Slf4jLogger started
[info] play.api.Play - Application started (Dev) (no global state)
****************+ ERRORS
Array index out of bounds in ["19223201","BootNotification",{"reason":"PowerUp","chargingStation":{"serialNumber":"12345","model":"","vendorName":"","firmwareVersion":"","modem":{"iccid":"","imsi":""}}}]
****************+ ERRORS
[error] p.c.s.c.WebSocketFlowHandler - WebSocket flow threw exception
java.lang.ClassCastException: scala.Tuple2 cannot be cast to play.api.libs.json.JsValue
at akka.stream.impl.fusing.Map$$anon$1.onPush(Ops.scala:52)
at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:542)
at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:496)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:390)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:650)
at akka.stream.impl.fusing.ActorGraphInterpreter$SimpleBoundaryEvent.execute(ActorGraphInterpreter.scala:61)
at akka.stream.impl.fusing.ActorGraphInterpreter$SimpleBoundaryEvent.execute$(ActorGraphInterpreter.scala:57)
at akka.stream.impl.fusing.ActorGraphInterpreter$BatchingActorInputBoundary$OnNext.execute(ActorGraphInterpreter.scala:104)
at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:625)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:800)
Any clues as to how I can keep the WebSocket endpoint from crashing with an exception? What happens now is that the connection gets closed, and I do not want that.
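One possible direction, sketched under assumptions rather than as a confirmed fix: keep the frame transformer at JsValue and run the validation inside the actor, so an invalid payload is answered with an error frame instead of failing the flow and closing the socket. ProvisioningActor and OCPPCallRequest come from the code above; the out parameter name and the error-reply shape are assumptions:

import akka.actor.{Actor, ActorRef}
import play.api.libs.json.{JsError, JsSuccess, JsValue, Json}

// Sketch only: assumes the implicit Reads[OCPPCallRequest] from above is in scope.
class ProvisioningActor(out: ActorRef) extends Actor {
  override def receive: Receive = {
    case js: JsValue =>
      js.validate[OCPPCallRequest] match {
        case JsSuccess(request, _) =>
          println(s"Received call request: $request") // handle the request here
        case JsError(errors) =>
          // Reply with an error frame instead of letting the stream fail.
          out ! Json.obj("errors" -> JsError.toJson(errors))
      }
  }
}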

openApiGenerate doesn't generate Models

I use the Gradle plugin
id "org.openapi.generator" version "5.1.1"
and this task configuration in my Gradle build:
openApiGenerate {
    generatorName = "kotlin"
    inputSpec = "$rootDir/src/main/resources/META-INF/resources/API.v1.yaml".toString()
    outputDir = "$rootDir/generated".toString()
    apiPackage = "org.openapi.example.api"
    invokerPackage = "org.openapi.example.invoker"
    modelPackage = "org.openapi.example.model"
    configOptions = [
        dateLibrary: "java8"
    ]
}
When I call gradlew openApiGenerate, I get:
Execution failed for task ':openApiGenerate'.
> There were issues with the specification. The option can be disabled via validateSpec (Maven/Gradle) or --skip-validate-spec (CLI).
| Error count: 244, Warning count: 0
Errors:
-attribute paths.'/v1/pet/{name}/home/'(post).responses.406.content is unexpected
-attribute paths.'/v1/pet/{name}/home'(post).responses.400.content is unexpected
..........
But if I use the CLI version, my YAML generates both Models and API.
Why does it throw these exceptions?
Unfortunately, I cannot add even a part of my YAML here, because the site told me "it looks like you added a lot of code".
If I add
skipValidateSpec = true
I get only the API, without Models. Why?

Glue_version and python_version not working in terraform

Hello everyone,
I am using Terraform to create a Glue job. AWS Glue now supports running ETL jobs on Apache Spark 2.4.3 (with Python 3).
I want to use this feature, but whenever I make the changes it throws an error.
I am using
aws-cli/1.16.184.
Terraform v0.12.6
aws provider 2.29
resource "aws_glue_job" "aws_glue_job_foo" {
glue_version = "1"
name = "job-name"
description = "job-desc"
role_arn = data.aws_iam_role.aws_glue_iam_role.arn
max_capacity = 1
max_retries = 1
connections = [aws_glue_connection.connection.name]
timeout = 5
command {
name = "pythonshell"
script_location = "s3://bucket/script.py"
python_version = "3"
}
default_arguments = {
"--job-language" = "python"
"--ENV" = "env"
"--ROLE_ARN" = data.aws_iam_role.aws_glue_iam_role.arn
}
execution_property {
max_concurrent_runs = 1
}
}
But it throws this error:
Error: Unsupported argument
An argument named "glue_version" is not expected here.
This Terraform issue has been resolved.
Terraform aws_glue_job now accepts a glue_version argument.
Previous Answer
With or without python_version in the Terraform command block, I must go to the AWS console to edit the job and set "Glue version". My job fails without this manual step.
Workaround #1
This issue has been reported and debated and includes a workaround.
resource "aws_glue_job" "etl" {
name = "${var.job_name}"
role_arn = "${var.iam_role_arn}"
command {
script_location = "s3://${var.bucket_name}/${aws_s3_bucket_object.script.key}"
}
default_arguments = {
"--enable-metrics" = ""
"--job-language" = "python"
"--TempDir" = "s3://${var.bucket_name}/TEMP"
}
# Manually set python 3 and glue 1.0
provisioner "local-exec" {
command = "aws glue update-job --job-name ${var.job_name} --job-update 'Command={ScriptLocation=s3://${var.bucket_name}/${aws_s3_bucket_object.script.key},PythonVersion=3,Name=glueetl},GlueVersion=1.0,Role=${var.iam_role_arn},DefaultArguments={--enable-metrics=\"\",--job-language=python,--TempDir=\"s3://${var.bucket_name}/TEMP\"}'"
}
}
Workaround #2
Here is a different workaround.
resource "aws_cloudformation_stack" "network" {
name = "${local.name}-glue-job"
template_body = <<STACK
{
"Resources" : {
"MyJob": {
"Type": "AWS::Glue::Job",
"Properties": {
"Command": {
"Name": "glueetl",
"ScriptLocation": "s3://${local.bucket_name}/jobs/${var.job}"
},
"ExecutionProperty": {
"MaxConcurrentRuns": 2
},
"MaxRetries": 0,
"Name": "${local.name}",
"Role": "${var.role}"
}
}
}
}
STACK
}
This has been released in version 2.34.0 of the Terraform AWS provider.
It looks like Terraform uses python_version instead of glue_version.
By using python_version = "3", you should get Glue version 1.0. Glue version 0.9 doesn't support Python 3.

How to define config file variables?

I have a configuration file with:
{path, "/mnt/test/"}.
{name, "Joe"}.
The path and the name can be changed by a user. As far as I know, there is a way to store those variables in a module by using file:consult/1 inside
-define(VARIABLE, <parsing of the config file>).
Are there better ways to read a config file when the module starts, without putting a parsing function in -define? (As far as I know, according to Erlang developers, it is not good practice to put complicated functions in -define.)
If you only need to read the config when you start the application, you can use the application config file, which is referenced in rebar.config:
{profiles, [
    {local, [
        {relx, [
            {dev_mode, false},
            {include_erts, true},
            {include_src, false},
            {vm_args, "config/local/vm.args"},
            {sys_config, "config/local/yourapplication.config"}
        ]}
    ]}
]}.
More info about this here: rebar3 configuration
The next step is to create yourapplication.config and store it in your application folder, e.g. /app/config/local/yourapplication.config.
This configuration should have a structure like this example:
[
    {yourapplicationname, [
        {path, "/mnt/test/"},
        {name, "Joe"}
    ]}
].
When your application is started, you can get the config data with:
{ok, "/mnt/test/"} = application:get_env(yourapplicationname, path)
{ok, "Joe"} = application:get_env(yourapplicationname, name)
Now you can -define these variables like:
-define(VARIABLE,
    case application:get_env(yourapplicationname, path) of
        {ok, Data} -> Data;
        _ -> undefined
    end
).

How do I tell my dancer app to serialize objects in its cache?

I'm using a CHI interface to memcached (or File in devel) in my Dancer app, but I'm getting an error in the serializer when I cache an object. I have the following in my dancer config:
engines:
  JSON:
    allow_blessed: 1
    convert_blessed: 1
What else do I need?
Error message:
Error while loading bin/app.pl: encountered object 'C3M::CMF=HASH(0x3ef8aa8)', but neither allow_blessed nor convert_blessed settings are enabled at /usr/lib/perl5/site_perl/5.10/CHI/Serializer/JSON.pm line 19.
CHI::Serializer::JSON doesn't use the same serializer as Dancer::Serializer::JSON. Dancer::Serializer::JSON uses setting('engines') in config.yml, but there's no way to send configuration options to CHI::Serializer::JSON.
Workaround:
use JSON;
use CHI::Serializer::JSON;

# Build a JSON encoder that tolerates blessed objects.
my $JSON = JSON->new->utf8->canonical;
$JSON->allow_blessed(1);
$JSON->convert_blessed(1);

# Override CHI::Serializer::JSON's (de)serialization to use it.
*CHI::Serializer::JSON::serialize = sub { $JSON->encode( $_[1] ) };
*CHI::Serializer::JSON::deserialize = sub { $JSON->decode( $_[1] ) };