Akka remote configuration - unnamed actors

I'm a newbie Akka developer and I've just started with remoting. I'm always seeing this type of configuration:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    deployment {
      "/mainRepository/*" {
        remote = "akka.tcp://MqttRemote@127.0.0.1:2553"
      }
    }
  }
  remote {
    netty.tcp {
      hostname = "127.0.0.1"
    }
  }
  remote.netty.tcp.port = 2553
}
where actors are named, for example "mainRepository", but what if I want to create unnamed remote actors? What should I specify in the configuration? Or can I accomplish that by just not setting the name parameter when requesting a new actor from the ActorSystem?
Also, what does the "*" character mean? And where can I learn more about the remote configuration (aside from akka.io)?

What this config is saying is that any actor instances created under the path /user/mainRepository/* (that is, any children of the actor instance bound to the name /user/mainRepository) should not be deployed into the local ActorSystem, but should instead be deployed via the remote daemon of the remote system MqttRemote@127.0.0.1:2553. So if I do something like:
context.actorOf(Props[MyActor], "foo")
Where context is the ActorContext for my mainRepository actor instance, then that child will be deployed remotely.
The * is a wildcard that lets you be more general with what actors would be deployed remotely. If the config was this:
"/mainRepository/foo" {
remote = "akka.tcp://MqttRemote#127.0.0.1:2553"
}
Then only the child bound to the name foo would be remotely deployed. Any other children of my mainRepository actor would be deployed into the local ActorSystem.
So using this approach, with the wildcard, you can indeed create unnamed children and have them deployed remotely, as long as their parent is properly named and that name is configured (as in this example) to deploy its children remotely.
You can also programmatically deploy actor instances remotely if using this config driven approach does not appeal to you. That would look something like this:
import akka.actor.{ Props, Deploy, Address, AddressFromURIString }
import akka.remote.RemoteScope

val address = Address("akka.tcp", "RemoteSystem", "1.2.3.4", 1234)
val ref = system.actorOf(Props[MyActor].
  withDeploy(Deploy(scope = RemoteScope(address))))
In the above code, an instance of MyActor will be deployed on the remote node RemoteSystem@1.2.3.4:1234.
For more info, you can consult the Akka Remoting docs.

Related

MySQL Server - Puppet Labs module and Oracle Linux 7

With the popular MySQL server module from Puppet Labs, it sets $provider to mariadb on Oracle Linux 7.x, which causes issues if I am not using Maria, but instead using Percona. The issue is in params.pp. I was wondering if there is a way to force the $provider to be mysql. I could try creating a symlink to mariadb.log, to get around this issue but it is ugly...
Here's the code from params.pp:
case $::osfamily {
  'RedHat': {
    case $::operatingsystem {
      'Fedora': {
        if versioncmp($::operatingsystemrelease, '19') >= 0 or $::operatingsystemrelease == 'Rawhide' {
          $provider = 'mariadb'
        } else {
          $provider = 'mysql'
        }
      }
      /^(RedHat|CentOS|Scientific|OracleLinux)$/: {
        if versioncmp($::operatingsystemmajrelease, '7') >= 0 {
          $provider = 'mariadb'
        } else {
          $provider = 'mysql'
        }
      }
      default: {
        $provider = 'mysql'
      }
    }
Source: https://github.com/puppetlabs/puppetlabs-mysql/blob/master/manifests/params.pp
Error: Could not set 'present' on ensure: No such file or directory # rb_sysopen - /var/log/mariadb/mariadb.log at /[redacted]/modules/mysql/manifests/server/installdb.pp:25
Basically, I am looking for a graceful workaround via Puppet overrides, but I'm not experienced enough to know how to implement it. :(
Thanks!
That you are trying to work through a provider suggests that you are approaching this through resources, such as mysql::db, but that's never going to work if the server is not configured to match. The $provider variable you highlight in the question is both set and used only inside class mysql::params, and only for certain OS families even there. It is an ordinary variable belonging to the class, not a class parameter, and, being undocumented, it should be considered private to that class. In any event, no, Puppet provides no way to override that variable's value without modifying the module.
It is class mysql::server that provides the avenue for configuring for an alternative MySQL fork. It offers numerous parameters by which you can configure all the details, but no once-for-all mechanism for setting a different MySQL personality. I think you will find that if you do that correctly then all the resource types will just work. In any case, you should not be declaring any resources of private resource types, nor overriding the properties of resources you do not declare.
An example covering almost the exact use case you've asked about is presented in the module docs as Install Percona Server on CentOS. Note that I have replicated the heading from the docs, but the word "Install" in it is a bit misleading. That should not only install the server, but set the stage for all the module's resource types to manage it.
I gather that you would prefer a simpler way to configure for a non-default fork, but the module does not presently offer one.

Injecting DbContext into FileProvider in ASP.NET Core

I am trying to load some of the views from the database as described here. So I want to use EF Core in the file provider.
RazorViewEngineOptions has a FileProviders property that you can add your file provider to. The problem is that you have to give it an instance of the file provider, so you'll need to instantiate all of the file provider's dependencies right there in Startup's ConfigureServices method.
Currently I inject an instance of IServiceProvider into the Configure method of Startup. Then I store the instance in a field (called _serviceProvider):
IServiceProvider _serviceProvider;

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IServiceProvider provider)
{
    _serviceProvider = provider;
    ...
}
Then in ConfigureServices I use that field to instantiate the UIDbContext.
services.Configure<RazorViewEngineOptions>(options =>
{
    var fileProvider = new DbFileProvider(_serviceProvider.GetService<UIDbContext>());
    options.FileProviders.Add(fileProvider);
});
Is there any better way to inject the UIDbContext into the DbFileProvider constructor, or any way to instantiate a UIDbContext inside DbFileProvider without IServiceProvider?
You don't want to use DbContext as a file provider source the way you did.
DbContext isn't thread-safe, so it won't work if you have one single DbContext instance for the whole provider: multiple requests could call the DbContext and its operations at the same time, resulting in an exception when two queries are executed in parallel.
You'd have to instantiate a connection (like in the linked article) or DbContext per IFileInfo/IDirectoryContents instance.
DbContextOptions<UIDbContext> should be registered as a singleton, so you can resolve it once inside Configure without any issues and pass it to your provider.
Alternatively you can also use DbContextOptionsBuilder to build a DbContextOptions<T>, but then you have to repeat the configuration you did inside AddDbContext (i.e. .UseSqlServer()).
However, that can be useful, as it allows you to use different settings (e.g. changing how includes, errors, etc. are logged).

How to check ActiveMQ queues in unit test using JUnit Rule with EmbeddedActiveMQBroker

I created an integration test (based on Apache Camel and Blueprint) that sends some messages to an ActiveMQ service on my machine.
Via the admin web interface I can check whether my messages arrived. To decouple from a locally running ActiveMQ I am now using the EmbeddedActiveMQBroker with a JUnit Rule (followed the instructions from here):
@Rule
public EmbeddedActiveMQBroker broker = new EmbeddedActiveMQBroker() {
    @Override
    protected void configure() {
        try {
            this.getBrokerService().addConnector("tcp://localhost:61616");
        } catch (Exception e) {
            // noop, test should fail
        }
    }
};
The test works fine as before.
But: is there a way to check the number of (queued) messages for a given queue? The test sends messages to the queue "q".
Your EmbeddedActiveMQBroker instance wraps an ActiveMQ BrokerService object, which is the real embedded ActiveMQ broker. Because you have access to that through the EmbeddedActiveMQBroker instance, you also have access to all the stats maintained by the broker via its AdminView (broker.getBrokerService().getAdminView()).
From there you can get all sorts of useful info, like the number of subscriptions, the number of Queues, etc. All this data is kept in the broker's JMX management context tree, so standard JMX applies. One easy way to get info on the number of messages in a Queue is to look up the Queue in the broker's management context using code similar to the following:
// For this example the broker name is assumed to be "localhost";
// brokerService is the underlying BrokerService (e.g. broker.getBrokerService() when using the rule above)
protected QueueViewMBean getProxyToQueue(String name) throws MalformedObjectNameException, JMSException {
    ObjectName queueViewMBeanName = new ObjectName(
        "org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=" + name);
    QueueViewMBean proxy = (QueueViewMBean) brokerService.getManagementContext()
        .newProxyInstance(queueViewMBeanName, QueueViewMBean.class, true);
    return proxy;
}
From there you can use the QueueViewMBean to see what's in the Queue:
QueueViewMBean queueView = getProxyToQueue("myQueue");
LOG.info("Number of messages in my Queue:{}", queueView.getQueueSize());
It looks as though the current implementation disables JMX by default, which is unfortunate but can be worked around. You have to give the embedded broker instance a configuration URI, which is either a string containing the connector to add or an xbean configuration file.
One option would be to do something along these lines (note the useJmx=true):
@Rule
public EmbeddedActiveMQBroker broker = new EmbeddedActiveMQBroker("broker:(tcp://0.0.0.0:0)/localhost?useJmx=true&persistent=false");
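To tie the pieces together, here is a rough sketch of what a complete test could look like with that rule. Everything below is illustrative: the class and method names, the expected count of 3, and the queue name "q" (taken from the question) are placeholders, and the Camel route under test still has to actually send the messages.

import javax.management.ObjectName;

import org.apache.activemq.broker.jmx.QueueViewMBean;
import org.apache.activemq.junit.EmbeddedActiveMQBroker;
import org.junit.Rule;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class QueueDepthTest {

    @Rule
    public EmbeddedActiveMQBroker broker =
            new EmbeddedActiveMQBroker("broker:(tcp://0.0.0.0:0)/localhost?useJmx=true&persistent=false");

    @Test
    public void messagesShouldArriveOnQueueQ() throws Exception {
        // ... exercise the Camel route here so that messages end up on queue "q" ...

        // Build the ObjectName from the actual broker name instead of hard-coding it
        ObjectName queueName = new ObjectName(
                "org.apache.activemq:type=Broker,brokerName="
                + broker.getBrokerService().getBrokerName()
                + ",destinationType=Queue,destinationName=q");

        // Ask the broker's JMX management context for a proxy to the queue MBean
        QueueViewMBean queueView = (QueueViewMBean) broker.getBrokerService()
                .getManagementContext()
                .newProxyInstance(queueName, QueueViewMBean.class, true);

        // Replace 3 with however many messages the route is expected to leave on "q"
        assertEquals(3, queueView.getQueueSize());
    }
}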

MySql pooled datasource in standalone JAVA app (no J2EE container, no JNDI, no TOMCAT etc.)

I've been reading dozens of topics here with no real enlightenment: I'm running a standalone Java app on a Synology NAS with Java 8. I also installed MariaDB on the same NAS. So everything is local.
I am able to set up a datasource and get a connection, but I would like to be able to access it from any instance of any of my classes / threads for connection pooling. Everything seems to show that I would need JNDI. But I don't have a J2EE container and this seems overkill for my need. Idem for developing my own implementation of JNDI.
I've seen replies to similar questions where people suggest c3p0. But this is just another datasource implementation. I don't think it solves the standalone app issue with no way to access the datasource from anywhere in the code:
How to retrieve DB connection using DataSource without JNDI?
Is there another way to share a datasource across Java threads?
Can I pass the DataSource instance as a parameter to each thread, so they can get a connection when they need one?
Should I assign a given connection to each thread - also passed as a parameter? And in this case, never really close it properly?
Or should I really install something like Tomcat, JBoss, Jetty? Are they equivalent? Is there a super light J2EE container that could provide JNDI?
Thanks
Vincent
You could use the singleton pattern, like this for example:
public class DataSourceFactory {

    private static DataSource instance = null;

    private DataSourceFactory() { }

    public static synchronized DataSource getDataSource() {
        if (instance == null) {
            instance = // initialize your datasource
        }
        return instance;
    }
}
Then from any thread you can call DataSourceFactory.getDataSource() to get a reference to your DataSource object.
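As a concrete (but purely illustrative) sketch, the initialization step could use c3p0, the pooling library mentioned in the question. The JDBC URL, credentials and pool sizes below are placeholders, and it assumes the MariaDB (or MySQL) JDBC driver is on the classpath.

import javax.sql.DataSource;

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class DataSourceFactory {

    private static DataSource instance = null;

    private DataSourceFactory() { }

    public static synchronized DataSource getDataSource() {
        if (instance == null) {
            // Hypothetical pool setup; adjust the URL, credentials and pool sizes to your NAS
            ComboPooledDataSource pooled = new ComboPooledDataSource();
            pooled.setJdbcUrl("jdbc:mariadb://localhost:3306/mydb");
            pooled.setUser("myuser");
            pooled.setPassword("mypassword");
            pooled.setMinPoolSize(2);
            pooled.setMaxPoolSize(10);
            instance = pooled;
        }
        return instance;
    }
}

Each thread then simply calls DataSourceFactory.getDataSource().getConnection() when it needs a connection and closes that Connection when done; the pool reuses the underlying physical connections, so there is no need to hand a dedicated connection to each thread.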

Connection with mysql with netbeans for jsp [duplicate]

This question already has answers here:
The infamous java.sql.SQLException: No suitable driver found (21 answers)
How should I connect to JDBC database / datasource in a servlet based application? (2 answers)
Closed 2 years ago.
I am using the NetBeans 7.0.1 IDE for JSP/servlets.
I am trying to make a database connection for my project. I already downloaded the jar file 'mysql-connector-java-5.1.24-bin.jar', pasted it into the JDK's jre/lib dir, and also added it to my NetBeans project's libraries.
Then I created a servlet and wrote the following code:
import java.sql.*;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class tstJDBC extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        try {
            String dbURL = "jdbc:mysql://localhost:3306/murach";
            String username = "root";
            String password = "1234";
            Connection con2 = DriverManager.getConnection(dbURL, username, password);
            String query = "insert into tblUser1(firstname) values('shaon')";
            Statement statmnt = con2.createStatement();
            statmnt.executeUpdate(query);
        }
        catch (SQLException e)
        {
            e.printStackTrace();
        }
    }
}
But it can't establish the connection. From the line with Connection con2 it goes directly to the catch block, without executing the query.
Try loading the driver prior to using the DriverManager class.
try {
    String dbURL = "jdbc:mysql://localhost:3306/murach";
    String username = "root";
    String password = "1234";
    Class.forName("com.mysql.jdbc.Driver"); // load driver
    Connection con2 = DriverManager.getConnection(dbURL, username, password);
    String query = "insert into tblUser1(firstname) values('shaon')";
    Statement statmnt = con2.createStatement();
    statmnt.executeUpdate(query);
}
From O'Reilly:
Before you can use a driver, it must be registered with the JDBC
DriverManager. This is typically done by loading the driver class
using the Class.forName( ) method:
This is required since you have placed the library within the JDK/lib folder, which I'm assuming is loaded by a different ClassLoader than the one used by your application. Since different class loaders are used, the automatic registration performed by JDBC 4.0+ drivers will not take effect. You could instead try to place the driver jar file within the lib of your application server, which should use the same ClassLoader as your application. See: When is Class.forName needed when connecting to a database via JDBC in a web app?
Regarding Automatic Registration
In JDBC 4.0, we no longer need to explicitly load JDBC drivers using
Class.forName(). When the method getConnection is called, the
DriverManager will attempt to locate a suitable driver from among the
JDBC drivers that were loaded at initialization and those loaded
explicitly using the same class loader as the current application.
The DriverManager methods getConnection and getDrivers have been
enhanced to support the Java SE Service Provider mechanism (SPM).
According to SPM, a service is defined as a well-known set of
interfaces and abstract classes, and a service provider is a specific
implementation of a service. It also specifies that the service
provider configuration files are stored in the META-INF/services
directory. JDBC 4.0 drivers must include the file
META-INF/services/java.sql.Driver. This file contains the name of the
JDBC driver's implementation of java.sql.Driver. For example, to load
the JDBC driver to connect to a Apache Derby database, the
META-INF/services/java.sql.Driver file would contain the following
entry:
org.apache.derby.jdbc.EmbeddedDriver
Let's take a quick look at how we can use this new feature to load a
JDBC driver manager. The following listing shows the sample code that
we typically use to load the JDBC driver. Let's assume that we need to
connect to an Apache Derby database, since we will be using this in
the sample application explained later in the article:
Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
Connection conn =
DriverManager.getConnection(jdbcUrl, jdbcUser, jdbcPassword);
But in JDBC 4.0, we don't need the Class.forName() line. We can simply
call getConnection() to get the database connection.
Source
Regarding Service Loaders
For the purpose of loading, a service is represented by a single
type, that is, a single interface or abstract class. (A concrete class
can be used, but this is not recommended.) A provider of a given
service contains one or more concrete classes that extend this service
type with data and code specific to the provider. The provider class
is typically not the entire provider itself but rather a proxy which
contains enough information to decide whether the provider is able to
satisfy a particular request together with code that can create the
actual provider on demand. The details of provider classes tend to be
highly service-specific; no single class or interface could possibly
unify them, so no such type is defined here. The only requirement
enforced by this facility is that provider classes must have a
zero-argument constructor so that they can be instantiated during
loading.
A service provider is identified by placing a provider-configuration
file in the resource directory META-INF/services. The file's name is
the fully-qualified binary name of the service's type. The file
contains a list of fully-qualified binary names of concrete provider
classes, one per line. Space and tab characters surrounding each name,
as well as blank lines, are ignored. The comment character is '#'
('\u0023', NUMBER SIGN); on each line all characters following the
first comment character are ignored. The file must be encoded in
UTF-8.
If a particular concrete provider class is named in more than one
configuration file, or is named in the same configuration file more
than once, then the duplicates are ignored. The configuration file
naming a particular provider need not be in the same jar file or other
distribution unit as the provider itself. The provider must be
accessible from the same class loader that was initially queried to
locate the configuration file; note that this is not necessarily the
class loader from which the file was actually loaded.
Source
Just keep the "mysql-connector-java" jar in "C:\Program Files\Java\jdk1.7.0_25\jre\lib\ext".
Here "jdk1.7.0_25" is my version of the JDK; you may have a different version, but it will still have the subfolders "\jre\lib\ext" inside it.