I'm running my project in WildFly in a container on OpenShift, and I'm trying to add the hazelcast-kubernetes plugin (from https://github.com/hazelcast/hazelcast-kubernetes) to my project for DNS lookup. It seems the plugin's DNS lookup doesn't work at all.
In the OpenShift logs it shows that the Kubernetes Discovery SPI is activated and WildFly starts successfully.
I've already set up my Hazelcast config following the instructions:
this.config.setProperty("hazelcast.discovery.enabled", "true");
this.config.setProperty("hazelcast.rest.enabled","true");
final JoinConfig joinConfig = networkConfig.getJoin();
joinConfig.getMulticastConfig().setEnabled(false);
joinConfig.getTcpIpConfig().setEnabled(false);
final HazelcastKubernetesDiscoveryStrategyFactory factory = new HazelcastKubernetesDiscoveryStrategyFactory();
final DiscoveryStrategyConfig strategyConfig = new DiscoveryStrategyConfig(factory);
strategyConfig.addProperty("service-dns", "kubernetes.default.svc.cluster.local");
strategyConfig.addProperty("service-dns-timeout", "10");
Did I miss something? Any advice would be appreciated.
Related
I have a project that includes 3 Windows services. The services worked very well, but for business needs we had to move from Windows Server 2008 to Windows Server 2019.
The issue I faced is:
When I install the services, they don't start and return the following error in the Event Viewer:
Service cannot be started. System.Security.SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security, State.
I searched for this issue and found a lot of answers (like this), but they didn't help me.
I installed the services from the command line as administrator using InstallUtil.exe.
Then I opened the Registry Editor and gave the NETWORK SERVICE user full control on the paths below:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Security
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog
Then I checked the subkey of the services under the path:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application
Also, it exists.
My code related to EventLog:
public class EventViewer
{
    public static void WriteEvent(string ServiceName, string msg, EventLogEntryType _EventLogEntryType)
    {
        EventLog eventLog = new EventLog();
        eventLog.Source = ServiceName;
        eventLog.Log = "Application";
        ((System.ComponentModel.ISupportInitialize)(eventLog)).BeginInit();
        // Create the event source on first use if it does not exist yet
        if (!EventLog.SourceExists(eventLog.Source))
        {
            EventLog.CreateEventSource(eventLog.Source, eventLog.Log);
        }
        ((System.ComponentModel.ISupportInitialize)(eventLog)).EndInit();
        eventLog.WriteEntry(msg, _EventLogEntryType);
    }
}
The Event Viewer gives me the line of the exception, and it refers to:
((System.ComponentModel.ISupportInitialize)(eventLog)).BeginInit();
I tried to debug the service on my machine using Visual Studio 2019, but it gives me the same error, and the service won't start so I can't debug it using "Attach to Process".
I think the issue occurs while scanning the registry to check whether the event source exists.
https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.eventlog.createeventsource?view=dotnet-plat-ext-6.0
As per Microsoft, the account requires administrative privileges to perform this task.
I have also seen that there is a new registry key under 'EventLog' called 'State' in Windows Server 2019, which has more restricted access compared to the other keys.
Debug with Process Monitor and see if you are getting an access-denied error on that key.
I'm trying to expose a Hazelcast cache with an embedded setup in my service. It works fine locally, and it is even able to add members if I run the application on multiple ports. Now the same service is deployed on OCP (OpenShift) with 2 instances, but the Hazelcast members are not added to each other, so the cache is not updated across the pods. Below is the config code I used for Hazelcast.
@Bean
public Config hazelcastConfig() {
    return new Config().setInstanceName("hazelcast-instance")
            .addMapConfig(new MapConfig().setName("mycache")
                    .setMaxSizeConfig(new MaxSizeConfig(300, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
                    .setEvictionPolicy(EvictionPolicy.LRU)
                    .setTimeToLiveSeconds(2000));
}
Please let me know if any additional configuration needs to be added so that the members can form a cluster in OpenShift.
Please read the following resources:
Hazelcast Kubernetes/OpenShift plugin documentation
Hazelcast Kubernetes Code Samples
Hazelcast OpenShift Code Sample
This should solve all your issues. The simplest configuration you need to add (assuming you added the RBAC) is the following one:
config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getKubernetesConfig().setEnabled(true);
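Applied to the @Bean from the question, this might look roughly as follows (a sketch; it assumes a Hazelcast version where JoinConfig.getKubernetesConfig() is available, i.e. 3.12 or newer, and that the Kubernetes discovery plugin is on the classpath):
@Bean
public Config hazelcastConfig() {
    Config config = new Config().setInstanceName("hazelcast-instance")
            .addMapConfig(new MapConfig().setName("mycache")
                    .setMaxSizeConfig(new MaxSizeConfig(300, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
                    .setEvictionPolicy(EvictionPolicy.LRU)
                    .setTimeToLiveSeconds(2000));
    // Disable multicast and TCP/IP joins and let the Kubernetes plugin discover the members
    JoinConfig join = config.getNetworkConfig().getJoin();
    join.getMulticastConfig().setEnabled(false);
    join.getTcpIpConfig().setEnabled(false);
    join.getKubernetesConfig().setEnabled(true);
    return config;
}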
I have a Django application running on OpenShift. From the OpenShift server I move a file to a private server. I can do this by setting hostkeys to None and using a password; however, that password will change every month, so I need to use SSH keys.
I have the following on the private server: known_hosts, id_rsa, id_rsa.pub.
When I try to connect from OpenShift I receive the error "No Known Hostkeys."
I know that since this is a Dockerized application running in the cloud this might be a bit tricky to answer, but I could really use some help.
Thank you,
I have attempted to put the id_rsa.pub from the private server into a file and use hostkeys.load(id_rsa.pub) and then connect without a password.
Setup
/opt/app-root/src/.ssh/known_hosts - I have the known_hosts from the private server
/views.py -
id_rsa_pub = "known_hosts"
id_rsa_pub = settings.STATICFILES_DIRS[0] + '/' + id_rsa_pub
known_hosts = '/opt/app-root/src/.ssh/known_hosts'
cnopts = pysftp.CnOpts()
print("id_rsa_pub below:")
print(id_rsa_pub)
cnopts.hostkeys.load(known_hosts)
with pysftp.Connection(host=host, username=username,
private_key=id_rsa_pub, cnopts=cnopts) as srv:
id_rsa_pub is located in static files
The error is "pysftp.exceptions.HostKeysException: No Host Keys Found"
Alright, this was quick.
I never solved the hostkey issue; however, if you use private_key=id_rsa_pub and you have a path to it on OpenShift somewhere in your src, the connection will go through. Make sure to set cnopts.hostkeys = None.
Thanks
I have an external MySQL server that's set up and working fine. I created a database connection in Eclipse and can view the database in the Data Source Explorer tab.
Now, I have a servlet that needs to access that database. How do I do it? Is there a way to reference that database connection created in the data source explorer, or do I have to define everything twice?
Also, what's the best way to open the connection? I've got the mysql-connector-java-5.1.11-bin.jar file included, and I've found two methods that work:
MysqlDataSource d = new MysqlDataSource();
d.setUser("user");
d.setPassword("pass");
d.setServerName("hostname.com");
d.setDatabaseName("db");
Connection c = d.getConnection();
and
Connection c = DriverManager.getConnection("jdbc:mysql://hostname.com/db","user","pass");
Neither is optimal, because first of all, they both use hard-coded strings for everything. This is a Java EE web app project, so is there a good place to put connection data? Or is there a way to forgo all that and just use the connection in the data source explorer?
A common practice is to configure this as a DataSource in the servlet container in question. It will provide you with connection pooling facilities, which will greatly improve performance. Another common practice is to externalize the raw settings in some configuration file that is placed in the classpath.
In case you're using Tomcat as the servlet container, you need to configure the data source as per its JNDI documentation. You'll see that there are several ways. The easiest way is to create a /META-INF/context.xml in the web content of your dynamic web project (to be clear, the /META-INF is at the same level as the /WEB-INF of the webapp) and fill it with something like:
<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <Resource
        name="jdbc/db" type="javax.sql.DataSource"
        maxActive="100" maxIdle="30" maxWait="10000"
        url="jdbc:mysql://hostname.com/db"
        driverClassName="com.mysql.jdbc.Driver"
        username="user" password="pass"
    />
</Context>
This roughly means that the Tomcat server should create a data source with the JNDI name jdbc/db, with a maximum of 100 active connections, a maximum of 30 idle connections and a maximum wait time of 10000 milliseconds before a connection should be returned from your application (actually: closed by your application, so your application has 10 seconds between acquiring the connection and closing the connection). The remainder of the settings should be familiar and self-explanatory enough to you; those are the JDBC settings.
Finally in your web project, edit the file /WEB-INF/web.xml to add the following entry:
<resource-env-ref>
    <resource-env-ref-name>jdbc/db</resource-env-ref-name>
    <resource-env-ref-type>javax.sql.DataSource</resource-env-ref-type>
</resource-env-ref>
This roughly means that the webapplication should use the server-provided datasource with the name jdbc/db.
Then change your connection manager to something like this:
private DataSource dataSource;
public Database(String jndiname) {
    try {
        // Look up the container-managed data source in JNDI
        dataSource = (DataSource) new InitialContext().lookup("java:comp/env/" + jndiname);
    } catch (NamingException e) {
        // Handle error that it's not configured in JNDI.
        throw new IllegalStateException(jndiname + " is missing in JNDI!", e);
    }
}
public Connection getConnection() throws SQLException {
    return dataSource.getConnection();
}
...and replace all Class.forName(driver) calls with new Database("jdbc/db"), and replace all DriverManager.getConnection() calls with database.getConnection(). You can, if necessary, obtain the value jdbc/db from some config file (a properties file?).
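For example, a DAO method built on top of that Database class might look roughly like this (a sketch; the person table and name column are made-up examples):
private Database database = new Database("jdbc/db");
public List<String> listNames() throws SQLException {
    List<String> names = new ArrayList<>();
    // try-with-resources closes the ResultSet, PreparedStatement and Connection,
    // which returns the pooled connection to the pool
    try (Connection connection = database.getConnection();
         PreparedStatement statement = connection.prepareStatement("SELECT name FROM person");
         ResultSet resultSet = statement.executeQuery()) {
        while (resultSet.next()) {
            names.add(resultSet.getString("name"));
        }
    }
    return names;
}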
Alternatively, inject the DataSource via the @Resource annotation inside a container-managed artifact, such as a @WebServlet servlet class:
@Resource(name="jdbc/db")
private DataSource dataSource;
That should be it. Just deploy your webapplication with the above changes and run it. Don't forget to place the database JDBC driver in the Tomcat/lib or to add its path to the shared.loader property of Tomcat/conf/catalina.properties, because the responsibility of loading the JDBC driver is now moved from the webapplication to the server. For more hints and other basic JDBC/JNDI examples you may find this article useful as well.
See also:
How to install JDBC driver in Eclipse web project without facing java.lang.ClassNotFoundexception
Where do I have to place the JDBC driver for Tomcat's connection pool?
Is it safe to use a static java.sql.Connection instance in a multithreaded system?
Show JDBC ResultSet in HTML in JSP page using MVC and DAO pattern
How to retrieve and display images from a database in a JSP page?
You could set up a data source in whatever app server you're deploying your WAR to and fetch a reference to it with JNDI. Or you could package your WAR in an EAR and define the data source in the EAR's data-sources.xml file (and fetch a reference to it with JNDI).
I am working on a Notification Service using the IBM MQ messaging provider in a JBoss EAP 6.1 environment. I am successfully able to send messages via the MQ JCA provider rar, i.e. the wmq.jmsra.rar file. However, on the consumer part my current configuration looks like this:
@MessageDriven(
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "F2.QUEUE"),
        @ActivationConfigProperty(propertyName = "providerAdapterJNDI", propertyValue = "java:jboss/jms/TopicFactory"),
        @ActivationConfigProperty(propertyName = "queueManager", propertyValue = "TOPIC.MANAGER"),
        @ActivationConfigProperty(propertyName = "hostName", propertyValue = "10.239.217.242"),
        @ActivationConfigProperty(propertyName = "userName", propertyValue = "root"),
        @ActivationConfigProperty(propertyName = "channel", propertyValue = "TOPIC.CHANNEL"),
        @ActivationConfigProperty(propertyName = "port", propertyValue = "1422")
    })
My problem is that the consumer of this service does not want to add any port number, hostName, or queueManager properties in these beans. They also do not want to use ejb-jar.xml to externalize these configs. I have researched and found that we can add a domain IBM Message Driven Bean, but with no success. Any suggestions on what I can do here to externalize all these configurations?
EDIT: The JCA resource adapter is deployed at the consumer end, in case that makes it any easier.
Thanks
You can actually externalize an MDB's activation spec properties to the server configuration file.
Create the ejb-jar.xml file, but do not put the actual values in the file; use a property placeholder:
<activation-config-property>
    <activation-config-property-name>hostName</activation-config-property-name>
    <activation-config-property-value>${wmq.host}</activation-config-property-value>
</activation-config-property>
Do this for all of the desired properties.
Ensure that property replacement for Java EE spec files (ejb-jar.xml, in this case) is enabled in the server configuration file:
<subsystem xmlns="urn:jboss:domain:ee:1.2">
    <spec-descriptor-property-replacement>true</spec-descriptor-property-replacement>
    ...
</subsystem>
Then, in the server configuration file, provide values for your properties:
<system-properties>
    <property name="wmq.host" value="10.0.0.150"/>
    ...
</system-properties>
Once your MDBs are packaged, you will not need to change any of the files in the MDB jar - just provide the properties in the server configuration.
You can avoid adding the host name, port number and so on in the MDB; you only need to define the destinationType in the MDB, and the rest you can configure in your application server, such as the Activation Specification, Queues and Queue Connection Factories.
I have done the same thing, but I used IBM WebSphere Application Server.
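As an illustration, a trimmed-down MDB might then look roughly like this (a sketch; the class name is made up, and it assumes the activation specification carrying the host, port, channel and queue manager details is defined and bound on the application server):
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "F2.QUEUE")
})
public class NotificationConsumerBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // The connection details (hostName, port, channel, queueManager) come from
        // the server-side activation specification, not from the code.
    }
}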