I want to get the Hibernate connection info (like driver_class, url, username, password) into my logback file automatically, not by hand.
Below is my logback file.
<appender name="dbAppender" class="ch.qos.logback.classic.db.DBAppender">
<append>false</append> <!-- unfortunately this does not work -->
<connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
<driverClass>oracle.jdbc.driver.OracleDriver</driverClass>
<url>jdbc:oracle:thin:@localhost:1521:aaa</url>
<user>bbb</user>
<password>ccc</password>
</connectionSource>
</appender>
It works. But I want logback.xml to pick up the connection info (driver class, url, user, password) from hibernate.cfg.xml automatically. Any help would be appreciated.
Getting the values out of your hibernate.cfg.xml would probably require writing some code. However, if you externalise all your Hibernate properties into a properties file, you can import them into your logback configuration.
Hibernate supports a properties file called hibernate.properties in the root of the classpath, or you can pass an instance of java.util.Properties to Configuration.setProperties() when configuring your session factory.
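For example, a hibernate.properties at the classpath root might look like this (the values simply mirror the connection settings from the question):
hibernate.connection.driver_class=oracle.jdbc.driver.OracleDriver
hibernate.connection.url=jdbc:oracle:thin:@localhost:1521:aaa
hibernate.connection.username=bbb
hibernate.connection.password=ccc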
If you can get your hibernate properties into a properties file you can use it from logback like so:
With the properties file on the classpath:
<configuration>
<property resource="hibernate.properties" />
<appender name="dbAppender" class="ch.qos.logback.classic.db.DBAppender">
<append>false</append>
<connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
<driverClass>${hibernate.connection.driver_class}</driverClass>
<url>${hibernate.connection.url}</url>
<user>${hibernate.connection.username}</user>
<password>${hibernate.connection.password}</password>
</connectionSource>
</appender>
</configuration>
If your properties aren't on the classpath you can use <property file="path/to/some.properties" /> instead.
Related
I am trying to configure logback-based log masking for Apache Storm topologies.
When I try to place a logback.xml file inside Apache Storm's log4j2 directory and update the worker.xml and cluster.xml files, the Apache Storm nimbus and supervisors are unable to understand the logback-based keywords.
Error:
2022-10-02 16:31:51,671 Log4j2-TF-1-ConfiguratonFileWatcher-2 ERROR Unable to locate appender "A1" for logger config "root"
2022-10-02 16:32:51,681 Log4j2-TF-7-ConfiguratonFileWatcher-4 ERROR Error processing element appender ([configuration: null]): CLASS_NOT
Sample cluster.xml file:
<configuration monitorInterval="60" shutdownHook="disable">
<properties>
<property name="pattern">%msg%n</property>
</properties>
<import class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"/>
<import class="ch.qos.logback.core.FileAppender"/>
<FileAppender name="A1">
<file>logfilename.log</file>
<encoder>
<pattern>${pattern}</pattern>
</encoder>
</FileAppender>
<loggers>
<root level="info"> <!-- We log everything -->
<appender-ref ref="A1"/>
</root>
</loggers>
</configuration>
To the best of my knowledge, Apache Storm natively uses log4j2, as your log file also indicates. However, when I used log4j2 in Storm, I did not need to import any additional classes. You also do not seem to use these logback classes in the rest of your XML file, so have you tried simply removing those imports?
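If removing them alone is not enough, note that the appender in your sample still uses logback elements (encoder/pattern); a log4j2-native version of the same configuration might look like this (a sketch, untested against your Storm version):
<Configuration monitorInterval="60" shutdownHook="disable">
  <Properties>
    <!-- same pattern as in your cluster.xml -->
    <Property name="pattern">%msg%n</Property>
  </Properties>
  <Appenders>
    <!-- log4j2's File appender replaces the logback FileAppender/encoder pair -->
    <File name="A1" fileName="logfilename.log">
      <PatternLayout pattern="${pattern}"/>
    </File>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="A1"/>
    </Root>
  </Loggers>
</Configuration>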
I would like to configure a logging appender based on the environment: for example, while running in production I would like to configure an appender that sends logs to Elasticsearch, but in test or development mode this appender would not be enabled.
You can override the default logback config file by using the "logback.configurationFile" system property.
java -Dlogback.configurationFile=logback-prod.xml -jar your.jar
But if what you need is the ability to pick the config via an env variable, you can do this without using a third-party library:
Override the logback system property inside the main Micronaut class before calling Micronaut.run(), like the following.
import ch.qos.logback.classic.util.ContextInitializer;
import io.micronaut.runtime.Micronaut;
import jakarta.inject.Singleton; // javax.inject.Singleton on older Micronaut versions

@Singleton
public class MyMicronautApplication {
    public static void main(String[] args) {
        // select logback-<env>.xml before logback initialises
        var env = System.getenv("MICRONAUT_ENVIRONMENTS");
        if (env != null && !env.isEmpty()) {
            System.setProperty(ContextInitializer.CONFIG_FILE_PROPERTY, "logback-" + env + ".xml");
        }
        Micronaut.run(MyMicronautApplication.class);
    }
}
Create your custom env-based logback config files, like logback-dev.xml, and put them in the resources dir.
Then set the env var MICRONAUT_ENVIRONMENTS=dev according to your deployment logic.
Enjoy using logback-dev.xml, logback-prod.xml, logback-staging.xml, etc.
The workaround I found was using conditional expressions in logback. You will need the following dependency:
<!-- https://mvnrepository.com/artifact/org.codehaus.janino/janino -->
<dependency>
<groupId>org.codehaus.janino</groupId>
<artifactId>janino</artifactId>
<version>3.1.2</version>
</dependency>
Then in your logback.xml file you can use a conditional statement such as the following to select the appender you want based on a Micronaut profile. In my case I wanted to activate the STDOUT appender when running the application locally, but in any other environment, such as the dev or prod profiles, I wanted the RSYSLOG appender to be used instead.
<root level="info">
<if condition='property("MICRONAUT_ENVIRONMENTS").contains("local")'>
<then>
<appender-ref ref="STDOUT"/>
</then>
<else>
<appender-ref ref="RSYSLOG"/>
</else>
</if>
</root>
You can use conditional statements to configure other properties in your logback file.
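For example, a conditional property along the same lines (a sketch; the logLevel property name is made up for illustration):
<if condition='property("MICRONAUT_ENVIRONMENTS").contains("local")'>
  <then>
    <property name="logLevel" value="DEBUG"/>
  </then>
  <else>
    <property name="logLevel" value="INFO"/>
  </else>
</if>
<root level="${logLevel}">
  <appender-ref ref="STDOUT"/>
</root>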
As far as I understand, Micronaut doesn't have anything like Spring Boot's profile-specific logging configuration implemented. I think logback-production.xml (where production is the profile) doesn't work either: only logback.xml and logback-test.xml are supported.
I wasn't crazy about the idea of having multiple logback config files or pulling in another dependency (janino) to support this use-case.
You can also do this using environment variables.
In my logback.xml I defined two appenders, one for "DEV" and one for "PROD".
Then I dynamically select which appender to use via the LOG_TARGET environment variable. If the variable is not set, it defaults to "DEV".
<configuration>
<appender name="DEV" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%date{ISO8601} %-5level [%X{trace_id},%X{span_id}] [%thread] %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<appender name="PROD" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="net.logstash.logback.encoder.LogstashEncoder" />
</appender>
<appender name="OTEL" class="io.opentelemetry.instrumentation.logback.v1_0.OpenTelemetryAppender">
<appender-ref ref="${LOG_TARGET:-DEV}"/>
</appender>
<root level="info">
<appender-ref ref="OTEL"/>
</root>
</configuration>
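Selecting the production appender is then just a matter of setting the variable at launch, e.g. (the jar name is a placeholder):
LOG_TARGET=PROD java -jar your-app.jar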
I'm running Sonatype Nexus 3.15.0-01 and am a little stumped about how to override the default logback configs.
I created a file called 'logback-overrides.xml' in the 'nexus-data/etc/logback' folder containing the following:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
<file>/nexus-data/log/myApp.log</file>
<encoder>
<pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</pattern>
</encoder>
</appender>
<root level="debug">
<appender-ref ref="FILE" />
</root>
</configuration>
This is essentially just a simple bit of config that should cause logs to be written to '/nexus-data/log/myApp.log'. I restarted the server after adding this file, to confirm it would pick up the new configs.
However, when I check for that file, it's not present. What am I missing here?
I posted this same question on the Sonatype forums here. To sum up the answer I got there, it isn't possible to override the default logback config this way.
Possible workarounds are:
Create your own logback.xml file and build your own Docker image that extends Sonatype’s official image.
Create a volume mount for /opt/sonatype/nexus/etc/logback and customize the logback.xml on your host machine.
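For the second option, the mount might look like this (a sketch; the host path and image tag are placeholders to adjust to your setup):
docker run -d -p 8081:8081 \
  -v /path/on/host/logback:/opt/sonatype/nexus/etc/logback \
  sonatype/nexus3:3.15.0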
I am trying to deploy a Camel application which reads a CSV file and processes it. I am trying to use camel-bindy to unmarshal the CSV to a POJO.
The camel-bindy module was not available in JBoss EAP, so I have added it.
Camel Route:
<?xml version="1.0" encoding="ASCII"?>
<routes xmlns="http://camel.apache.org/schema/spring">
<route>
<from uri="switchyard://FileService" />
<log message="inside route" />
<doTry>
<split streaming="true">
<tokenize token="\n"/>
<unmarshal ref="bindyDataformat" >
<bindy classType="com.agcs.bih.prototypes.filetosca.Student" type="Csv"/>
</unmarshal>
<process ref="ProcessCSV"></process>
</split>
<doCatch>
<exception>java.lang.Exception</exception>
<log message="FileToScaRoute - message received: ${exception.message}" />
</doCatch>
</doTry>
</route>
</routes>
I am getting the below exception during deployment.
Caused by: java.lang.IllegalArgumentException: Data format 'bindy-csv' could not be created. Ensure that the data format is valid and the associated Camel component is present on the classpath
Attaching server.log.
Can you please help?
It sounds like you are possibly using JBoss FSW? Fuse 6.3 on EAP 6.3 includes camel-bindy, and there's an example included there for SwitchYard as well, if you can upgrade.
Please see: http://camel.apache.org/bindy.html
Make sure you have created the bindyDataformat data format:
<dataFormats>
<bindy id="bindyDataformat" type="Csv" classType="org.apache.camel.bindy.model.Order"/>
</dataFormats>
After referring to the link https://developer.jboss.org/thread/177124, I added the manifest entry in the maven-jar-plugin section of my pom.xml:
<manifestEntries>
<Dependencies>org.apache.camel.bindy export services</Dependencies>
</manifestEntries>
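For context, this is roughly where that entry sits in the maven-jar-plugin configuration (a sketch; the plugin version is omitted):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifestEntries>
        <!-- exposes the camel-bindy module to the deployment on JBoss EAP -->
        <Dependencies>org.apache.camel.bindy export services</Dependencies>
      </manifestEntries>
    </archive>
  </configuration>
</plugin>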
I am able to unmarshal to a POJO using camel-bindy now.
I am preparing an application using JPA 2.0, Hibernate as the provider, and MySQL 5 as the database, which will be deployed on JBoss AS 7.0.2. I have already configured some basics in persistence.xml, and I have run into some trouble. I have noticed that some people also define a specific DataSource at the JBoss Management Console level.
My question is: do I really need to worry about a DataSource or anything like that in a Hibernate application? I thought that was important in the old JDBC approach. In some books where examples are shown, there is no such configuration in persistence.xml or hibernate.cfg.xml.
Do I have to place the MySQL connector into the JBOSS_HOME/standalone/deployments directory to use MySQL in my application? Here is the content of my persistence.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0"
xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="SomeApp">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<properties>
<property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver" />
<property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/somedb" />
<property name="hibernate.connection.username" value="" />
<property name="hibernate.connection.password" value="" />
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect" />
</properties>
</persistence-unit>
</persistence>
Well, you can access the database either by:
providing the url/driver/password/etc. information in persistence.xml using your JPA provider's properties (in your case hibernate.connection.*) or the JPA 2.0 standardised javax.persistence.jdbc.* ones; this basically looks like the example you've posted (see the snippet after the next example), or
creating a Data Source in the application server and just referring to it in persistence.xml (through the JNDI name you provide during creation), which might look similar to this (without the XML schema definition for the sake of brevity):
<persistence>
<persistence-unit name="SomeApp">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>jdbc/myDB</jta-data-source>
</persistence-unit>
</persistence>
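For completeness, the standardised javax.persistence.jdbc.* equivalents of the hibernate.connection.* properties from the first option would look roughly like this (a sketch; values copied from the question):
<property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver" />
<property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/somedb" />
<property name="javax.persistence.jdbc.user" value="" />
<property name="javax.persistence.jdbc.password" value="" />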
What you're actually doing right now in your persistence.xml (with the hibernate.connection.* properties) is using plain JDBC underneath.
I would definitely go with creating the Data Source in the application server rather than providing it in the properties in persistence.xml. It allows you to dynamically change the target database, its type, credentials, connection pools, etc. without even touching your descriptor.
It's also safer, as the credentials are not written in a plain file left on your server.
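On JBoss AS 7 the Data Source goes into the datasources subsystem of standalone.xml; a minimal sketch (the JNDI name, pool name, and credentials are placeholders, and the driver name must match your installed MySQL driver):
<datasource jndi-name="java:jboss/datasources/myDB" pool-name="myDB" enabled="true">
  <connection-url>jdbc:mysql://localhost:3306/somedb</connection-url>
  <driver>mysql</driver>
  <security>
    <user-name>someuser</user-name>
    <password>somepassword</password>
  </security>
</datasource>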
As a side note, please remember that support for the javax.persistence.jdbc.* properties is mandatory for JPA providers in a Java SE environment, but optional in Java EE.
Hope that helps!
Do I have to place mysql connector into JBOSS_HOME/standalone/deployments directory to use MySQL in my application?
Yes, you need to add the MySQL Connector/J to use MySQL through JDBC. Your application server (JBoss, WebLogic, GlassFish, etc.) doesn't provide it, because the driver depends on the RDBMS you are using (in this case MySQL) and its version.
In the case of JBoss 7, the JDBC driver can be installed into the container in one of two ways: either as a deployment or as a core module. For the pros/cons of both modes and a detailed explanation you can check the following documentation: http://community.jboss.org/wiki/DataSourceConfigurationInAS7
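If you go the core-module route, the layout would look roughly like this (a sketch): place the jar under JBOSS_HOME/modules/com/mysql/main/ together with a module.xml such as:
<module xmlns="urn:jboss:module:1.0" name="com.mysql">
  <resources>
    <!-- the actual jar name depends on your Connector/J version -->
    <resource-root path="mysql-connector-java-5.1.30.jar"/>
  </resources>
  <dependencies>
    <module name="javax.api"/>
    <module name="javax.transaction.api"/>
  </dependencies>
</module>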