I need to create my own JBoss configuration.
It will be a slightly modified "web" configuration.
Is there any documentation or are there tutorials on how to do it?
What I need in the new configuration:
1. Remove JSF implementation included in the JBoss Application Server
How to do it - http://community.jboss.org/wiki/RemoveJSF
2. Add the ability to use twiddle in the customized "web" configuration.
How to do it - modify jboss-service.xml.
Replace the attribute
<attribute name="Port">-1</attribute>
with the following attribute
<attribute name="Port">
   <value-factory bean="ServiceBindingManager" method="getIntBinding">
      <parameter>jboss:service=Naming</parameter>
      <parameter>Port</parameter>
   </value-factory>
</attribute>
for the mbean
<mbean code="org.jboss.naming.NamingService"
name="jboss:service=Naming"
xmbean-dd="resource:xmdesc/NamingService-xmbean.xml">
3. Remove server/web/deploy/hsqldb-ds.xml
4. ...in progress...
PS.
Does anyone know why support for twiddle was disabled in the web configuration?
Does this help - http://www.murraywilliams.com/computers/buildjboss/jboss3.html ?
To customize your own JBoss configuration that is based on the web configuration:
copy $JBOSS_HOME/server/web to $JBOSS_HOME/server/my_config
customize the configuration by editing the configuration files in my_config
start your new configuration by running
$JBOSS_HOME/bin/run.sh -c my_config
or, on Windows,
%JBOSS_HOME%\bin\run.bat -c my_config
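For example, on Linux the whole sequence might look like the following sketch (my_config is just an example name):
# copy the stock "web" configuration as a starting point
cp -r $JBOSS_HOME/server/web $JBOSS_HOME/server/my_config
# edit the files under $JBOSS_HOME/server/my_config/conf and deploy/ as needed, then start it
$JBOSS_HOME/bin/run.sh -c my_config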
You can find information related to removing unused services in JBoss 5 in the JBoss 5.x Tuning/Slimming guide.
On the other hand, twiddle can be used with the web profile; what do you mean when you say it's not supported? It cannot be used with the minimal configuration, though, since the services twiddle relies on are disabled there.
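For instance, once the Naming port binding from the question is restored, you can check that twiddle reaches your configuration with something like the following (a common example query; the default JNDI port 1099 is assumed):
$JBOSS_HOME/bin/twiddle.sh -s localhost:1099 get "jboss.system:type=Server" Started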
I managed to set up Artifactory using our existing Tomcat. I have set ARTIFACTORY_HOME=/opt/artifactory; that part works well. There is, however, also the jfrog access.war file, which needs to be running as well. I couldn't figure out which variable to use to specify its home, so it defaults to ~/.jfrog_access, which is not at all what I want.
I moved the content over to my $ARTIFACTORY_HOME/access and symlinked it, but that's surely not the way to go. Any help appreciated.
In case someone is stumbling over this thread and struggles with the same problem:
The solution for me was to also extract the context files (access.xml and artifactory.xml, which are available in the zip file under <zip extract>/misc/tomcat) to the Tomcat configuration folder, e.g. $CATALINA_HOME/conf/Catalina/localhost/. After that, the $ARTIFACTORY_HOME environment variable will be recognized on Access startup.
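A minimal sketch of that step, assuming a default Tomcat layout (adjust <zip extract> to wherever you unpacked the Artifactory distribution):
cp "<zip extract>/misc/tomcat/access.xml" "<zip extract>/misc/tomcat/artifactory.xml" $CATALINA_HOME/conf/Catalina/localhost/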
A previous answer finally put me on the right track for solving this problem on Amazon Linux.
In addition to copying access.xml and artifactory.xml to ${catalina.home}/host/MY_HOSTNAME, I found that some other changes were needed.
I modified the docBase attributes in the XML context files because my server has multiple hostnames:
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/access.xml
<Context path="/access" docBase="${catalina.home}/host/repo.mydomain.org/access.war">
    <Parameter name="jfrog.access.bundled" value="true" override="true"/>
    <!-- enable annotations scanning of access jar files -->
    <JarScanner scanClassPath="false">
        <JarScanFilter defaultPluggabilityScan="false" pluggabilityScan="access*" defaultTldScan="false"/>
    </JarScanner>
</Context>
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/artifactory.xml
<Context crossContext="true" path="/artifactory" docBase="${catalina.home}/host/repo.mydomain.org/artifactory.war">
</Context>
Important Note: In order to prevent the above two XML files from being deleted by Tomcat Manager during upgrades via Undeploy/Deploy WAR, make sure they are owned by root and not writable by the tomcat user:
chown root.root access.xml artifactory.xml
chmod 644 access.xml artifactory.xml
If you forget to do the above, you will likely end up missing these files, which will break the communication between the access and artifactory web applications, resulting in login failures ("Username or Password Are Incorrect"). In this case, these errors result from the lack of communication between the web applications, not a problem with the credentials themselves.
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/manager.xml
This gives me the ability to upload new versions of access.war and artifactory.war via https://repo.mydomain.org:8443/manager/html:
<Context docBase="${catalina.home}/webapps/manager" privileged="true" antiResourceLocking="false">
</Context>
Additionally, I created the following folder to serve as the artifactory.home:
sudo mkdir /usr/share/artifactory
sudo chown tomcat.tomcat /usr/share/artifactory
tomcat8.conf
Add (or modify) the following line:
JAVA_OPTS="-Dartifactory.home=/usr/share/artifactory -Djfrog.access.home=/usr/share/artifactory/access -Dartifactory.access.client.serverUrl.override=http://localhost:8080/access"
Note: The Access Client URL specified above must use localhost so that the Server HTTP header is not overwritten by Apache and its modules. For instance, if I use:
https://repo.mydomain.org/access/api/v1/system/ping
The Server HTTP header value in the response is:
Server: Apache/2.4.33 (Amazon) OpenSSL/1.0.2k-fips mod_jk/1.2.43
And the Access Client produces the following exception:
[ERROR] (o.j.a.c.AccessClientImpl:154) - Access client/server version mismatch. Client version: 4.1.5, Server version: 2.4.33 (Amazon) OpenSSL
This means the Access Client depends on the first string matching #.#.# in the Server header, which seems like a really fragile part of the Access Client. They should have used X-JFrog-Access-Server or something instead of relying on a value that is set by the web server. So, to reiterate, use http://localhost:8080/access to connect directly to the Tomcat server.
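A quick way to see the difference, assuming curl is available, is to compare the Server header returned on the two paths:
# direct to Tomcat: no Apache Server header for the Access Client to misread
curl -s -D - -o /dev/null http://localhost:8080/access/api/v1/system/ping | grep -i '^Server'
# through Apache: the Server header is Apache's, which triggers the version mismatch above
curl -s -D - -o /dev/null https://repo.mydomain.org/access/api/v1/system/ping | grep -i '^Server'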
Artifactory 6.2.0 depends on Apache Derby (the specific version can be found in jfrog-artifactory-oss-6.2.0.zip\artifactory-oss-6.2.0\tomcat\lib). This should be added as a shared library to Tomcat:
mkdir /usr/share/tomcat8/shared
cd /usr/share/tomcat8/shared
wget http://central.maven.org/maven2/org/apache/derby/derby/10.11.1.1/derby-10.11.1.1.jar
Add or modify the following line in catalina.properties:
shared.loader=${catalina.home}/shared/*.jar
Since we want https://repo.mydomain.org to go to the Artifactory webapp:
mkdir /usr/share/tomcat8/host/repo.mydomain.org/ROOT
echo '<html><head><meta http-equiv="refresh" content="0;URL=/artifactory"></meta></head><body></body></html>' > /usr/share/tomcat8/host/repo.mydomain.org/ROOT/index.html
And make sure the services automatically start on reboot:
sudo chkconfig httpd on
sudo chkconfig tomcat8 on
Artifactory will then be available at the URL:
https://repo.mydomain.org/artifactory/webapp/
Does anyone know how to enable Kerberos with Apache Drill? Is it possible? I can't seem to find any documentation on it, or any questions/answers floating around with that information. I am currently running a CDH cluster.
I am getting this error when trying to use HDFS with Drill:
Error: PERMISSION ERROR: SIMPLE authentication is not enabled.
Available:[TOKEN, KERBEROS]
HDFS + Kerberos integration isn't currently supported / tested / documented. Vote on this ticket to track when it becomes available:
https://issues.apache.org/jira/browse/DRILL-3584
There isn't any documentation from the Drill team on how to enable Kerberos, and they haven't tested Kerberos with Drill. Drill engineering does believe that it should work, though.
Once the cluster is Kerberized, you must configure certain files in order to gain access to it.
Make an HDFS Superuser account as indicated in this Cloudera doc. On the Main Node, run
•sudo kadmin.local
In addition, add an 'hdfs' principal with this command
•addprinc hdfs@LOCALDOMAIN -- where LOCALDOMAIN is your Kerberos realm
In order to enable authentication with Kerberos, we also need to copy the file hadoop-yarn-api.jar into Drill's class path. Example given below
•cp /opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/hadoop/client/hadoop-yarn-api.jar ~/apache-drill/jars/
The above step and the three that follow must be performed on each node of the cluster where Apache Drill is installed.
Next, Drill's conf/core-site.xml file should be edited to contain the following snippet of XML. You might have to copy this file from /etc/hadoop/conf.cloudera.yarn/core-site.xml or a similar path.
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
After this step, you will also need to add the following XML snippet to Drill's core-site.xml file. In this instance, hdfs/_HOST@LOCALDOMAIN is my principal property. The property can be found in the hdfs-site.xml file.
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfs/_HOST@LOCALDOMAIN</value>
</property>
All that is left to do is create an 'hdfs' Kerberos ticket for the user we're logged in as
•kinit hdfs -- hdfs is the super user
Then start up each of the drillbits
•/opt/apachedrillfolder/bin/drillbit.sh start
So now, Drill has both the configuration and the authority to use our kerberized HDFS store. Give it a shot by opening up a Drill prompt (drill-conf) and trying a query
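For example (the dfs storage plugin and file path below are only placeholders for whatever your plugin points at in HDFS):
/opt/apachedrillfolder/bin/drill-conf
> SELECT * FROM dfs.`/user/hdfs/sample.csv` LIMIT 10;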
I am using Logback for logging. Scribe appenders send the logs in real time to a central Scribe aggregator. But I don't know how to add source machine IP in the logs for each log events. Looking at the aggregated central Scribe logs, it is almost impossible to know which machine is sending the logs. Hence, appending the IP of source machine to each log event will be helpful, and will be really great if we can control that through logback configuration.
It's possible to pass the hostname down to the remote receiver through the contextName.
Add the following to logback.xml (it applies to all appenders):
<contextName>${HOSTNAME}</contextName>
Then, on aggregator instance, it will be available for inclusion in the pattern:
<pattern>%contextName %d %-5level %logger{35} - %msg %n</pattern>
According to the Logback docs, there's now a CanonicalHostNamePropertyDefiner expressly to add a hostname to your logs. Add a define to your project:
<define name="hostname"
class="ch.qos.logback.core.property.CanonicalHostNamePropertyDefiner"/>
and access it as ${hostname}
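Putting it together, a minimal logback.xml sketch (the console appender is just an example):
<configuration>
  <define name="hostname"
          class="ch.qos.logback.core.property.CanonicalHostNamePropertyDefiner"/>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- ${hostname} is resolved by the definer above -->
      <pattern>${hostname} %d %-5level %logger{35} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>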
Well, if you are working on a client-server project, you can use the MDC feature of slf4j/logback (full documentation here); that way you can have a well-structured log file in which you can identify which log is for which client.
Hope this helps!
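As a rough sketch of the MDC approach (the key name "host" is arbitrary), set the value once per thread on the sending side:
import java.net.InetAddress;
import org.slf4j.MDC;

public class HostMdcExample {
    public static void main(String[] args) throws Exception {
        // put this machine's hostname into the MDC; logging calls on this thread can then use it
        MDC.put("host", InetAddress.getLocalHost().getHostName());
    }
}
and reference it in the logback pattern with %X{host}, e.g. %X{host} %d %-5level %logger{35} - %msg%n.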
I have an issue with my Gentoo system. I tried to install BIND, but every time I try to install it I get an error message.
Here is what happens in my Konsole:
emerge --ask net-dns/bind
* IMPORTANT: 3 config files in '/etc/portage' need updating.
* See the CONFIGURATION FILES section of the emerge
* man page to learn how to update config files.
These are the packages that would be merged, in order:
Calculating dependencies... done!
[ebuild R ] dev-libs/openssl-1.0.1g USE="-bindist*"
[ebuild N ] net-dns/bind-9.9.4_p2 USE="berkdb dlz gost ipv6 ldap odbc ssl -caps -doc -filter-aaaa -fixed-rrset -geoip -gssapi -idn -mysql -postgres -python -rpz -rrl -sdb-ldap (-selinux) -static-libs -threads -urandom -xml"
!!! Multiple package instances within a single package slot have been pulled
!!! into the dependency graph, resulting in a slot conflict:
dev-libs/openssl:0
(dev-libs/openssl-1.0.1g::gentoo, ebuild scheduled for merge) pulled in by
>=dev-libs/openssl-1.0.0:0[-bindist] required by (net-dns/bind-9.9.4_p2::gentoo, ebuild scheduled for merge)
dev-libs/openssl:0[-bindist] required by (net-dns/bind-9.9.4_p2::gentoo, ebuild scheduled for merge)
(dev-libs/openssl-1.0.1g::gentoo, installed) pulled in by
>=dev-libs/openssl-0.9.6d:0[bindist] required by (net-misc/openssh-5.9_p1-r4::gentoo, installed)
It may be possible to solve this problem by using package.mask to
prevent one of those packages from being selected. However, it is also
possible that conflicting dependencies exist such that they are
impossible to satisfy simultaneously. If such a conflict exists in
the dependencies of two different packages, then those packages can
not be installed simultaneously. You may want to try a larger value of
the --backtrack option, such as --backtrack=30, in order to see if
that will solve this conflict automatically.
For more information, see MASKED PACKAGES section in the emerge man
page or refer to the Gentoo Handbook.
!!! The following installed packages are masked:
- media-libs/mesa-9.0::gentoo (masked by: package.mask)
/usr/portage/profiles/package.mask:
# Chí-Thanh Christopher Nguyễn <chithanh@gentoo.org> (26 Mar 2014)
# Affected by multiple vulnerabilities, #445916, #471098 and #472280
For more information, see the MASKED PACKAGES section in the emerge
man page or refer to the Gentoo Handbook.
Can anyone show me how to resolve this issue on my Gentoo system? I have a hard time installing anything.
UPDATED
emerge --ask net-dns/bind
* IMPORTANT: 3 config files in '/etc/portage' need updating.
* See the CONFIGURATION FILES section of the emerge
* man page to learn how to update config files.
These are the packages that would be merged, in order:
Calculating dependencies... done!
[ebuild R ] dev-libs/openssl-1.0.1g USE="-bindist*"
[ebuild N ] net-dns/bind-9.9.4_p2 USE="berkdb dlz gost ipv6 ldap odbc ssl -caps -doc -filter-aaaa -fixed-rrset -geoip -gssapi -idn -mysql -postgres -python -rpz -rrl -sdb-ldap (-selinux) -static-libs -threads -urandom -xml"
The following USE changes are necessary to proceed:
see "package.use" in the portage(5) man page for more details)
# required by net-dns/bind-9.9.4_p2[ssl]
# required by net-dns/bind (argument)
=dev-libs/openssl-1.0.1g -bindist
Use --autounmask-write to write changes to config files (honoring
CONFIG_PROTECT). Carefully examine the list of proposed changes,
paying special attention to mask or keyword changes that may expose
experimental or unstable packages.
!!! The following installed packages are masked:
- media-libs/mesa-9.0::gentoo (masked by: package.mask)
/usr/portage/profiles/package.mask:
# Chí-Thanh Christopher Nguyễn <chithanh@gentoo.org> (26 Mar 2014)
# Affected by multiple vulnerabilities, #445916, #471098 and #472280
For more information, see the MASKED PACKAGES section in the emerge
man page or refer to the Gentoo Handbook.
Two steps to solve this problem:
1. Create /etc/portage/package.use/bind with:
net-dns/bind -ipv6 dlz
dev-libs/openssl -bindist
net-misc/openssh -bindist
2. Recompile OpenSSL and OpenSSH, then install BIND:
emerge -Uav dev-libs/openssl net-misc/openssh
emerge -av net-dns/bind
The USE flags for bind:
equery uses bind -i
[ Legend : U - final flag setting for installation]
[ : I - package is installed with flag ]
[ Colors : set, unset ]
* Found these USE flags for net-dns/bind-9.10.2_p2:
U I
+ + berkdb : Add support for sys-libs/db (Berkeley DB
for MySQL)
+ + caps : Use Linux capabilities library to control
privilege
+ + dlz : Enables dynamic loaded zones, 3rd party
extension
- - doc : Add extra documentation (API, Javadoc,
etc). It is recommended to enable per
package instead of globally
- - filter-aaaa : Enable filtering of AAAA records over IPv4
- - fixed-rrset : Enables fixed rrset-order option
- - geoip : Add geoip support for country and city
lookup based on IPs
- - gost : Enables gost OpenSSL engine support
- - gssapi : Enable gssapi support
- - idn : Enable support for Internationalized Domain
Names
- - ipv6 : Add support for IP version 6
- - json : Enable JSON statistics channel
- - ldap : Add LDAP support (Lightweight Directory
Access Protocol)
- - mysql : Add mySQL Database support
- - nslint : Build and install the nslint util
- - odbc : Add ODBC Support (Open DataBase
Connectivity)
- - postgres : Add support for the postgresql database
- - python : Add optional support/bindings for the
Python language
+ + python_targets_python2_7 : Build with Python 2.7
+ + python_targets_python3_3 : Build with Python 3.3
- - python_targets_python3_4 : Build with Python 3.4
- - rpz : Enable response policy rewriting (rpz)
- - seccomp : Enable seccomp for system call filtering
+ + ssl : Add support for Secure Socket Layer
connections
- - static-libs : Build static versions of dynamic libraries
as well
+ + threads : Add threads support for various packages.
Usually pthreads
- - urandom : Use /dev/urandom instead of /dev/random
- - xml : Add support for XML files
Maybe this helps:
# vi /etc/portage/package.use
and add this line (this line was changed):
dev-libs/openssl -bindist
If it doesn't work I have no other suggestions, sorry :(
Maybe you can get help from the Gentoo forums.
Good luck.
emerge net-dns/bind --autounmask-write
etc-update
emerge net-dns/bind
remove -bindist from USE flags
Just to help other people hitting the same error: you need to add the line shown under "# required by" to your package.use file.
echo "=dev-libs/openssl-1.0.1g -bindist" >> /etc/portage/package.use/zz-autounmask
or
nano -w /etc/portage/package.use/zz-autounmask
and then manually copy the line into the file.
Replace "=dev-libs/openssl-1.0.1g -bindist" with what's required to be added to your package.use
I am working on a notification service using the IBM MQ messaging provider in a JBoss EAP 6.1 environment. I am successfully able to send messages via the MQ JCA resource adapter, i.e. the wmq.jmsra.rar file. However, on the consumer side my current configuration looks like this:
@MessageDriven(
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "F2.QUEUE"),
        @ActivationConfigProperty(propertyName = "providerAdapterJNDI", propertyValue = "java:jboss/jms/TopicFactory"),
        @ActivationConfigProperty(propertyName = "queueManager", propertyValue = "TOPIC.MANAGER"),
        @ActivationConfigProperty(propertyName = "hostName", propertyValue = "10.239.217.242"),
        @ActivationConfigProperty(propertyName = "userName", propertyValue = "root"),
        @ActivationConfigProperty(propertyName = "channel", propertyValue = "TOPIC.CHANNEL"),
        @ActivationConfigProperty(propertyName = "port", propertyValue = "1422")
    })
My problem is that the consumer of this service does not want to add any port number, hostName, or queueManager properties in these beans. They also do not want to use ejb-jar.xml to externalize these configs. I have researched and found that we can add a domain IBM message-driven bean, but with no success. Any suggestions on what I can do here to externalize all these configurations?
EDIT: The JCA resource adapter is deployed at the consumer end, if that makes it any easier.
Thanks
You can actually externalize an MDB's activation spec properties to the server configuration file.
Create the ejb-jar.xml file, but do not put the actual value in the file, use a property placeholder:
<activation-config-property>
    <activation-config-property-name>hostName</activation-config-property-name>
    <activation-config-property-value>${wmq.host}</activation-config-property-value>
</activation-config-property>
Do this for all of the desired properties.
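For context, a fuller ejb-jar.xml sketch (the bean name and the exact set of properties are illustrative; keep whichever properties your MDB actually needs):
<?xml version="1.0" encoding="UTF-8"?>
<ejb-jar xmlns="http://java.sun.com/xml/ns/javaee" version="3.1">
    <enterprise-beans>
        <message-driven>
            <!-- must match your MDB, typically its unqualified class name -->
            <ejb-name>NotificationMDB</ejb-name>
            <activation-config>
                <activation-config-property>
                    <activation-config-property-name>hostName</activation-config-property-name>
                    <activation-config-property-value>${wmq.host}</activation-config-property-value>
                </activation-config-property>
                <activation-config-property>
                    <activation-config-property-name>port</activation-config-property-name>
                    <activation-config-property-value>${wmq.port}</activation-config-property-value>
                </activation-config-property>
                <activation-config-property>
                    <activation-config-property-name>queueManager</activation-config-property-name>
                    <activation-config-property-value>${wmq.queueManager}</activation-config-property-value>
                </activation-config-property>
            </activation-config>
        </message-driven>
    </enterprise-beans>
</ejb-jar>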
Ensure that property replacement for Java EE spec files (ejb-jar.xml, in this case) is enabled in the server configuration file:
<subsystem xmlns="urn:jboss:domain:ee:1.2">
    <spec-descriptor-property-replacement>true</spec-descriptor-property-replacement>
</subsystem>
Then, in the server configuration file, provide values for your properties:
<system-properties>
    <property name="wmq.host" value="10.0.0.150"/>
</system-properties>
Once your MDBs are packaged, you will not need to change any of the files in the MDB jar - just provide the properties in the server configuration.
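If you prefer not to hard-code the values in <system-properties>, the same placeholders can also be supplied as JVM system properties when starting the server, for example:
$JBOSS_HOME/bin/standalone.sh -Dwmq.host=10.0.0.150 -Dwmq.port=1422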
You can avoid adding the host name, port number, and so on in the MDB; you only need to define destinationType in the MDB, and the rest you can configure in your application server, such as activation specifications, queues, and queue connection factories.
I have done the same thing, but I used IBM WebSphere Application Server.
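On JBoss EAP 6.1, the closest analogue I know of to that WebSphere-style setup is to define the MQ queues (and, if needed, connection factories) on the resource adapter in the server configuration and look them up via JNDI. The sketch below is only an outline; the admin-object class and property names are assumptions about the IBM MQ resource adapter and should be verified against IBM's documentation for your wmq.jmsra.rar version:
<subsystem xmlns="urn:jboss:domain:resource-adapters:1.1">
    <resource-adapters>
        <resource-adapter>
            <archive>wmq.jmsra.rar</archive>
            <transaction-support>NoTransaction</transaction-support>
            <admin-objects>
                <!-- class/property names assumed; check the IBM MQ RA documentation -->
                <admin-object class-name="com.ibm.mq.connector.outbound.MQQueueProxy"
                              jndi-name="java:jboss/jms/F2Queue" pool-name="F2Queue">
                    <config-property name="baseQueueName">F2.QUEUE</config-property>
                    <config-property name="baseQueueManagerName">TOPIC.MANAGER</config-property>
                </admin-object>
            </admin-objects>
        </resource-adapter>
    </resource-adapters>
</subsystem>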