Saving Glassfish JDBC Configuration

I recently had to re-install Glassfish 3.1.2 from scratch and I found myself spending way too much time re-configuring the JDBC Connection Pools and Resources (copy/paste from another source was not an option). Many applications use the server and there are plenty of things to remember when configuring JDBC connectivity.
Is there a way to "save" the Glassfish JDBC configuration to a file so that I can easily upload it to a new version of the server (or a new server on another machine) without losing my sanity again? A quick hack would be highly appreciated as well!

Server-scoped resources are stored in the domain.xml file within the <resources> element. There are <jdbc-resource> and <jdbc-connection-pool> elements which store your connections. In my experience you can copy those elements from one domain.xml file to another (at least across all 3.x versions of Glassfish).
Application-scoped resources can be stored in glassfish-resources.xml files, which go into the META-INF directory of an EAR or the WEB-INF directory of a WAR. They are deployed together with the application and can only be accessed by that application. More information here.
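For reference, a minimal sketch of what those two elements look like inside domain.xml. The pool name, JNDI name, and all connection values here are hypothetical placeholders:

```xml
<resources>
  <!-- Hypothetical MySQL pool; adjust datasource class and properties for your driver -->
  <jdbc-connection-pool name="MyAppPool"
      res-type="javax.sql.DataSource"
      datasource-classname="com.mysql.jdbc.jdbc2.optional.MysqlDataSource">
    <property name="serverName" value="localhost"/>
    <property name="portNumber" value="3306"/>
    <property name="databaseName" value="myappdb"/>
    <property name="User" value="appuser"/>
    <property name="Password" value="secret"/>
  </jdbc-connection-pool>
  <!-- The resource exposes the pool under a JNDI name applications look up -->
  <jdbc-resource jndi-name="jdbc/MyAppDS" pool-name="MyAppPool"/>
</resources>
```

The same pair of elements can also be kept in a glassfish-resources.xml file and loaded into a fresh domain with `asadmin add-resources glassfish-resources.xml`, which avoids hand-editing domain.xml at all.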

Related

Trigger external pipeline / job after Jira in OpenShift has started

I'm running Jira in OpenShift using the basic image from Atlassian: https://hub.docker.com/r/atlassian/jira-software
So far most things work fine.
I installed a plugin using the web ui which worked as well.
But now I'm running into an issue when a pod is restarted. The pod uses the image, and naturally (as specified) my plugin is no longer installed. I can install the plugin via web-service calls and register it as an OSGi module for Jira, but I don't want to do this manually. Building a pipeline or job for this is quite easy (I'm thinking Jenkins or Ansible Tower), but so far I haven't found a way to trigger this pipeline after the pod is started (or better, after Jira is started).
Anyone got an idea how to handle this?
Thanks and best regards. Sebastian
Why not create a custom image based on the Atlassian image with everything you need installed?
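If you go the custom-image route, a rough sketch could look like this. The plugin jar name is hypothetical, and the jira-home path is the default used by the Atlassian image:

```dockerfile
# Bake the plugin into the image so pod restarts keep it installed.
# The Atlassian image keeps jira-home at /var/atlassian/application-data/jira;
# 'Plugins 2' plugins live under plugins/installed-plugins inside jira-home.
FROM atlassian/jira-software
COPY my-plugin-1.0.jar /var/atlassian/application-data/jira/plugins/installed-plugins/
```

One caveat: if jira-home is later mounted as a persistent volume, the volume contents shadow anything baked into the image at that path, so the two approaches need to be combined carefully.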
As far as I know, there isn't a way to trigger a pipeline when a Pod is started; only Webhook, Image Change, and Config Change triggers are available. You'll need to write a Jenkinsfile to script all of the installation and setup you want, but then that can be triggered in one of the three ways mentioned.
I'm thinking an Image Change trigger would work best for you, so when the latest version of Atlassian's image comes out, you can run your pipeline to set everything up on the latest version.
Also, just curious, but do you have some persistent storage attached to the Jira pod? If not, you'll lose everything in Jira if the Pod dies; that means tickets, boards, comments, everything.
Update:
Looking at this page, it looks like most of the stuff you're trying to persist is stored in jira-home, so maybe mounting that as a persistent volume will be a good solution for you.
You're correct that the tickets are stored in the database, but I'm guessing the database connection settings are getting wiped when the Pod is cycled.
The jira-home directory stores your application and database connection settings, as well as a subdirectory for your plugins.
dbconfig.xml
This file (located at the root of your JIRA home directory) defines
all details for JIRA's database connection. This file is typically
created by running the JIRA setup wizard on new installations of JIRA
or by configuring a database connection using the JIRA configuration
tool.
You can also create your own dbconfig.xml file. This is useful if you
need to specify additional parameters for your specific database
configuration, which are not generated by the setup wizard or JIRA
configuration tool. For more information, refer to the 'manual'
connection instructions of the appropriate database configuration
guide in Connecting JIRA to a database.
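A hand-written dbconfig.xml for MySQL might look roughly like this; host, schema, credentials, and pool sizes are all example values:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jira-database-config>
  <name>defaultDS</name>
  <delegator-name>default</delegator-name>
  <database-type>mysql</database-type>
  <jdbc-datasource>
    <!-- Hypothetical host, schema, and credentials -->
    <url>jdbc:mysql://dbhost:3306/jiradb?useUnicode=true&amp;characterEncoding=UTF8</url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <username>jirauser</username>
    <password>secret</password>
    <pool-min-size>20</pool-min-size>
    <pool-max-size>20</pool-max-size>
  </jdbc-datasource>
</jira-database-config>
```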
jira-config.properties
This file (also located at the root of your JIRA home directory)
stores custom values for most of JIRA's advanced configuration
settings. Properties defined in this file override the default values
defined in the jpm.xml file (located in your JIRA application
installation directory). See Advanced JIRA configuration for more
information.
In new JIRA installations, this file may not initially exist and if
so, will need to be created manually. See Making changes to the
jira-config.properties file for more information. This file is
typically present in JIRA installations upgraded from version 4.3 or
earlier, whose advanced configuration options had been customized
(from their default values).
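For illustration, a jira-config.properties file is a plain key-value file; the property names below are examples of advanced settings and should be checked against the advanced configuration reference (and jpm.xml defaults) before use:

```properties
# Example overrides of defaults defined in jpm.xml;
# verify names in the advanced configuration reference
jira.projectkey.maxlength = 20
jira.search.views.max.limit = 1000
```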
plugins/
This is the directory where plugins built on Atlassian's Plugin
Framework 2 (i.e. 'Plugins 2' plugins) are stored. If you are
installing a new 'Plugins 2' plugin, you will need to deploy it into
this directory under the installed-plugins sub-directory.
'Plugins 1' plugins should be stored in the JIRA application
installation directory.
This directory is created on JIRA startup, if it does not exist
already.
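The persistent-volume approach for jira-home might look like this fragment of a pod spec. The claim name is hypothetical, and the mount path is the Atlassian image's default jira-home location:

```yaml
# Fragment of a Deployment/DeploymentConfig pod spec (assumed names)
spec:
  containers:
    - name: jira
      image: atlassian/jira-software
      volumeMounts:
        - name: jira-home
          mountPath: /var/atlassian/application-data/jira   # default jira-home
  volumes:
    - name: jira-home
      persistentVolumeClaim:
        claimName: jira-home-pvc
```

With that in place, dbconfig.xml, jira-config.properties, and the plugins/ directory all survive pod restarts.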

Jhipster: Making a WAR file with client-configurable Datasource

My development team is working on a web application and has assigned me to research how to build a WAR file with a client-configurable datasource.
This is the first commercial project of ours:
Language: Java / JavaScript(jQuery and AngularJS) / HTML / CSS
Database: MySQL
Development Tool: IntelliJ
Automation System: Gradle
Application Generator: JHipster
Version Control: SourceTree
At this time, our client has agreed to simply deploying the app against the MySQL database on his own AWS server, using an executable file (currently a WAR file) delivered by us on a secure USB drive.
So far, the WAR file can be built with JHipster and deployed with no issue on our internal server. However, we hard-coded all the database-connection (JDBC) settings in a YML file under src/main/resources/config.
Naturally, our client has a database with totally different schema, usernames, and passwords. And the WAR file we are about to give him cannot be executed unless the datasource specs in there match his.
Because source code cannot be extracted from a WAR, the client is not going to modify the datasource on his end. At the same time, he does not fancy giving us his datasource information.
Thus, we are to come up with, quote, an executable file which allows him to configure the datasource the first time it is executed on his AWS.
Is there a way we can achieve this while not straying too far from the current deployment method (WAR file)?
Nothing specific to JHipster here, it's purely a Spring Boot question. You can either provide an application-prod.yml external to your war file or keep it internal but use placeholders referring to environment variables defined in your client server.
For more details, read the Spring Boot doc https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html
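As a sketch of the placeholder approach: Spring Boot resolves `${...}` placeholders in the YML from environment variables, so the file can ship inside the WAR while the actual values stay on the client's server. The environment variable names below are made up for illustration:

```yaml
# src/main/resources/config/application-prod.yml (JHipster layout);
# the client sets these environment variables on his own server
spring:
  datasource:
    url: ${JDBC_DATABASE_URL}
    username: ${JDBC_DATABASE_USERNAME}
    password: ${JDBC_DATABASE_PASSWORD}
```

For the external-file option, the client can instead point the application at a file outside the WAR with `--spring.config.location=...` at startup.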

Configure Standalone Custom Registry in Clustered WebSphere Application Server

I have a problem configuring a standalone custom registry in WebSphere Application Server (clustered environment). I have followed all the steps from the IBM manual:
I have implemented the UserRegistry interface in a DataBaseRegistry class
I have copied the .jar to the lib/ext folder of WebSphere
I have assigned all necessary properties on the Global Security page
While trying to set the standalone custom registry as the current one, I got the following error:
Validation failed: Error occurred in RequiredModelMBean while trying to invoke operation getUsers
The funny part is that I followed all of those steps on a standalone (non-clustered) version of WebSphere and it works properly, so the problem is not in the code. Another thing is that there is nothing in the log files: I can see that getUsers is called, and then no exception or anything.
UPDATE: I resolved my problem. I was probably not 100% focused.
It turned out that I forgot to copy an additional jar with the JDBC drivers that allow connecting to MSSQL into lib/ext.
LESSON LEARNED: Do not start important configuration work on Friday after lunch ;)
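A quick way to check, before touching the admin console, whether a JDBC driver class is actually visible on the classpath. The driver class name here is Microsoft's MSSQL driver; swap in whichever driver your registry uses:

```java
// Minimal classpath sanity check for a JDBC driver jar
// (e.g. one that should have been copied into WebSphere's lib/ext).
public class DriverCheck {
    /** Returns true if the named class can be loaded from the classpath. */
    static boolean isDriverPresent(String driverClass) {
        try {
            Class.forName(driverClass);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The MSSQL JDBC driver class; any driver class name works here
        String driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
        System.out.println(driver
                + (isDriverPresent(driver) ? " is on the classpath" : " is MISSING"));
    }
}
```

Running this with the same classpath the server uses would have surfaced the missing MSSQL jar immediately.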

Publish Web does not include some dependency assemblies

In the past I have been using batch files to prepare release packages targeting different environments such as test, staging and production, and then copy the files to the Web site folders through various means. The batch files may run XmlPreProcess to alter web.config for different environments.
Lately I am trialing the Publish Web feature of VS 2012, after installing Web Deploy 3 on the server side. The result looks good for Hello World.
However, I have a WCF app: MyWcfApp.dll has dependencies on MyWcfContracts.dll and MyWcfImplementation.dll, which depend on MyData.dll and MySql.Data.dll (yes, I am using MySQL). All these files appear in the build folder, say MyWcfApp\bin\Debug.
When running Publish Web, I got a warning: The database provider for this connection string, MySql.Data.MySqlClient, is not supported for incremental database publishing. Incremental database publishing is supported only for SqlClient as well as Entity Framework Code First models.
Then the other dependent assemblies, such as MySql.Data.dll, did not get copied over to the server.
Apparently Publish Web does a lot of "smart" things by analyzing Web.config and makes a lot of presumptions.
Question 1:
Is it good to use Publish Web to deploy WCF service?
Question 2:
Is it possible to run some pre-deployment script say running XmlPreProcess before the deployment so I could target different environments?
Question 3:
Is it possible to ask Publish Web not to analyze Web.config and just copy every assembly and file in the build folder?
For the specific issue (Question 3) of not copying over dependent-upon assemblies:
I am working with a small WCF Service Application that I am deploying to my local file system (then hosting the site in IIS) and had the problem of depended-upon assemblies not being copied to the local folder. The solution for me had two steps:
In the service's References, highlight each reference you need to be copied and hit F4. In the Properties window make sure 'Copy Local' is set to 'True'.
Right click on the Service Project and select properties. Click the 'Package/Publish Web' section, and from the 'Items to deploy...' dropdown select 'All files in this project folder'.
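Step 1 corresponds to a `<Private>` element on each `<Reference>` in the .csproj; the HintPath below is only an example:

```xml
<!-- 'Copy Local = True' in the IDE becomes <Private>True</Private>,
     which makes the assembly land in bin\ and get picked up by publishing -->
<Reference Include="MySql.Data">
  <HintPath>..\packages\MySql.Data\lib\net40\MySql.Data.dll</HintPath>
  <Private>True</Private>
</Reference>
```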

Java Applet needs to read MySQL

I have a Java Applet hosted online which is merrily reading data from CSV files.
However my host has MySQL and I'd like to start reading/writing a database instead.
I'm happily accessing MySQL on my home PC with Java (NOT an Applet) via JConnector.
For starters, the jar file mysql-connector-java-5.1.18-bin.jar needs uploading to my web server, right?
When running my own database-accessing programs from NetBeans I added this jar file to the Netbeans project "Libraries". When compiling/running from command prompt I used "-classpath".
However, an applet runs on an HTML page, not in NetBeans or a command prompt! Therefore: by what mechanism do we convey the location of the driver .jar file to the applet?
Many thanks, Robin.
An applet should not be directly accessing the DB. Instead it should be forced to go through server-side functionality (JSP, servlet, ASP etc.) that mediates what it can & cannot do.
For either the applet or the server-side mediated solution, the mysql-connector-java-5.1.18-bin.jar will need to be on the run-time class-path. For a servlet/JSP, that means putting it into (from memory) WEB-INF/lib. For an applet, adding it to the archive attribute of the applet element.
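For the applet case, the archive attribute takes a comma-separated list of jars; the class and jar names here are examples:

```html
<!-- Both the applet's own jar and the driver jar are listed in archive,
     relative to the codebase (here, the page's directory) -->
<applet code="com.example.MyApplet"
        archive="myapplet.jar,mysql-connector-java-5.1.18-bin.jar"
        width="600" height="400">
</applet>
```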