Changing the configuration store location for the OSGi Configuration Admin service?

Is there a way to change the configuration store location for the OSGi Configuration Admin service? I'd like to have the properties files exist in another bundle so they'd exist in source control & in the deployment rather than the OSGi store.

In the end I decided to use Apache Felix File Install to update the configuration properties of a Configuration Admin ManagedService. This seems to work passably well.
It's a little kludgy because when the files are updated, the new configuration properties get pushed to the managed service without any check that the values are valid. This means that on the next startup the values will still be bad & need to be set back to defaults.
It should work for now.
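For reference, a minimal sketch of how File Install feeds Config Admin in this kind of setup (the watch directory, PID, and property names below are assumptions for illustration, not taken from the question): File Install watches a directory and hands any file named <pid>.cfg in it to Config Admin under that PID.

# OSGi framework/system property telling File Install which directory to watch
# (defaults to ./load if not set)
felix.fileinstall.dir=./conf

# conf/com.example.mymanagedservice.cfg -- the file name (minus .cfg) is the PID
# whose configuration these properties become
poll.interval=30
endpoint.url=https://example.org/api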

The Config Admin implementations cannot do this, at least not in a portable way via the specification. Instead you need a "management agent" that pushes configuration data into Config Admin via the API; it can derive that configuration data from any source it wishes.
FileInstall is a very simple example of a management agent. If it does not do exactly what you want then it is not too difficult to write your own.
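If you do write your own, the core of such an agent is small. A minimal sketch, assuming a defaults.properties file shipped inside the agent bundle (so it lives in source control) and a target PID of com.example.mymanagedservice, both made up for illustration:

import java.io.InputStream;
import java.util.Hashtable;
import java.util.Properties;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class ConfigPusher implements BundleActivator {

    public void start(BundleContext ctx) throws Exception {
        // Look up Config Admin (error handling omitted for brevity)
        ServiceReference<ConfigurationAdmin> ref =
                ctx.getServiceReference(ConfigurationAdmin.class);
        ConfigurationAdmin cm = ctx.getService(ref);

        // Read the properties file shipped inside this bundle
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/defaults.properties")) {
            props.load(in);
        }

        Hashtable<String, Object> dict = new Hashtable<String, Object>();
        for (String key : props.stringPropertyNames()) {
            dict.put(key, props.getProperty(key));
        }

        // Push the data into Config Admin for the target ManagedService PID
        Configuration cfg = cm.getConfiguration("com.example.mymanagedservice", null);
        cfg.update(dict);
    }

    public void stop(BundleContext ctx) {
    }
}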
The ManagedServices will still need to perform validation of incoming configuration data and dynamically react to new configuration data. OSGi is a dynamic platform and Config Admin is designed to allow for on-the-fly reconfiguration of a running system.
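On the receiving side, that validation usually lives in the ManagedService's updated() callback. A hedged sketch (the PID registration is omitted, and the poll.interval property is purely illustrative):

import java.util.Dictionary;

import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

public class PollingService implements ManagedService {

    private volatile int pollIntervalSeconds = 30; // default value

    public void updated(Dictionary<String, ?> properties) throws ConfigurationException {
        if (properties == null) {
            // No configuration present: keep running with defaults
            pollIntervalSeconds = 30;
            return;
        }
        Object raw = properties.get("poll.interval");
        int value;
        try {
            value = Integer.parseInt(String.valueOf(raw));
        } catch (NumberFormatException e) {
            throw new ConfigurationException("poll.interval", "not a number", e);
        }
        if (value <= 0) {
            throw new ConfigurationException("poll.interval", "must be positive");
        }
        pollIntervalSeconds = value; // react on the fly, no restart required
    }
}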

Related

Using Configuration File instead of System Registry

The Portal UI React application uses Registry settings instead of a local settings.json file in order to run in the local environment. This is a pain for the developer because every time a Registry value is updated the system needs a restart, which is not an advisable approach in this fast-moving development world. Using Registry settings instead of a local JSON-based configuration file means less flexibility and more dependency.
I propose moving all the configuration into a local JSON file and checking that file into the application's repository.
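For illustration only, such a checked-in settings.json might look like this (the keys are made up, not taken from the actual Portal UI application):

{
  "apiBaseUrl": "https://portal.example.com/api",
  "featureFlags": {
    "enableNewDashboard": true
  },
  "logLevel": "debug"
}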
If there is any other approach that would make this scenario easier to work with, please share your thoughts.
Thanks
Iftekhar

Service Fabric - can a .NET 4.7.2 application running in a container on IIS access the Fabric configuration settings?

I have a Fabric application to which I need to deploy an SSRS report viewer control. The control is WebForms, and there is no .NET Core version available yet, so my plan is to host it in a Windows container with IIS.
I would like to keep all my configuration in one place, that place being the service fabric manifests. Will it be possible for the webforms pages to read fabric configuration from inside of the container?
When I attempt to get config from Fabric I get this error:
Unable to load DLL 'FabricRuntime.dll':
I have tried to call SfBinaryLoader as detailed at the link below, but no change:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-services-inside-containers
You have two options:
Load config via the SF APIs, which is what you are trying to do, or
Load config by just reading the file. Some folders are automatically mounted inside the container and can be accessed through the container's file system, for example using the Fabric_Folder_Application environment variable (or others, depending on where exactly you're keeping your config), or by mounting the config path via a volume:
<Volume Source="/mnt/hostfolder/" Destination="/var/containerfolder/" IsReadOnly="false" />
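For context, such a Volume element sits under ContainerHostPolicies in the ApplicationManifest. A hedged sketch with placeholder names (for a Windows container the Source and Destination would be Windows-style paths):

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="ReportViewerPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- Mounts a host folder containing your config into the container -->
      <Volume Source="/mnt/hostfolder/" Destination="/var/containerfolder/" IsReadOnly="false" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>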
Using the SF APIs from within the container is possible. I would recommend following the guide you pointed to and asking separate questions if you cannot get that to work, as that may help identify other issues.
That said, using the APIs is less common for containerized workloads. Normally people just use Environment Variables to configure services running within Containers, as that is more common/standard, even if less flexible.
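As a sketch of that more common route (the variable name and image are placeholders, not taken from the question): declare the variable on the code package in the ServiceManifest and override it per environment in the ApplicationManifest.

<!-- ServiceManifest.xml: variable the container will see -->
<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ContainerHost>
      <ImageName>myregistry.azurecr.io/reportviewer:latest</ImageName>
    </ContainerHost>
  </EntryPoint>
  <EnvironmentVariables>
    <EnvironmentVariable Name="ReportServerUrl" Value="" />
  </EnvironmentVariables>
</CodePackage>

<!-- ApplicationManifest.xml: per-environment override inside the ServiceManifestImport -->
<EnvironmentOverrides CodePackageRef="Code">
  <!-- [ReportServerUrl] references an application parameter, assumed to be declared -->
  <EnvironmentVariable Name="ReportServerUrl" Value="[ReportServerUrl]" />
</EnvironmentOverrides>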

Trigger external pipeline / job after Jira in OpenShift has started

I'm running Jira in OpenShift using the basic image from Atlassian: https://hub.docker.com/r/atlassian/jira-software
So far most things work fine.
I installed a plugin using the web UI, which worked as well.
But now I'm running into an issue when a pod is restarted. The pod uses the image and naturally (as specified) my plugin is not installed anymore. I can install the plugin via web service calls and register it as an OSGi module for Jira, but I don't want to do this manually. Building a pipeline or job for this is quite easy (I'm thinking Jenkins or Ansible Tower), but so far I haven't found a way to trigger this pipeline after the pod is started (or better, after Jira is started).
Anyone got an idea how to handle this?
Thanks and best regards. Sebastian
Why not create a custom image based on the Atlassian image with everything you need installed?
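As a minimal sketch of that suggestion (the plugin jar name and the image's default jira-home path are assumptions; note that if jira-home is later mounted as a volume, anything baked into that path would be hidden by the mount):

# Dockerfile
FROM atlassian/jira-software:latest
# Copy the plugin into the Plugins 2 directory under jira-home
COPY my-plugin-1.0.0.jar /var/atlassian/application-data/jira/plugins/installed-plugins/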
As far as I know, there isn't a way to trigger a pipeline when a Pod is started; only Webhook, Image Change, and Config Change triggers are available. You'll need to write a Jenkinsfile to script all of the installation and setup you want, but then that can be triggered in one of the three ways mentioned.
I'm thinking an Image Change trigger would work best for you, so when the latest version of Atlassian's image comes out, you can run your pipeline to set everything up on the latest version.
Also, just curious, but do you have some persistent storage attached to the Jira pod? If not, you'll lose everything in Jira if the Pod dies; that means tickets, boards, comments, everything.
Update:
Looking at this page, it looks like most of the stuff you're trying to persist is stored in jira-home, so maybe mounting that as a persistent volume will be a good solution for you.
You're correct that the tickets are stored in the database, but I'm guessing the database connection settings are getting wiped when the Pod is cycled.
The jira-home directory stores your application and database connection settings, as well as a subdirectory for your plugins.
dbconfig.xml
This file (located at the root of your JIRA home directory) defines all details for JIRA's database connection. This file is typically created by running the JIRA setup wizard on new installations of JIRA or by configuring a database connection using the JIRA configuration tool.
You can also create your own dbconfig.xml file. This is useful if you need to specify additional parameters for your specific database configuration, which are not generated by the setup wizard or JIRA configuration tool. For more information, refer to the 'manual' connection instructions of the appropriate database configuration guide in Connecting JIRA to a database.
jira-config.properties
This file (also located at the root of your JIRA home directory) stores custom values for most of JIRA's advanced configuration settings. Properties defined in this file override the default values defined in the jpm.xml file (located in your JIRA application installation directory). See Advanced JIRA configuration for more information.
In new JIRA installations, this file may not initially exist and if so, will need to be created manually. See Making changes to the jira-config.properties file for more information. This file is typically present in JIRA installations upgraded from version 4.3 or earlier, whose advanced configuration options had been customized (from their default values).
plugins/
This is the directory where plugins built on Atlassian's Plugin Framework 2 (i.e. 'Plugins 2' plugins) are stored. If you are installing a new 'Plugins 2' plugin, you will need to deploy it into this directory under the installed-plugins sub-directory. 'Plugins 1' plugins should be stored in the JIRA application installation directory.
This directory is created on JIRA startup, if it does not exist already.
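A hedged sketch of mounting jira-home as a persistent volume in the pod template, as suggested above (the claim name is an assumption and the mount path is the image's default jira-home):

spec:
  template:
    spec:
      containers:
        - name: jira
          image: atlassian/jira-software
          volumeMounts:
            - name: jira-home
              mountPath: /var/atlassian/application-data/jira
      volumes:
        - name: jira-home
          persistentVolumeClaim:
            claimName: jira-home-pvc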

With Keycloak, can you load an LDAP configuration from a file?

When I run Keycloak, I'd like it to load my LDAP configuration (user federation) automatically when it is run, so I don't have to enter it manually. Is there any way to do this with Keycloak? I'm using the containerized version 7.0.0, if it matters. I am also running in standalone mode. Thanks
You should be able to create your realm from a template that has your LDAP configuration in it.
From what I understand from your question, you want to use LDAP as your user federation server, so you should have an LDAP server up and running before starting your Keycloak container, and the container should start with the LDAP configuration. To do this, I'll suggest a method that is a bit cumbersome at first, but it will give you a better grasp on how to configure Keycloak in the future.
Start by downloading Keycloak from the website and running it without putting it in a container. Set up your realm, clients, and everything apart from the LDAP configuration.
Copy the keycloak.json file outside of the directory; we're going to use that later.
Get back to your web interface, configure your LDAP server, and save the configuration.
Now copy the keycloak.json file again and place both versions in a text comparison tool (DiffMerge, for example) to see the LDAP-related differences in the configuration; those are what should be added to your container's keycloak.json.
A good practice with the Keycloak container is to create your whole configuration and replace the default one; this way your container will start every time with your realms, clients, and all other pre-configured attributes.
OK, so I think I figured it out. In Keycloak I had to export the realm via the standalone.sh script as specified in the documentation. Using the kcadm.sh admin CLI did not export the whole realm. Then I could import the realm using the admin CLI later. Thanks for your help; it led me to this answer.
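For reference, the kind of commands meant here, per the Keycloak export/import documentation (the realm name and file path are placeholders):

# Export the whole realm (including its user-federation config) to a file
bin/standalone.sh -Dkeycloak.migration.action=export \
  -Dkeycloak.migration.provider=singleFile \
  -Dkeycloak.migration.realmName=myrealm \
  -Dkeycloak.migration.file=/tmp/myrealm-export.json

# Later, import it with the admin CLI (after kcadm.sh config credentials ...)
bin/kcadm.sh create realms -f /tmp/myrealm-export.json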

Can a TeamCity build agent be configured to only run builds with a particular parameter dependency?

I have a TeamCity build agent installed on a machine which, in theory, is dedicated to running dynamic security scans, and I don't want it doing anything else (e.g. running the duplicates finder).
Short of either creating custom agent configuration properties and then customising each build's agent dependencies (which perhaps, strictly speaking, I should be doing anyway) or configuring the agent to only run selected configurations, is there any way to avoid this? Both of these approaches require additional configuration for every single build.
In a perfect world, I'd like to be able to tell the agent to only ever run builds which match a particular agent dependency. Is this possible or am I coming at it from the wrong direction?
I'm afraid TeamCity doesn't provide a way to specify that an agent can run only configurations with a specific property (and not run other configurations).
So there are only two ways to restrict agents: either with agent requirements, or by configuring the agent to only run selected configurations.
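For completeness, the agent-requirements route looks roughly like this (the property name is made up): give the dedicated agent a custom property in its conf/buildAgent.properties, then add an agent requirement on that property in each security-scan build configuration.

# conf/buildAgent.properties on the dedicated scan agent
system.agent.purpose=security-scan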
You could probably try to make some batch change in your build configuration properties, because all build configuration settings/properties are stored in XML files on disk.
In current versions of TeamCity (e.g. 8.1) you can create a pool just for your security machine, and only assign the one machine to that pool, remembering to remove it from other pools.
Then you can assign the security project to that pool. That should solve your problem.