OpenShift Enterprise: creating directories in the root file system

We have several applications that look in a standard location for configuration on the file system. Something like:
/config/db/
There would simply be too many changes required across many applications to use $OPENSHIFT_DATA_DIR instead. Is there a way to put files in arbitrary directories? Do I have to create a custom cartridge that would put the config directory there? Are there any permission restrictions that I'll run into?

I don't have much experience with OSE, only OSO. My guess would be that you should package up your configurations as an RPM and install it as root on the system, with permissions that are permissive enough for the gear users to be able to access them. Then release a new RPM for configuration changes. Installing a cartridge to a gear should never allow the data to be accessed by another gear, or allow installing anything outside that user's SELinux container.
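For illustration, here's a minimal sketch of how such an RPM might be put together; the package name, file layout, and permissions below are assumptions, not anything prescribed by OpenShift:
# hypothetical spec: ships /config/db system-wide, readable by gear users
cat > myapp-config.spec <<'EOF'
Name:      myapp-config
Version:   1.0
Release:   1
Summary:   Shared application configuration under /config/db
License:   Proprietary
BuildArch: noarch
%description
Installs application configuration files into /config/db.
%install
mkdir -p %{buildroot}/config/db
cp -a %{_sourcedir}/db/* %{buildroot}/config/db/
%files
%dir %attr(0755, root, root) /config/db
%attr(0644, root, root) /config/db/*
EOF
rpmbuild -bb myapp-config.spec   # expects your config files under ~/rpmbuild/SOURCES/db/
sudo rpm -Uvh ~/rpmbuild/RPMS/noarch/myapp-config-1.0-1.noarch.rpm
ls -ld /config/db                # should show drwxr-xr-x root root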

Related

Trigger external pipeline / job after Jira starts in OpenShift

I'm running Jira in OpenShift using the basic image from Atlassian: https://hub.docker.com/r/atlassian/jira-software
So far most things work fine.
I installed a plugin using the web ui which worked as well.
But now I'm running into an issue when a pod is restarted. The pod uses the image, so naturally (as specified) my plugin is not installed anymore. I can install the plugin via web service calls and register it as an OSGi module for Jira, but I don't want to do this manually. Building a pipeline or job for this is quite easy (I'm thinking Jenkins or Ansible Tower), but so far I haven't found a way to trigger this pipeline after the pod is started (or better, after Jira is started).
Anyone got an idea how to handle this?
Thanks and best regards, Sebastian
Why not create a custom image based on the Atlassian image with everything you need installed?
As far as I know, there isn't a way to trigger a pipeline when a Pod is started; only Webhook, Image Change, and Config Change triggers are available. You'll need to write a Jenkinsfile to script all of the installation and setup you want, but then that can be triggered in one of the three ways mentioned.
I'm thinking an Image Change trigger would work best for you, so when the latest version of Atlassian's image comes out, you can run your pipeline to set everything up on the latest version.
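As a rough sketch with the oc CLI (the image stream name "jira" and BuildConfig name "jira-custom" here are assumptions for illustration):
# track the upstream image and re-import it periodically
oc tag docker.io/atlassian/jira-software:latest jira:latest --scheduled
# fire the build (and thus your setup steps) whenever that tag updates
oc set triggers bc/jira-custom --from-image=jira:latest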
Also, just curious, but do you have some persistent storage attached to the Jira pod? If not, you'll lose everything in Jira if the Pod dies; that means tickets, boards, comments, everything.
Update:
Looking at this page, it looks like most of the stuff you're trying to persist is stored in jira-home, so maybe mounting that as a persistent volume will be a good solution for you.
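A quick sketch with the oc CLI (the volume name, claim size, and mount path are assumptions; check where jira-home actually lives in your image):
oc set volume dc/jira --add --name=jira-home \
    --type=pvc --claim-size=5Gi \
    --mount-path=/var/atlassian/application-data/jira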
You're correct that the tickets are stored in the database, but I'm guessing the database connection settings are getting wiped when the Pod is cycled.
The jira-home directory stores your application and database connection settings, as well as a subdirectory for your plugins.
dbconfig.xml
This file (located at the root of your JIRA home directory) defines all details for JIRA's database connection. This file is typically created by running the JIRA setup wizard on new installations of JIRA or by configuring a database connection using the JIRA configuration tool.
You can also create your own dbconfig.xml file. This is useful if you need to specify additional parameters for your specific database configuration, which are not generated by the setup wizard or JIRA configuration tool. For more information, refer to the 'manual' connection instructions of the appropriate database configuration guide in Connecting JIRA to a database.
jira-config.properties
This file (also located at the root of your JIRA home directory) stores custom values for most of JIRA's advanced configuration settings. Properties defined in this file override the default values defined in the jpm.xml file (located in your JIRA application installation directory). See Advanced JIRA configuration for more information.
In new JIRA installations, this file may not initially exist and, if so, will need to be created manually. See Making changes to the jira-config.properties file for more information. This file is typically present in JIRA installations upgraded from version 4.3 or earlier whose advanced configuration options had been customized (from their default values).
plugins/
This is the directory where plugins built on Atlassian's Plugin Framework 2 (i.e. 'Plugins 2' plugins) are stored. If you are installing a new 'Plugins 2' plugin, you will need to deploy it into this directory under the installed-plugins sub-directory. 'Plugins 1' plugins should be stored in the JIRA application installation directory.
This directory is created on JIRA startup, if it does not exist already.

WordPress cannot write file to disk

So my website was hacked a few days ago, and after that I did a fresh install of WordPress.
I proceeded to install my theme, and everything went smoothly. But when I tried to upload files to the media folder via WP, it said it cannot write the file to disk.
I tried a few more times and it worked; I uploaded about 5 images, then it stopped working again with the same error.
I also cannot install plugins via WP, I have to do it manually, and I had to install my theme manually too... I am going crazy. I called my web host and they said it's WordPress's fault.
You will need to review the permissions of the wp-content/* folders, subfolders, and files. Folders should be 755 so that WordPress can write into them. You can set this in most FTP client software via the CHMOD feature.
You may also need to review the owning user and group of the wp-content/* folders, subfolders, and files. The owning user and group should match the Linux user and group that your server software runs as (i.e. user www-data or apache).
wp-content/uploads for media uploads, thumbnail image generation
wp-content/themes for theme installation, automatic updates, and use of Editor
wp-content/plugins for plugins installation, automatic updates, and use of Editor
Do you have SSH access to your server? That's the easiest way to verify and change the owning user/group and permissions.
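If you do have shell access, a quick sketch (assuming Apache runs as www-data and WordPress lives in /var/www/html; adjust both to match your host):
# hand ownership of wp-content to the web server user
sudo chown -R www-data:www-data /var/www/html/wp-content
# directories need 755, files 644
sudo find /var/www/html/wp-content -type d -exec chmod 755 {} \;
sudo find /var/www/html/wp-content -type f -exec chmod 644 {} \;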

Windows Universal Apps: storing configuration

I come from web development, where apps can have multiple config files for storing things like DB connection strings, remote server endpoints, passwords, and so on.
So you have files like base.config, development.config, production.config, local.config, and so on.
Depending on the environment the app is running in, the correct config file is loaded.
Is there any such system for Windows Phone and Windows Store apps?
If so, how can I define different configs for different runtimes such as debug and production?
I would really like to avoid storing runtime config in code and then using crazy ifs.
There isn't a built-in system for this, but it's pretty easy to mock up. Create and read a file with your config information, then create different files for the different configurations. Add a pre-build step which copies the appropriate file for the desired configuration.
I'd probably name the files all the same but put them in different directories named for the $(Configuration) then copy from the $(Configuration) dir in my pre-build.
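For instance, a pre-build event command along these lines (the Config folder and settings.json name are assumptions for illustration):
copy /y "$(ProjectDir)Config\$(Configuration)\settings.json" "$(ProjectDir)Assets\settings.json"
The app then always reads Assets\settings.json at runtime, and the build configuration decides which variant landed there.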
See Pre-build Event/Post-build Event Command Line Dialog Box on MSDN
There isn't an easy way to switch this at runtime since you can't write to the appx package after it's signed and deployed.

MySQL Workbench without root access

I have a full normal user account with SSH access, and I'd like to use the database diff capability of MySQL Workbench's mysqldbcompare (or some other CLI tool if one exists). The problem is I don't have root access. Can I install this under my normal user account so I can compare my DBs on that server?
The best way to do such things is to install them as root using the system's package manager, but I'm sure you know that :P
I downloaded the source code from their website, and it uses a traditional configure script. I have installed programs, in fact whole ecosystems, in unprivileged user accounts using configure scripts. Usually all it takes is to specify where you want files to be installed:
./configure --prefix=$HOME/eco
Sometimes you have to install dependencies too. Then make sure to set LD_LIBRARY_PATH accordingly.
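For example, with the prefix above:
# make the user-local install visible to your shell (add to ~/.bashrc to persist)
export PATH=$HOME/eco/bin:$PATH
export LD_LIBRARY_PATH=$HOME/eco/lib:$LD_LIBRARY_PATH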
Depending on the distro you're using, you can even install packages to your home directory: see this question

Is a system-wide Mercurial installation enough in a shared environment?

I am learning how to install Mercurial on our team's system, but I am not experienced enough to make these decisions on my own.
For our team, we have a server machine used as a repository. Every team member also has her/his own machine running Red Hat Linux. However, we do not do anything on our local terminals; we do everything on the server. Every member has a user directory on the server, such as /home/Cassie, /home/john, ..., and we save all our code and work there. When we turn on the local terminals, the GNOME system shows our personal files from the server, not the local machine. Whenever anyone clicks the terminal application on the desktop, it connects to her own home directory, so we do not need to use an SSH command to connect to the server. It is like a school multi-user system: everyone has a user account and logs into her own account to do her own work. I hope I can set up a shared repository on that server so everyone can push, pull, and run all the other commands there.
1) Since we use a shared environment, does it mean that I need to install Mercurial only on the server, and that is enough for everyone to do "commit", "push", "pull", etc.?
2) By installing only a system-wide Mercurial, do we lose the ability to do local commits? If I would like everyone to still have the "local commit" ability, how should I do it?
3) I have searched online. Some people mentioned that on a shared network server it is impossible to have locks for two users trying to access the same file at the same time. Does that apply to my situation?
In sum, we do all our work on the server. I hope to find a plan that gives us Mercurial control over a repository shared by everyone, where everyone still has the local-commit ability and the repository has some lock protection if two users try to access a file at the same time. If this scenario is feasible, can I install Mercurial only on the server, or do I need to install it on both the server and the users' machines? If it is impossible, would someone please suggest a plan for version control on our system?
1) Since we use a shared environment, does it mean that I just need to install Mercurial on the server and it is enough for everyone to do "commit", "push", "pull", etc.?
If your users are logging into a shell on the server in order to do their work, then yes it is sufficient to have Mercurial installed only on the server.
2) By installing only system-wide Mercurial, does it eliminate the ability to do local commits? If I would like to let everyone still have the "local commit" ability, how should I do it?
Your users will presumably clone from a shared "root" repository into their own home directories in order to work on the code. They will each have a "local" copy of the repo in their home directory and will push back into the shared root repository.
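Concretely, the day-to-day flow might look like this (the /srv/hg/project path is an assumption; any path on the server that all users can read and write will do):
hg init /srv/hg/project             # one-time setup of the shared root repository
hg clone /srv/hg/project ~/project  # each user's personal copy in her home directory
cd ~/project
hg commit -m "my change"            # local commit, invisible to the other users
hg push                             # publish back to the shared root repository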
3) I have searched online. Some people mentioned that on a shared network server it is impossible to have locks for two users trying to access the same file at the same time. Does that apply to my situation?
As long as your users are working within their own local copies of the repo, they will not interfere with one another. The only time a conflict may arise is when pushing back to the shared root repository, in which case the user will need to pull, merge their changes, and resolve any conflicts.
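In that case the sequence is roughly:
hg pull                # fetch what others pushed in the meantime
hg merge               # combine it with your local commits, resolving conflicts
hg commit -m "merge"
hg push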
I would recommend reading carefully through Joel Spolsky's excellent Hg Init tutorial for a better understanding of how Mercurial handles "central" and "local" copies.