I am using Fedora 21.
I have written an SELinux policy module for an application. I have defined new types in a .te file and created a .fc file which defines the labelling of files with the types I have created. I can successfully load the policy using "make load", but the file contexts don't change when I check files and directories with "ls -Z".
Am I missing something?
For clarification: loading the policy does not set the file contexts of existing files automatically. Existing files must be relabeled according to your new policy; new files will get the new label automatically.
To restore missing or mislabeled contexts you can use the following command, e.g.:
restorecon -Rv /var/lib/docker/
From the restorecon manual: "It can also be run [...] to add support for newly-installed policy"
The normal way is to have the policy activated first and to install the program (or write the files to disk) afterwards; the files are then automatically labeled with your new context type.
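To make the mechanism concrete (a sketch; the type myapp_var_lib_t and the path are placeholders): given an .fc entry such as

/var/lib/myapp(/.*)?    gen_context(system_u:object_r:myapp_var_lib_t,s0)

existing files only pick up that label after an explicit relabel:

restorecon -Rv /var/lib/myapp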
I am trying to configure WSO2 by modifying its configuration file named "carbon.xml", but no matter what change I make to "carbon.xml", even adding a single whitespace character or modifying a comment is enough for the WSO2 server to reset carbon.xml to its original "out of the box" state.
I tried to protect carbon.xml by dropping write permissions, but in that case the WSO2 server refuses to start; it aborts execution and displays an error complaining that it was not able to "write new configuration"!
Does anyone know how to solve this?
I found the answer. In WSO2 version 5.9 there is a new centralized configuration file named "deployment.toml". Configuration changes must be made in this file, and WSO2 then propagates them to the respective configuration files, such as carbon.xml or catalina-server.xml.
If you delete "deployment.toml", WSO2 will fall back to the previous behavior.
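For example, a value that previously lived in carbon.xml, such as the hostname, goes into deployment.toml instead (a sketch; the exact keys depend on your product and version):

[server]
hostname = "example.com"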
With the new 4.5.0 carbon-kernel release, all WSO2 products, such as APIM 3.0.0 and IS 5.9.0, introduced a new config model. According to the new config model, there is a centralized configuration file (deployment.toml) where users add their configurations; those configurations are then added to the respective .xml files.
This new config model was introduced in order to simplify configuration (previously there were a lot of configuration files) and to improve the user experience. Please refer to the documentation below for further information on this new config model.
Related documents:
https://wso2.com/blogs/thesource/2019/10/simplifying-configuration-with-WSO2-identity-server
https://is.docs.wso2.com/en/next/references/new-configuration-model/
If you have a deployment.toml file, changes made directly to the xml files will be overridden during server startup. Deleting the deployment.toml file will make the server fall back to the old config model, but that is not a recommended approach.
How can I make changes in carbon.xml persist across a server restart?
No matter what I change, after a WSO2 restart my modified carbon.xml is moved to a "backup" directory and replaced with the "out of the box" carbon.xml file.
With the 4.5.0 carbon-kernel release, all WSO2 products, such as APIM 3.0.0 and IS 5.9.0, introduced a new config model. According to the new config model, there is a centralized configuration file (deployment.toml) where users add their configurations; those configurations are then added to the respective .xml files.
So if you want to change something in the carbon.xml file, you have to add the relevant configs to the deployment.toml file according to the new config model. With the new config model, all the changes you make in the xml config files will be overridden by the toml configs during server startup.
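For example (a sketch using documented keys; adjust to your product and version), instead of editing <Offset> under <Ports> in carbon.xml you would set the port offset in deployment.toml:

[server]
offset = 2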
The previous configurations are moved into the "backup" folder whenever you make a new change in the deployment.toml file; this folder serves as a backup of the previous configs.
Please refer to the documentation below for further information on this new config model.
Related documents:
https://wso2.com/blogs/thesource/2019/10/simplifying-configuration-with-WSO2-identity-server
In my case I was using WSO2 API Manager 3.1.0 and I wanted to update the <XSSPreventionConfig> tag in the carbon.xml file. Indeed, on every restart my changes in carbon.xml were overridden by an auto-generated carbon.xml with config values coming from deployment.toml.
Then I found out that there are Jinja2 template files (.j2) used to auto-generate the XML files and fill them with values from deployment.toml, located in wso2am-3.1.0/repository/resources/conf/templates/repository/conf/
I updated carbon.xml.j2 directly for my <XSSPreventionConfig> changes and it works perfectly fine.
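For illustration (a hypothetical fragment; the real template content varies by product and version), the .j2 template mixes literal XML with Jinja2 placeholders, so a direct edit hard-codes the value instead of reading it from deployment.toml:

<!-- in carbon.xml.j2: hard-coded block, bypassing the deployment.toml value -->
<XSSPreventionConfig>
    <Enabled>true</Enabled>
    <Rule>allow</Rule>
</XSSPreventionConfig>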
I am using IBM Integration Bus v.9
I am trying to read configuration from a file, as in this tutorial.
Based on the documentation, I've already set up my environment variable in Windows like this:
MQSI_FILENODES_ROOT_DIRECTORY to C:\MQSIFileInput
In the FileRead node properties, I set the input directory to "config" (without quotes), because the file is located in the C:\MQSIFileInput\config directory.
When I run it, I get the error "The directory config is not a valid directory name". What am I missing here?
Do I need to set up another configuration to read the file properly?
Thank you.
The MQSI_FILENODES_ROOT_DIRECTORY variable needs to be visible to the ExecutionGroup process at startup, so the first things to check are how you set the env var and whether you restarted the broker.
Due to the way that processes are forked on Windows, the procedure for setting env vars is usually something like this:
1. Stop the broker.
2. Close the Broker Command Prompt.
3. Modify mqsiprofile.cmd to include the variable (see the sketch below).
4. Open a new Broker Command Prompt.
5. Verify the env var is set, e.g. echo %MQSI_FILENODES_ROOT_DIRECTORY%
6. Start the broker.
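A minimal sketch of the mqsiprofile.cmd addition (the path matches the question; adjust to your environment):

rem Make the File nodes' root directory visible to the broker's processes
set MQSI_FILENODES_ROOT_DIRECTORY=C:\MQSIFileInput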
The directory also needs to be readable by the user ID the broker runs under (and writable if you will be deleting the file or moving it to a backout directory, etc.).
I'm trying to create an RPM in Fedora 15 that will install my software, but in order for my software to work correctly once installed, I also need to edit other (configuration) files on the system, add users/groups, etc. Performing some of these tasks is only allowed by the root user. I know to never create an RPM as the root user, and I understand why that is such a bad idea. However, if I add shell script statements to my spec file (%post, %prep... any section) to edit the necessary files, add users/groups, etc., my rpmbuild command fails with message "Permission denied" (not surprisingly).
What's the best way to handle this? Do I have to tell my users to install my package first, and then perhaps run a shell script as root to configure it all? That doesn't seem very elegant. I was hoping to allow a user to do everything with one simple command such as 'yum install mysoftware'.
Much of my research suggests that perhaps this shouldn't even be done via RPM. I've read many parts of Maximum RPM, and lots of other good resources, but haven't found what I'm looking for. I'm new to creating RPMs, but have already been able to successfully create a simple spec file for my software... I just can't get everything configured properly after the package is unzipped and installed to the correct location. Any input is greatly appreciated!
useradd should be run in %pre and shouldn't run during rpmbuild. That's the standard way of doing it. I would recommend the packaging guidelines and specifically the section on users and groups.
The %pre section of your RPM .spec file should check for all the conditions necessary for your software to install.
The %post section of your RPM .spec file should make all the modifications needed for your software to run.
To avoid file permission errors in the %post section of your RPM .spec file, you can set the file permissions and ownership in the %files section. That way, the user who installs the RPM has the appropriate permissions to modify the configuration files.
%install
# Copy files into the build root; what you place here becomes the installed layout on the target system

%files
# Declare the packaged files and set their permissions and ownership
%attr(775, myuser, mygroup) /path/to/my/file
%pre
# Create the custom group and user if they don't already exist
getent group mygroup >/dev/null || groupadd -r mygroup
getent passwd myuser >/dev/null || useradd -r -g mygroup -s /sbin/nologin myuser
# All other pre-install checks here

%post
# Perform post-installation steps here, like editing other (configuration) files.
echo "Installation complete."
I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job in the Jenkins interface (i.e. rather than merely an add-on that copies the artifacts elsewhere)?
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want/need them: FTP, CIFS, Confluence, Artifactory... especially the ArtifactsDeployer plugin, which allows you to keep a copy of the artifacts in the Jenkins home.
Thank you Sam, for your post, which pointed me in the right direction to solve my problem.
I had been searching for a way to make a symlink to the job archive of a build for multibranch projects. Until now, we used to manually search the filesystem for the correct folder basename and add it to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
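As a minimal scripted-pipeline sketch (assuming the rawBuild accessor has been whitelisted in the script security settings; the echo is only an illustration):

// Resolve this build's artifact archive directory on the master
def jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
echo "Artifacts will be archived under: ${jobOutputFolder}"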
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace directory." This is the directory where the code is checked out and builds happen. Normally you should let Jenkins allocate and clean up workspace directories, but in several situations this is problematic, and in such case, this option lets you specify the workspace location manually.
This option is also available under advanced project properties of multi-configuration project builds.
A groovy script under "Prepare an environment for the run" will always run on the master, and this script can create a symlink from the build's artifacts directory to where you really want the artifacts archived (archive_to, which SHOULD include the job name and build number):
import java.nio.file.Files
import java.nio.file.Paths

// createSymbolicLink(link, target) throws IOException on failure rather than returning false
Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
                         Paths.get(archive_to.getCanonicalPath()))
Of course (sadly) when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns both the symlink and the target (shame).
A workaround may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, the new symlink will dangle and a cron job can later delete the new job archive dir.
Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project's workspace into the current one, so you can work from there.
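For reference, the plugin also exposes a pipeline step; a minimal sketch (upstream-job is a placeholder project name):

// Copy the last successful build's archived artifacts into this build's workspace
copyArtifacts(projectName: 'upstream-job', selector: lastSuccessful())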