I am working with Vagrant and chef-solo, which works really well so far. I have a common directory for the chef-solo cookbooks, because I want to be able to simply clone an Opscode cookbook from their git repository and later pull updates. To make that possible, I would like to keep the cookbooks as intact as possible, so that updates can be fast-forwarded.
Opscode's cookbooks are easily configurable via JSON attributes (chef.json = {} in the Vagrantfile). However, some options are not configurable that way, e.g. the AllowOverride None setting or similar.
Therefore, I would like to be able to override files from subsequent directories, similar to overriding a method in OOP. Example: the apache2 cookbook contains the default site template in apache2/templates/default/default-site.erb with the aforementioned AllowOverride None option set. In ProjectA, I would have a default-site.erb overriding the global template with ProjectA's project-specific settings, while ProjectB has no such file and thus uses the global template.
Does anyone have such a setup running, or an idea of how to achieve this or a similarly good setup?
Chef Solo supports site-cookbooks. For this to work with Vagrant, you need to set it up manually. This is how I define my cookbook paths in my Vagrantfile:
chef.cookbooks_path = ["kitchen/cookbooks", "kitchen/site-cookbooks"]
Anything in site-cookbooks then overrides the corresponding file in cookbooks. For example, kitchen/site-cookbooks/apache2/templates/default/default-site.erb will be used instead of kitchen/cookbooks/apache2/templates/default/default-site.erb.
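As a minimal sketch of the override itself, assuming the kitchen/ layout above and the apache2 example from the question:

# Only the file you want to change needs to exist in the site-cookbook;
# everything else is still taken from kitchen/cookbooks/apache2.
mkdir -p kitchen/site-cookbooks/apache2/templates/default
cp kitchen/cookbooks/apache2/templates/default/default-site.erb \
   kitchen/site-cookbooks/apache2/templates/default/default-site.erb
# Edit AllowOverride in the copy; the cloned cookbook stays untouched,
# so its git history can still be fast-forwarded.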
Where are the OpenShift Master and Node Host Files in v4.6?
Previously, in v3, these were located as follows:
Master host files at /etc/origin/master/master-config.yaml
Node host files at /etc/origin/node/node-config.yaml
As of OCPv4, the kubelet configuration is managed dynamically, so instead of reading a configuration file on the node hosts as in OCPv3, you can check the current kubelet configuration using the following procedures.
Further information is here: Generating a file that contains the current configuration.
You can check it using the above referenced procedure (generate the configuration file) or with the oc CLI as follows:
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
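The command above needs NODE_NAME set and jq installed locally; for example, to inspect the first node in the cluster (an arbitrary choice):

$ NODE_NAME=$(oc get nodes -o jsonpath='{.items[0].metadata.name}')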
These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, because CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
To modify worker nodes, the setting you are looking for can often be configured via a kubeletConfig: Managing nodes - Modifying Nodes. Note that only certain settings can be changed this way; others cannot be changed at all.
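As a hedged sketch of that approach (the resource name and maxPods value are just examples; the label selector here targets the default worker pool and may differ in your cluster):

$ oc apply -f - <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-max-pods                # example name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    maxPods: 250                       # example setting; use the one you need
EOF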
For the master config, it depends on what you want to do: you might change a setting via a MachineConfigPool, or, for example, edit API server settings via oc edit apiserver cluster. So it depends on what you actually want to change.
I use the oc tool for several different clusters.
Since I usually keep local YAML files for any OpenShift objects I view or modify, either ad hoc or because of the config-management scheme of the individual cluster, I have a separate directory on my machine for each cluster (which, in turn, is of course versioned in git). Let's call them ~/clusters/a/, ~/clusters/b/, etc.
Now, when I cd around on my local machine, the oc command uses the global ~/.kube/config to find the cluster I logged in to last. In other words, oc does not care at all about which directory I am in.
Is there a way to have oc store a "local" configuration (i.e. in ~/clusters/a/.kube_config or something like that), so that when I enter the ~/clusters/a/ directory, I am automatically working with that cluster, without having to explicitly switch clusters with oc login?
You could set the KUBECONFIG environment variable to point to a different configuration file for each cluster. You would need to set the environment variable to the respective file in each separate terminal session.
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable
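For example, in one terminal session (paths follow the question's layout; the file gets populated by oc login once KUBECONFIG points at it):

$ export KUBECONFIG=~/clusters/a/.kube_config
$ oc login https://192.168.99.100:8443   # writes credentials into that file
$ oc get projects                        # every oc call in this shell now targets cluster a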
To expand on Graham's answer: KUBECONFIG can specify a list of config files, which are merged if more than one exists. The first file to set a particular value wins, as described in the merging rules.
So you can add a local config with just the current-context, e.g. ~/clusters/a/.kube_config could be
current-context: projecta/192-168-99-100:8443/developer
and ~/clusters/b/.kube_config:
current-context: projectb/192-168-99-101:8443/developer
Obviously you need to adjust this for your particular cluster, using the format
current-context: <namespace>/<cluster>/<user>
Then set KUBECONFIG with a relative path plus the global config:
export KUBECONFIG=./.kube_config:~/.kube/config
Note that if ./.kube_config does not exist it will be ignored.
The current-context will then be overridden by the one defined in the local .kube_config, if one exists.
I tested this locally with 2 Minishift clusters and it seemed to work OK. I have not tested the behaviour when writing config, though.
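If you want the switch to happen automatically when you cd into the directory, one option beyond the answers above is direnv (https://direnv.net); note this is an extra tool, not an oc feature:

# ~/clusters/a/.envrc -- sourced by direnv whenever you enter the directory
export KUBECONFIG=$PWD/.kube_config:$HOME/.kube/config

$ cd ~/clusters/a
$ direnv allow    # approve the .envrc once; KUBECONFIG is then set on every cd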
I am working on MySQL optimization with another researcher, and we are using git for version control. The problem is that each of us has to compile the sources on a separate machine, and running cmake . generates a different version of the Makefile on each machine. Consider the following sequence:
1. A changes source
2. A runs cmake, builds the source, and tests performance
3. B pulls the code change
4. B changes source, runs cmake and builds the source
After step 4, B will have a different version of the Makefile, as well as of files such as cmake_install.cmake that depend on the user and the user's paths.
For example, some of the files have the following diffs.
# The program to use to edit the cache.
-CMAKE_EDIT_COMMAND = /usr/local/bin/ccmake
+CMAKE_EDIT_COMMAND = /usr/bin/ccmake
# The top-level source directory on which CMake was run.
-CMAKE_SOURCE_DIR = /home/dcslab/userA/mysql/mysql-5.6.21-original
+CMAKE_SOURCE_DIR = /home/dcslab/userB/mysql-5.6.21-original
# The top-level build directory on which CMake was run.
-CMAKE_BINARY_DIR = /home/dcslab/userA/mysql/mysql-5.6.21-original
+CMAKE_BINARY_DIR = /home/dcslab/userB/mysql-5.6.21-original
These are all user-dependent paths generated by the cmake commands. The direct way to resolve this conflict is to untrack the Makefiles and any other files generated by cmake after having initially committed them. I am wondering whether there is a better, more legitimate way of managing a CMake project with more than one user. Thanks for your help in advance!
Files generated by CMake are machine-dependent, so they will not work on any machine other than the one where they were generated. Because of that, they are useless to others, and there is no need to track them in git. There are two ways to achieve this:
Tune .gitignore to ignore the files generated by CMake on commit. Patterns for such files are relatively simple and can be found by googling. The disadvantage of this approach is that files generated by the project's own CMake scripts (configure_file, add_custom_command) will not be automatically ignored and will require explicit entries in .gitignore.
Perform out-of-source builds, that is, do not run cmake from the source directory. CMake generates its additional files only in the build directory; correctly written project CMake scripts should follow this rule too. The git repo will then stay clean without any .gitignore patterns.
It is common practice to perform the out-of-source build in a ./build subdirectory of the source directory. In this case you can add /build/** to .gitignore and everything will work; both approaches are sketched below.
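A minimal sketch of both options (the .gitignore patterns are the usual ones for CMake output and may need extending for your project):

# Option 1: ignore generated files (note: this also ignores hand-written Makefiles)
cat >> .gitignore <<'EOF'
CMakeCache.txt
CMakeFiles/
cmake_install.cmake
Makefile
/build/**
EOF

# Option 2: out-of-source build; everything CMake generates stays in ./build
mkdir -p build
cd build
cmake ..
make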
An important part of good engineering -- and especially of research -- is reproducibility. It is unfortunate that the code you are working on can be influenced by the environment in which it is built (you may want to look at Bazel for future projects to reduce external dependencies). Given that this code already has this external-dependency problem, you can at least counter the effects by using a consistent environment via virtualization. In particular, you may want to take a look at Docker, which would allow you and your collaborators to build and run the code using a common system image, ensuring that all builds and executions happen in a predictable, consistent environment.
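A hedged sketch of that idea in shell (the base image and package list are assumptions; any image providing cmake, make, and a compiler works):

# Every collaborator builds with the same toolchain inside the container,
# so generated files and binaries no longer depend on the host machine.
docker run --rm -v "$PWD":/src -w /src ubuntu:14.04 bash -c \
    "apt-get update && apt-get install -y cmake make g++ && mkdir -p build && cd build && cmake .. && make"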
I would like to set up PHPUnit in PhpStorm. I press Edit Configurations... (1 in my screenshot) and would like to enter this parameter in field 2.
I am using phpunit.xml as the configuration file and want to use a relative path like
phpunit.xml
or use a project root variable like
$PROJECT_ROOT/phpunit.xml
But neither option works for me.
Based on your screenshot (the place where you want to use it): use the full path -- in the project settings such a path is stored relative to the project root anyway (unless you specify a file outside of the project, of course), and the full path is then reconstructed when needed (e.g. when shown to you, or when used as a parameter during test execution).
I don't think you'll be able to achieve what you want via the project's Run/Debug configurations. What might help is the Default configuration file setting in your default project settings, which defines the PHPUnit configuration file to use by default, so you don't need to specify it via the Use alternative configuration file option in your Run/Debug configuration.
To set this, open your Default Settings window and navigate to Languages & Frameworks -> PHP -> PHPUnit. In the Test Runner section, tick the Default configuration file checkbox and specify the location of your configuration file. If the file is always in the same path relative to your project root, you can use the $PROJECT_DIR$ variable to denote the project root; so if your PHPUnit configuration file is always in the root of your project, you might set this to $PROJECT_DIR$/phpunit.xml. When you create a new project, its Default configuration file setting will point to that file relative to the project root, and you won't need to use the Use alternative configuration file option in your Run/Debug configuration.
If you're opening the same project in different locations on the same machine, this should work for new projects without any problem. If you want to share this configuration across machines, you might need to try PhpStorm's Exporting and Importing Settings functionality.
I'm not sure if this directly solves your problem, and it's a few months late anyway, but maybe it will be useful for someone else who stumbles across this question... The above instructions were correct for my 8.0.3 installation on Linux.
I am currently using asset versioning on my Symfony2 projects. Whenever I have an update for the site, before running assetic:dump I first change the asset version number and then run
sudo php app/console assetic:dump --env=prod
and then I clear the cache. However, on my Windows machine, when I tried this, the site still used the old assets from before the update, which messed up a lot of the layout. What is the best way to prevent this from happening?
I think you have confused assets with the Assetic library. The Assetic library gives you the ability to process your CSS and JS resources, so assetic:dump just processes your JS and CSS files (minimizing them, compiling many files into one, or other processing).
To make your assets accessible you need to run php app/console assets:install. If you want them to always stay up to date with your Resources folder, you can add the --symlink option to this command. It will create a symlink web/bundles/yourbundle pointing to src/YourBundle/Resources/public.
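Putting it together, a typical update sequence would look like this (note that --symlink may not behave well on Windows, where creating symlinks needs elevated privileges; omit it there to get hard copies):

$ php app/console assets:install web --symlink
$ php app/console assetic:dump --env=prod
$ php app/console cache:clear --env=prod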