The other day I was looking into whether it is possible to sync my Chef workstation/server for easy management/visualization of all cookbook components. I already tried looking for a solution and didn't find any good information on this topic. So my questions are:
Is it possible to do?
Is it a good solution? And if not, can you recommend a better one?
If it's viable, how can I do it?
Normally cookbooks already live in source control, so this isn't really a typical request. You can use the knife download command to pull cookbook data back, but probably not in a format you want. tl;dr: go the other direction, git -> Chef server.
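As a rough sketch of both directions (assuming knife is already configured against your Chef server; the repository URL and cookbook name below are made up):

    # pull everything from the Chef server into your local chef-repo
    # (fine for a one-off snapshot, awkward as an ongoing workflow)
    knife download cookbooks/

    # the recommended direction: keep cookbooks in git, upload to the server
    git clone https://example.com/my-cookbooks.git   # hypothetical repo
    cd my-cookbooks
    knife cookbook upload my_cookbook                # reads from cookbook_path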
We've been using GitLab CI for some months, and for the last week we've been using a specific runner installed on a VPS. Currently, we are using "shell" as the executor.
Today our pipeline got stuck all of a sudden. When we looked at the server's free RAM, it was only 48 MB out of 996 MB. FYI, we're using CentOS 6.
We've been struggling to find answers, but we're stuck at the moment and would like to know:
What's causing the pipeline to get stuck?
Is it really because of low free RAM?
Should we use another executor, perhaps SSH or even Docker?
What are the best practices for dealing with this kind of problem?
We would appreciate any kind of help or directions.
In my case the pipeline was stuck because the only available runner had the option "Can run untagged jobs" set to "No" and the job was indeed untagged. You can fix this either by changing the "Can run untagged jobs" option or by adding a tag to the appropriate section of the .gitlab-ci.yml file in the repository (see the sketch below). In my case it was the default:tags: section.
(It seems that your case is much more complicated. However, I've come across this issue twice in a month, and the second time I had forgotten the solution. So I've come to this page, which looks like an appropriate place to record it. I hope the answer helps someone else.)
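For reference, a minimal sketch of that tagging section in .gitlab-ci.yml (the job, script, and tag names here are made up):

    default:
      tags:
        - my-vps-runner     # must match a tag configured on the runner

    build:
      script:
        - echo "building..."

If the runner has no matching tag and cannot run untagged jobs, jobs like this sit in the pending state, which looks exactly like a stuck pipeline.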
In my case, the pipeline was stuck because of two things:
The tags specified in the .gitlab-ci.yml do not match those in the runner configuration.
If you specify the simulator in the build command, ensure that you specify the right simulator version.
Once I made these changes, everything worked well!
Good luck.
In my case, it happened on a container that had been off for a couple of days for maintenance. I had to clear the runner caches, and it worked!
In my case the Windows gitlab-runner service was not running. Starting it solved the problem.
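For anyone hitting the same thing, checking and starting the service from an elevated prompt looks roughly like this:

    gitlab-runner status    # reports whether the service is running
    gitlab-runner start     # starts the service if it is stopped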
What is the best way to create a custom OpenShift cartridge?
Looking at documentation and examples, I see a lot of old-school compile-from-source installation of the components that a cartridge needs to run.
Some examples:
https://www.openshift.com/blogs/lightweight-http-serving-using-nginx-on-openshift
https://github.com/boekkooi/openshift-diy-nginx-php/blob/master/.openshift/action_hooks/build_nginx
https://github.com/razorinc/redis-openshift-example/blob/master/.openshift/action_hooks/build
These, and a ton of others, are compiling from source.
I need to create some custom cartridges on my project, but doing it this way feels wrong.
Is there any reason I can't use yum and Puppet/Augeas to do the building, instead of curl, make, and sed?
Or is this the best practice? If so, why are we doing this 2000-style?
I'll do my best to explain this. Feel free to let me know if I need to explain anything in more detail.
I'm assuming you're creating a custom binary cartridge (i.e., a language cartridge such as Ruby, Python, etc.). Since none of the nodes have that binary installed on the system, the custom cartridge you're creating will need to provide the binary and its libraries.
When you install a package with yum, it's going to install files in several different directories (/etc, /usr, /var, etc.). Since you're creating a cartridge that will be copied to several nodes, you'll need to package all of these files in a way that lets them be copied to a node and executed without being installed system-wide.
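That's why the examples you linked compile from source with a prefix inside the cartridge's own directory, so everything stays self-contained. A rough sketch of the pattern (the version number and the environment variable name are illustrative):

    # build nginx into the cartridge's own directory instead of /usr
    wget http://nginx.org/download/nginx-1.4.7.tar.gz
    tar -xzf nginx-1.4.7.tar.gz
    cd nginx-1.4.7
    ./configure --prefix=$OPENSHIFT_DATA_DIR/nginx   # keep all files under the cartridge
    make && make install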
As for docs, I would suggest taking a look at these:
https://www.openshift.com/developers/download-cartridges
https://www.openshift.com/blogs/new-openshift-cartridge-format-part-1
https://www.openshift.com/blogs/new-openshift-cartridge-format-part-2
12-Factor Apps suggest that you configure your application using environment variables. So far, so good. I can easily imagine that this is a good way to do it if you need to set a connection string, e.g.
But what if you have a more complex configuration with lots and lots of values? I certainly don't want to have 50+ environment variables, do I?
How could I solve this, and still be compliant to the idea of 12-Factor Apps?
From a quick read of the configure link you provided, I agree with the author's claim that there is a widespread problem, but I am not convinced that their proposed solution is going to always be best. Like you, I don't relish the idea of having to define dozens of environment variables to configure an application. So here are some alternative ideas.
First, read Chapter 2 of the Config4* Getting Started Guide (disclaimer: I am the main author of that software). In particular, notice that its support for what I call adaptive configuration can go a long way towards addressing the concern that you ask about. Is Config4* the ultimate solution? Possibly not, but I think it is a good step in the right direction.
Second, the chances are that whatever application you are developing/maintaining has already settled on a particular configuration technology, such as XML files or Java property files, and it won't be feasible to migrate to using Config4*. This raises the question: is there anything you can do to avoid having a proliferation of, say, XML-based configuration files when you have multiple environments (such as dev, UAT, staging and production) in which the application will be deployed? I have outlined an approach for dealing with this issue in another StackOverflow article.
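A third option, not from the Config4* guide but a common compromise: keep a single environment variable that points at a per-environment file, and export its contents at startup. The process still reads only its environment, so the 12-factor spirit is preserved, but only one variable has to be set per environment. A minimal shell sketch (the file paths and names are made up):

    # deploy-time: pick the file for this environment
    export APP_CONFIG=/etc/myapp/production.env

    # startup script: export every variable in the file, then launch the app
    set -a            # auto-export all variables defined from here on
    . "$APP_CONFIG"
    set +a
    exec ./myapp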
I want to write extensions for Mercurial. What are good resources, such as tutorials, guides, API references, or maybe even an existing extension that is well commented and easy to learn from the source?
So far, I have only found the short MercurialApi and WritingExtensions wiki pages.
Mercurial: The Definitive Guide, also known as the hg book, contains a section on writing extensions for Mercurial. The book is available to view for free at http://hgbook.red-bean.com/.
Edit: My apologies, the hg book only describes using extensions, not writing them. The section on writing hooks in the book may still be useful, though.
The best way to learn how to write an extension is probably going to be reading extension code. Focus the most attention on extensions that perform functions similar to what you want to implement.
e.g., if you're interested in converting from one SCM system to another, take a look at the hg-git extension.
As far as I know, there isn't a lot in the way of 'learning materials' for writing extensions. Your best bet is probably to find an extension that does something similar to the one you want to write, read the source and figure out how it works. You can try contacting that extension's author if you get stuck.
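To give an idea of the shape, here is a minimal extension along the lines of the example on the WritingExtensions wiki page (the command registration API varies between Mercurial versions, so treat this as a sketch):

    # hello.py - a minimal Mercurial extension adding an "hg hello" command
    def hello(ui, repo, **opts):
        """print a greeting"""
        ui.write("hello from an extension!\n")

    cmdtable = {
        # command name: (function, options list, synopsis string)
        "hello": (hello, [], "hg hello"),
    }

Enable it in your ~/.hgrc, then run hg hello inside any repository:

    [extensions]
    hello = /path/to/hello.py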
I have a Subversion server running with Apache mod_dav_svn, and it works nicely, but the HTML browsing ability is a bit spartan. Is there a way to customize it at all?
There are two things I'd like to do that would make a huge difference:
Separate the directories from the files so all the directories are at the top; right now everything is in alphabetical order. (The picture above happens to have all the directories preceding the files in alphabetical order, but trust me, that's not the normal case.)
List basic file statistics (file size, modification time, last updated revision, etc.).
Is it possible to do either of these with mod_dav_svn?
In a vanilla Subversion install, the web interface is very spartan by design. (Remember the HTTP interface is designed for SVN clients, not human beings.)
You can customize the display somewhat via the SVNIndexXSLT directive. (Here is a good place to start).
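For example, in your Apache configuration it looks something like this (the paths are illustrative):

    <Location /svn>
      DAV svn
      SVNParentPath /var/svn
      SVNIndexXSLT "/svnindex.xsl"   # stylesheet applied to directory listings
    </Location>

The stylesheet path is a URL on the same server, so you also need to serve the .xsl file (and any CSS it references) from a regular, non-SVN location.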
If you want something richer (with logs and diff features), you will need to install a special front end. WebSVN and ViewVC are very popular. There is also Trac, but this is a higher-level tool.
A list of other repo browsing tools.
Just FYI, we use WebSVN for our repo instance. It took some effort to get it up and running, but once it is set up you can pretty much leave it alone.
WebSVN looks like it might help you. I tried Trac, and it is very slick, but I found it complicated, and it seems like overkill for what you're looking for, IMO.
Not out of the box - that is, without modifying the source code. You might be interested in tools like ViewSVN, or the more sophisticated Trac or Redmine.