What is "vendoring" exactly? How would you define this term?
Does it mean the same thing in different programming languages? Conceptually speaking, not looking at the exact implementation.
Based on this answer, where it is defined for Go as:
Vendoring is the act of making your own copy of the 3rd party packages
your project is using. Those copies are traditionally placed inside
each project and then saved in the project repository.
The context of this answer is in the Go language, but the concept still applies.
If your app depends on certain third-party code being available, you can declare a dependency and let your build system install it for you.
If, however, the source of the third-party code is not very stable, you can "vendor" that code: you take the third-party code and add it to your application in a more or less isolated way. If you take this isolation seriously, you should "release" this code internally to your organization/working environment.
Another reason for vendoring is if you want to use certain third-party code but change it a little (in other words, a fork). You can copy the code, change it, release it internally, and then let your build system install this piece of code.
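In Go, for example, the toolchain supports this directly. A minimal sketch, assuming a module-based project:

go mod vendor   # copies every dependency listed in go.mod into vendor/
git add vendor go.mod go.sum
git commit -m "Vendor third-party dependencies"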
Vendoring means putting a dependency into your project folder (vs. depending on it globally) AND committing it to the repo.
For example, running cp /usr/local/bin/node ~/yourproject/vendor/node and committing it to the repo would "vendor" the Node.js binary – all devs on the project would use this exact version. This is not commonly done for Node itself, but e.g. Yarn 2 ("Berry") is used like this (and only like this; they don't even install the binary globally).
The act of committing is important. As an example, node_modules are already installed in your project, but only committing them makes them "vendored". Almost nobody does that for node_modules, but e.g. PnP + Zero-Installs in Yarn 2 are actually built around vendoring – you commit .yarn/cache, with its many ZIP files, into the repo.
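A hedged sketch of what that looks like in practice (exact file names, such as .pnp.cjs, depend on the Yarn version):

yarn set version berry    # switch the project to Yarn 2 ("Berry")
yarn install              # fills .yarn/cache with zipped packages
git add .yarn/cache .pnp.cjs yarn.lock .yarnrc.yml
git commit -m "Vendor dependencies via zero-installs"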
"Vendoring" inherently brings tradeoffs between repo size (longer clone times, more data transferred, local storage requirements etc.) and reliability / reproducibility of installs.
Summarizing the other (too?) long answers:
Vendoring is hard-coding the often forked version of a dependency.
This typically involves static linking or some other form of copying, but it doesn't have to.
Right or wrong, the term "hard-coding" has an old and bad reputation, so you won't find it near projects that openly vendor; however, I can't think of a more accurate term.
As far as I know the term comes from Ruby on Rails.
It describes a convention to keep a snapshot of the full set of dependencies in source control, in directories that contain package name and version number.
The earliest occurrence of vendor as a verb that I found is the "vendor everything" post on err the blog (2007, a bit before the author co-founded GitHub). That post explains the motivation and how to add dependencies. As far as I understand the code and commands, there was no special tool support for calling the directory vendor at that time (patches and code snippets were floating around).
The err blog post links to earlier ones with the same convention, like this fairly minimal way to add vendor subdirectories to the Rails import path (2006).
Earlier articles referenced from the err blog, like this one (2005), seem to use the lib directory, which didn't make the distinction between your own code and untouched snapshots of dependencies.
The goal of vendoring is more reproducibility and better deployment (the kind of thing people currently use containers for), as well as better transparency through source control.
Other languages seem to have picked up the concept as is. One related concept is lockfiles, which define the same set of dependencies in a more compact form, involving hashes and remote package repositories. Lockfiles can be used to recreate the vendor directory and to detect any alterations. The lockfile concept may have come from the Ruby gems community, but don't quote me on that.
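For illustration, a trimmed sketch of the Gemfile.lock format used by Ruby's Bundler (package names and versions are examples):

GEM
  remote: https://rubygems.org/
  specs:
    rack (2.2.8)
    rake (13.0.6)

PLATFORMS
  ruby

DEPENDENCIES
  rack
  rake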
The solution we’ve come up with is to throw every Ruby dependency in vendor. Everything. Savvy? Everyone is always on the same page: we don’t have to worry about who has what version of which gem. (we know) We don’t have to worry about getting everyone to update a gem. (we just do it once) We don’t have to worry about breaking the build with our libraries. […]
The goal here is simple: always get everyone, especially your production environment, on the same page. You don’t want to guess at which gems everyone does and does not have. Right.
There’s another point lurking subtly in the background: once all your gems are under version control, you can (probably) get your app up and running at any point of its existence without fuss. You can also see, quite easily, which versions of what gems you were using when. A real history.
Yesterday I tried updating from MATE 1.4 to MATE 1.6. I didn't like some things about it, and I decided to switch back, at least for now. One of the changes was a switch from the mateconf configuration system to GNOME 3's GSettings. As I understand it, this is a frontend to a system called dconf (or they are connected in some other way).
This rendered many of my settings void. I figured I could try to migrate them, but unlike gconf and mateconf, which created convenient folders in my home directory and filled them with XML I could edit or copy, I wasn't able to find any trace of dconf's settings storage.
A new Control Center is provided (and mandatory to install) but I don't want to be clicking through dozens of dialogs just to restore settings I already have. The Configuration Editor utility might be okay, but it only works with mateconf.
So what I want to know is where I can find the files created by dconf and how I can modify them directly, without relying on special tools.
I almost forgot that I asked this, until abo-abo commented on it. I now see that this is a Super User question, but for some reason I can't flag it. I would if I were able to.
The best solution I found was to install dconf-tools, which provides an editor like the old gconf and mateconf editors.
As for the actual location of the data on disk, it seems to be stored in /var/etc/dconf as gzipped text files, but I'm not entirely sure, because I'm not using MATE 1.6 right now. I wouldn't advise editing them directly.
I've been having another issue with dconf, and I checked the folder that I mentioned above. It doesn't even exist. There now seems to be a single configuration file at ~/.config/dconf/[USERNAME]. It isn't in text format, so special tools are required to edit it.
This might be the result of an update to dconf.
I had a similar problem (I was trying to back up custom keyboard shortcuts). The relevant paths and dump commands were:
dconf dump /org/gnome/desktop/wm/keybindings/ > wm-keybindings.dconf.bak
dconf dump /org/gnome/settings-daemon/plugins/media-keys/ > media-keys-keybindings.dconf.bak
Thanks to redionb's answer on Reddit.
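To restore the settings later, dconf has a matching load command that reads the same keyfile format from standard input:

dconf load /org/gnome/desktop/wm/keybindings/ < wm-keybindings.dconf.bak
dconf load /org/gnome/settings-daemon/plugins/media-keys/ < media-keys-keybindings.dconf.bak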
At the company where I work, I was asked to write an auto-update function a la Chrome. I.e. it should check periodically whether a new version is available, download the new version, and apply it silently the next time the application starts.
I already have something up and running, but it is more like a dirty hack than something I feel happy about. So, I would like to know how to design and implement such a solution. My horrible hack works like this:
Have a mechanism to check whether a new version exists (a database query or a web service)
Download a full zip with the whole new version.
Check the file signature. If everything went alright, set a must-update registry value to true.
When the application restarts, if the must-update value is true, launch an update program and exit.
The updater deletes the contents of the application folder, unzips the new version in its place, launches the application, and exits.
Now I would like to change it so that it works more cleanly. I am planning to send the update as a bsdiff file, which gets downloaded. But the question is: what happens next?
When do I apply the update?
Who is in charge of applying the patch? Is it the program itself, or is it a third program, as in my current approach, that applies the patch and relaunches the application?
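For reference, the bsdiff tools work like this (file names are examples): the patch is generated against a known old version on the server and applied to that same version on the client.

bsdiff app-1.0.exe app-1.1.exe app.patch    # server: create the delta
bspatch app-1.0.exe app-1.1.exe app.patch   # client: rebuild 1.1 from 1.0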
If you're going down the C++ route, you can download the Chromium source code and dig around to see how the update is done; this might give you a better idea of how to approach it. Here's an article that might help.
If you're familiar with .NET, the recently released NuGet also has an auto-update feature that might be useful to look at; you can get the source code from here. David Ebbo has a blog post about how it's done here.
I'm not up to date on Delphi, but you might be able to use either of the above options.
The workflow you proposed is more or less how it should work, but there's no need to re-invent the wheel: there are plenty of libraries out there that will do this for you. Using a 3rd-party library has the benefit of keeping your code cleaner while making sure the dirty process of auto-updating is contained and works flawlessly.
Trust me, I know. I'm the author of NAppUpdate, an app update framework for .NET (which you might want to try out or learn from).
So, after giving it a lot of thought, this is what I came up with (the "active directory" is the directory where the main program lies, the "active program" is the main program, and the "update program" is the one that replaces the active program and its resource files):
The active program checks whether there is a new version every certain amount of time. If so, it downloads it.
Prepare the new version in a separate folder (this can be done by copying the contents of the program directory to a subdirectory and applying a binary patch, or simply by unzipping the new version).
Set a flag that indicates that a new version is ready.
When a program is exiting (and one has to control for different interrupts here):
The active program checks the new-version-ready flag. If it is set, it launches the update program and exits.
The update program checks whether it can write to the active directory. If so, it replaces the contents with the prepared version.
The update program has to recheck links and update them accordingly.
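A minimal sketch of that hand-off as a shell script (all names and paths are hypothetical; a real Windows implementation would be a small executable doing the same steps):

#!/bin/sh
APP_DIR="$1"    # active directory, passed in by the active program
APP_PID="$2"    # PID of the exiting active program
STAGING="$APP_DIR/update-staging"

# Wait until the active program has really exited and released its files.
while kill -0 "$APP_PID" 2>/dev/null; do sleep 1; done

# Replace the old contents with the prepared version, then clean up.
cp -R "$STAGING"/. "$APP_DIR"/
rm -rf "$STAGING"

# Relaunch the application; the updater itself exits here.
exec "$APP_DIR/myapp"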
So guys, if you have a better workflow, please tell me.
You could literally use the Google Chrome update workflow by using the Google Chrome updater:
http://code.google.com/p/omaha/
They open-sourced it in February 2009.
It would be nice to have a more or less complete list of the files and/or directories that shouldn't (in most cases) be under source control. What do you think should be excluded?
Suggestions so far:
In general
Config files with sensitive information (passwords, private keys etc.)
Thumbs.db, .DS_Store and desktop.ini
Editor backups: *~ (emacs)
Generated files (for instance Doxygen output)
C#
bin\*
obj\*
*.exe
Visual Studio
*.suo
*.ncb
*.user
*.aps
*.cachefile
*.backup
_UpgradeReport_Files
Java
*.class
Eclipse
I don't know, and this is what I'm looking for right now :-)
Python
*.pyc
Temporary files
.*.sw?
*~
Anything that is generated: binaries, bytecode, code/documents generated from XML.
From my commenters, exclude:
Anything generated by the build, including code documentations (doxygen, javadoc, pydoc, etc.)
But include:
3rd party libraries that you don't have the source for OR don't build.
FWIW, at my work for a very large project, we have the following under ClearCase:
All original code
Qt source AND built debug/release
(Terribly outdated) specs
We do not keep built modules for our software under source control. A complete binary is distributed every couple of weeks with the latest updates.
OS specific files, generated by their file browsers such as
Thumbs.db and .DS_Store
Some other typical Visual Studio files/folders are
*.cachefile
*.backup
_UpgradeReport_Files
My Tortoise global ignore pattern, for example, looks like this:
bin obj *.suo *.user *.cachefile *.backup _UpgradeReport_Files
Files that get built should not be checked in.
I would approach the problem a different way: what things should be included in source control? You should only source-control those files that:
( need revision history OR are created outside of your build but are part of the build, install, or media ) AND
can't be generated by the build process you control AND
are common to all users that build the product (no user config)
The list includes things like:
source files
make, project, and solution files
other build tool configuration files (not user related)
3rd party libraries
pre-built files that go on the media like PDFs & documents
documentation
images, videos, sounds
description files like WSDL, XSL
Sometimes a build output can be a build input. For example, an obfuscation rename file may be an output of one build and an input to the next, to keep the same renaming scheme. In this case, use the checked-in file as the build input and write the output to a different file. After the build, check out the input file, copy the output file over it, and check it in.
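A sketch of that round-trip with invented names, assuming an SVN-style workflow:

cp renames.map build/renames-in.map       # committed file feeds the build
./build.sh                                # build writes build/renames-out.map
cp build/renames-out.map renames.map      # fold the output back into the input
svn commit -m "Refresh rename map" renames.map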
The problem with using an exclusion list is that you will never know all the right exclusions and might end up source controlling something that shouldn't be source controlled.
As Corey D said, anything that is generated, and specifically anything generated by the build process or development environment, is a good candidate. For instance:
Binaries and installers
Bytecode and archives
Documents generated from XML and code
Code generated by templates and code generators
IDE settings files
Backup files generated by your IDE or editor
Some exceptions to the above could be:
Images and video
Third party libraries
Team specific IDE settings files
Take third-party libraries: if you need to ship one, or your build depends on it, it wouldn't be unreasonable to put it under source control, especially if you don't have the source. Also consider that some source control systems aren't very efficient at storing binary blobs, and you probably will not be able to take advantage of the system's diff tools for those files.
Paul also makes a great comment about generated files and you should check out his answer:
Basically, if you can't reasonably expect a developer to have the exact version of the exact tool they need, there is a case for putting the generated files in version control.
With all that being said, ultimately you'll need to consider what you put under source control on a case-by-case basis. Defining a hard list of what should and should not go under it will only work for some, and probably only for so long. And of course, the more files you add to source control, the longer it will take to update your working copy.
Anything that can be generated by the IDE, build process or binary executable process.
An exception:
4 or 5 different answers have said that generated files should not go under source control. That's not quite true.
Files generated by specialist tools may belong in source control, especially if particular versions of those tools are necessary.
Examples:
parsers generated by bison/yacc/antlr,
autotools files such as configure or Makefile.in, created by autoconf, automake, libtool etc,
translation or localization files,
files generated by expensive tools, when it is cheaper to install those tools on only a few machines.
Basically, if you can't reasonably expect a developer to have the exact version of the exact tool they need, there is a case for putting the generated files in version control.
This exception is discussed by the svn guys in their best practices talk.
Temp files from editors.
.*.sw?
*~
etc.
desktop.ini is another Windows file I've seen sneak in.
Config files that contain passwords or any other sensitive information.
Actual config files, such as web.config in ASP.NET, because people can have different settings. Usually the way I handle this is by having a web.config.template in SVN. People get it, make the changes they want, and rename it to web.config.
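In practice the convention is just this (the svn:ignore step is my assumption, to keep the local copy out of commits):

cp web.config.template web.config       # each developer makes a local copy
svn propset svn:ignore "web.config" .   # keep the local copy unversioned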
Aside from this and what you said, be careful of sensitive files containing passwords (for instance).
Avoid all the annoying files generated by Windows (Thumbs.db) or Mac OS (.DS_Store).
*.bak produced by WinMerge.
additionally:
Visual Studio
*.ncb
The best way I've found to think about it is as follows:
Pretend you've got a brand-new, store-bought computer. You install the OS and updates; you install all your development tools including the source control client; you create an empty directory to be the root of your local sources; you do a "get latest" or whatever your source control system calls it to fetch out clean copies of the release you want to build; you then run the build (fetched from source control), and everything builds.
This thought process tells you why certain files have to be in source control: all of those necessary for the build to work on a clean system. This includes .designer.cs files, the outputs of T4 templates, and any other artifacts that the build will not create.
Temp files, config for anything other than global development, and sensitive information.
Things that don't go into source control fall into three classes:
Things totally unrelated to the project (obviously)
Things that can be found on installation media and are never changed (e.g. 3rd-party APIs).
Things that can be mechanically generated, via your build process, from things that are in source control (or from things in class 2).
Whatever the language:
cache files
generally, imported files should not go in either (like images uploaded by users in a web application)
temporary files, even the ones generated by your OS (like Thumbs.db under Windows) or IDE
config files with passwords? It depends on who has access to the repository.
And for those who don't know about it: svn:ignore is great!
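A quick example of setting it (writing the patterns to a file first is just a convenience):

printf '*.pyc\n*~\nThumbs.db\n' > .svnignore
svn propset svn:ignore -F .svnignore .   # set the property on this directory
svn propget svn:ignore .                 # verify what is ignored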
If you have a runtime environment for your code (e.g. dependency libraries, specific compiler versions, etc.), do not put the packages into source control. My approach is brutal but effective: I commit a makefile whose role is to download (via wget) the packages, unpack them, and build my runtime environment.
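The gist of that makefile, written as plain commands (the URL and names are invented):

wget https://example.com/libfoo-1.2.tar.gz
tar xzf libfoo-1.2.tar.gz -C third_party/
cd third_party/libfoo-1.2 && ./configure && make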
I have a particular .c file that does not go in source control.
The rule is: nothing that is generated during the build process goes into source control.
The only known exception is if a tool requires an older version of itself to build (a bootstrapping problem). In that case, you will need a known-good bootstrap copy in source control so you can build from scratch.
Going out on a limb here, but I believe that if you use task lists in Visual Studio, they are kept in the .suo file. This may not be a reason to keep them in source control, but it is a reason to keep a backup somewhere, just in case...
A lot of time has passed since this question was asked, and I think a lot of the answers, while relevant, lack hard details on .gitignore at a per-language or per-IDE level.
GitHub maintains a very useful, community-collaborated list of .gitignore files for all sorts of projects and IDEs that is worth taking a look at.
Here's a link to that git repo: https://github.com/github/gitignore
To answer the question, here are the related examples for:
C# -> see Visual Studio
Visual Studio
Java
Eclipse
Python
There are also OS-specific .gitignore files, as follows:
Windows
OS X
Linux
So, assuming you're running Windows and using Eclipse, you can just concatenate Eclipse.gitignore and Windows.gitignore into a .gitignore file in the top-level directory of your project. Very nifty stuff.
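For example, assuming a checkout of github/gitignore next to your project (paths inside that repo may differ over time):

cat ../gitignore/Global/Eclipse.gitignore ../gitignore/Global/Windows.gitignore > .gitignore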
Don't forget to add the .gitignore to your repo and commit it!
Chances are, your IDE already handles this for you. Visual Studio does anyway.
And for the .gitignore files: if you see any files or patterns missing in a particular .gitignore, you can open a PR on that file with the proposed change. Take a look at the commit and pull request trackers for ideas.
I always use www.gitignore.io to generate a proper .gitignore file.
Opinion: everything can go in source control if you need it to, unless it brings significant repository overhead, such as frequently changing or large blobs.
3rd-party binaries and hard-to-generate (in terms of time) files that speed up your deployment process are all OK.
The main purpose of source control is to match one coherent system state to a revision number. If it were possible, I'd freeze the entire universe along with the code: the build tools and the target operating system.
I have a Subversion server running with Apache mod_dav_svn, and it works nicely, but the browsing ability via HTML is a bit spartan. Is there a way to customize it at all?
There are two things I'd like to do that would make a huge difference:
Separate the directories from the files, so all the directories are at the top. Right now everything is in alphabetical order. (The picture above happens to show all the directories preceding the files in alphabetical order, but trust me, that's not the normal case.)
List basic file statistics (file size, modification time, last updated revision, etc.).
Is it possible to do either of these with mod_dav_svn?
In a vanilla Subversion install, the web interface is very spartan by design. (Remember the HTTP interface is designed for SVN clients, not human beings.)
You can customize the display somewhat via the SVNIndexXSLT directive. (Here is a good place to start).
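A sketch of the relevant httpd.conf section (the paths are examples; the XSL file itself must be served at the given URL, outside the SVN location):

# mod_dav_svn picks up SVNIndexXSLT when rendering directory listings
<Location /svn>
  DAV svn
  SVNParentPath /var/svn
  SVNIndexXSLT "/svnindex.xsl"
</Location>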
If you want something richer (with logs and diff features), you will need to install a special front end. WebSVN and ViewVC are very popular. There is also Trac, but this is a higher-level tool.
A list of other repo browsing tools.
Just FYI, we use WebSVN for our repo instance. It took some effort to get it up and running, but once it is set up you can pretty much leave it alone.
WebSVN looks like it might help you. I tried Trac, and it is very slick, but I found it to be complicated, and it seems like overkill for what you're looking for, IMO.
Not out of the box, that is, not without modifying the source code. You might be interested in tools like ViewSVN, or the more sophisticated Trac or Redmine.