I'm trying to find a workable version-control workflow for multiple developers in our ColdFusion shop before we implement one.
Currently, most of us (still) work directly in production. I want to change that.
If each developer has their own repo and there are repos on the test and prod web servers, what is the value of a 'central' repository? What value does something like Bitbucket add in this scenario?
If you use a central repository, you can put development into a dev branch and reserve the production branch for bugfixes only. Also, I think it is a bad idea to run a production environment directly off a Mercurial repo. Consider a regular deploy strategy to decouple the production server from the repository.
But I must admit that I have no experience with ColdFusion; maybe there it is perfectly fine to run directly from a repository.
The chief advantages of Bitbucket are the lack of server setup/maintenance/backup and the fact that you can get to it anywhere you have internet access.
@luksch was right to question running prod directly from a cloned repo. At a minimum you want to make sure you're not serving up the .hg directory. I'd encourage you to use some kind of deployment script that grabs the source at a tag from Mercurial, packages it, places it on the production server, and restarts or does anything else the CF server needs.
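For example, a minimal deploy script could look something like this (the tag, paths, and service name are placeholders for whatever fits your environment):

hg archive -r release-1.2 -t tgz /tmp/app-release-1.2.tgz
scp /tmp/app-release-1.2.tgz deploy@prod:/tmp/
ssh deploy@prod "tar -xzf /tmp/app-release-1.2.tgz --strip-components=1 -C /var/www/app && service coldfusion restart"

hg archive never includes the .hg directory, so this also takes care of not serving it up.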
The best experience I had with ColdFusion (much as one might have a "best" root canal) was when we ditched Adobe's server and used Railo. This freed us from paying Adobe for licenses on all our servers and made it easy to package the app and its runtime in a WAR, thus making deployment super easy.
A central repo hosted on Bitbucket or on your company's servers (assuming they are physically protected and backed up) gives you the advantage of reliable availability and business continuity in case something really bad happens.
Multiple copies of the repository are good if a hard drive holding one of them crashes. But if fire or theft wipes out all of the hard drives holding the repos (and I have read of one case where that happened), you are left with nothing.
One of the best things about distributed version control is that it allows you to design your workflow around your own development processes. You do not need a central repository but many projects end up using one.
A central repository is a great way of keeping track of the latest version of the code base. This means that when a developer wants to get a copy of the latest code they always know where to pull / clone from rather than having to ask around the team.
Having a central repository doesn't limit you in any way; you can still use other workflows alongside it. For example, if a few members of the team are working on a feature, they can push and pull between their development repositories without pushing to the central repository.
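For instance (the URLs are made up):

hg pull ssh://colleague-box//home/alice/feature-work
hg push https://bitbucket.org/yourteam/central

The first grabs a teammate's in-progress work directly from their machine; the second publishes to the central repo only when the work is ready.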
Is it a good idea to show/hide a React component using window.env?
For example, we have a feature which we are not ready to release yet, so we are thinking of hiding it using window.env.FEATURE_ENABLED=0 (these vars will be picked up by an API call to the service that serves the bundle to the browser).
But I am thinking it's risky, since a user can look at window.env, set window.env.FEATURE_ENABLED=1, and start seeing the workflow we intend to hide.
Could anyone please provide their take on this?
Yes, it could potentially be risky for the reason you say.
A better approach would be to include only finished features in the production build; unfinished features that are still in testing should not be sent to the client. For such features, have a separate build (see the sketch at the end of this answer). Host it:
On a local dev server (usually one running on the developer's personal machine), which is great when making rapid changes, or
On a staging server: one that's accessible to all developers and works like the live site, but isn't at the production URL
A staging server is the professional approach when multiple devs need access to it at once. It can take some work at first to integrate it into your build process, but it's worth it for larger projects.
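To illustrate the build-time approach, here is a sketch assuming a webpack/Create React App-style setup where REACT_APP_* environment variables are inlined as literals at build time (the component names are made up):

import React from "react";
import { MainWorkflow } from "./MainWorkflow"; // hypothetical components
import { FeatureX } from "./FeatureX";

// REACT_APP_FEATURE_X is replaced with a string literal at build time,
// so the check below becomes statically true or false in the bundle.
const featureXEnabled = process.env.REACT_APP_FEATURE_X === "true";

export function App() {
  return (
    <div>
      <MainWorkflow />
      {featureXEnabled && <FeatureX />}
    </div>
  );
}

Built with REACT_APP_FEATURE_X unset, the condition is statically false and the minifier can drop the feature branch from the bundle entirely, so there is nothing for a user to flip at runtime.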
I'm playing around in .NET Core and attempting to make use of the user secret store; some details are here: https://docs.asp.net/en/latest/security/app-secrets.html
I'm getting along with it well enough when working locally, but I'm having trouble understanding how this could be utilized effectively in a team environment, or if I wanted to work on this project from more than one computer.
The store itself (at least by default) keeps its configuration JSON file within the user's AppData directory (on Windows). This feature is good if you're uploading the project to GitHub and want to hide your API keys, connection strings, etc. This is all great when it's just me, on one machine, working on a project. But how does this work in a team environment, or on multiple machines? The only thing I can think of is to find the configuration file, check it into a private repo, and make sure to replace it in the correct directory when changes occur.
Is there another way to manage this that I'm not aware of?
As you already know, the Secret Manager tool provides another way to avoid checking sensitive data into source control by adding this layer of control.
So, where should we store sensitive configuration instead? The location should obviously be separate from your source code and, more importantly, secure. It could be in a separate private repository, protected fileshare, document management system, etc.
Rather than finding and sharing the exact configuration file, however, I would suggest keeping a script (e.g. .bat file) that you would run on each machine to set your secrets. For example:
dotnet user-secrets set MySecret1 ValueOfMySecret1 --project c:\work\WebApp1
dotnet user-secrets set MySecret2 ValueOfMySecret2 --project c:\work\WebApp1
This would be more portable between machines and avoid the hassle of knowing where to find and copy the config files themselves.
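To check what has been set on a given machine, you can list the stored secrets:

dotnet user-secrets list --project c:\work\WebApp1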
Also, for these settings, consider whether you need them to be the same across all developers on your team. For local development, I would normally want the freedom to install, use, and name resources differently than others on my team. Of course, this depends on your situation and preferences, and I see reasons to share them too.
As our systems grow, there are more and more servers and services (different types, plus multiple instances of the same type that require minor config changes). We are looking for a "centralized configuration" solution, preferably an existing one and not something we need to develop from scratch.
The idea is something like this: a service goes up, it knows a single piece of data (its type + location + version + serviceID, or something like that), and it contacts some central service that gives it its proper config (file, object, or whatever).
If the service that goes online can't find the config service, it will either use a cached config or refuse to initialize (the behavior should probably be specified in the startup parameters it gets from whoever or whatever is bringing it online).
The config service should be highly available, i.e. a cluster of servers (ZooKeeper keeps sounding like a perfect candidate).
The service should preferably support the concept of inheritance, allowing a global configuration file for the type of service and then specific overrides or extensions for each instance of the service by its ID. It should also support something like config versioning, keeping different configurations of the same service type for different versions, since we want to rely more and more on side-by-side rollout of services.
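For example, if ZooKeeper were the store, the inheritance and versioning requirements could map onto a znode hierarchy along these lines (purely a sketch of one possible layout; all names are invented):

/configs/billing-service/defaults                  global config for the service type
/configs/billing-service/v2.1/defaults             overrides/extensions for version 2.1
/configs/billing-service/v2.1/instances/svc-007    per-instance overrides, keyed by service ID

A service starting up would read the three nodes in order and merge them, most specific last.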
The other side of the equation is a config admin tool that connects to the same centralized config service and can review and update all the configurations based on the requirements above.
I know that if I modify the core requirement from the service pulling config data to having the data pushed to it, I can use something like Puppet or Chef to manage everything. I have to be honest: I have little experience with these two systems (our IT team has more), but from my investigation they seem NOT to be the right tools for this job.
Are there any systems similar to the one I describe above that anyone has integrated with?
I've only had experience with home-grown solutions, so my answer may not solve your issue but may help someone else. We've used web servers and SVN robots quite successfully for configuration management. This solution doesn't mean you have to "develop from scratch", but it is not a turn-key solution either.
We had multiple web servers, each refreshing its configuration from an SVN repository on a synchronized per-minute basis. The clients would make requests of the servers with /type=...&location=...&version=...-style HTTP arguments. Those values could then be used in the views when necessary to customize the configurations. We did this both with Spring XML files that were reloaded live and with standard field=value property files.
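To illustrate (the hostname and values are made up), a client refresh boiled down to a request along the lines of:

curl "http://configserver.internal/configs?type=billing&location=us-east&version=2.1"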
Our system was pull-only, although we could trigger a pull via JMX if necessary.
Hope this helps somewhat.
Config4* (of which I am the maintainer) can provide you with most of the capabilities you are looking for out of the box, and I suspect you could easily build the remaining capabilities on top of it.
Read Chapters 2 and 3 of the "Getting Started" manual to get a feel for Config4*'s capabilities (don't worry, they are very short chapters). Doing that should help you decide how well Config4* meets your needs.
You can find links to PDF and HTML versions of the manuals near the end of the main page of the Config4* website.
I'm working at a SaaS company that releases new features and bug fixes to our customers every six weeks. When we write code changes, they pass through different steps (like a state machine) before reaching the production servers. The steps differ depending on whether the change is made in the regular development cycle or as an emergency fix. We're currently using Harvest to manage the steps and track what code (features and bug fixes, via packages) is being released to the customers, and in that sense it's working well.
Unfortunately, Harvest is both expensive and a pain to use from a programmer's point of view. Branching and merging is a nightmare. So we're looking into switching to Mercurial, which seems to excel in those areas. However, Mercurial doesn't seem to be made for tracking changes or managing the above-mentioned process; it only does SCM.
Q: What options do we have when it comes to the release process? Surely there are other SaaS companies (e.g. Google, Flickr, Facebook, LinkedIn) out there who want quality control before releasing code to production servers.
Q: Is it a bad idea to try and build the process in Mercurial or are there other tools that we need to use together with Mercurial?
[Edit]
To clarify, this is our (suggested) branch structure, and here's the process flow we currently have in Harvest:
Hotfix <--> Test Level 1 <--> Test Level 2 <--> Master (Production)
Feature <--> Test <--> Release Test <--> Master (Production)
I'm not looking for a bug tracker, but rather a deployment tool that helps us track and deploy code that has been verified by our testers (code in the release branch). If there is more than one hotfix being worked on at the same time, we need to be able to test them together, and if one breaks the code, we need to be able to "demote" the code-breaking changes back one step in the process flow. Today it's enough for the two developers to "promote" their changes to Test Level 1, and the system can be tested with both changes together. If one developer's changes break anything only in combination with the other developer's code, they can easily be demoted back from Test Level 1.
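For reference, here is roughly how I imagine a promote/demote could map onto Mercurial named branches (just a sketch; the branch names and revision are placeholders, and backing out merges has caveats, see hg help backout):

hg update test-level-1                # switch to the Test Level 1 branch
hg merge hotfix-1234                  # "promote": merge the hotfix branch into Test Level 1
hg commit -m "promote hotfix-1234 to test level 1"
hg backout --parent 1 <merge-rev>     # "demote": back the merge out again if testing breaks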
However, Mercurial doesn't seem to be made for tracking changes or managing the above-mentioned process; it only does SCM.
It's best to use a separate tool for issue tracking. That way you can use the best of breed for each task. Just make sure that you select one which integrates well with your version control system.
To give some examples: Jira (commercial) and Trac (free) both have Mercurial integration plugins. They also have customizable workflow states, allowing you to model your process.
I've come to realize that what we're looking for is not an issue tracker, but rather a deployment tool to replace that aspect of Harvest. Go and Anthill Pro are two candidates.
Our team uses Perforce for revision control. We'd like to be able to accept patches from folks outside our team (e.g. support engineers) without giving them full privileges to check in code, like the way that open-source projects are willing to accept code from anyone but give full commit privileges to only a few people.
Other source-control systems (e.g. SVN, Git) make this pretty easy because anyone can create a local branch, make changes, and generate a patch using basic command-line or GUI tools (e.g. Tortoise).
But I'm new to Perforce and don't know if there's an analogous way to do this.
Can anyone recommend a best practice? (ideally it would work with P4V on Windows since that's what our external contributors are likely to be using)
You could set up a contributor branch on your server with the correct access rights. Then, when a patch is committed to Perforce, you integrate it to your main branch. Giving contributors commit rights only on that branch isolates your main branch.
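For example (the depot paths and group name are made up), a couple of lines in the protections table (p4 protect) could restrict contributors to that branch:

write group contributors * //depot/contrib/...
list group contributors * //depot/main/...

and pulling an accepted patch into the mainline is then a normal integrate:

p4 integrate //depot/contrib/... //depot/main/...
p4 resolve -am
p4 submit -d "Integrate contributor patch into main"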
Of course, this means you have to maintain a branch for the support engineers and give them external access to the Perforce server.
There may be another solution in Remote Depots, but I have not looked into that.