Number of configurations in a project for build and install - teamcity-8.0

In our project, we currently have two different configurations. The first one builds the assemblies; the other packages everything for InstallShield (including moving files to the right directories, etc.).
Now, we can't agree whether it's better to move all the build steps into a single configuration and run it as a whole chain, or to keep the build process separate from creating the installation package.
Googling turns up guides on how to do either, but not which approach to prefer (our confusion is mainly about how the configurations should be ordered and structured). We'll be using a few PowerShell steps to move a number of files between directories due to certain local considerations. The total number of steps will be five or fewer.
The suggestion I have is the following three configurations. They run separately and independently, and their build steps overlap (each configuration is a superset of the previous one).
Configuration Build.
Configuration Build and test.
Configuration Build, test and package.
The main point of my suggestion is that, for example, the step that compiles the software is implemented in each configuration (as opposed to reusing the artifacts from an independent run of another configuration).
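To make that concrete, here is a minimal sketch of what the overlapping steps could look like, assuming MSBuild for compilation; the test runner, script name, and staging path are placeholders, not part of the original setup:

# step 1 (all three configurations): compile
msbuild MySolution.sln /p:Configuration=Release
# step 2 ("Build and test" and "Build, test and package"): run tests
vstest.console.exe Tests\bin\Release\Tests.dll
# step 3 ("Build, test and package" only): stage files for InstallShield
powershell -File Move-PackageFiles.ps1 -Destination C:\staging\installshield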

I would argue like this:
If you ever need to perform just one of the two steps, then leave them as separate steps.
This gives you the flexibility to run one, the other, or both steps. For example, could it be that you need to just build the solution, but not create the final installation package, say for local testing?
However, if you never ever use one of the steps separately (you always run both together), then I'd probably just merge them into one; having two separate steps doesn't make much sense to me.

Related

Running two GitLab pipelines with the same runner

I have two pipelines that I want to run with the same runner; is that possible?
My single runner is installed on a Linux virtual machine, and I want to use it to run all my pipelines.
If the pipelines are for different projects, you will need to make sure the runner is accessible to each project.
Depending on the level of control you want, you can use GitLab CI's tags keyword; this will let you determine which runner handles which pipeline.
If you want to run the jobs in parallel, you will need to make sure the runner is enabled for concurrent running and that the jobs are in the same stages within the pipelines.
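As a rough sketch (URL, token, and tag name are placeholders), you would register the single runner with a tag and raise its concurrency limit:

$ sudo gitlab-runner register \
    --url https://gitlab.example.com \
    --registration-token <TOKEN> \
    --executor shell \
    --tag-list shared-linux
# allow parallel jobs by raising the global limit in /etc/gitlab-runner/config.toml:
#   concurrent = 4
# jobs in each project then select this runner via the tags keyword (tags: [shared-linux]) in .gitlab-ci.yml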
The only way to do this at the moment is to define one tag for one runner only and use this tag for your project.
This way everything is run on that single runner.
This has, of course, the disadvantage that the load is not spread across different runners, so be careful.
You could improve this solution by first determining a free runner's tag and then creating a child pipeline that uses it.
There are active issues about this problem in GitLab; see this one and this.
There is a forum entry about it as well.

How to build Chromium faster?

Following only the instructions here - https://www.chromium.org/developers/how-tos/get-the-code I have been able to successfully build and get a Chromium executable which I can then run.
I have been playing around with the code (adding new buttons to the browser, etc.) for learning purposes. Each time I make a change (like adding a new button in the settings toolbar) and use the ninja command to build, it takes over 3 hours to finish before I can run the executable. It builds each and every file again, I guess.
I have a decently powerful machine (i7, 8GB RAM) running 64-bit Ubuntu. Are there ways to speed up the builds? (At the moment, I have literally just followed the instructions in the above mentioned link and no other optimizations to speed it up.)
Thank you very very much!
If all you're doing is modifying a few files and rebuilding, ninja will only rebuild the objects that were affected by those files. When you run ninja -C ..., the console displays the number of targets that need to be built. If you're modifying only a few files, that should be ~2000 at the high end (modifying popular header files can touch lots of objects). Modifying a single .cpp would result in rebuilding just that object.
Of course, you still have to relink, which can take a very long time. To make linking faster, try using a component build, which keeps everything in separate shared libraries rather than one big one that needs to be relinked for any change. If you're using GN, add is_component_build=true to gn args out/${build_dir}. For GYP, see this page.
You can also peruse faster linux builds and see if any of those tips apply to you. Unfortunately, Chrome is a massive project so builds will naturally be long. However, once you've done the initial build, incremental builds should be on the order of minutes rather than hours.
Follow the recently updated instructions here:
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/windows_build_instructions.md#Faster-builds
In addition to using component builds you can disable nacl, use jumbo builds, turn off symbols for webcore, etc. Jumbo builds are still experimental at this point but they already help build times and they will gradually help more.
Full builds will always take a long time even with jumbo builds, but component builds should let incremental builds be quite fast in many cases.
For building on Linux, you can see how to build faster at: https://chromium.googlesource.com/chromium/src/+/master/docs/linux_build_instructions.md#faster-builds
Most of them require adding build arguments. To edit build arguments, see the GN build configuration page at: https://www.chromium.org/developers/gn-build-configuration.
You can edit the build arguments for a build directory with:
$ gn args out/mybuild
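When the editor opens, arguments along these lines are commonly used for faster local builds (a sketch only; the exact set of useful flags varies by Chromium version):

is_component_build = true   # many small shared libraries instead of one big statically linked binary
is_debug = true
symbol_level = 0            # skip most debug symbols to speed up compiling and linking
enable_nacl = false         # don't build Native Client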

Composer dependency with minor differences in composer.json file

I'm creating an application that is going to work on two separate versions of the same software. These two versions will have completely different modules with a shared dependency on a JavaScript framework I've built.
Let's say
dave/version1
dave/version2
which both share the dependency via a require of
dave/framework
I want to maintain this framework (dave/framework) in one repository that both of the parent modules can require. However, the location where these framework files need to be placed differs slightly between the two modules, along with slightly different requirements for the composer.json files to ensure everything gets moved around correctly (the two versions of the software implement Composer rather differently).
With my limited knowledge of composer and Git I've formulated a couple of solutions:
Create three repositories: two wrapper repositories with specific composer.json files to support each version of the software, both depending on a third repository that contains the actual framework. I'm unsure whether this would ever work outside of theory, and it also ends up being a little messy.
Use some form of clever tagging and have version1 and version2 depend on separate versions of the framework, which would in turn have a slightly different makeup. Composer would then struggle with pulling the latest version of the module, as we'd be running two ever-so-slightly different code bases with odd version numbers.
However both of these seem potentially messy and the incorrect way of structuring what I'm trying to achieve.
Is there a nice way to achieve this, or am I better off maintaining two separate repositories for the framework?
With some correct planning, your first option is going to be easier than you think.
Git has a system called Submodules built right in.
The Version1 and Version2 repos would include the Framework repo within them. Then, when you update Framework, you can pull those updates into Version1/2 with no problem. You control when and how, so you can make sure you don't introduce a bug down the road either.
Here is some documentation for submodules: https://git-scm.com/book/en/v2/Git-Tools-Submodules
https://git-scm.com/docs/git-submodule
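As a rough sketch (the repository URL and path are placeholders), each wrapper repository would add the framework once and pull updates when it is ready to take them:

$ git submodule add https://example.com/dave/framework.git lib/framework
$ git commit -m "Add dave/framework as a submodule"
# later, inside version1 or version2, pull newer framework commits:
$ git submodule update --remote lib/framework
$ git commit -am "Bump dave/framework"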

How to prevent tclIndex collisions?

If multiple Tcl scripts are running in the same directory, they can crash if one tries to run auto_mkindex at exactly the same time as another.
How can I prevent this properly? I do not want to just wrap auto_mkindex in catch, nor do I want to implement a semaphore system for this simple problem.
Why would you be building the tclIndex files at the same time in the first place? That's a step that I would expect as part of installation (i.e., something done once as a special action) and not as part of operation (i.e., many times, in parallel potentially). If it's part of installation, it's entirely your own problem if you try to run the code while you're installing it.
I also wouldn't tend to use tclIndex for anything shared between applications, as that's optimized for simple scripts. Shared components are better off made into packages, especially as they're versioned entities, and they have their own indexing mechanism (the pkgIndex.tcl). (Having the same version of the same package installed twice in such a way that things interfere… well, that wouldn't be sensible, would it?)
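As an illustration of the package route (the directory and package names are made up), the index is generated once at install time and applications simply load the package afterwards:

$ echo 'pkg_mkIndex ./mylib *.tcl' | tclsh
# writes ./mylib/pkgIndex.tcl once, during installation
# applications then load it with "package require mylib" (with ./mylib on their auto_path)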

How about an Application Centralized Configuration Management System?

We have a build pipeline to manage the artifacts' life cycle. The pipeline consists of four stages:
1. commit (running unit/integration tests)
2. at (deploy the artifact to the AT environment and run automated acceptance tests)
3. uat (deploy the artifact to the UAT environment and run manual acceptance tests)
4. pt (deploy to the PT environment and run performance tests)
5. // TODO: we're trying to support the production environment.
The pipeline supports environment variables, so we can deploy artifacts with different configurations by triggering it with options. The problem is that sometimes there are too many configuration items, making the deploy script contain too many replacement tasks.
I have an idea of building a centralized configuration management system (ccm for short), so we can maintain the configuration items over there and leave only a URL (pointing to the ccm) replacement task (handling the different stages) in the deploy script. Therefore, the artifact doesn't hold the configuration values; it asks the ccm for them.
Is this feasible, or a bad idea in the first place?
My concern is that the potential mismatch between the configuration key (defined in the artifact) and the value (set in the ccm) is not solved by this solution and may even get worse.
Configuration files should remain with the project or be set as configuration variables where they are run. The reasoning behind this is that you're adding a new point of failure to your architecture: you have to take into account that your configuration server could go down, breaking everything that depends on it.
I would advise against putting yourself in this situation.
There is no problem in having a long list of environment variables defined for a project; in fact, that could even mean you're doing things properly.
If for some reason you find yourself changing configuration files a lot (for example, database connection strings, API endpoints, etc.), then the problem might be that very need to change so many configurations, which should almost always stay the same.
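As a small illustration of keeping the values with the pipeline (the variable names and deploy script are hypothetical), each stage sets its own values and the deploy script reads them instead of querying a remote ccm:

# set per stage, e.g. in the pipeline's uat trigger options
$ export DB_CONNECTION="Server=uat-db;Database=app"
$ export API_ENDPOINT="https://uat.example.com/api"
# the deploy script then performs its replacements from the environment
$ ./deploy.sh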