How to build Chromium faster?

Following only the instructions here - https://www.chromium.org/developers/how-tos/get-the-code I have been able to successfully build and get a Chromium executable which I can then run.
So I have been playing around with the code (adding new buttons to the browser, etc.) for learning purposes. Each time I make a change (like adding a new button in the settings toolbar) and use the ninja command to build, it takes over 3 hours to finish before I can run the executable. I guess it builds each and every file again.
I have a decently powerful machine (i7, 8GB RAM) running 64-bit Ubuntu. Are there ways to speed up the builds? (At the moment I have literally just followed the instructions in the link above, with no other optimizations to speed things up.)
Thank you very very much!

If all you're doing is modifying a few files and rebuilding, ninja will only rebuild the objects that were affected by those files. When you run ninja -C ..., the console displays the number of targets that need to be built. If you're modifying only a few files, that should be ~2000 at the high end (modifying popular header files can touch lots of objects). Modifying a single .cpp would result in rebuilding just that object.
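For reference, an incremental rebuild is just the same command you used for the full build, pointed at the same output directory (chrome is the main browser target; out/Default is a placeholder for your build directory):

$ ninja -C out/Default chrome

ninja compares timestamps and rebuilds only the stale objects, so the 3-hour figure should only apply to the very first build, or after touching something like a widely included header.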
Of course, you still have to relink, which can take a very long time. To make linking faster, try using a component build, which keeps everything in separate shared libraries rather than one big one that needs to be relinked for any change. If you're using GN, add is_component_build=true to gn args out/${build_dir}. For GYP, see this page.
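Concretely, a minimal GN session that turns on the component build looks like this (again, out/Default is just a placeholder for your build directory):

$ gn args out/Default
# in the args.gn editor that opens, add the line:
is_component_build = true

On the next ninja run, GN regenerates the build files and links many small shared libraries instead of one monolithic binary.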
You can also peruse the faster Linux builds documentation and see if any of those tips apply to you. Unfortunately, Chrome is a massive project, so builds will naturally be long. However, once you've done the initial build, incremental builds should be on the order of minutes rather than hours.

Follow the recently updated instructions here:
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/windows_build_instructions.md#Faster-builds
In addition to using component builds, you can disable NaCl, use jumbo builds, turn off symbols for WebCore, etc. Jumbo builds are still experimental at this point, but they already help build times and will gradually help more.
Full builds will always take a long time even with jumbo builds, but component builds should let incremental builds be quite fast in many cases.
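As a concrete sketch, an args.gn along these lines combines those suggestions (argument names as used by Chromium around the time of this answer; run gn args out/${build_dir} --list to check what your checkout actually supports):

is_component_build = true
enable_nacl = false
use_jumbo_build = true
remove_webcore_debug_symbols = true
symbol_level = 1

symbol_level is my own addition here: lowering it from the default of 2 trades debuggability for noticeably faster compiles and links.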

For building on Linux, you can see how to build faster at: https://chromium.googlesource.com/chromium/src/+/master/docs/linux_build_instructions.md#faster-builds
Most of them require adding build arguments. To edit build arguments, see the GN build configuration page at: https://www.chromium.org/developers/gn-build-configuration.
You can edit the build arguments on a build directory by:
$ gn args out/mybuild
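If you want to see which arguments are available and what they default to, gn can list them for the same directory (this is standard gn behavior, not Chromium-specific):

$ gn args out/mybuild --list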

Related

Injecting into an EXE

I really want to inject my C++ program into another (compiled) program. The way I want to do this is to change the first few bytes (where the program starts) to jump to the binary of my program (pasted into a code cave, for example) and, when it has finished running, to jump back to where execution was before the injected program started running.
Is this even possible? And if it is, is it a good/smart idea to do so?
Are there other methods of doing this?
For example:
I wrote a program that writes the current time to a file and then terminates. If I inject it into Internet Explorer and launch it, it will first write the current time to a file and then start Internet Explorer.
In order to do this, you should start by reading the documentation for the PE file format, which you can download from Microsoft.
Doing this takes a lot of research and experimenting, which is beyond the scope of Stack Overflow. You should also be aware that doing this depends heavily on the executable you try to patch: it may work with your version, but most likely not with another one. There are also defenses against this kind of attack, which may be built into the executable as well as into the OS.
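To make the first step with the PE documentation concrete, here is a minimal sketch in C that locates the entry-point RVA, the field you would have to redirect. It assumes a little-endian host and a well-formed PE, abbreviates error handling, and uses only offsets documented in Microsoft's PE/COFF specification:

/* pe_entry.c - print a PE file's AddressOfEntryPoint. */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    uint32_t e_lfanew = 0;             /* offset 0x3C of IMAGE_DOS_HEADER */
    fseek(f, 0x3C, SEEK_SET);
    fread(&e_lfanew, sizeof e_lfanew, 1, f);

    /* IMAGE_NT_HEADERS: 4-byte "PE\0\0" signature, 20-byte file header,
       then the optional header; AddressOfEntryPoint sits at offset 16
       of the optional header (same place for PE32 and PE32+). */
    uint32_t entry_rva = 0;
    fseek(f, (long)e_lfanew + 4 + 20 + 16, SEEK_SET);
    fread(&entry_rva, sizeof entry_rva, 1, f);

    printf("AddressOfEntryPoint RVA: 0x%08x\n", entry_rva);
    fclose(f);
    return 0;
}

Actually redirecting that field to a code cave and restoring control afterwards is where the real research starts: you need a cave large enough for your code, position-independent or correctly relocated instructions, and the original entry address saved somewhere.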
Is it possible?
Yes, of course, but it's not trivial.
Is it smart?
Depends on what you do with it. Sometimes it may be the only way.

Number of configurations in a project for build and install

In our project, we currently have two different configurations. The first one builds the assemblies. The other packages everything for InstallShield (including moving stuff to the right directories, etc.).
Now, we can't agree on whether it's better to move all the build steps into a single configuration and run it as a whole chain, or to keep the build process separate from creating the installation package.
Googling turns up guides on how to do either, but not on which way to do it (and our confusion is mainly due to the architecture of the configurations' order). We'll be using a few PowerShell steps to move a number of files between different directories, due to certain local considerations. The total number of steps will land at 5 or fewer.
The suggestion I have is the following three configurations. They run separately and independently, and their build steps overlap (each being a superset of the previous one).
Configuration Build.
Configuration Build and test.
Configuration Build, test and package.
The main point of my suggestion is that, e.g., the step that compiles the software is implemented in each configuration (as opposed to reusing the artifacts from an independent run of another configuration).
I would argue like this:
If you ever need to perform just one of the two steps, then leave them as separate steps.
This gives you the flexibility to run one, or the other, or both. E.g., could it be that you need to just build the solution but not create the final installation package, say for local testing?
However, if you never ever use one of the steps separately (you always run both together), then I'd probably just merge them into one; having two separate steps doesn't make much sense to me.

How to prevent tclIndex collisions?

If multiple Tcl scripts are running in the same directory, they can crash if one tries to run auto_mkindex at exactly the same time as another.
How can I prevent this properly? I do not want to just wrap auto_mkindex in a catch, nor do I want to implement a semaphore system for this simple problem.
Why would you be building the tclIndex files at the same time in the first place? That's a step that I would expect as part of installation (i.e., something done once as a special action) and not as part of operation (i.e., many times, in parallel potentially). If it's part of installation, it's entirely your own problem if you try to run the code while you're installing it.
I also wouldn't tend to use tclIndex for anything shared between applications, as that's optimized for simple scripts. Shared components are better off made into packages, especially as they're versioned entities, and they have their own indexing mechanism (the pkgIndex.tcl). (Having the same version of the same package installed twice in such a way that things interfere… well, that wouldn't be sensible, would it?)
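As a sketch of the package route (the library name and path are made up for illustration): generate the index once at install time with pkg_mkIndex, then have client scripts load the package through auto_path, so nothing is rebuilt at run time.

# At install time only, from a tclsh:
pkg_mkIndex /opt/mylib *.tcl

# The generated pkgIndex.tcl contains lines like:
package ifneeded mylib 1.0 [list source [file join $dir mylib.tcl]]

# Each client script then just does:
lappend auto_path /opt/mylib
package require mylib

Since pkgIndex.tcl is written once during installation, concurrently running scripts only ever read it, and the collision disappears.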

how to create applications with Clozure Common Lisp (on Microsoft Windows)

I am new to Common Lisp (using Clozure Common Lisp under Microsoft Windows), coming from C and Python. So maybe the questions here are stupid, but please be patient and give me some help.
1) What is the usual way to run a Common Lisp script?
Right now I have a bat file under Windows that calls the CCL executable (wx86cl.exe) and evaluates (progn (load "my_script_full_path") (ccl:quit)) every time I want to "run" my script. Is this the standard way to "run" a script in Common Lisp?
Any other suggestions about this?
2) What is the difference between (require 'cxml) and (asdf:operate 'asdf:load-op :cxml)?
They seem to do the same thing for my script; which one should I use?
3) ignore it, not a clear question
4) When I want to load a library (such as (require 'cxml)), it always takes time (3 s or even 5 s) to load cxml every time I "run" my script, and there is a lot of logging to standard output (shown below) that seems to be checking something internal. Does this mean I have to spend 3-5 s loading cxml every time I want to run a simple test? That seems a little inefficient, and the output is noisy. Any suggestions?
My Script
(require 'cxml) (some-code-using-cxml)
And the output
; Loading system definition from D:/_play_/lispbox-0.7/quicklisp/dists/quicklisp/software/cxml-20101107-git/cxml.asd into #<Package "ASDF0">
;;; Checking for wide character support... yes, using code points.
; Registering #<SYSTEM "cxml-xml">
......
...then some of my script's output
---EDIT TO ADD MORE----
5) I must say that I almost forgot about dumping an image to accelerate the loading of Lisp libraries. So, what is the normal process for developing a (maybe very simple) Lisp script?
Based on the answers I have gotten so far, I guess maybe:
a) edit your script
b) test it via a REPL environment (SLIME is a really good choice), with many loops between a <==> b
c) dump the image to distribute it? (I am not sure about this)
6) Furthermore, what is the common way/form for releasing/distributing the final program?
For a Lisp library, we just release our source code so that others can "load/require" it.
For a Lisp program, we dump an image to distribute once we have confirmed that everything works well.
Am I right?
What form is used in a real product? Do we always dump everything into an image at the end to speed up loading?
1) Yes, the normal way to run a whole programme is to use a launcher script. However, Windows has much, much better scripting support these days than just the bat interpreter: Windows Scripting Host and PowerShell ship as standard.
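For instance, a minimal launcher along the lines of what you already have could look like this (the path is a placeholder; --batch and --eval are documented CCL command-line options):

rem run_script.bat
wx86cl.exe --batch --eval "(progn (load \"C:/path/to/my_script.lisp\") (ccl:quit))"

--batch suppresses the interactive banner and prompts, which makes the script behave more like an ordinary command-line program.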
1a) During development, it is usual to simply type things in at the REPL (Read-Eval-Print Loop, i.e. the Lisp command line), or to use something like SLIME (for emacs or xemacs) as a development environment. If you don't know what they are, look them up. You may wish to use Cygwin to install xemacs, which will give you access to a range of linux-ish tools.
2) require is, IIRC, part of the standard. ASDF technically is not; it is a library that makes other libraries work more conveniently. ASDF has a bunch of features that you will eventually want if you really get into writing large Lisp programmes.
3) Question unclear, pass.
4) See 1a) - do your tests and modifications in a running instance, thus avoiding the need to load the library more than once (just as you would in Python - you found the python repl, right?). In addition, when your programme is complete, you can probably dump an image which has all of your libraries pre-loaded.
Edit: additional answers:
5) Yes
6) Once you have dumped the image, you will still need to distribute the lisp binary to load the memory image. To make this transparent to the user, you will also have to have a loader script (or binary) to run the lisp binary with the image.
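As a sketch of the image-dumping step in CCL (save-application and its keyword arguments are from CCL's documentation; main is a hypothetical entry function you would write yourself):

;; Run once, in a fresh CCL with everything loaded:
(require 'cxml)                        ; preload the slow libraries
(ccl:save-application "my-app.exe"
                      :toplevel-function #'main
                      :prepend-kernel t) ; glue the CCL kernel onto the image

With :prepend-kernel t the kernel and the image become a single standalone executable, so the loader-script step can be skipped; leave it off and you get a separate image file for the CCL binary to load, as described above.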
You don't have to start the lisp from scratch and load everything over again each time you want to run a simple test. For more efficient development, interactively evaluate code in the listener (REPL) of a running lisp environment.
For distribution, I use Zachary Beane's Buildapp tool. Very easy to install and use.
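A typical invocation looks something like this (option names from Buildapp's documentation; the system and entry function are placeholders):

$ buildapp --output myapp --load-system my-system --entry my-package:main

The --entry function is called with the list of command-line arguments, and --load-system pulls in an ASDF system, so your libraries end up baked into the resulting executable.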
Regarding distribution -
I wrote a routine (it's at home and unavailable at the moment) that will write out the current image as a standard executable and quit. It works for both CLISP and SBCL.
I can rummage it up if you like.

How to automate the tasks for releasing open-source-software?

Everyone managing open-source software runs into the problem that, over time, releasing a new version becomes more and more work. You have to tag the release in your version control, create the distributions (which should be easy with automated builds), and upload them to your website and/or open-source hoster. You have to announce the new release with nearly the same message on chosen web forums, the news system on SourceForge, mailing lists, and your blog or website. And you have to update the entry for your software on freshmeat. Possibly more tasks have to be done for the release.
Have you developed techniques to automate some of these tasks? Does software exist that supports you with this?
Pragmatic Project Automation shows how to do all of that. They use Ant for practically everything in the book, so if you know Ant you can make different targets to do any step in the build-release cycle.
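As a sketch of that approach (the target names are made up for illustration), the release chain becomes a handful of Ant targets wired together with depends, so one command runs the whole cycle and you can still invoke any stage on its own:

<project name="myproject" default="release">
  <target name="test"/>                  <!-- run the test suite -->
  <target name="dist" depends="test"/>   <!-- build the distribution archives -->
  <target name="tag" depends="dist"/>    <!-- tag the release in version control -->
  <target name="release" depends="tag"/> <!-- upload and announce -->
</project>

Then ant release runs everything, while ant dist stops after packaging.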
For my Perl stuff, I wrote Module::Release. In the top-level directory I type a single command:
% release
It checks several things and dies if anything is wrong. If everything checks out, it uploads the distribution.
It automates my entire process:
Test against multiple versions of Perl
Test distribution files
Check the status of source control
Check for code and distribution quality metrics
Update changes file
Determine new version number
Release code to multiple places
Tag source control with new version number
Everyone seems to write their own release automator though. Most people like their process how they like their process, so general solutions don't work out that well socially.
Brad Fitzpatrick has ShipIt which is a Perl program to automate releases. There's slightly more info in his original announcement.