Building MySQL from source changes Makefile due to cmake - mysql

I am working on MySQL optimization with another researcher and we are using git for version control. The problem is that each of us has to compile the sources on a separate machine, and running cmake . generates a different version of the Makefile on each machine. Consider the following sequence:
1. A changes source
2. A runs cmake, builds the source, and test performance
3. B pulls the code change
4. B changes source, runs cmake and builds the source
After step 4, B will have a different version of the Makefile, as well as of files such as cmake_install.cmake that depend on the user and user paths.
For example, some of the files have the following diffs.
# The program to use to edit the cache.
-CMAKE_EDIT_COMMAND = /usr/local/bin/ccmake
+CMAKE_EDIT_COMMAND = /usr/bin/ccmake
# The top-level source directory on which CMake was run.
-CMAKE_SOURCE_DIR = /home/dcslab/userA/mysql/mysql-5.6.21-original
+CMAKE_SOURCE_DIR = /home/dcslab/userB/mysql-5.6.21-original
# The top-level build directory on which CMake was run.
-CMAKE_BINARY_DIR = /home/dcslab/userA/mysql/mysql-5.6.21-original
+CMAKE_BINARY_DIR = /home/dcslab/userB/mysql-5.6.21-original
These are all user-dependent paths generated by the cmake command. The direct way to resolve this conflict is to untrack the Makefiles and any other files generated by cmake after initially committing them. I am wondering if there is a better, more legitimate way of managing a CMake-based project shared by more than one user. Thanks for your help in advance!

Files generated by CMake are machine-dependent, so they will not work on any machine except the one where they were generated. Because of that, they are useless to others and there is no need to track them in git. There are two ways to achieve this:
Tune .gitignore so that files generated by CMake are ignored on commit. Patterns for such files are relatively simple and can be found by googling. The disadvantage of this approach is that files generated by the project's own CMake scripts (configure_file, add_custom_command) will not be automatically ignored and will require explicit mention in .gitignore.
Perform out-of-source builds, that is, do not run cmake from the source directory. CMake then generates its additional files only in the build directory, and a correctly written project's CMake scripts should follow the same rule, so the git repo stays clean without any .gitignore patterns.
It is common practice to perform the out-of-source build in a ./build subdirectory of the source directory. In that case you can add /build/** to .gitignore and everything will work.
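For instance, a minimal out-of-source workflow could look like the following (the build/ directory name and the make invocation are just the usual conventions, nothing specific to MySQL):
cd mysql-5.6.21-original            # top of the source tree
echo '/build/**' >> .gitignore      # everything the build produces is ignored
mkdir -p build && cd build
cmake ..                            # CMakeCache.txt, Makefiles, cmake_install.cmake all land in build/
make -j4
Because every generated file lives under build/, the user-dependent CMAKE_SOURCE_DIR / CMAKE_BINARY_DIR diffs never reach the repository.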

An important part of good engineering -- and especially of research -- is reproducibility. It is unfortunate that the code you are working on can be influenced by the environment in which it is built (you may want to look at Bazel for future projects to reduce external dependencies). Given that this code already has this external-dependency problem, you can at least counter the effects by using a consistent environment via virtualization. In particular, you may want to take a look at Docker, which would allow you and your collaborators to build and run code using a common system image, thereby ensuring that all builds and executions are derived from a predictable, consistent environment.
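As a rough sketch of the Docker idea (mysql-build-env is a placeholder for an image you would define once and share between your two machines; paths are illustrative):
# run the build inside the same fixed image on every machine, so cmake sees an identical environment
docker run --rm -v "$PWD":/src -w /src mysql-build-env \
    sh -c "mkdir -p build && cd build && cmake .. && make -j4"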


Can I use opam to make a package out of a local file and install it?

I'm new to opam and trying to figure out how to use it properly. For a class, I want to set up students with an environment that has some custom packages installed. (The package will consist of some raw .ml files that I got from a colleague at another school; the files are on their github but there's no .opam file that I can see, and as far as I know they're not in any official package release.)
Can I somehow call these local .ml files a package and ask opam to install it? Do the files have to be on github first, and if so can I use my colleague's existing repository as the source? I don't want to make any of this public, since it is not my own work; I just want to configure my local environment so that the code in the files can be included easily as a package. Basically I don't know the best way to proceed so I'm happy for any advice.
You can add a custom opam file in the base directory of the project. See the documentation for how to create that file.
Then you can enter opam pin add . in the base directory and your project will be installed as if it were an opam package. Check opam pin --help for more info (you can also pin to a remote git project, for instance).
Note that though the default repository is hosted on GitHub, this is in no way a requirement for opam. Opam depends on git, but you can absolutely use it with a private git repository. If you want to use your colleague's repository as the source, that is totally doable, though it is often preferable to have the opam file at the root of the repository (you can open a PR on their repository or make your own fork of it on GitHub; the site makes it clear you copied the code).
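For example (the package name coursepkg and the fork URL are placeholders, not taken from your colleague's project):
# pin the sources in the current directory under a name of your choosing
opam pin add coursepkg .
# or pin straight to a git repository, e.g. your own fork
opam pin add coursepkg git+https://github.com/you/colleague-code.git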
If pinning is not to your taste, you can also create your own repository though this is probably a bit too heavyweight for your needs.
Good luck!

Managing composer and deployment

So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things such as this and there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json
I would recommend that all of the packages in composer.json be locked down to the exact version number of the item on Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say nothing in the packages updates when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and to ensure that you don't accidentally deploy untested code if one of the modules happens to get updated between your testing and your deployment.
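For example (the package names, versions and commit hash are purely illustrative):
# lock a Packagist package to an exact release
composer require monolog/monolog:1.17.2
# lock a dev branch package to a specific commit hash
composer require "acme/custom-extension:dev-master#8f2c1b4"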
Actual deployments
Possible Method 1
My opinion is slightly controversial in that, for many of my projects that don't go through a CI system, I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage; it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers is enough to handle any remote system failures, that it clogs up the VCS tree, etc. I won't go into these now; there are arguments for and against (a lot of it opinion based), but as you mentioned it in your question I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
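If you go this route, it is only a matter of making sure vendor/ is not ignored and committing it along with the lock file (a minimal sketch):
# remove any 'vendor/' entry from .gitignore first
git add composer.json composer.lock vendor
git commit -m "Track the vendor directory so every branch is deployable"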
Possible Method 2
By using a symlink on your server as your document root, you can run the build in a separate directory and only switch the symlink over to the new directory once you have confirmed the build completed.
This is the path of least resistance towards a safe deployment for a basic code base using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
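A bare-bones version of the symlink switch might look like this (the release path and the current symlink name are illustrative):
# build the new release next to the live one, then flip the symlink atomically
cd /var/www/releases/new-release
composer update --no-dev --optimize-autoloader
ln -sfn /var/www/releases/new-release /var/www/current   # the web server's document root points at /var/www/current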
Possible Method 3
Composer can use "artifacts" rather than a remote server. This means you will basically be creating a "repository folder" of your vendor packages as zip files; it is an alternative to adding the entire vendor folder to your VCS, but it also protects you against GitHub / Packagist outages, files being removed, and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip files rather than being fetched from a remote server. This folder can itself be stored remotely - think of it as a poor man's private Packagist (another option, by the way).
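The composer.json entry for an artifact repository looks roughly like this (the folder path is illustrative; it just needs to contain the zipped packages):
{
    "repositories": [
        { "type": "artifact", "url": "path/to/artifacts/" }
    ]
}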
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to:
run tests on your application (if they exist)
run composer update
generate an artifact of these files (if the above items succeed)
Jenkins can also do an actual deployment for you if you wish (provided the build process doesn't fail); it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
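As a rough sketch, the build job's shell step could boil down to something like this (the test runner, artifact name and deploy target are placeholders for whatever your project uses):
composer update --no-dev --prefer-dist       # resolve the locked-down dependencies
vendor/bin/phpunit                           # run the application tests; a failure aborts the build
tar -czf app-artifact.tar.gz --exclude='.git' .                  # package the tested tree as the artifact
scp app-artifact.tar.gz deploy@example.com:/var/deploy/releases/ # optional: hand the artifact to the server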
Hope this helps :)

Trying to get RmySQL to work but not understanding bash's export or filesystem conventions

I am trying to install RMySQL on my mac (mavericks) and it errors out when I try to build it from source, saying:
Configuration error: could not find the MySQL installation include
and/or library directories. Manually specify the location of the
MySQL libraries and the header files and re-run R CMD INSTALL.
INSTRUCTIONS:
Define and export the 2 shell variables PKG_CPPFLAGS and PKG_LIBS to include the directory for header files (*.h) and
libraries, for example (using Bourne shell syntax):
export PKG_CPPFLAGS="-I"
export PKG_LIBS="-L -lmysqlclient"
Re-run the R INSTALL command:
R CMD INSTALL RMySQL_.tar.gz
I tried to follow the instructions by entering:
export PKG_CPPFLAGS="-I/usr/local/mysql/include"
export PKG_LIBS="-L/usr/local/mysql/lib -lmysqlclient"
but when I re-run RMySQL it still doesn't work. Moreover, if I type
$PKG_LIBS
to see what that variable holds, I get
-bash: -L/usr/local/mysql/lib: No such file or directory'
I know that /usr/local/mysql/lib exists and it does contain a mySQL header. Am I misunderstanding the instructions?
I'm asking here only after a lot of effort to find solutions and/or workarounds. Sucks being a noob sometimes.
I am going to assume you're trying to get RMySQL to run on R 3.1.0 on Mavericks? Rather than worrying about exporting variables etc., here is a simple, clean solution for you that should avoid the headaches.
The RMySQL install link Pascal provided above really is your solution. You're probably just stumbling on syntax, or getting things to work from the terminal.
Even if you're a "noob", you should be able to get this working. I'll try to offer a "dummy's guide" walk through here, as I bet there are many others who have this problem too, even after trying to read the RMySQL installation readme.
I would bet with very high confidence the problem is just that you aren't specifying correctly the locations of the library and header folders for compiling. Read the errors carefully when you try to compile... the errors will probably tell you a file/header is missing, or some .so file (shared object) is missing.
One simple way to compile RMySQL from source on R 3.1.0, Mavericks, is as follows (this does not require you to set any environment variables, edit the Renviron file, etc.):
Does MySQL work by itself? i.e. can you open/run it with no problems? If not, fix that first.
Find the precise location of your mysql installation. For me, on Mavericks, mysql is installed at /usr/local/mysql-5.6.17-osx10.7-x86_64 (your version number may be different). There is also another folder, /usr/local/mysql, which is an alias to /usr/local/mysql-5.6.17-osx10.7-x86_64 (/usr/local/mysql points at the current version of mysql you are using if multiple mysql folders exist, I think). In this directory I see two subdirectories (among many) called "include" and "lib". Take a look; "include" contains the header files ("include" as in the #include directives at the top of simple C/C++ programs), and the "lib" folder contains the compiled mysql client library. A quick check of both is sketched below.
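A quick sanity check from the terminal (the version number in the path will differ on your machine):
ls /usr/local/mysql/include | head            # should list mysql.h among the headers
ls /usr/local/mysql/lib | grep mysqlclient    # should list libmysqlclient.a / libmysqlclient.*.dylib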
An easy way to compile and install RMySQL which doesn't exactly follow the suggested way to do it in the installation guide is this. Note that this is doing the same thing as in the installation guide, just a little easier as it's one command line from the terminal, once you know where your mysql install folder is. Go to the terminal, and type the following exactly, with one space between each chunk (with your mysql folder name adjusted appropriately for the version number):
PKG_CPPFLAGS="-I/usr/local/mysql/include/" PKG_LIBS="-L/usr/local/mysql/lib/ -lmysqlclient" R CMD INSTALL RMySQL_0.9-3.tar.gz
OR (the same thing, just more typing)
PKG_CPPFLAGS="-I/usr/local/mysql-5.6.17-osx10.7-x86_64/include/" PKG_LIBS="-L/usr/local/mysql-5.6.17-osx10.7-x86_64/lib/ -lmysqlclient" R CMD INSTALL RMySQL_0.9-3.tar.gz
Note for dummies: Make sure when you run this command that you are doing it from the terminal in the directory that contains the RMySQL_0.9-3.tar.gz file (or whatever the name of your RMySQL source tarball is)
and RMySQL compiles!
Don't be afraid of trying to compile source code -- it's not just for 'compiled language programmers' or 'computer science graduates'. Most of the time when compiling fails, it's just because files are "missing" (there is no corruption in the source code) -- the user hasn't properly specified the locations of the headers and libraries (shared objects). Now pull your big boy/girl panties up and just do it .... it's easy.
Notes for people clueless about compiling source code for packages in R:
a) Pay special attention to the spacing in the above, otherwise it may not work. Do not put any spaces between the = and the variable/file names (e.g. don't write PKG_CPPFLAGS ="-I/usr/local/mysql/include/" as it won't work).
b) When compiling, you want to specify the locations of the header files and the library files, and this is what the "-I/ .... " and "-L/ ...." are doing. The -I directory specifies the location of the header files, and the -L the location of the library files. You also pass -l[name of library] to link against a specific library; the -l stands in for the "lib" prefix of the library file names, so -lmysqlclient links against libmysqlclient.
c) Note that in the directory /usr/local/mysql-5.6.17-osx10.7-x86_64/lib/ I do not see a file called "lmysqlclient", or even "libmysqlclient", but I do see files named (among others) "libmysqlclient.a" and "libmysqlclient.18.dylib". So don't worry about your MySQL installation not being correct if you don't see a file just called "libmysqlclient" in the lib folder.

Can Jenkins store artifacts outside the job directory?

I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface? (ie rather than merely an add-in that copies the artifacts elsewhere)
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and problems with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want/need - FTP, CIFS, Confluence, Artifactory... - especially the ArtifactDeployer plugin, which will allow you to make a copy of the artifacts in the Jenkins home.
Thank you Sam, for your post, which directed me into the right direction to solve my problem.
I have been searching for a way to create a symlink to the job archive of a build for multibranch projects. Up to now, we used to manually search for the correct folder base name in the filesystem and add it to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace
directory." This is the directory where the code is checked out and
builds happen. Normally you should let Jenkins allocate and clean up
workspace directories, but in several situations this is problematic,
and in such case, this option lets you specify the workspace location
manually.
This option is also available under advanced project properties of multi-configuration project builds.
A groovy script under "Prepare an environment for the run" will always run on the master, and this script can replace the build's artifacts directory with a symlink to wherever you really want archiving to go (archive_to below), which SHOULD include the job name and build number:
import java.nio.file.*
// createSymbolicLink(link, target) throws an IOException on failure rather than returning a boolean
try {
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
            Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly), when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns both the symlink and the target (shame).
A workaround for that may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, the new symlink will dangle and a cron job can later delete the new job archive dir.
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project into the current workspace, so you can work on them from there.

Hudson - separate SVN dir and build dir?

Is it possible in a Hudson job to specify a source directory to poll that is different from the directory in which the build is run?
I've used Hudson successfully to enforce compilation success in java projects.
An SVN directory is polled every say 5 mins and an ant target specified - the errant programmer getting emailed in the event of failures.
However in every case the ant build.xml happened to reside in the same directory as the SVN directory being polled.
Basically I am trying to apply the same system to an Oracle database build.
There are multiple directories to watch (schema, static data, stored procs etc and an upstream / downstream order).
However the ant build script resides several directories above the directories I wish to poll.
I guess the solution is that I must create multiple ant build.xml files, one for each database component, and I assume a separate Hudson job for each?
I wondered was there a better way of doing this.
Best Rgds
Peter
Check out the project from the highest level and configure your build steps to execute the various steps in the sub folders just as you would do manually. As long as everything needed is in the workspace, you can build whatever is in there, at the top level as well as in the sub folders.
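For example, the job's build steps could simply invoke the existing build file with component-specific targets from the top of the checkout (the build.xml location and the target names are illustrative, not taken from your project):
# run from the workspace root of the top-level checkout
ant -buildfile db/build.xml deploy-schema
ant -buildfile db/build.xml load-static-data
ant -buildfile db/build.xml compile-stored-procs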