RTC - extract a specific file from the repository

I have a Perl script that I wrote to package release scripts.
The RTC bits in the script are as follows.
List the workspaces:
lscm list workspaces -r "$reposURI" -u $reposUser -P $reposPwd
List the components:
lscm compare ws "$ws1" ws "$ws2" -r "$reposURI" -u $reposUser -P $reposPwd -I c
Compare the 2 workspaces' specified component to the changed files:
lscm compare ws "$ws1" ws "$ws2" -r "$reposURI" -u $reposUser -P $reposPwd -I cf
Great! I have the list of files changed (trust me, this took a LOT of working out). Now, the next step is simply to extract the listed files from the changed workspace:
According to the documentation there is an "lscm extract", but it does not seem to exist in the version I have. I cannot upgrade, as this is a corporate environment where software installs are controlled centrally, and they are sticking with the current RTC version (3).
So, is there an alternative way?

I don't know of an lscm extract: it doesn't seem to exist in the RTC documentation.
The help page only mentions an lscm changeset extract (used in RTC 3.x).
lscm extract is referenced only once, in the article "Using the Jazz SCM command line to support software configuration audit", and I would say it is an error.

You can load only the file you care about: scm load <workspace> <path-in-workspace>. That will get the version onto the disk, but it will pollute your disk with RTC metadata (i.e., the .jazz5 dir in the root of your sandbox). I suggest running it in a temporary directory and then deleting that directory once you have the file content that you want.
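For example, a throwaway-sandbox sketch (the workspace, component, and file path are hypothetical placeholders; the -r/-u/-P flags match the ones used in the question):
# Load one file into a temporary sandbox, copy it out, then delete the
# sandbox together with its .jazz5 metadata.
tmpdir=$(mktemp -d)
cd "$tmpdir"
scm load "BUILD_WS" "MyComponent/scripts/release.pl" -r "$reposURI" -u "$reposUser" -P "$reposPwd"
cp MyComponent/scripts/release.pl /wherever/you/need/it/
cd / && rm -rf "$tmpdir"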
That's kind of kludgy. Ideally you'd be able to move onto a modern version of RTC and use the 'extract' subcommand that you mention.

Related

I have an old MediaWiki root dir; I am not certain whether I can restore it

This is an older installation for which I have (what I believe to be) the full root directory of the MediaWiki site.
The version of mediawiki is 1.26.
I know the permissions on the directory are not correct, and the files have been messed around with (touched) by other users, etc.
As I'm not very familiar with mediawiki, I understand the database or a dump of it are very important.
I cannot determine which DB was used for this installation, as grep -RiI wgDBname only produces mentions of wgDBname, but no actual DB name being used.
I followed all applicable steps in https://www.mediawiki.org/wiki/Manual:Restoring_a_wiki_from_backup#External_links; however, this link assumes some knowledge of the DB location (or even which DB was used in the first place).
I've issued
find . -name '*.sql' -exec ls -lh {} \; > /tmp/output and so on, to try to find files more than a few KB in size, hoping to locate a DB dump that way (assuming it was a MySQL DB; it may also have been a Postgres installation), and so on.
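For reference, MediaWiki normally defines its database settings in LocalSettings.php in the wiki root, so a targeted check like this sketch may narrow down the DB name and type:
# Look for the standard config file and any DB settings it defines.
find . -name 'LocalSettings.php'
grep -E '\$wgDB(name|type|server|user)' ./LocalSettings.php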
Any pointers to a possible search direction would be appreciated. Thank you.

Trying to get RMySQL to work but not understanding bash's export or filesystem conventions

I am trying to install RMySQL on my Mac (Mavericks) and it errors out when I try to build it from source, saying:
Configuration error: could not find the MySQL installation include
and/or library directories. Manually specify the location of the
MySQL libraries and the header files and re-run R CMD INSTALL.
INSTRUCTIONS:
Define and export the 2 shell variables PKG_CPPFLAGS and PKG_LIBS to include the directory for header files (*.h) and
libraries, for example (using Bourne shell syntax):
export PKG_CPPFLAGS="-I<MySQL include dir>"
export PKG_LIBS="-L<MySQL lib dir> -lmysqlclient"
Re-run the R INSTALL command:
R CMD INSTALL RMySQL_<version>.tar.gz
I tried to follow the instructions by entering:
export PKG_CPPFLAGS="-I/usr/local/mysql/include"
export PKG_LIBS="-L/usr/local/mysql/lib -lmysqlclient"
but when I re-run RMySQL it still doesn't work. Moreover, if I type
$PKG_LIBS
to see what that variable holds, I get
-bash: -L/usr/local/mysql/lib: No such file or directory
I know that /usr/local/mysql/lib exists and it does contain the MySQL libraries. Am I misunderstanding the instructions?
I'm asking here only after a lot of effort to find solutions and/or workarounds. Sucks being a noob sometimes.
I am going to assume you're trying to get RMySQL to run on R 3.1.0 on Mavericks? Rather than worrying about exporting variables etc., here is a simple, clean solution that should avoid the headaches.
The RMySQL install link Pascal provided above really is your solution. You're probably just stumbling on syntax, or getting things to work from the terminal.
Even if you're a "noob", you should be able to get this working. I'll try to offer a "dummy's guide" walk through here, as I bet there are many others who have this problem too, even after trying to read the RMySQL installation readme.
I would bet with very high confidence the problem is just that you aren't specifying correctly the locations of the library and header folders for compiling. Read the errors carefully when you try to compile... the errors will probably tell you a file/header is missing, or some .so file (shared object) is missing.
One simple way to compile RMySQL from source on R 3.1.0 / Mavericks is as follows (this does not require you to set any environment variables, no editing of the Renviron file, etc.):
Does MySQL work by itself? I.e., can you open/run it with no problems? If not, fix that first.
Find the precise location of your mysql installation. For me, on Mavericks, I see mysql installed at /usr/local/mysql-5.6.17-osx10.7-x86_64 (your version number may be different). There is also another folder, /usr/local/mysql, which is an alias to /usr/local/mysql-5.6.17-osx10.7-x86_64 (/usr/local/mysql points to the current version of mysql you are using, if multiple mysql folders exist, I think). In this directory, I see two subdirectories (among many) called "include" and "lib". Take a look; "include" contains header files (include as in #include <...> in simple C/C++ programs). The "lib" folder contains the compiled code of the mysql library.
Here is an easy way to compile and install RMySQL that doesn't exactly follow the suggested way in the installation guide. Note that this does the same thing as the guide, just a little more easily, as it's one command line from the terminal once you know where your mysql install folder is. Go to the terminal and type the following exactly, with one space between each chunk (with your mysql folder name adjusted appropriately for the version number):
PKG_CPPFLAGS="-I/usr/local/mysql/include/" PKG_LIBS="-L/usr/local/mysql/lib/ -lmysqlclient" R CMD INSTALL RMySQL_0.9-3.tar.gz
OR (the same thing, just more typing)
PKG_CPPFLAGS="-I/usr/local/mysql-5.6.17-osx10.7-x86_64/include/" PKG_LIBS="-L/usr/local/mysql-5.6.17-osx10.7-x86_64/lib/ -lmysqlclient" R CMD INSTALL RMySQL_0.9-3.tar.gz
Note for dummies: make sure when you run this command that you are doing it from the terminal in the directory that contains the RMySQL_0.9-3.tar.gz file (or whatever the name is of the file that contains the RMySQL source code).
and RMySQL compiles!
Don't be afraid of trying to compile source code -- it's not just for 'compiled language programmers' or 'computer science graduates'. Most of the time when compiling fails, it's just because files are "missing" (there is no corruption in the source code) -- the user hasn't properly specified the locations of the headers and libraries (shared objects). Now pull your big boy/girl panties up and just do it... it's easy.
Notes for people clueless about compiling source code for packages in R:
a) Pay special attention to the spacing in the above, otherwise it may not work. Do not put any spaces between the = and the variable/file names (e.g. don't write PKG_CPPFLAGS ="-I/usr/local/mysql/include/" in the above, as it won't work).
b) When compiling, you want to specify the locations of the header files and the library files, and this is what the "-I/..." and "-L/..." flags are doing. The -I directory specifies the location of the header files, and -L the location of the library files. The library also requires the -l[name of library] flag (the -l refers to the lib prefix in the library file names).
c) Note that in the directory /usr/local/mysql-5.6.17-osx10.7-x86_64/lib/ I do not see a file called "lmysqlclient", or even "libmysqlclient", but I do see files named (among others) "libmysqlclient.a" and "libmysqlclient.18.dylib". So don't worry about your MySQL installation not being correct if you don't see a file just called "libmysqlclient" in the lib folder.
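A quick sanity check (a sketch, assuming the default install paths) that the -I and -L locations really contain what the compiler needs:
# The header the -I flag must reach, and the client library that -lmysqlclient links.
ls /usr/local/mysql/include/mysql.h
ls /usr/local/mysql/lib | grep libmysqlclient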

How to convert a bash file to a binary executable

I created a binary executable from a bash script on a Linux server using SHC. The binary works fine on Linux machines, but fails on Mac. How can I convert my bash file to a binary executable that is able to run everywhere (Ubuntu, CentOS, Mac, Cygwin)?
shc -v -r -T -f ir16fetcher.sh
mv ir16fetcher.sh.x ir16fetcher
Shebang of my bash script
#!/bin/bash
On Linux machines
./ir16installer
USAGE : ir16fetcher <servername/ip address> [the n th latest build - optional. Default 1]
EXAMPLE: ir16fetcher jagger 2
EXAMPLE: ir16fetcher 167.116.6.155
REQUIRE: Please make sure conf file in installation folder ~/IRinstall/ir16 & ~/IRinstall/irmanager
On my Mac
./ir16installer
-bash: ./ir16installer: cannot execute binary file
I think it's not gonna work
"The compiled binary will still be dependent on the shell
specified in the first line of the shell code (i.e.
#!/bin/sh), thus shc does not create completely independent
binaries."
From http://www.datsi.fi.upm.es/~frosal/sources/shc.html
You will have to do this for every architecture and operating system you need to support. In any case, there don't really seem to be any benefits to using this method for distribution. It adds dependencies and complicates delivery, and I'm pretty sure whatever obfuscation the "shc" compiler implements is easily reversed.
If the goal here is to "hide" your source code, and then have the "hidden" copy of the code be executable on the Unix OSes you listed, then encryption is really your only option.
I say this because encryption tools are available on every base Unix install. For your purposes, this is a very good thing, as you won't have to download or configure anything additional. They're just there, as part of the natural installation of the OS. One such tool is called openssl.
To Encrypt your file/script with openssl:
echo precious-content | openssl aes-128-cbc -a -salt -k mypassword
U2FsdGVkX1+K6tvItr9eEI4yC4nZPK8b6o4fc0DR/Vzh7HqpE96se8Fu/BhM314z
To Decrypt your file/script with openssl:
echo U2FsdGVkX1+K6tvItr9eEI4yC4nZPK8b6o4fc0DR/Vzh7HqpE96se8Fu/BhM314z | openssl aes-128-cbc -a -d -salt -k mypassword
precious-content
Now, to get openssl to do what you want automatically, without having to spend hours of your own time figuring out a way, you can paste your script into a site like www.EnScryption.com. This site will generate an "executable" version of your code for you, which you can then run on any Mac, Ubuntu, RedHat, or CentOS box.
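If you'd rather do it yourself, here is a minimal wrapper sketch (not what that site generates; myscript.sh.enc is a placeholder, created by applying the encryption command above to your script):
#!/bin/bash
# Prompt for the password, decrypt the payload, and feed it to bash.
read -s -p "Password: " pw; echo
openssl aes-128-cbc -a -d -salt -k "$pw" < myscript.sh.enc | bash -s "$@"
Note the plaintext is still recoverable by anyone who has the password, so this hides the source only from casual readers.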

Using Git to track mysql schema - some questions

Is this recommended?
Can I ask for some git command examples of how to track versions of a MySQL schema?
Should we use another repository other than the one we normally use for our application root?
Should I use something called a hook?
Update:
1) We navigate to our project root, where the .git database resides.
2) Inside .git we create a subfolder called hooks.
3) We put something like this inside a file called db-commit:
#!/bin/sh
mysqldump -u DBUSER -pDBPASSWORD DATABASE --no-data=true > SQLVersionControl/vc.sql
git add SQLVersionControl/vc.sql
exit 0
Now we can:
4) git commit -m
This commit will include a MySQL schema dump that was generated just before the commit.
The source of the above is here:
http://edmondscommerce.github.io/git/using-git-to-track-db-schema-changes-with-git-hook.html
If this is an acceptable way of doing it, can I please ask someone with patience to explain, line by line and with as much detail as possible, what is happening here:
#!/bin/sh
mysqldump -u DBUSER -pDBPASSWORD DATABASE --no-data=true > SQLVersionControl/vc.sql
git add SQLVersionControl/vc.sql
exit 0
Thanks a lot.
Assuming you have a git repo already, do the following in a shell script or whatever:
#!/bin/bash -e
# -e means exit if any command fails
DBHOST=dbhost.yourdomain.com
DBUSER=dbuser
DBPASS=dbpass # do this in a more secure fashion
DBNAME=dbname
GITREPO=/path/to/git/repo
cd $GITREPO
mysqldump -h $DBHOST -u $DBUSER -p$DBPASS -d $DBNAME > $GITREPO/schema.sql # the -d flag means "no data"
git add schema.sql
git commit -m "$DBNAME schema version $(date)"
git push # assuming you have a remote to push to
Then start this script on a daily basis from a cron job or what have you.
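For example, a hypothetical crontab entry (the script path is a placeholder) that runs the dump every night at 02:00:
# m h dom mon dow command
0 2 * * * /path/to/schema-dump.sh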
EDIT: By placing a script in .git/hooks/pre-commit (the name is important), the script will be executed before every commit. This way the state of the DB schema is captured for each commit, which makes sense. If you were instead to automatically import this SQL file every time you commit, you would blow away your database, which does not make sense.
#!/bin/sh
This line specifies that it's a shell script.
mysqldump -u DBUSER -pDBPASSWORD DATABASE --no-data=true > SQLVersionControl/vc.sql
This is the same as in my answer above; taking the DDL only from the database and storing it in a file.
git add SQLVersionControl/vc.sql
This stages the SQL file so that it is included in the commit.
exit 0
This exits the script with success. This is possibly dangerous. If mysqldump or git add fails, you may blow away something you wanted to keep.
If you're just tracking the schema, put all of the CREATE statements into one .sql file, and add the file to git.
$> mkdir myschema && cd myschema
$> git init
$> echo "CREATE TABLE ..." > schema.sql
$> git add schema.sql
$> git commit -m "Initial import"
IMO the best approach is described here: http://viget.com/extend/backup-your-database-in-git. For your convenience I repeat the most important pieces here.
The trick is to use mysqldump --skip-extended-insert, which creates dumps that can be better tracked/diffed by git.
There are also some hints regarding the best repository configuration in order to reduce disk size. Copied from here:
core.compression = 9 : Flag for gzip to specify the compression level for blobs and packs. Level 1 is fast with larger file sizes, level 9 takes more time but results in better compression.
repack.usedeltabaseoffset = true : Defaults to false for compatibility reasons, but is supported with Git >=1.4.4.
pack.windowMemory = 100m : (Re)packing objects may consume lots of memory. To prevent all your resources from going down the drain, it's useful to put some limits on that. There is also pack.deltaCacheSize.
pack.window = 15 : Defaults to 10. With a higher value, Git tries harder to find similar blobs.
gc.auto = 1000 : Defaults to 6700. As indicated in the article, it is recommended to run git gc every once in a while. Personally, I run git gc --auto every day, so things are only packed when there's enough garbage. git gc --auto normally only triggers the packing mechanism when there are 6700 loose objects around. This flag lowers that amount.
gc.autopacklimit = 10: Defaults to 50. Every time you run git gc, a new pack is generated of the loose objects. Over time you get too many packs which waste space. It is a good idea to combine all packs once in a while into a single pack, so all objects can be combined and deltified. By default git gc does this when there are 50 packs around. But for this situation a lower number may be better.
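These settings can be applied per repository with git config (a sketch; run it inside the repo):
# Apply the recommended settings from the list above.
git config core.compression 9
git config repack.usedeltabaseoffset true
git config pack.windowMemory 100m
git config pack.window 15
git config gc.auto 1000
git config gc.autopacklimit 10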
Old versions can be pruned via:
git rebase --onto master~8 master~7
(copied from here)
The following includes a git pre-commit hook to capture the mysql database/schema, given user='myuser', password='mypassword', database_name='dbase1'. It properly bubbles errors up to the git system (the exit 0's in other answers could be dangerous and may not handle error scenarios properly). Optionally, you can add a database import to a post-checkout hook (when capturing all the data, not just the schema), but take care given your database size. Details in the bash-script comments below.
pre-commit hook:
#!/bin/bash
# exit upon error
set -e
# another way to set "exit upon error", for readability
set -o errexit
mysqldump -umyuser -pmypassword dbase1 --no-data=true > dbase1.sql
# Uncomment following line to dump all data with schema,
# useful when used in tandem for the post-checkout hook below.
# WARNING: can greatly expand your git repo when employing for
# large databases, so carefully evaluate before employing this method.
# mysqldump -umyuser -pmypassword dbase1 > dbase1.sql
git add dbase1.sql
(optional) post-checkout hook:
#!/bin/bash
# mysqldump (above) is presumably run without '--no-data=true' parameter.
set -e
mysql -umyuser -pmypassword dbase1 < dbase1.sql
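For completeness, an installation sketch (the source file names are hypothetical): git looks for hooks at fixed paths under .git/hooks, and they must be executable.
# Install the two hooks into an existing repository.
cp pre-commit.sh .git/hooks/pre-commit
cp post-checkout.sh .git/hooks/post-checkout
chmod +x .git/hooks/pre-commit .git/hooks/post-checkout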
Versions of apps, OS I'm running:
root@node1 Dec 12 22:35:14 /var/www# mysql --version
mysql Ver 14.14 Distrib 5.1.54, for debian-linux-gnu (x86_64) using readline 6.2
root@node1 Dec 12 22:35:19 /var/www# git --version
git version 1.7.4.1
root@node1 Dec 12 22:35:22 /var/www# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 11.04
Release: 11.04
Codename: natty
root@node1 Dec 12 22:35:28 /var/www#
While I am not using Git, I have used source control for over 15 years. A best practice to adhere to when deciding where and how to store your src and accompanying resources in source control: if the DB schema is used within the project, then you should be versioning the schema and all other project resources in "that" project. If you develop a set of schemas or programming resources that you reuse in other projects, then you should have a separate repository for those reusable resources. That separate reusable-resources project will be versioned on its own and will track the versions of the actual reusable resources in that repository.
If you use a versioned resource out of the reusable repository in a different project, then you have the following scenario (just an example): Project XYZ version 1.0 is now using DB Schema_ABC version 4.0. In this case you will understand that you have used a specific version of a reusable resource, and since it is versioned you will be able to track its use throughout your project. If you get a bug report on DBSchema_ABC, you will be able to fix the schema and re-version it, as well as understand where else DBSchema_ABC is used and where you may have to make some changes. From there you will also understand which projects contain which versions of which reusable resources... You just have to understand how to track your resources.
Adopting this type of development environment and resource management strategy is key to releasing usable software and managing a break/fix enhancement environment. Even if you're developing for your own edification on your own time, you should be using source control... as you are...
As for Git, I would find a GUI front end or a dev environment integration if I could. Git is pretty big, so I am sure it has plenty of front-end support, maybe?
As brilliant as it sounds (the idea did occur to me as well), when I tried to implement it, I hit a wall. In theory, by using the --skip-extended-insert flag, although the initial dump would be big, the diffs between daily dumps should be minimal, hence the size increase of the repository over time could be assumed to be minimal as well, right? Wrong!
Git stores snapshots, not diffs, which means on each commit it takes the entire dump file, not just the diff. Moreover, since a dump with --skip-extended-insert uses all field names on every single insert line, it is huge compared to a dump done without --skip-extended-insert. This results in an explosion in size, the exact opposite of what one would expect.
In my case, with a ~300MB sql dump, the repository grew to gigabytes in days. So, what did I do? I first tried the same thing, only removing --skip-extended-insert, so that dumps would be smaller, and snapshots would be proportionally smaller as well. This approach held for a while, but in time it became unusable as well.
Still, the diff usage with --skip-extended-insert actually still seemed like a good idea, only now I try to use Subversion instead of git. I know, compared to git, svn is ancient history, yet it seems to work better, since it actually does use diffs instead of snapshots.
So in short, I believe best solution is doing the above, but with subversion instead of git.
(shameless plug)
The dbvc commandline tool allows you to manage your database schema updates in your repository.
It creates and uses a table _dbvc in the database, which holds a list of the updates that have been run. You can easily run the updates that haven't been applied to your database schema yet.
The tool uses git to determine the correct order of executing the updates.
DBVC usage
Show a list of commands
dbvc help
Show help on a specific command
dbvc help init
Initialise DBVC for an existing database.
dbvc init
Create a database dump. This is used to create the DB on a new environment.
mysqldump foobar > dev/schema.sql
Create the DB using the schema.
dbvc create
Add an update file. These are used to update the DB on other environments.
echo 'ALTER TABLE `foo` ADD COLUMN `status` BOOL DEFAULT 1;' > dev/updates/add-status-to-foo.sql
Mark an update as already run.
dbvc mark add-status-to-foo
Show a list of updates that need to be run.
dbvc status
Show all updates with their status.
dbvc status --all
Update the database.
dbvc update
I have found the following options to be mandatory for a version control / git-compatible mysqldump.
mysqldump --skip-opt --skip-comments |sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/'
(and maybe --no-data)
--skip-opt is very useful; it takes away all of --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. The DEFINER sed is necessary when the database contains triggers.
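Putting it together, a sketch of a schema-only, VCS-friendly dump (the database name and output path are placeholders):
# Schema-only dump without volatile comments, with DEFINER clauses stripped.
mysqldump --skip-opt --skip-comments --no-data mydb \
  | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' > schema.sql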

How do I create binary patches?

What's the best way to go about making a patch for a binary file?
I want it to be simple for users to apply (a simple patch application would be nice). Running diff on the file just gives Binary files [...] differ.
Check out bsdiff and bspatch (website, manpage, paper, GitHub fork).
To install this tool:
Windows: Download and extract this package. You will also need a copy of bzip2.exe in PATH; download that from the "Binaries" link here.
macOS: Install Homebrew and use it to install bsdiff.
Linux: Use your package manager to install bsdiff.
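Basic usage (a sketch; the file names are placeholders):
# Create a patch from the old and new versions, then rebuild new from old.
bsdiff oldfile newfile patchfile
bspatch oldfile newfile_rebuilt patchfile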
Courgette, by the Google Chrome team, looks like the most efficient tool for binary patching executables.
To quote their data:
Here are the sizes for the recent 190.1 -> 190.4 update on the developer channel:
Full update: 10,385,920 bytes
bsdiff update: 704,512 bytes
Courgette update: 78,848 bytes
Here are instructions to build it. Here is a Windows binary from 2018 courtesy of Mehrdad.
xdelta (website, GitHub) is another option. It seems to be more recent, but otherwise I have no idea how it compares to other tools like bsdiff.
Usage:
Creating a patch: xdelta -e -s old_file new_file delta_file
Applying a patch: xdelta -d -s old_file delta_file decoded_new_file
Installation:
Windows: Download the official binaries.
Chocolatey: choco install xdelta3
Homebrew: brew install xdelta
Linux: Available as xdelta or xdelta3 in your package manager.
Modern port: Very useful .NET port for bsdiff/bspatch:
https://github.com/LogosBible/bsdiff.net
My personal choice.
I tested it, and of all the links here it was the only one I was able to compile out of the box (with Visual Studio, e.g. Visual Studio 2013). (The C++ source elsewhere is a bit outdated, needs at least a bit of polishing, and is only 32-bit, which sets real memory (diff source size) limits. This is a port of the bsdiff C++ code, and it even tests whether the patch results are identical to the original code.)
Further idea: With .NET 4.5 you could even get rid of the #Zip library, which is a dependency here.
I haven't measured whether it is slightly slower than the C++ code, but it worked fine for me (bsdiff: 90 MB file in 1-2 minutes), and the only time-critical part for me is bspatch, not bsdiff.
I am not really sure whether the whole memory of an x64 machine is used, but I assume so. At least the x64-capable build ("Any CPU") works. I tried it with a 100 MB file.
Besides: the cited Google project 'Courgette' may be the best choice if your main targets are executable files. But it is work to build it (on Windows, at least), and for non-executable binary files it also uses pure bsdiff/bspatch, as far as I understand the documentation.
For small, simple patches, it's easiest just to tell diff to treat the files as text with the -a (or --text) option. As far as I understand, more complicated binary diffs are only useful for reducing the size of patches.
$ man diff | grep -B1 "as text"
-a, --text
treat all files as text
$ diff old new
Binary files old and new differ
$ diff -a old new > old.patch
$ patch < old.patch old
patching file old
$ diff old new
$
If the files are the same size and the patch just modifies a few bytes, you can use xxd, which is commonly installed with the OS. The following converts each file to a hex representation with one byte per line, then diffs the files to create a compact patch, then applies the patch.
$ xxd -c1 old > old.hex
$ xxd -c1 new > new.hex
$ diff -u old.hex new.hex | grep "^+" | grep -v "^++" | sed "s/^+//" > old.hexpatch
$ xxd -c1 -r old.hexpatch old
$ diff old new
$
This is a simpler, cleaner, better version suggested by bmaupin that uses process substitution instead of intermediate files, diff, and grep:
$ comm -13 <(xxd -c1 old) <(xxd -c1 new) > old.hexpatch
$ xxd -c1 -r old.hexpatch old
$ diff old new
$
Here the comm -13 removes lines that appear only in the first input as well as lines that appear in both inputs, leaving only the lines exclusive to the second input.
HDiffPatch can run on Windows, macOS, Linux, and Android.
It supports diffs between binary files or directories;
Creating a patch: hdiffz [-m|-s-64] [-c-lzma2] old_path new_path out_delta_file
Applying a patch: hpatchz old_path delta_file out_new_path
Install:
Download from the latest release, or download the source code & make;
Jojos Binary Diff is another good binary diff algorithm;
diff and git-diff can handle binary files by treating them as text with -a.
With git-diff you can also use --binary which produces ASCII encodings of binary files, suitable for pasting into an email for example.
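A round-trip sketch (the paths are hypothetical; a patch produced with --binary is applied with git apply):
# Create a binary-capable patch between two commits, then apply it elsewhere.
git diff --binary HEAD~1 HEAD -- assets/logo.png > logo.patch
git apply logo.patch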
https://github.com/reproteq/DiffPatchWpf
DiffPatchWpf
DiffPatchWpf is a simple binary patch maker tool.
Compare two binary files and save the differences between them in a new file, patch.txt. Then apply the patch to another binary quickly and easily.
Example:
1- Load file Aori.bin
2- Load file Amod.bin
3- Compare and save Aori-patch.txt
4- Load file Bori.bin
5- Load patch Aori-patch.txt
6- Apply patch and save file Bori-patched.bin
https://youtu.be/EpyuF4t5MWk
Microsoft Visual Studio Community 2019
Version 16.7.7
.NETFramework,Version=v4.7.2
Tested on Windows 10 x64
Assuming you know the structure of the file, you could use a C/C++ program to modify it byte by byte:
http://msdn.microsoft.com/en-us/library/c565h7xx(VS.71).aspx
Just read in the old file, and write out a new one modified as you like.
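The same byte-by-byte idea can be sketched with standard shell tools (the offset and value here are hypothetical):
# Copy the old file, then overwrite one byte at offset 10 without truncating.
cp old.bin new.bin
printf '\x42' | dd of=new.bin bs=1 seek=10 conv=notrunc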
Don't forget to include a file format version number in the file so you know how to read any given version of the file format.