How to give a relative path for the destinations of plugins (executable bundles) in PackageMaker? - packagemaker

How can I give a relative path for the destination of bundles while creating a package with PackageMaker?

I'm not sure what you mean, but maybe
cp -R "${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.bundle" "${SRCROOT}/${PRODUCT_NAME}.bundle"
is what you want? ${SRCROOT} might be what you're looking for.
I use this in my build script (for this bundle) to copy the created bundle to my source directory (for easier access, and also because I want to zip it).
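In case it helps, here is a hedged sketch of what such a Run Script build phase could look like with the zip step mentioned above included; the archive name and the zip -y flag (to keep symlinks inside the bundle intact) are my assumptions, not part of the original setup:
# Copy the built bundle next to the sources, then zip it there.
# Quoting guards against a PRODUCT_NAME that contains spaces.
set -e
cp -R "${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.bundle" "${SRCROOT}/${PRODUCT_NAME}.bundle"
cd "${SRCROOT}"
zip -r -y "${PRODUCT_NAME}.bundle.zip" "${PRODUCT_NAME}.bundle"   # archive name is just an example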

Related

How to include only a single folder in Bamboo build plan

I need Bamboo to build the project automatically when a file in the "api" subfolder changes. When a file in any other subfolder changes, the Bamboo build plan shouldn't run.
Folder structure:
project
- api
- ui
- core
In the Plan Configuration repositories tab, from the "Include / exclude files" dropdown I have selected the following option
Include only changes that match the following pattern
and I have tried the following patterns:
.*/api/.*
api/
api/*
api\/*
api/**
/api/*
but the build plan isn't triggered. With the "Include / exclude files" dropdown set to None the build plan runs (but it also runs when a file changes in any other subfolder).
I can't split the project up into different repositories.
What pattern should I use or is there any other solution for this?
The pattern that ended up working was
api/.*
Supposedly it's a regular expression matched from the root of the checkout, although I have not used this feature myself. Here are some of their examples:
https://confluence.atlassian.com/display/BAMBOO052/_planRepositoryIncludeExcludeFilesExamples?_ga=2.91083610.1778956526.1502832020-118211336.1443803386
What you might try is to let it check out the whole thing without the include filter set, and don't let it delete the working directory. Look at the filesystem and verify the path from the root of the working directory, then test your regex against the whole path relative to that working directory.
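If you want to sanity-check a candidate pattern without waiting for a build, a rough shell sketch along these lines can help; it assumes, as the docs above suggest, that Bamboo applies the regex to changed-file paths relative to the checkout root (the sample paths are made up):
# Which of these hypothetical changed-file paths would the pattern match?
# The leading ^ emulates matching from the root of the checkout.
for path in api/handler.py ui/button.js core/util.c api/v2/schema.json; do
    if echo "$path" | grep -E -q '^api/.*'; then
        echo "MATCH   $path"    # a change here should trigger the plan
    else
        echo "ignored $path"    # a change here should not
    fi
done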

How to download HTTP directory with all files and sub-directories as they appear on the online files/folders list?

There is an online HTTP directory that I have access to. I have tried to download all sub-directories and files via wget. But the problem is that when wget downloads sub-directories it downloads the index.html file, which contains the list of files in that directory, without downloading the files themselves.
Is there a way to download the sub-directories and files without a depth limit (as if the directory I want to download were just a folder which I want to copy to my computer)?
Solution:
wget -r -np -nH --cut-dirs=3 -R index.html http://hostname/aaa/bbb/ccc/ddd/
Explanation:
It will download all files and subfolders in the ddd directory:
-r : recursively
-np : not going to upper directories, like ccc/…
-nH : not saving files to the hostname folder
--cut-dirs=3 : but saving it to ddd by omitting the first 3 folders aaa, bbb, ccc
-R index.html : excluding index.html files
Reference: http://bmwieczorek.wordpress.com/2008/10/01/wget-recursively-download-all-files-from-certain-directory-listed-by-apache/
I was able to get this to work thanks to this post, using VisualWGet. It worked great for me. The important part seems to be to check the -recursive flag. I also found that the -no-parent flag is important, otherwise it will try to download everything.
You can use lftp, the Swiss Army knife of downloading. If you have bigger files you can add --use-pget-n=10 to the command.
lftp -c 'mirror --parallel=100 https://example.com/files/ ;exit'
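Putting the two suggestions together (the URL is a placeholder), the command might look like this:
# Mirror with 100 parallel transfers; --use-pget-n=10 additionally splits each
# larger file into 10 segments.
lftp -c 'mirror --parallel=100 --use-pget-n=10 https://example.com/files/ ; exit'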
wget -r -np -nH --cut-dirs=3 -R index.html http://hostname/aaa/bbb/ccc/ddd/
From man wget
‘-r’
‘--recursive’
Turn on recursive retrieving. See Recursive Download, for more details. The default maximum depth is 5.
‘-np’
‘--no-parent’
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded. See Directory-Based Limits, for more details.
‘-nH’
‘--no-host-directories’
Disable generation of host-prefixed directories. By default, invoking Wget with ‘-r http://fly.srk.fer.hr/’ will create a structure of directories beginning with fly.srk.fer.hr/. This option disables such behavior.
‘--cut-dirs=number’
Ignore number directory components. This is useful for getting a fine-grained control over the directory where recursive retrieval will be saved.
Take, for example, the directory at ‘ftp://ftp.xemacs.org/pub/xemacs/’. If you retrieve it with ‘-r’, it will be saved locally under ftp.xemacs.org/pub/xemacs/. While the ‘-nH’ option can remove the ftp.xemacs.org/ part, you are still stuck with pub/xemacs. This is where ‘--cut-dirs’ comes in handy; it makes Wget not “see” number remote directory components. Here are several examples of how ‘--cut-dirs’ option works.
No options -> ftp.xemacs.org/pub/xemacs/
-nH -> pub/xemacs/
-nH --cut-dirs=1 -> xemacs/
-nH --cut-dirs=2 -> .
--cut-dirs=1 -> ftp.xemacs.org/xemacs/
...
If you just want to get rid of the directory structure, this option is similar to a combination of ‘-nd’ and ‘-P’. However, unlike ‘-nd’, ‘--cut-dirs’ does not lose with subdirectories—for instance, with ‘-nH --cut-dirs=1’, a beta/ subdirectory will be placed to xemacs/beta, as one would expect.
No software or plugin required!
(only usable if you don't need recursive depth)
Use a bookmarklet. Drag this link into your bookmarks, then edit it and paste in this code:
javascript:(function(){ var l=document.links; var ext=prompt("Select extension for download (all links containing it will be downloaded).", ".mp3"); for(var i=0; i<l.length; i++) { if(l[i].href.indexOf(ext) !== -1){ l[i].setAttribute("download", l[i].text); l[i].click(); } } })();
Then go to the page from which you want to download the files and click that bookmarklet.
wget is an invaluable resource and something I use myself. However, sometimes there are characters in the address that wget identifies as syntax errors. I'm sure there is a fix for that, but as this question did not ask specifically about wget, I thought I would offer an alternative for those people who will undoubtedly stumble upon this page looking for a quick fix with no learning curve required.
There are a few browser extensions that can do this, but most require installing download managers, which aren't always free, tend to be an eyesore, and use a lot of resources. Here's one that has none of these drawbacks:
"Download Master" is an extension for Google Chrome that works great for downloading from directories. You can choose to filter which file-types to download, or download the entire directory.
https://chrome.google.com/webstore/detail/download-master/dljdacfojgikogldjffnkdcielnklkce
For an up-to-date feature list and other information, visit the project page on the developer's blog:
http://monadownloadmaster.blogspot.com/
You can use this Firefox add-on to download all files in an HTTP directory.
https://addons.mozilla.org/en-US/firefox/addon/http-directory-downloader/
wget generally works in this way, but some sites may have problems and it may create too many unnecessary html files. In order to make this work easier and to prevent unnecessary file creation, I am sharing my getwebfolder script, which is the first Linux script I wrote for myself. This script downloads all the content of a web folder given as a parameter.
When you try to download an open web folder with wget that contains more than one file, wget downloads a file named index.html. This file contains the file list of the web folder. My script converts the file names written in the index.html file into web addresses and downloads them cleanly with wget (a rough sketch of the idea is shown below).
Tested on Ubuntu 18.04 and Kali Linux; it may work on other distros as well.
Usage:
extract the getwebfolder file from the zip file provided below
chmod +x getwebfolder (only for the first time)
./getwebfolder webfolder_URL
such as ./getwebfolder http://example.com/example_folder/
Download Link
Details on blog
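Since the download link and blog details were not reproduced here, the following is only a rough sketch of the idea the answer describes, not the author's actual script; the grep/sed link extraction assumes a typical Apache-style index.html:
#!/bin/sh
# Fetch the folder's index.html, extract the href targets, skip parent-directory,
# absolute and column-sort links, and download each remaining file with wget.
base="$1"                      # e.g. http://example.com/example_folder/
wget -q -O index.html "$base"
grep -o 'href="[^"]*"' index.html \
  | sed 's/^href="//; s/"$//' \
  | grep -v -e '^\.\.' -e '^/' -e '^?' \
  | while read -r name; do
        wget "${base}${name}"
    done
rm -f index.html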

Make: Redo some targets if configuration changes

I want to reexecute some targets when the configuration changes.
Consider this example:
I have a configuration variable (that is either read from environment variables or a config.local file):
CONF:=...
Based on this variable CONF, I assemble a header file conf.hpp like this:
conf.hpp:
	buildConfHeader $(CONF)
Now, of course, I want to rebuild this header if the configuration variable changes, because otherwise the header would not reflect the new configuration. But how can I track this with make? The configuration variable is not tied to a file, as it may be read from environment variables.
Is there any way to achieve this?
I have figured it out. Hopefully this will help anyone having the same problem:
I build a file name from the configuration itself, so if we have
CONF:=a b c d e
then I create a configuration identifier by replacing the spaces with underscores, i.e.,
null:=
space:= $(null) #
CONFID:= $(subst $(space),_,$(strip $(CONF))).conf
which will result in CONFID=a_b_c_d_e.conf
Now, I use this $(CONFID) as dependency for the conf.hpp target. In addition, I add a rule for $(CONFID) to delete old .conf files and create a new one:
$(CONFID):
	rm -f *.conf #remove old .conf files, -f so no error when no .conf files are found
	touch $(CONFID) #create a new file with the right name

conf.hpp: $(CONFID)
	buildConfHeader $(CONF)
Now everything works fine. The file with name $(CONFID) tracks the configuration used to build the current conf.hpp. If the configuration changes, then $(CONFID) will point to a non-existent .conf file. Thus, the first rule will be executed, the old .conf file will be deleted and a new one will be created. The header will be updated. Exactly what I want :)
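A quick, hypothetical way to convince yourself the trick behaves as described (CONF given on the command line overrides the value in the makefile; buildConfHeader is the tool from the question):
make conf.hpp CONF="a b c"   # creates a_b_c.conf, then regenerates conf.hpp
make conf.hpp CONF="a b c"   # same configuration: conf.hpp is already up to date
make conf.hpp CONF="a b X"   # removes a_b_c.conf, creates a_b_X.conf, rebuilds conf.hpp
ls *.conf                    # only a_b_X.conf should remain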
There is no way for make to know what to rebuild if the configuration changed via a macro or environment variable.
You can, however, use a target that simply updates the timestamp of conf.hpp, which will force it to always be rebuilt:
conf.hpp: confupdate
	buildConfHeader $(CONF)

confupdate:
	@touch conf.hpp
However, as I said, conf.hpp will always be built, meaning any targets that depend upon it will need to be rebuilt as well. A much more friendly solution is to generate the makefile itself. CMake or the GNU Autotools are good for this, except you sacrifice a lot of control over the makefile. You could also use a build script that creates the makefile, but I'd advise against this since there exist tools that will allow you to build one much more easily.

How to set FSTrigger's folder path in Hudson CI integration tool?

I am using the Hudson tool to automate tests for our project. I want to use the FSTrigger plugin to trigger the build whenever there is a change in SVN.
I set the folder path to "http://192.16.17.121/test/test1/config/", but it gives an error that the folder should exist, even though it exists at the specified location. I can view it directly from a browser too.
Can anybody tell me what the problem is? Your help will be appreciated.
Thanks...
You have to specify an absolute path to a directory on your filesystem. What you have shown there is a URL.
An absolute path would look like this:
/var/www/html/test/test1/config
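In other words, the check has to pass on the machine where Hudson itself runs, for example (using the example path above):
# Run this on the Hudson host, not on your workstation or in a browser:
test -d /var/www/html/test/test1/config && echo "folder exists" || echo "no such folder"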

How to use a relative pathname to a Mercurial hook

I have a script that is in the top level of my working copy and would like to use it as a Mercurial hook. If I use an absolute pathname to the hook then everything is fine, but I want a relative pathname so the whole thing can be easily moved around and used in other working copies, and so other developers can copy the hgrc as-is.
/space/project/.hg/hgrc contains
[hooks]
update = genid
The genid script is at /space/project/genid
The hook is invoked just fine if I am in /space/project but if my current directory is /space/project/src/tools then 'hg update' will give an error as the hook cannot be found.
Python hooks cannot use a relative path. Script hooks can, like this:
[hooks]
update = ./genid
In certain cases, environment variables are expanded in the Mercurial configuration, so you can check whether you can use an environment variable.
[hooks]
update = $MercurialHooks/genid
See FAQ (12) in https://www.mercurial-scm.org/wiki/TipsAndTricks
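For example, assuming your Mercurial does expand environment variables here (see the FAQ linked above), you could point the variable at the working copy root before running hg; the name MercurialHooks is just the one used in the example:
# Point the variable at the directory that contains genid, then run hg as usual.
export MercurialHooks=/space/project
hg update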
I had the same problem and couldn't resolve it. The workaround was easy though! I versioned the file in the repo and just copied it to my .hg folder. Not ideal, but it isn't that likely to change, and other repo users can still get a copy of the file.