TCL help: How to check for unmounted/bad disks before reading a file - tcl

I need help here.
I have a list of directory/file paths, and my program reads through every one of them.
One of the directories is on an unmounted/bad disk, and this causes my program to hang when I try to open the file using the command below.
catch {set directory_fid [open $filePath r]}
So, how can I check the directory's status before reading/opening the file? I want to skip the file if there is no response within a certain time and continue with the next file.
* file isdir $dir does not work either.
* There is also no response when I use ls on the directory in Unix.

Before you start down this path, I would review your requirements and see if there's any easier way to handle this. It would be better to fix the mounts so that they don't cause a hang condition if an access attempt is made.
The main problem is that for the directories you are checking, you need to know the corresponding mount point. If you don't know the mount point, it's hard to tell whether the directory you want to check will cause any hangs when you try to access it.
First, you would have to parse /etc/fstab and get a list of possible filesystem mount points (Assumption, Linux system -- if not Linux, there will be an equivalent file).
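A rough sketch of that parsing step in Tcl (assuming the usual whitespace-separated fstab format, where the second field is the mount point; comments and blank lines are skipped):
set possible {}
set f [open /etc/fstab r]
while {[gets $f line] >= 0} {
    set line [string trim $line]
    if {$line eq "" || [string index $line 0] eq "#"} continue
    # second field of an fstab entry is the mount point
    lappend possible [lindex $line 1]
}
close $f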
Second, to see what is currently mounted, you need the di Tcl extension (wiki page, or the main page with download links) (*). Using this extension, you can get a list of mounted filesystems.
# the load only needs to be done once...
set ext [info sharedlibextension]
set lfn [file normalize [file join [file dirname [info script]] diskspace$ext]]
load $lfn
# there are various options that can be passed to the `diskspace`
# command that will change which filesystems are listed.
set fsdata [diskspace -f {}]
set fslist [dict keys $fsdata]
Now you have a list of possible mount points, and you know which are mounted.
Third, you need to figure out which mount point corresponds to the directory you want to check. For example, if you have:
/user/bll/source/stuff.c
You need to check for /user/bll/source, then /user/bll, then /user, then / as possible mount points.
There's a huge assumption here that the file or any of its parent directories are not symlinked to another place.
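Here is a minimal sketch of that walk, assuming $fslist from the diskspace example above (probableMountPoint is a hypothetical helper name):
# walk up from the file toward the root, returning the first directory
# found in the mounted-filesystem list (or "" if none matches);
# [file dirname] is pure string manipulation, so it never touches the
# (possibly hung) filesystem
proc probableMountPoint {path fslist} {
    set dir [file dirname $path]
    while {1} {
        if {$dir in $fslist} {
            return $dir
        }
        set parent [file dirname $dir]
        if {$parent eq $dir} {
            return "" ;# reached the root without a match
        }
        set dir $parent
    }
}
set mountpoint [probableMountPoint $filePath $fslist]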
Once you determine the probable mount point, you can check if it is mounted:
if { $mountpoint in $fslist } {
...
} else {
# better skip this one, the probable mount point is not mounted.
}
As you can see, this is a lot of work. It's fragile.
Better to fix the mounts so they don't hang.
(*) I wrote di and the di Tcl extension. This is a portable solution. You can of course use exec to run df or mount, but there are other issues (parsing, portability, determining which filesystems to use) if you use the more manual method.

gcloudignore doesn't allow standard wildcard whitelist

Gcloudignore works like gitignore in that you can exclude certain files from uploading to GCF. Sometimes when you have really large projects with lots of generated files, it can be useful to exclude all files except a few.
.gcloudignore
# Ignore everything
# Or /*
*
# Except the Cloud Function files we want to deploy
!/package.json
!/index.js
The gcloudignore file above gives us the error "File index.js or function.js that is expected to define function doesn't exist in the root directory.", meaning index.js is ignored and cannot be read.
However the following ignore file syntax works just fine to deploy:
# Ignore everything
/[!.]*
/.?*
# Except the Cloud Function files we want to deploy
!/package.json
!/index.js
I tried peering into the gcloud program code, but does anyone know why this is the case?
I ran into this problem as well.
I found a workaround -- explicitly allow the current dir:
# Ignore everything by default
*
# Allow files explicitly
!foo.bar
!*.baz
# Explicitly allow current dir. `gcloud deploy` fails without it.
!.
Works for me. Don't know why.
OperationError: code=13, message=Error setting up the execution environment for your function. Please try deploying again after a few minutes.
I just ran into this problem recently. Error code 13 means, in my view, that the code/function failed to initialize (in time).
In my case, after checking the Stackdriver logs, the problem was mostly caused by either "Provided module can't be loaded." (dependencies failed) or "Provided code is not a loadable module." (the main module is missing), both of which point to an incomplete deploy.
Below is how I got it to work:
# Ignore everything by default
/*
# Allow files explicitly
!config.js
!index.js
# Tried !app/ but does not work
!app/**
# Explicitly allow current folder, as Arne Jørgensen said.
!.
Be careful: !app/ isn't enough. I had to use !app/** to include all files.
If that's your case, check the Source tab in the function console to see which files were actually uploaded.

Apache forbidden error when trying to run a Perl script [duplicate]

I have a Perl CGI script that isn't working and I don't know how to start narrowing down the problem. What can I do?
Note: I'm adding the question because I really want to add my very lengthy answer to Stack Overflow. I keep externally linking to it in other answers and it deserves to be here. Don't be shy about editing my answer if you have something to add.
This answer is intended as a general framework for working through
problems with Perl CGI scripts and originally appeared on Perlmonks as Troubleshooting Perl CGI Scripts. It is not a complete guide to every
problem that you may encounter, nor a tutorial on bug squashing. It
is just the culmination of my experience debugging CGI scripts for twenty (plus!) years. This page seems to have had many different homes, and I seem
to forget it exists, so I'm adding it to Stack Overflow. You
can send any comments or suggestions to me at
bdfoy#cpan.org. It's also community wiki, but don't go too nuts. :)
Are you using Perl's built in features to help you find problems?
Turn on warnings to let Perl warn you about questionable parts of your code. You can do this from the command line with the -w switch so you don't have to change any code or add a pragma to every file:
% perl -w program.pl
However, you should force yourself to always clear up questionable code by adding the warnings pragma to all of your files:
use warnings;
If you need more information than the short warning message, use the diagnostics pragma to get more information, or look in the perldiag documentation:
use diagnostics;
Did you output a valid CGI header first?
The server is expecting the first output from a CGI script to be the CGI header. Typically that might be as simple as print "Content-type: text/plain\n\n"; or with CGI.pm and its derivatives, print header(). Some servers are sensitive to error output (on STDERR) showing up before standard output (on STDOUT).
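As a minimal sketch, a bare-bones script that gets the header right might look like this:
#!/usr/bin/perl
use strict;
use warnings;

# the header comes first, followed by a blank line, then the body
print "Content-type: text/plain\n\n";
print "Hello from CGI\n";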
Try sending errors to the browser
Add this line
use CGI::Carp 'fatalsToBrowser';
to your script. This also sends compilation errors to the browser window. Be sure to remove this before moving to a production environment, as the extra information can be a security risk.
What did the error log say?
Servers keep error logs (or they should, at least).
Error output from the server and from your script should
show up there. Find the error log and see what it says.
There isn't a standard place for log files. Look in the
server configuration for their location, or ask the server
admin. You can also use tools such as CGI::Carp
to keep your own log files.
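For example, on many Apache setups (the path varies by system, so this is just an illustration):
% tail -f /var/log/apache2/error.log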
What are the script's permissions?
If you see errors like "Permission denied" or "Method not
implemented", it probably means that your script is not
readable and executable by the web server user. On flavors
of Unix, changing the mode to 755 is recommended:
chmod 755 filename. Never set a mode to 777!
Are you using use strict?
Remember that Perl automatically creates variables when
you first use them. This is a feature, but sometimes can
cause bugs if you mistype a variable name. The pragma
use strict will help you find those sorts of
errors. It's annoying until you get used to it, but your
programming will improve significantly after a while and
you will be free to make different mistakes.
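For example, here is the sort of typo that use strict turns into a compile-time error:
use strict;
my $count = 10;
print $cuont;   # typo: under strict this dies at compile time with
                # "Global symbol "$cuont" requires explicit package name"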
Does the script compile?
You can check for compilation errors by using the -c
switch. Concentrate on the first errors reported. Rinse,
repeat. If you are getting really strange errors, check to
ensure that your script has the right line endings. If you
FTP in binary mode, checkout from CVS, or something else that
does not handle line end translation, the web server may see
your script as one big line. Transfer Perl scripts in ASCII
mode.
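For example, a compile-only syntax check from the command line:
% perl -c program.pl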
Is the script complaining about insecure dependencies?
If your script complains about insecure dependencies, you
are probably using the -T switch to turn on taint mode, which is
a good thing since it keeps you from passing unchecked data to the shell. If
it is complaining, it is doing its job to help us write more secure scripts. Any
data originating from outside of the program (i.e. the environment)
is considered tainted. Environment variables such as PATH and
LD_LIBRARY_PATH
are particularly troublesome. You have to set these to a safe value
or unset them completely, as I recommend. You should be using absolute
paths anyway. If taint checking complains about something else,
make sure that you have untainted the data. See perlsec
man page for details.
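A minimal sketch of both recommendations, sanitizing the environment and untainting input (the variable name and pattern are just illustrations):
#!/usr/bin/perl -T
use strict;
use warnings;

# set PATH to a known-safe value and drop other troublesome variables
$ENV{PATH} = '/bin:/usr/bin';
delete @ENV{qw(IFS CDPATH ENV BASH_ENV LD_LIBRARY_PATH)};

# untaint by extracting only the characters you expect
my ($name) = ($ENV{QUERY_STRING} // '') =~ /\Aname=(\w+)\z/
    or die "Unexpected input";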
What happens when you run it from the command line?
Does the script output what you expect when run from the
command line? Is the header output first, followed by a
blank line? Remember that STDERR may be merged with STDOUT
if you are on a terminal (e.g. an interactive session), and
due to buffering may show up in a jumbled order. Turn on
Perl's autoflush feature by setting $| to a
true value. Typically you might see $|++; in
CGI programs. Once set, every print and write will
immediately go to the output rather than being buffered.
You have to set this for each filehandle. Use select to
change the default filehandle, like so:
$|++; #sets $| for STDOUT
$old_handle = select( STDERR ); #change to STDERR
$|++; #sets $| for STDERR
select( $old_handle ); #change back to STDOUT
Either way, the first thing output should be the CGI header
followed by a blank line.
What happens when you run it from the command line with a CGI-like environment?
The web server environment is usually a lot more limited
than your command line environment, and has extra
information about the request. If your script runs fine
from the command line, you might try simulating a web server
environment. If the problem appears, you have an
environment problem.
Unset or remove these variables
PATH
LD_LIBRARY_PATH
all ORACLE_* variables
Set these variables
REQUEST_METHOD (set to GET, HEAD, or POST as appropriate)
SERVER_PORT (set to 80, usually)
REMOTE_USER (if you are doing protected access stuff)
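For example, one way to fake a minimal CGI environment for a single run (env -i starts from an empty environment; the query string is just an illustration, and the absolute path to perl matters because -i also clears PATH):
% env -i REQUEST_METHOD=GET SERVER_PORT=80 QUERY_STRING='name=value' /usr/bin/perl -w program.pl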
Recent versions of CGI.pm ( > 2.75 ) require the -debug flag to
get the old (useful) behavior, so you might have to add it to
your CGI.pm imports.
use CGI qw(-debug);
Are you using die() or warn?
Those functions print to STDERR unless you have redefined
them. They don't output a CGI header, either. You can get
the same functionality with packages such as CGI::Carp.
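For example, CGI::Carp's carpout function can redirect those messages to a log file of your own (the log path here is hypothetical):
use CGI::Carp qw(carpout);

BEGIN {
    # send this script's warn/die output to our own log
    # instead of the server's error log
    open my $log, '>>', '/var/www/logs/mycgi.log'
        or die "Can't open log file: $!";
    carpout($log);
}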
What happens after you clear the browser cache?
If you think your script is doing the right thing, and
when you perform the request manually you get the right
output, the browser might be the culprit. Clear the cache
and set the cache size to zero while testing. Remember that
some browsers are really stupid and won't actually reload
new content even though you tell them to do so. This is
especially prevalent in cases where the URL path is the
same, but the content changes (e.g. dynamic images).
Is the script where you think it is?
The file system path to a script is not necessarily
directly related to the URL path to the script. Make sure
you have the right directory, even if you have to write a
short test script to test this. Furthermore, are you sure
that you are modifying the correct file? If you don't see
any effect with your changes, you might be modifying a
different file, or uploading a file to the wrong place.
(This is, by the way, my most frequent cause of such trouble
;)
Are you using CGI.pm, or a derivative of it?
If your problem is related to parsing the CGI input and you
aren't using a widely tested module like CGI.pm, CGI::Request,
CGI::Simple or CGI::Lite, use the module and get on with life.
CGI.pm has a cgi-lib.pl compatibility mode which can help you solve input
problems due to older CGI parser implementations.
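A minimal sketch of letting CGI.pm do the parsing:
use strict;
use warnings;
use CGI;

my $q    = CGI->new;           # parses GET or POST input for you
my $name = $q->param('name');  # fetch a single parameter

print $q->header('text/plain');
print "name is ", (defined $name ? $name : '(missing)'), "\n";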
Did you use absolute paths?
If you are running external commands with
system, back ticks, or other IPC facilities,
you should use an absolute path to the external program.
Not only do you know exactly what you are running, but you
avoid some security problems as well. If you are opening
files for either reading or writing, use an absolute path.
The CGI script may have a different idea about the current
directory than you do. Alternatively, you can do an
explicit chdir() to put you in the right place.
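For example (the paths are just illustrations):
# absolute path to the external program, not whatever $PATH finds first
my $listing = `/usr/bin/ls -l /var/tmp`;

# absolute path when opening files, too; the CGI script's current
# directory may not be what you expect
open my $fh, '<', '/var/www/data/config.txt'
    or die "Can't open config: $!";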
Did you check your return values?
Most Perl functions will tell you if they worked or not
and will set $! on failure. Did you check the
return value and examine $! for error messages? Did you check
$@ if you were using eval?
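For example (the path is illustrative and risky_call is a hypothetical function):
my $file = '/path/of/interest';

# check the return value and examine $! when a system call fails
open my $fh, '<', $file
    or die "Can't open $file: $!";

# check $@ after an eval
my $result = eval { risky_call() };
if ($@) {
    warn "risky_call failed: $@";
}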
Which version of Perl are you using?
The latest stable version of Perl is 5.28 (or not, depending on when this was last edited). Are you using an older version? Different versions of Perl may have different ideas of warnings.
Which web server are you using?
Different servers may act differently in the same
situation. The same server product may act differently with
different configurations. Include as much of this
information as you can in any request for help.
Did you check the server documentation?
Serious CGI programmers should know as much about the
server as possible - including not only the server features
and behavior, but also the local configuration. The
documentation for your server might not be available to you
if you are using a commercial product. Otherwise, the
documentation should be on your server. If it isn't, look
for it on the web.
Did you search the archives of comp.infosystems.www.authoring.cgi?
This used to be useful, but all the good posters have either died or wandered off.
It's likely that someone has had your problem before,
and that someone (possibly me) has answered it in this
newsgroup. Although this newsgroup has passed its heyday, the collected wisdom from the past can sometimes be useful.
Can you reproduce the problem with a short test script?
In large systems, it may be difficult to track down a bug
since so many things are happening. Try to reproduce the problem
behavior with the shortest possible script. Knowing the problem
is most of the fix. This may certainly be time-consuming, but you
haven't found the problem yet and you're running out of options. :)
Did you decide to go see a movie?
Seriously. Sometimes we can get so wrapped up in the problem that we
develop "perceptual narrowing" (tunnel vision). Taking a break,
getting a cup of coffee, or blasting some bad guys in [Duke Nukem,Quake,Doom,Halo,COD] might give you
the fresh perspective that you need to re-approach the problem.
Have you vocalized the problem?
Seriously again. Sometimes explaining the problem aloud
leads us to our own answers. Talk to the penguin (plush toy) because
your co-workers aren't listening. If you are interested in this
as a serious debugging tool (and I do recommend it if you haven't
found the problem by now), you might also like to read The Psychology
of Computer Programming.
I think CGI::Debug is worth mentioning as well.
Are you using an error handler while you are debugging?
die statements and other fatal run-time and compile-time errors get
printed to STDERR, which can be hard to find and may be conflated with
messages from other web pages at your site. While you're debugging your
script, it's a good idea to get the fatal error messages to display in your
browser somehow.
One way to do this is to call
use CGI::Carp qw(fatalsToBrowser);
at the top of your script. That call will install a $SIG{__DIE__} handler (see perlvar) that displays fatal errors in your browser, prepending them with a valid header if necessary. Another CGI debugging trick that I used before I ever heard of CGI::Carp was to
use eval with the DATA and __END__ facilities of the script to catch compile-time errors:
#!/usr/bin/perl
eval join '', <DATA>;
if ($@) { print "Content-type: text/plain\n\nError in the script:\n$@\n"; }
__DATA__
# ... actual CGI script starts here
This more verbose technique has a slight advantage over CGI::Carp in that it will catch more compile-time errors.
Update: I've never used it, but it looks like CGI::Debug, as Mikael S
suggested, is also a very useful and configurable tool for this purpose.
I wonder how come no-one mentioned the PERLDB_OPTS option called RemotePort; although admittedly, there aren't many working examples on the web (RemotePort isn't even mentioned in perldebug) - and it was kinda problematic for me to come up with this one, but here it goes (it being a Linux example).
To do a proper example, first I needed something that can do a very simple simulation of a CGI web server, preferably through a single command line. After finding Simple command line web server for running cgis. (perlmonks.org), I found the IO::All - A Tiny Web Server to be applicable for this test.
Here, I'll work in the /tmp directory; the CGI script will be /tmp/test.pl (included below). Note that the IO::All server will only serve executable files in the same directory as CGI, so chmod +x test.pl is required here. So, to do the usual CGI test run, I change directory to /tmp in the terminal, and run the one-liner web server there:
$ cd /tmp
$ perl -MIO::All -e 'io(":8080")->fork->accept->(sub { $_[0] < io(-x $1 ? "./$1 |" : $1) if /^GET \/(.*) / })'
The webserver command will block in the terminal, and will otherwise start the web server locally (on 127.0.0.1 or localhost) - afterwards, I can go to a web browser, and request this address:
http://127.0.0.1:8080/test.pl
... and I should observe the prints made by test.pl being loaded - and shown - in the web browser.
Now, to debug this script with RemotePort, we first need a listener on the network, through which we will interact with the Perl debugger; we can use the command-line tool netcat (nc; I saw this in "Perl如何remote debug?", i.e. "How to remote debug Perl?"). So, first run the netcat listener in one terminal, where it will block and wait for connections on port 7234 (which will be our debug port):
$ nc -l 7234
Then, we'd want perl to start in debug mode with RemotePort, when the test.pl has been called (even in CGI mode, through the server). This, in Linux, can be done using the following "shebang wrapper" script - which here also needs to be in /tmp, and must be made executable:
cd /tmp
cat > perldbgcall.sh <<'EOF'
#!/bin/bash
PERLDB_OPTS="RemotePort=localhost:7234" perl -d -e "do '$@'"
EOF
chmod +x perldbgcall.sh
This is kind of a tricky thing - see shell script - How can I use environment variables in my shebang? - Unix & Linux Stack Exchange. But, the trick here seems to be not to fork the perl interpreter which handles test.pl - so once we hit it, we don't exec, but instead we call perl "plainly", and basically "source" our test.pl script using do (see How do I run a Perl script from within a Perl script?).
Now that we have perldbgcall.sh in /tmp - we can change the test.pl file, so that it refers to this executable file on its shebang line (instead of the usual Perl interpreter) - here is /tmp/test.pl modified thus:
#!./perldbgcall.sh
# this is test.pl
use 5.10.1;
use warnings;
use strict;
my $b = '1';
my $a = sub { "hello $b there" };
$b = '2';
print "YEAH " . $a->() . " CMON\n";
$b = '3';
print "CMON " . &$a . " YEAH\n";
$DB::single=1; # BREAKPOINT
$b = '4';
print "STEP " . &$a . " NOW\n";
$b = '5';
print "STEP " . &$a . " AGAIN\n";
Now, both test.pl and its new shebang handler, perldbgcall.sh, are in /tmp; and we have nc listening for debug connections on port 7234 - so we can finally open another terminal window, change directory to /tmp, and run the one-liner webserver (which will listen for web connections on port 8080) there:
cd /tmp
perl -MIO::All -e 'io(":8080")->fork->accept->(sub { $_[0] < io(-x $1 ? "./$1 |" : $1) if /^GET \/(.*) / })'
After this is done, we can go to our web browser, and request the same address, http://127.0.0.1:8080/test.pl. However, now when the webserver tries to execute the script, it will do so through perldbgcall.sh shebang - which will start perl in remote debugger mode. Thus, the script execution will pause - and so the web browser will lock, waiting for data. We can now switch to the netcat terminal, and we should see the familiar Perl debugger text - however, output through nc:
$ nc -l 7234
Loading DB routines from perl5db.pl version 1.32
Editor support available.
Enter h or `h h' for help, or `man perldebug' for more help.
main::(-e:1): do './test.pl'
DB<1> r
main::(./test.pl:29): $b = '4';
DB<1>
As the snippet shows, we now basically use nc as a "terminal" - so we can type r (and Enter) for "run" - and the script will run up to the breakpoint statement (see also In perl, what is the difference between $DB::single = 1 and 2?), before stopping again (note that at that point, the browser will still lock).
So, now we can, say, step through the rest of test.pl, through the nc terminal:
....
main::(./test.pl:29): $b = '4';
DB<1> n
main::(./test.pl:30): print "STEP " . &$a . " NOW\n";
DB<1> n
main::(./test.pl:31): $b = '5';
DB<1> n
main::(./test.pl:32): print "STEP " . &$a . " AGAIN\n";
DB<1> n
Debugged program terminated. Use q to quit or R to restart,
use o inhibit_exit to avoid stopping after program termination,
h q, h R or h o to get additional info.
DB<1>
... however, also at this point, the browser locks and waits for data. Only after we exit the debugger with q:
DB<1> q
$
... does the browser stop locking - and finally displays the (complete) output of test.pl:
YEAH hello 2 there CMON
CMON hello 3 there YEAH
STEP hello 4 there NOW
STEP hello 5 there AGAIN
Of course, this kind of debug can be done even without running the web server - however, the neat thing here, is that we don't touch the web server at all; we trigger execution "natively" (for CGI) from a web browser - and the only change needed in the CGI script itself, is the change of shebang (and of course, the presence of the shebang wrapper script, as executable file in the same directory).
Well, hope this helps someone - I sure would have loved to have stumbled upon this, instead of writing it myself :)
Cheers!
For me, I use Log::Log4perl. It's quite useful and easy.
use Log::Log4perl qw(:easy);
Log::Log4perl->easy_init( { level => $DEBUG, file => ">>d:\\tokyo.log" } );
my $logger = Log::Log4perl::get_logger();
$logger->debug("your log message");
Honestly you can do all the fun stuff above this post.
ALTHOUGH, the simplest and most proactive solution I found was to just "print it".
For example:
(Normal code)
`$somecommand`;
To see if it's doing what I really want it to do:
(Troubleshooting)
print "$somecommand";
It is probably also worth mentioning that Perl will always tell you on which line the error occurs when you execute the Perl script from the command line (an SSH session, for example).
I will usually do this if all else fails. I will SSH into the server and manually execute the Perl script. For example:
% perl myscript.cgi
If there is a problem then Perl will tell you about it. This debugging method does away with any file-permission issues as well as web browser or web server issues.
You can run the Perl CGI script in a terminal using the command below:
$ perl filename.cgi
It interprets the code and prints the result, including the HTML code.
It will report any errors.

Bitbake append file to reconfigure kernel

I'm trying to reconfigure some .config variables to generate a modified kernel with wifi support enabled. The native layer/recipe for the kernel is located in this directory:
meta-layer/recipes-kernel/linux/linux-yocto_3.19.bb
First I reconfigure the native kernel to add wifi support (for example, adding CONFIG_WLAN=y):
$ bitbake linux-yocto -c menuconfig
After that, I generate a "fragment.cfg" file:
$ bitbake linux-yocto -c diffconfig
I have created this directory into my custom-layer:
custom-layer/recipes-kernel/linux/linux-yocto/
I have copied the "fragment.cfg" file into this directory:
$ cp fragment.cfg custom-layer/recipes-kernel/linux/linux-yocto/
I have created an append file to customize the native kernel recipe:
custom-layer/recipes-kernel/linux/linux-yocto_3.19.bbappend
This is the content of this append file:
FILESEXTRAPATHS_prepend:="${THISDIR}/${PN}:"
SRC_URI += "file://fragment.cfg"
After that I execute the kernel compilation:
$ bitbake linux-yocto -c compile -f
After this command, "fragment.cfg" file can be found into this working directory:
tmp/work/platform/linux-yocto/3.19-r0
However none of the expected variables is active on the .config file (for example, CONFIG_WLAN is not set).
How can I debug this issue? What am I doing wrong?
When adding this configuration you want to use _append in your statement, such as (note the leading space inside the quotes, since _append does not insert one for you):
SRC_URI_append = " file://fragment.cfg"
After analyzing different links and solutions proposed on different resources, I finally found the link https://community.freescale.com/thread/376369, which points to a nasty but working patch consisting of adding this function at the end of the append file:
do_configure_append() {
    cat ${WORKDIR}/*.cfg >> ${B}/.config
}
It works, but I expected Yocto to manage all this stuff. It would be nice to know what is wrong with the proposed solution. Thank you in advance!
If your recipe is based on kernel.bbclass then fragments will not work. You need to inherit kernel-yocto.bbclass.
You can also use the merge_config.sh script, which is present in the kernel sources. I did something like this:
do_configure_append () {
    ${S}/scripts/kconfig/merge_config.sh -m -O ${WORKDIR}/build ${WORKDIR}/build/.config ${WORKDIR}/*.cfg
}
Well, unfortunately, not a real answer... as I haven't been digging deep enough.
This was working alright for me on a Daisy-based build; however, when updating the build system to Jethro or Krogoth, I get the same issue as you.
Issue:
When adding a fragment like
custom-layer/recipes-kernel/linux/linux-yocto/cdc-ether.cfg
The configure step of the linux-yocto build won't find it. However, if you move it to:
custom-layer/recipes-kernel/linux/linux-yocto/${MACHINE}/cdc-ether.cfg
it'll work as expected. And it's a slightly less hackish way of getting it to work.
If anyone comes by, this is working on jethro and sumo:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI_append = " \
    file://fragment.cfg \
"
FILESEXTRAPATHS documentation says:
Extends the search path the OpenEmbedded build system uses when looking for files and patches as it processes recipes and append files. The directories BitBake uses when it processes recipes are defined by the FILESPATH variable, and can be extended using FILESEXTRAPATHS.

Torque qsub: Changing the output/error file destination doesn't work

If I do: qsub myscript.sh
then it creates myscript.sh.e12 and myscript.sh.o12 files in the script's path.
But if I do: qsub -o /tmp/my.out myscript.sh
then there is nothing in /tmp, and in the script's path there is only the myscript.sh.e12 file.
The output file is lost during the move, and I don't know why.
I also tried #PBS -o in the PBS file, but got the same result.
Thanks for your help.
Torque 2.5.7
RHEL 6.2
short answer: don't write output to /tmp/; write to some space you own, preferably with a unique path.
long answer: /tmp/ is ambiguous. Remember: the whole point of using a distributed resource manager is to run a job over multiple, or at least multiply assignable, compute resources. But each such device will almost certainly have its own /tmp/, and:
you have no way of knowing to which one your output was written
you may have no rights on the arbitrary_device:/tmp/ to which your output was written
So don't write output to /tmp/.
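For example (a sketch; the paths are just illustrations, so use a directory you own on a filesystem the compute nodes can see):
$ qsub -o $HOME/jobs/myscript.out -e $HOME/jobs/myscript.err myscript.sh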

Make: Redo some targets if configuration changes

I want to reexecute some targets when the configuration changes.
Consider this example:
I have a configuration variable (that is either read from environment variables or a config.local file):
CONF:=...
Based on this variable CONF, I assemble a header file conf.hpp like this:
conf.hpp:
	buildConfHeader $(CONF)
Now, of course, I want to rebuild this header if the configuration variable changes, because otherwise the header would not reflect the new configuration. But how can I track this with make? The configuration variable is not tied to a file, as it may be read from environment variables.
Is there any way to achieve this?
I have figured it out. Hopefully this will help anyone having the same problem:
I build a file name from the configuration itself, so if we have
CONF:=a b c d e
then I create a configuration identifier by replacing the spaces with underscores, i.e.,
null:=
space:= $(null) #
CONFID:= $(subst $(space),_,$(strip $(CONF))).conf
which will result in CONFID=a_b_c_d_e.conf
Now, I use this $(CONFID) as dependency for the conf.hpp target. In addition, I add a rule for $(CONFID) to delete old .conf files and create a new one:
$(CONFID):
	rm -f *.conf # remove old .conf files, -f so no error when no .conf files are found
	touch $(CONFID) # create a new file with the right name
conf.hpp: $(CONFID)
	buildConfHeader $(CONF)
Now everything works fine. The file with name $(CONFID) tracks the configuration used to build the current conf.hpp. If the configuration changes, then $(CONFID) will point to a non-existent .conf file. Thus, the first rule will be executed, the old .conf file will be deleted, and a new one will be created. The header will be updated. Exactly what I want :)
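Putting the pieces together, a minimal complete Makefile along these lines might look like this (buildConfHeader stands in for whatever tool generates the header):
# configuration, read from the environment or a config.local file
CONF := a b c d e

# build a marker file name from the configuration, e.g. a_b_c_d_e.conf
null :=
space := $(null) #
CONFID := $(subst $(space),_,$(strip $(CONF))).conf

# when the configuration changes, $(CONFID) names a file that does not
# exist yet, so this rule fires: drop stale markers and create the new one
$(CONFID):
	rm -f *.conf
	touch $(CONFID)

# the header depends on the marker, so it is rebuilt exactly when the
# configuration changes
conf.hpp: $(CONFID)
	buildConfHeader $(CONF)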
There is no way for make to know what to rebuild if the configuration changed via a macro or environment variable.
You can, however, use a target that simply updates the timestamp of conf.hpp, which will force it to always be rebuilt:
conf.hpp: confupdate
	buildConfHeader $(CONF)
confupdate:
	@touch conf.hpp
However, as I said, conf.hpp will always be built, meaning any targets that depend upon it will need to be rebuilt as well. A much more friendly solution is to generate the makefile itself. CMake or the GNU Autotools are good for this, though you sacrifice a lot of control over the makefile. You could also use a build script that creates the makefile, but I'd advise against this, since there exist tools that will let you build one much more easily.