I'm trying to build a package which has some files under /etc that are not configuration. They are included in the conffiles automatically even if I create an empty package.conffiles in the debian directory.
How can I stop dh_installdeb from doing that?
I’m not sure I understand rafl’s answer, but dh_installdeb (as of debhelper 9.20120115ubuntu3) adds everything below /etc to conffiles nearly unconditionally: debian/conffiles adds entries but does not override the automatically generated ones.
It’s possible to override manually in debian/rules. For example, in order to prevent any files from being registered as conffiles:
override_dh_installdeb:
	dh_installdeb
	find ${CURDIR}/debian/*/DEBIAN -name conffiles -delete
(of course, indentation must be hard tab)
You can also define an upgrade rule in the preinst script (debian/<package-name>.preinst) using dpkg-maintscript-helper:
#!/bin/sh
# preinst script for <package-name>
set -e

case "$1" in
    install|upgrade)
        if dpkg-maintscript-helper supports rm_conffile 2>/dev/null; then
            dpkg-maintscript-helper rm_conffile /etc/foo/conf.d/bar <Previous package version> -- "$@"
        fi
    ;;

    abort-upgrade)
    ;;

    *)
        echo "preinst called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

exit 0
More info:
The right way to remove an obsolete conffile in a Debian package
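With newer debhelper versions, dh_installdeb can generate all of these snippets for you, including the matching postinst and postrm calls that rm_conffile also needs, from a debian/<package-name>.maintscript file instead of a hand-written preinst. A sketch using the same hypothetical path as above (leave the version placeholder as the last version that actually shipped the conffile):

```
# debian/<package-name>.maintscript
rm_conffile /etc/foo/conf.d/bar <Previous package version>
```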
Here is what I came up with as an extension of Vasiliy's answer. It effectively does what dh_installdeb does but without automatically adding /etc files. This way you regain full control again over what files are considered conffiles and what are not.
override_dh_installdeb:
	dh_installdeb
	@echo "Recreating conffiles without auto-adding /etc files"
	@for dir in ${CURDIR}/debian/*/DEBIAN; do \
		PKG=$$(basename $$(dirname $$dir)); \
		FILES=""; \
		if [ -f ${CURDIR}/debian/conffiles ]; then \
			FILES="${CURDIR}/debian/conffiles"; \
		fi; \
		if [ -f ${CURDIR}/debian/$${PKG}.conffiles ]; then \
			FILES="$$FILES ${CURDIR}/debian/$${PKG}.conffiles"; \
		fi; \
		if [ -n "$$FILES" ]; then \
			cat $$FILES | sort -u > $$dir/conffiles; \
		elif [ -f $$dir/conffiles ]; then \
			rm $$dir/conffiles; \
		fi; \
	done
(Of course, use REAL tabs if pasting into your rules file).
This answer uses Bash (or /bin/sh, which is either symlinked to Bash or is a compatible shell). There may be a way to achieve this using only Makefile-internal functions, but I'm not that good with those.
This should work even when building multiple binary packages from the same source and it respects the plain debian/conffiles as well as the package-specific debian/${pkg}.conffiles.
Originally, this answer suggested providing your own debian/conffiles file listing only the actual configuration files to be installed. Apparently that only adds more conffiles and does not override the automatically generated list.
However, I can't quite see why you'd even want that. If the files are not configuration files, the user won't edit them, so none of the automatic conffile handling will get in your way on upgrades. Also, if they're not actually config files, I'd highly recommend simply installing them somewhere other than /etc, which avoids your issue as well.
Related
I'm testing scripts for a client. For this I created ~10k files, which I uploaded to a test folder using the web UI. Then I trashed and permanently deleted this folder.
Then I added a shared folder from the client and listed all the files using /v3/files with the proper query parameters to include files from other drives.
I noticed my script was not functioning well due to a lot of 404 responses. It turns out deleting 10k files is not instantaneous for Google Drive, at least from the API's point of view: the listing still included files I had just deleted, which only disappeared later.
From what I've seen, Google Drive is able to process about 200 files/s.
I could just wait, but then I found another problem: after I deleted the shared folder and replaced it with another shared folder from my client (each containing tens of thousands of files), the file count took some time to go down, as expected. But then I saw the number slowly increase and then decrease again.
I suspect the added folder increases the count at the same time as the deletion of the other folder decreases it, but I am not sure.
Am I the only one who experienced this? Is there something in the API that I've missed that could mitigate this or at least could tell me when Google Drive has ended processing all operations?
Edit: steps to reproduce:
Code that I used to create a bunch of files:
#!/usr/bin/env bash
# create_lots_of_files.sh
mkdir -p lots_of_files
cd lots_of_files
for i in $(seq 10000); do
    FOLDER=$((i % 10))
    FILE="file_$i.txt"
    mkdir -p "$FOLDER"
    echo "$FILE" > "$FOLDER/$FILE"
done
Then upload this folder to your drive. Grab a coffee; this will take time.
Code to fetch the file ids using the API:
#!/usr/bin/env bash
# list_ids.sh <output file path> <bearer token>
set -e

# shellcheck disable=SC2128
SCRIPTDIR="$(dirname "$(realpath "$BASH_SOURCE")")"
PAGE_SIZE=1000

if [[ -z $1 ]]; then
    echo "First argument must specify a path to store the file ids"
    exit 1
fi
IDS_FILE="$1"

if [[ -z $2 ]]; then
    echo "Second argument must be the bearer token"
    exit 1
fi
ACCESS_TOKEN="$2"

if ! jq -h &> /dev/null; then
    echo "error: need to install jq: sudo apt-get install jq"
    exit 1
fi

cd "$SCRIPTDIR"

BASE_QUERY_STRING="https://www.googleapis.com/drive/v3/files\
?corpora=allDrives\
&includeItemsFromAllDrives=true\
&supportsAllDrives=true\
&pageSize=$PAGE_SIZE\
"

true > "$IDS_FILE"
while true; do
    # If pageToken is empty then it defaults to the first page
    QUERY_STRING="$BASE_QUERY_STRING&pageToken=$NEXT_PAGE_TOKEN"
    RESPONSE="$(curl \
        --silent \
        --fail \
        -H 'GData-Version: 3.0' \
        -H "Authorization: Bearer $ACCESS_TOKEN" \
        --request GET \
        "$QUERY_STRING" \
    )"
    jq -r '.files | map(select(.mimeType != "application/vnd.google-apps.folder")) | .[].id' <<< "$RESPONSE" | tee -a "$IDS_FILE"
    NEXT_PAGE_TOKEN="$(jq -r '.nextPageToken' <<< "$RESPONSE")"
    if [[ -z "$NEXT_PAGE_TOKEN" || "$NEXT_PAGE_TOKEN" = 'null' ]]; then
        break
    fi
done
Keep track of the number of files with:
while true; do date; ./list_ids.sh ids.txt '<bearer token>' | wc -l; sleep 5; done
Delete lots_of_files on your drive and watch the files count.
I can tell you that you are not the first one. In my experience, the behaviour you are reporting is expected: changes need to be replicated across all Google Workspace servers, and this delay is usually referred to as 'propagation', as mentioned in this Help Center article: https://support.google.com/drive/answer/7166529
If you share or unshare folders with a lot of files or subfolders, it might take time before all permissions change. If you change a lot of edit or view permissions at once, it might take time before you see the changes.
Although the task you are doing is different from just sharing files, due to the high volume of files and folders involved, and the fact that you are working with shared folders, you will experience a delay. As outlined in this other Help Center article, https://support.google.com/a/answer/7514107, you can expect changes to be fully applied within 24 hours.
I have assisted multiple data migrations with Google Workspace admins and this is also expected when working with large amounts of data.
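As for being told when Drive has finished: the API exposes no completion signal for this kind of propagation, but you can poll until successive listings agree. A sketch (`wait_until_stable` is a hypothetical helper, not part of any Google API; two equal samples are a heuristic, not a guarantee, so use a generous interval against the real API):

```shell
# Re-run a command until two consecutive runs print the same
# (non-empty) output, then print that stable value.
wait_until_stable() {
    prev=""
    while true; do
        cur="$("$@")"
        if [ -n "$prev" ] && [ "$cur" = "$prev" ]; then
            printf '%s\n' "$cur"
            return 0
        fi
        prev="$cur"
        sleep 1   # use a much longer interval against the real API
    done
}
```

You would then wrap the counting pipeline from the question, e.g. `wait_until_stable sh -c "./list_ids.sh ids.txt '<bearer token>' | wc -l"`.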
I am trying to move gitlab-ce 8.5 from a source installation to gitlab-ce 8.15 omnibus. We were using MySQL with the source installation, but now we have to use psql with gitlab-ce omnibus. When I tried to take a backup, it failed due to some empty repos.
Question: Is there an alternative way to move from source to omnibus with a full backup?
I have moved GitLab from a source installation to omnibus. You can use the link below to convert the db dump from MySQL to psql.
https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/update/mysql_to_postgresql.md
I created a zip file of the repos manually, copied it to the gitlab omnibus server, and restored it to /var/opt/gitlab/git-data/repositories/.
After these steps, copy the script below to /var/opt/gitlab/git-data/xyz.sh and execute it to update the hooks.
#!/bin/bash
for i in repositories/* ; do
    if [ -d "$i" ]; then
        for o in "$i"/* ; do
            if [ -d "$o" ]; then
                rm "$o/hooks"
                # change the paths if required
                ln -s "/opt/gitlab/embedded/service/gitlab-shell/hooks" "$o/hooks"
                echo "HOOKS CHANGED ($o)"
            fi
        done
    fi
done
Note: the repos' ownership should be git:git.
Some useful commands during the migration:
sudo gitlab-ctl start postgres (starts the Postgres service only)
sudo gitlab-psql (uses the bundled Postgres client)
Feel free to comment if you face 5xx error codes on the GitLab page.
I am trying to set up a data analysis pipeline in TopHat. For some reason, I cannot seem to get the program to use my designated output directory. I have tried the following commands:
--output-dir "/path" \
--output-dir /path \
-o "/path" \
-o /path \
However, all the files keep being written to ./tophat_out
Has anyone encountered a similar problem?
I ran into this problem too. Specifying all options before the genome and sequence files solved it, however.
When you build Tcl/Tk by default it creates the files
tclsh85
wish85
However, many programs call tclsh and wish. This is one fix for that:
cp tclsh85 tclsh
cp wish85 wish
However, can you simply build tclsh and wish directly, perhaps using a configure argument?
This behavior is The Right Thing as it allows several versions of the interpreter and its libraries to coexist in the system. The system, in turn, does provide a way to "bless" one of the versions as the "default" — for instance, Debian provides "alternatives". In essence, usually a symlink with the "canonical" name is created pointing to the real executable, like /usr/bin/tclsh → /usr/bin/tclsh85. With the "blessed" version available via such a symlink for applications that do not care about the precise version of the runtime, certain other applications can still pick a specific runtime version by referring to the interpreter's real executable name.
This also provides an easy way to test an existing program against an experimental runtime version: you just run /usr/bin/tclsh86 /path/to/the/script.tcl instead of just running /path/to/the/script.tcl as usually which relies on the shebang to pick the interpreter.
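The symlink approach can be sketched like this (demonstrated in a scratch directory with stand-in files; on a real system the directory would be something like /usr/local/bin and the ln commands would need root):

```shell
# Scratch-directory demo of the "blessed" versionless symlinks.
BIN_DIR=$(mktemp -d)                          # stand-in for /usr/local/bin
touch "$BIN_DIR/tclsh8.5" "$BIN_DIR/wish8.5"  # stand-ins for the real binaries
ln -sfn tclsh8.5 "$BIN_DIR/tclsh"             # tclsh -> tclsh8.5
ln -sfn wish8.5 "$BIN_DIR/wish"               # wish  -> wish8.5
readlink "$BIN_DIR/tclsh"                     # -> tclsh8.5
```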
A long time ago, the builds of Tcl and Tk used to work in the way you describe. It was changed to the current system (putting the version number in the name) to allow multiple versions to coexist more smoothly; this was a very strong demand from the user community at the time.
Symlink the version-less filenames to the real ones (or use the mechanism of your distribution) if you want to give up control over which version to use. Alternatively, use this (fairly horrible) piece of mixed shell/Tcl code at the top of your files:
#!/bin/sh
# Try with a versionless name \
exec tclsh "$0" ${1+"$@"}
# Otherwise, try with tclsh8.6 \
exec tclsh8.6 "$0" ${1+"$@"}
# Otherwise, try with tclsh8.5 \
exec tclsh8.5 "$0" ${1+"$@"}
# Otherwise, try with tclsh8.4 \
exec tclsh8.4 "$0" ${1+"$@"}
# Otherwise... well... give up! \
echo "no suitable Tcl interpreter" >&2; exit 1
This relies on the fact that Tcl, unlike the Unix shell, treats a \ at the end of a comment line as meaning that the comment extends onto the next line.
(Myself? I don't usually put in #! lines these days; I don't consider it an imposition to write tclsh8.5 myscript.tcl.)
I would like to add a prefix to all files and folders.
Example:
I have
Hi.jpg
1.txt
folder/
this.file_is.here.png
another_folder.ok/
I would like to add the prefix "PRE_":
PRE_Hi.jpg
PRE_1.txt
PRE_folder/
PRE_this.file_is.here.png
PRE_another_folder.ok/
Thanks to Peter van der Heijden, here's one that'll work for filenames with spaces in them:
for f in * ; do mv -- "$f" "PRE_$f" ; done
("--" is needed to succeed with files that begin with dashes, whose names would otherwise be interpreted as switches for the mv command)
Use the rename script this way:
$ rename 's/^/PRE_/' *
There are no problems with metacharacters or whitespace in filenames.
To add a prefix or suffix to files (or directories), you can use the simple and powerful xargs:
ls | xargs -I {} mv {} PRE_{}
ls | xargs -I {} mv {} {}_SUF
This uses the parameter-replacing option of xargs, -I. You can get more detail from the man page.
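Note that a plain `ls | xargs` pipeline splits names on whitespace. A whitespace-safe variant can be sketched with null-delimited find output (assumes GNU find for -printf; shown here in a scratch directory so it's safe to try):

```shell
# Scratch-directory demo; the real pipeline is the find | xargs line.
demo=$(mktemp -d)
touch "$demo/plain.txt" "$demo/has space.txt"
cd "$demo"
# -printf '%f\0' emits each basename null-terminated; xargs -0 -I {}
# substitutes the whole name as one argument, spaces included.
find . -maxdepth 1 -mindepth 1 -printf '%f\0' | xargs -0 -I {} mv -- {} "PRE_{}"
ls
```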
This could be done running a simple find command:
find * -maxdepth 0 -exec mv {} PRE_{} \;
The above command will prefix all files and folders in the current directory with PRE_.
To add a prefix to all files and folders in the current directory using util-linux's rename (as opposed to prename, the perl variant from Debian and certain other systems), you can do:
rename '' <prefix> *
This finds the first occurrence of the empty string (which is found immediately) and then replaces that occurrence with your prefix, then glues on the rest of the file name to the end of that. Done.
For suffixes, you need to use the perl version or use find.
If you have Ruby (1.9+):
ruby -e 'Dir["*"].each{|x| File.rename(x,"PRE_"+x) }'
with Perl:
perl -e 'rename $_, "PRE_$_" for <*>'
On my system, I don't have the rename command. Here is a simple one-liner. It finds all the HTML files recursively and adds prefix_ in front of their names:
for f in $(find . -name '*.html'); do mv "$f" "$(dirname "$f")/prefix_$(basename "$f")"; done
Here is a simple script that you can use. I like using the non-standard module File::chdir to handle managing cd operations, so to use this script as-is you will need to install it (sudo cpan File::chdir).
#!/usr/bin/perl

use strict;
use warnings;

use File::Copy;
use File::chdir; # allows cd-ing by use of $CWD, much easier but needs CPAN module

die "Usage: $0 dir prefix" unless (@ARGV >= 2);
my ($dir, $pre) = @ARGV;

opendir(my $dir_handle, $dir) or die "Cannot open directory $dir";
my @files = readdir($dir_handle);
closedir($dir_handle);

$CWD = $dir; # cd to the directory, needs File::chdir

foreach my $file (@files) {
    next if ($file =~ /^\.+$/); # avoid folders . and ..
    next if ($0 =~ /$file/);    # avoid moving this script if it is in the directory
    move($file, $pre . $file) or warn "Cannot rename file $file: $!";
}
This will prefix your files in their directory.
${f%/*} is everything up to the last slash / -> the directory.
${f##*/} is everything after the last slash / -> the filename without the path.
So that's how it goes:
for f in $(find /directory/ -type f); do
    mv -v "$f" "${f%/*}/$(date +%Y%m%d)_Prefix_${f##*/}"
done
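The two expansions are easy to check on a sample path:

```shell
f="/directory/sub/report.txt"
echo "${f%/*}"     # -> /directory/sub  (strip the shortest trailing /* match)
echo "${f##*/}"    # -> report.txt      (strip the longest leading */ match)
```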
Open cmd, set the directory to the folder, and run the following command:
for /f "tokens=*" %a in ('dir /b') do ren "%a" "00_%a"
00_ is the prefix in "00_%a", so you can change it according to your requirements. (If you run this from a batch file rather than the command line, double the percent signs: %%a.)
It will rename all of the files in the selected folder.