How to move from gitlab source base to gitlab omnibus? - mysql

I am trying to move gitlab-ce 8.5 installed from source to gitlab-ce 8.15 omnibus. We were using MySQL in the source installation, but now we have to use psql with gitlab-ce omnibus. When I tried to take a backup, it failed because of some empty repos.
Question: Is there any alternative way to move from the source installation to omnibus with a full backup?

I have moved GitLab from a source installation to omnibus. You can use the link below to convert the DB dump from MySQL to psql.
https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/update/mysql_to_postgresql.md
I created an archive of the repos manually, copied it to the gitlab omnibus server, and restored it under /var/opt/gitlab/git-data/repositories/.
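For reference, the manual copy can be as simple as the following sketch (paths are the defaults for a source installation and an omnibus installation; the hostname is an assumption, adjust to your setup):
# on the old (source-based) server: archive the existing repositories
cd /home/git
tar -czf repositories.tar.gz repositories/
# copy the archive to the omnibus server
scp repositories.tar.gz root@omnibus-server:/var/opt/gitlab/git-data/
# on the omnibus server: extract it next to the existing repositories directory
cd /var/opt/gitlab/git-data
tar -xzf repositories.tar.gz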
After these steps, copy the script below to /var/opt/gitlab/git-data/xyz.sh and execute it to update the hooks.
#!/bin/bash
for i in repositories/* ; do
  if [ -d "$i" ]; then
    for o in "$i"/* ; do
      if [ -d "$o" ]; then
        # replace the old hooks symlink with one pointing at the omnibus gitlab-shell hooks
        rm "$o/hooks"
        # change the paths if required
        ln -s "/opt/gitlab/embedded/service/gitlab-shell/hooks" /var/opt/gitlab/git-data/"$o"/hooks
        echo "HOOKS CHANGED ($o)"
      fi
    done
  fi
done
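A minimal way to run it (assuming the script was saved as xyz.sh as described above; it must be run from /var/opt/gitlab/git-data so the relative repositories/* paths resolve):
cd /var/opt/gitlab/git-data
sudo chmod +x xyz.sh
sudo ./xyz.sh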
Note: The repositories must be owned by git:git.
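If the ownership got lost during the copy, something along these lines should restore it (standard chown, path as used above):
sudo chown -R git:git /var/opt/gitlab/git-data/repositories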
Some useful commands during the migration:
sudo gitlab-ctl start postgresql   # start only the bundled PostgreSQL service
sudo gitlab-psql                   # open a psql session against the bundled PostgreSQL
Feel free to comment if you face 5xx error codes on the GitLab page.
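If you do hit 5xx errors after the migration, a reconfigure plus the built-in checks is usually the first thing to try (standard omnibus commands):
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
sudo gitlab-rake gitlab:check SANITIZE=true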


How to delete a file in persistent volume through CLI in Openshift

I'm trying to delete a file that is stored in a persistent volume through the CLI. I know the path but I'm not sure how to delete the file through the CLI.
The reason I want to do it through the CLI is that I am automating a workflow that triggers a PowerShell script, which runs the OpenShift CLI to delete a file in the volume and scale down.
How about using the Executing Remote Commands feature (oc exec) to remove the file, as follows?
For example,
# oc exec <pod name> -- rm -f /path/to/file.txt
I hope it helps you.
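In the automated workflow described in the question, the two steps might be chained like this sketch (the pod and deployment config names are assumptions):
# remove the file inside the running pod
oc exec my-app-pod -- rm -f /path/to/file.txt
# then scale the deployment config down
oc scale dc/my-app --replicas=0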

Openshift 3 - Overriding .s2i/bin files - assemble & run scripts

I wanted clarification on the possible scripts that can be added to the .s2i/bin directory in my project repo.
The docs say when you add these files they will override the default files of the same name when the project is built. For example, if I place my own "assemble" file in the .s2i/bin directory, will the default assemble file run also, or be totally replaced by my script? What if I want some of the behavior of the default file? Do I have to copy the default "assemble" contents into my file so both will be executed?
You will need to call the original "assemble" script from your own, similar to this:
#!/bin/bash -e
# The assemble script builds the application artifacts from a source and
# places them into appropriate directories inside the image.
# Execute the default S2I script
source ${STI_SCRIPTS_PATH}/assemble
# You can write S2I scripts in any programming language, as long as the
# scripts are executable inside the builder image.
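If you want your own steps in addition to the default behavior, the same pattern works before or after sourcing the default script; for example (a sketch, the echo lines are just illustrations of where custom work would go):
#!/bin/bash -e
# custom preparation before the default build steps
echo "---> Running custom pre-assemble step"
# run the default S2I assemble logic
source ${STI_SCRIPTS_PATH}/assemble
# custom steps after the default build steps
echo "---> Running custom post-assemble step"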
Using OpenShift, I want to execute my own run script (run).
So, in the src of my application I added a file at ./s2i/run
that slightly changes the default run file:
https://github.com/sclorg/nginx-container/blob/master/1.20/s2i/bin/run
Here is my run file
#!/bin/bash
source /opt/app-root/etc/generate_container_user
set -e
source ${NGINX_CONTAINER_SCRIPTS_PATH}/common.sh
process_extending_files ${NGINX_APP_ROOT}/src/nginx-start ${NGINX_CONTAINER_SCRIPTS_PATH}/nginx-start
if [ ! -v NGINX_LOG_TO_VOLUME -a -v NGINX_LOG_PATH ]; then
/bin/ln -sf /dev/stdout ${NGINX_LOG_PATH}/access.log
/bin/ln -sf /dev/stderr ${NGINX_LOG_PATH}/error.log
fi
#nginx will start using the custom nginx.conf from configmap
exec nginx -c /opt/mycompany/mycustomnginx/nginx-conf/nginx.conf -g "daemon off;"
Then I changed the Dockerfile to execute my run script as follows.
The CMD instruction can appear once and dictates which script is executed when the deployment pod starts.
FROM registry.access.redhat.com/rhscl/nginx-120
# Add application sources to the directory where the assemble script expects them
# and set permissions so that the container runs without root access
USER 0
COPY dist/my-portal /tmp/src
COPY --chmod=0755 s2i /tmp/
RUN ls -la /tmp
USER 1001
# Let the assemble script install the dependencies
RUN /usr/libexec/s2i/assemble
# Run script uses standard ways to run the application
#CMD /usr/libexec/s2i/run
# here we override the script that will be executed when the deployment pod starts
CMD /tmp/run
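If the image is built inside OpenShift, a binary Docker-strategy build is one way to feed this Dockerfile in (a sketch; the build name is an assumption):
oc new-build --name=my-portal --binary --strategy=docker
oc start-build my-portal --from-dir=. --follow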

Wget -i gives no output or results

I'm learning data analysis in Zeppelin, I'm a mechanical engineer so this is outside my expertise.
I am trying to download two csv files using a file that contains the urls, test2.txt. When I run it I get no output, but no error message either. I've included a link to a screenshot showing my code and the results.
When I go into Ambari Sandbox I cannot find any files created. I'm assuming the directory the file is in is where the csv files will be downloaded to. I've tried using -P as well with no luck. I've checked in man wget but it did not help.
So I have several questions:
How do I show the output from running wget?
Where is the default directory that wget stores files?
Do I need additional data in the file other than just the URLs?
Screenshot: Code and Output for %sh
Thanks for any and all help.
%sh
wget -i /tmp/test2.txt
%sh
# list the current working directory
pwd   # output: /home/zeppelin
# make a new folder, created in "tmp" because it is temporary
mkdir -p /home/zeppelin/tmp/Folder_Name
# change directory to new folder
cd /home/zeppelin/tmp/Folder_Name
# copy the URL list from HDFS (the sandbox) to the current working directory
hadoop fs -get /tmp/test2.txt /home/zeppelin/tmp/Folder_Name/
# download the URLs listed in the file
wget -i test2.txt
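To address the other two questions: wget saves files into the current working directory by default, and you can capture its output explicitly; for example (standard wget options):
# write wget's progress/messages to a log file you can inspect afterwards
wget -i test2.txt -o wget.log
# or save the downloads to an explicit directory
wget -i test2.txt -P /home/zeppelin/tmp/Folder_Name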

sshfs mapped folder changes group ownership from apache to wheel

Problem:
When I save a php file to a development server over sshfs, the group is changed from apache to wheel. Since the file belongs to my user and apache only has access through the group, this breaks the dev copy. I have to reset the group after every save.
Server file permissions:
As recommended by the Drupal community, my user owns the files but they are grouped to apache.
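In other words, the layout on the server is roughly the result of commands like these (a sketch; the username is a placeholder, the path matches the basefolder used below):
# files owned by my user, grouped to apache and group-readable
sudo chown -R myuser:apache /var/www/html
sudo chmod -R g+r /var/www/html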
How I'm mapped:
I have a script that helps me map development servers running RHEL 7.1 locally, change to the mounted folder in the terminal, open a file browser, and start PhpStorm. I've set up the following convenience function for myself to do all this in one command.
$ sshmap devsvr
Which executes this function:
function sshmap {
  if [[ -z $2 ]]; then
    basefolder="/var/www/html/"
  else
    basefolder=$2
  fi
  fusermount -uz ~/mount/$1
  mkdir -p ~/mount/$1
  sshfs me@$1.domain.ext:$basefolder ~/mount/$1
  # file browser
  nemo ~/mount/$1 &
  # change local folder to that mount point
  cd ~/mount/$1
  # start phpstorm
  pstorm $HOME/mount/$1 2>/dev/null &
}
Question:
How can I control this and prevent this change? Is there an argument in sshfs that I'm missing?
Errata:
Safe write is turned on in PhpStorm

How do I get Hudson to stop escaping parts of my shell script?

I would like to have a shell script that copies some logs from a part of my system to the Hudson workspace so I can archive them.
So right now I have
#!/bin/bash -ex
cp /directory/structure/*.log .
Hudson is kind enough to change this to
cp '/directory/structure/*.log' .
which of course fails, since there is no file literally named *.log.
So how do I get this script to work?
EDIT
So I left out the part that I was using sudo cp /path/*.log because I didn't think it would matter. Of course it does, and sudo is the issue, not Hudson.
One simple answer would be to have the shell script in a separate file, and have Hudson call that.
sudo bash -c "cp /directory/structure/*.log ."
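A sketch of the separate-script approach (the script name and its location are illustrative): because the glob is expanded inside the script's own shell, sudo never sees the literal *.log.
#!/bin/bash -ex
# copy_logs.sh - the glob expands here, inside this script's shell
cp /directory/structure/*.log .
# the Hudson "Execute shell" build step then simply calls:
# sudo /path/to/copy_logs.sh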
Throwing it out there, but haven't had a chance to try it in Hudson (so I don't know how it gets quoted):
for f in /directory/structure/*.log ; do
  cp "$f" .
done
In my simple test in a bash shell, different quoting options produce either one or multiple invocations of the copy command (either with all matching files or one at a time), but they all manage to do the copy successfully.