It's been a while since I had to modify an application. Today, while doing another deploy, I got:
Counting objects: 16, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (16/16), done.
Writing objects: 100% (16/16), 1.44 KiB | 0 bytes/s, done.
Total 16 (delta 11), reused 0 (delta 0)
remote: Stopping Cron cartridge
remote: CLIENT_RESULT: cron scheduling service is already disabled for gear <<OPENSHIFHASH>>
remote: Stopping PHP 5.4 cartridge (Apache+mod_php)
remote: Stopping PHPMyAdmin cartridge
remote: Operation not permitted - /var/lib/openshift/<<OPENSHIFHASH>>/app-deployments/2016-09-25_11-55-15.153/repo/utils/PkgInfo.pyc
To ssh://<<OPENSHIFHASH>>@<<APP>>.rhcloud.com/~/git/app.git/
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'ssh://<<OPENSHIFHASH>>@<<APP>>.rhcloud.com/~/git/app.git/'
Then I logged in using ssh and found out that the file PkgInfo.pyc is in fact owned by root, when it should be owned by the app, like this:
-rw-------. 1 <<OPENSHIFHASH>> <<OPENSHIFHASH>> 2929 Sep 25 11:55 HashCache.pyc
-rw-------. 1 <<OPENSHIFHASH>> <<OPENSHIFHASH>> 0 Sep 25 11:55 __init__.py
-rw-------. 1 root root 101 Oct 23 00:10 __init__.pyc
-rw-------. 1 <<OPENSHIFHASH>> <<OPENSHIFHASH>> 8991 Sep 25 11:55 MultiPart.py
-rw-------. 1 <<OPENSHIFHASH>> <<OPENSHIFHASH>> 8051 Sep 25 11:55 MultiPart.pyc
-rw-------. 1 <<OPENSHIFHASH>> <<OPENSHIFHASH>> 646 Sep 25 11:55 PkgInfo.py
-rw-------. 1 root root 613 Oct 23 00:10 PkgInfo.pyc
-rw-------. 1 <<OPENSHIFHASH>> <<OPENSHIFHASH>> 6607 Sep 25 11:55 Progress.py
Why would the owner of that file have changed? It was updated on Oct 23, but I (as the app user) am not able to change ownership to root.
Any ideas?
I believe I have the same issue, though with different cartridges and application architecture, so the error log is different. OpenShift support suggested filing a bug about it.
My current workaround uses manual deployments:
Disable auto deployment with $ rhc app-configure <YOUR_APP> --no-auto-deploy
$ git push origin
$ ssh <YOUR_APP>@<YOUR_APP>.rhcloud.com
$ gear build
Run gear start for all your cartridges - make sure that they run successfully
This is not optimal, but I hope it helps.
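Put together, the whole cycle looks roughly like this (a sketch only; <YOUR_APP> is a placeholder, and gear behaviour may vary between cartridge versions):
rhc app-configure <YOUR_APP> --no-auto-deploy    # one-time: disable auto deployment on push
git push origin master                           # push the code without triggering a deploy
ssh <YOUR_APP>@<YOUR_APP>.rhcloud.com            # log in to the gear
gear build                                       # build the new deployment manually
gear start                                       # start the cartridges; check each one comes up cleanly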
Related
I am trying to run an exe file on QEMU in system mode. For that, I compiled a kernel, buildroot and qemu based on "Prepare the environment for developing Linux kernel with qemu" (I also created a shared folder between the host and the qemu guest following "How to share a directory with the host without networking in QEMU"). Now I want to compile and run an exe on the qemu system... After compiling the pcimem project on the host, I opened qemu, moved to the shared folder and tried to run it, but I got -sh: ./pcimem: not found. When I run ls in qemu, I can see the files and that they have the right permissions:
# ls -la
total 80
drwxrwxr-x 3 1006 1001 4096 Oct 2 17:32 .
drwxrwxr-x 3 1006 1001 4096 Oct 2 17:31 ..
drwxrwxr-x 8 1006 1001 4096 Oct 2 17:31 .git
-rw-rw-r-- 1 1006 1001 59 Oct 2 17:31 .gitignore
-rw-rw-r-- 1 1006 1001 18092 Oct 2 17:31 LICENSE.txt
-rw-rw-r-- 1 1006 1001 80 Oct 2 17:31 Makefile
-rw-rw-r-- 1 1006 1001 5691 Oct 2 17:31 README
-rwxrwxr-x 1 1006 1001 21672 Oct 2 17:32 pcimem
-rw-rw-r-- 1 1006 1001 5283 Oct 2 17:31 pcimem.c
What am I doing wrong? How can I run an exe on QEMU in system mode?
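One thing worth checking first (a generic sketch of the usual diagnosis, not specific to this setup): -sh: ./pcimem: not found from BusyBox usually means the kernel could not execute the binary at all, for example because it was built for the host architecture or needs a dynamic loader that is not present in the guest. Comparing the binary's target with the guest's architecture narrows it down:
file pcimem      # on the host: e.g. "ELF 64-bit LSB executable, x86-64, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2"
uname -m         # in the guest: e.g. x86_64, arm, aarch64
ls /lib /lib64   # in the guest: is the interpreter reported by `file` actually present?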
Update:
I found out here that I am trying to invoke the exe file from the BusyBox shell, and apparently it can't be done... so I tried to invoke the following command:
ash ./pcimem
but then I get the following error:
./pcimem: line 1: can't create #: Operation not permitted
./pcimem: line 1:ELF: not found
./pcimem: line 2: syntax error: unexpected "("
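For what it's worth, those ash errors are the shell trying to interpret the raw ELF bytes as script text, which points to the same root cause: the kernel refused to execute the binary directly. A common workaround in a minimal buildroot guest (just a sketch, assuming the guest's libc or architecture differs from the host's) is to rebuild pcimem statically, or with the cross toolchain that matches the guest:
gcc -static -o pcimem pcimem.c                        # host and guest share an architecture: drop the libc dependency
# arm-linux-gnueabihf-gcc -static -o pcimem pcimem.c  # example cross build if the guest is 32-bit ARM
./pcimem                                              # then run it again from the shared folder inside the guest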
I am using a Raspberry Pi 3 with OSMC as the operating system, along with Debian Stretch and nginx, and I installed MariaDB 10.2 manually following some instructions I found somewhere a while back.
I have changed the datadir for MariaDB to /media/USBHDD2/shared/mysql
When I boot or reboot the Pi, MariaDB fails to start. Before, when I had the default datadir = /var/lib/mysql, it was all fine. If I change it back, it is fine.
However, if I log in to the console I can successfully start it by using
service mysql start
Note that I am using 'service' rather than 'systemctl' - the latter does not work. The files mariadb.service and mysql.service do not exist anywhere.
In /etc/init.d I find two files, mysql and myswql, which seem to be identical. If I remove myswql from the directory, MariaDB won't start at all. I have tried editing these files by putting, for example, a sleep 15 at the beginning, but to no avail. I have read all sorts of solutions about testing whether USBHDD2 is mounted, e.g. using
while ! test -f /media/USBHDD2/shared/test.txt
do
sleep 1
done
which I tried in the /etc/init.d/mysql and myswql files, and also in rc.local before calling for the start of mysql.
But that doesn't work either.
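A variant of that wait loop, shown here only as a sketch (the path and timeout are assumptions), tests the mount point itself rather than a marker file:
# wait up to ~60 seconds for the data drive to be mounted before starting mysql
tries=0
until mountpoint -q /media/USBHDD2 || [ "$tries" -ge 60 ]; do
    sleep 1
    tries=$((tries+1))
done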
I also renamed the links in rc?.d to S99mysql so it starts after everything else, but still no joy.
I have spent two full days on this to no avail. What do I need to do to get this working so that mysql starts on boot?
The file system is NTFS.
The output from ls -la /media/USBHDD2/shared/mysql is as follows:
total 176481
drwxrwxrwx 1 root root 4096 Mar 27 11:41 .
drwxrwxrwx 1 root root 4096 Mar 27 13:06 ..
-rwxrwxrwx 1 root root 16384 Mar 27 11:41 aria_log.00000001
-rwxrwxrwx 1 root root 52 Mar 27 11:41 aria_log_control
-rwxrwxrwx 1 root root 0 Nov 3 2016 debian-10.1.flag
-rwxrwxrwx 1 root root 12697 Mar 27 11:41 ib_buffer_pool
-rwxrwxrwx 1 root root 50331648 Mar 27 11:41 ib_logfile0
-rwxrwxrwx 1 root root 50331648 Mar 26 22:02 ib_logfile1
-rwxrwxrwx 1 root root 79691776 Mar 27 11:41 ibdata1
drwxrwxrwx 1 root root 32768 Mar 25 18:37 montegov_admin
-rwxrwxrwx 1 root root 0 Nov 3 2016 multi-master.info
drwxrwxrwx 1 root root 20480 Sep 3 2019 mysql
drwxrwxrwx 1 root root 0 Sep 3 2019 performance_schema
drwxrwxrwx 1 root root 86016 Mar 25 20:06 rentmaxpro_wp187
drwxrwxrwx 1 root root 0 Sep 3 2019 test
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_admin
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_demo
drwxrwxrwx 1 root root 40960 Mar 25 21:05 trustedhomerenta_meta
drwxrwxrwx 1 root root 36864 Mar 25 21:25 trustedhomerenta_montego
drwxrwxrwx 1 root root 36864 Mar 26 20:37 trustedhomerenta_testmontego
The problem is that the external drive is formatted as NTFS.
MySQL requires the files and directories to be owned by mysql:mysql, but since NTFS does not have the same system of owners and groups as Linux, the Linux mount process assigns its own owner and group to the file structure when mounting the drive. By default this ends up being root:root, so MySQL cannot use them.
NTFS does not allow chown to work, so there is no way to change the ownership away from root.
One solution is to back up all the files, repartition as EXT4, and then restore all the files.
The solution I finally used was to specify mysql as the owner and group at the time the drive is mounted. Thus my /etc/fstab file was changed to:
UUID=C2CA68D9CA68CB6D /media/USBHDD2 ntfs users,exec,uid=mysql,gid=mysql 0 2
and now mysql starts properly at boot.
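In case it helps anyone else, the effect of those mount options can be double-checked after a reboot with something like this (a sketch; the uid/gid will show up as numeric IDs):
blkid                                  # confirm the UUID of the NTFS partition
findmnt /media/USBHDD2                 # the active mount options should include uid= and gid=
ls -la /media/USBHDD2/shared/mysql     # files should now appear as mysql:mysql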
phew ;-)
Thanks @danblack for getting me thinking in the right direction.
I have been having problems logging into MySQL through phpMyAdmin so I decided to uninstall both on my computer. To my surprise, I do not even have the permissions to uninstall MySQL.
I tried to uninstall MySQL using the command "sudo rm /usr/local/mysql" and the terminal returned that I did not have permission.
I looked up a StackOverflow question about not having access to /usr/local, and a reply asked the user to do sudo chown, which my terminal said I did not have permission to do.
I did have permission to do ls -la /usr/local, and this is what the terminal returned:
total 0
drwxr-xr-x 16 root wheel 512 Jun 6 22:32 .
drwxr-xr-x# 9 root wheel 288 Mar 27 2018 ..
-rw-r--r-- 1 x wheel 0 Feb 19 23:21 .com.apple.installer.keep
drwxrwxr-x 4 x admin 128 Jun 7 2018 Cellar
drwxrwxr-x 17 x admin 544 May 22 2018 Homebrew
drwxrwxr-x 61 x admin 1952 Jun 7 2018 bin
drwxrwxr-x 3 x admin 96 May 22 2018 etc
drwxr-xr-x 10 x wheel 320 May 22 2018 git
drwxrwxr-x 3 x admin 96 May 22 2018 lib
drwxrwxr-x 5 x wheel 160 May 22 2018 libexec
lrwxr-xr-x 1 x wheel 30 Jun 6 22:32 mysql -> mysql-5.7.26-macos10.14-x86_64
drwxr-xr-x 13 x wheel 416 Jun 6 22:32 mysql-5.7.26-macos10.14-x86_64
drwxrwxr-x 4 x admin 128 Jun 7 2018 opt
drwxr-xr-x 3 x wheel 96 May 22 2018 remotedesktop
drwxrwxr-x 5 x admin 160 May 22 2018 share
drwxrwxr-x 3 x admin 96 May 22 2018 var
I am shocked about the "remotedesktop" line, but I hope it is innocent considering it shows up the same day as Homebrew. Please help me understand these results and what to do next.
You can't remove a directory with rm unless you use the recursive flag (-r). You should get an "is a directory" error, not a permissions error. You'll probably want to include the "force" flag (-f) to avoid having to confirm each deletion.
That's also a symlink so you need to remove the specific instance of it, or adapt your command to remove anything MySQL-ish using a wildcard:
rm -rf /usr/local/mysql*
As always, pay extremely close attention to what you're doing when using recursive deletes and sudo. A single space in the wrong place can utterly ruin your day. Triple check before executing these commands.
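One cautious way to do that (a suggestion only, not part of the original command) is to expand the wildcard with ls first, confirm that it matches only MySQL items, and only then delete:
ls -d /usr/local/mysql*          # review exactly what the wildcard matches
sudo rm -rf /usr/local/mysql*    # run the delete only once the listing looks right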
I'm working in a company where they are using KVM virtualisation:
[root@601 log]# virsh list --all --title
Id Name State Title
----------------------------------------------------------------------------------
2 reporting-pilosa07 running 10.3.6.172
3 reporting-pilosa09 running 10.3.6.173
4 reporting-pilosa11 running 10.3.6.174
5 reporting-pilosa13 running 10.3.6.175
6 reporting-pilosa05 running 10.3.6.171
The VMs are running, but from time to time they die for some reason, and I would like to look at the individual VM logs:
[root@601 qemu]# ls -ltr
total 32
-rw------- 1 root root 2341 Oct 15 2018 reporting-pilosa07.log
-rw------- 1 root root 2341 Oct 15 2018 reporting-pilosa09.log
-rw------- 1 root root 2341 Oct 15 2018 reporting-pilosa11.log
-rw------- 1 root root 2341 Oct 15 2018 reporting-pilosa13.log
-rw------- 1 root root 4885 Nov 12 2018 reporting-pilosa05.log
-rw------- 1 root root 7181 Jul 25 04:14 offlineonboarder02.log
[root@601 qemu]# pwd
/var/log/libvirt/qemu
The logs have not been written to for about a year. Where can I re-enable the logs so that I can observe why the VMs went dead?
Thanks.
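For reference, a sketch of how libvirt logging normally behaves (generic libvirt behaviour, not specific to this host): the per-domain files in /var/log/libvirt/qemu/<name>.log are only written when a domain starts, stops, or its QEMU process prints to stderr, so a 2018 timestamp may simply mean those VMs have not been restarted since then. More verbose daemon-side logging can be enabled in /etc/libvirt/libvirtd.conf:
ps -eo etime,cmd | grep [q]emu    # how long each QEMU process has been running (does it match the stale log dates?)
# in /etc/libvirt/libvirtd.conf (sketch; log_level 1=debug ... 4=error):
#   log_level = 2
#   log_outputs = "2:file:/var/log/libvirt/libvirtd.log"
systemctl restart libvirtd        # apply the new logging settings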
Sorry in advance for my bad English...
I'm trying to clone an hg repository using Eclipse on Ubuntu.
I always get the following error message, which is exactly the same when I perform an "hg clone" command:
"Operation not permitted: <workspace_folder>/<project_name>/.hg/requires"
Here is the .hg content:
ls -al .hg/
total 40K
drwxrwxr-x 4 www-data web 4.0K Apr 18 09:33 .
drwxrwxr-x 3 www-data svn 4.0K Mar 5 17:52 ..
-rwxrwxr-x 1 www-data web 57 Mar 5 17:48 00changelog.i
drwxrwxr-x 2 www-data web 4.0K Apr 18 09:33 cache
-rwxrwxr-x 1 www-data web 40 Mar 5 17:51 dirstate
-rwxrwxrwx 1 www-data web 40 Apr 18 21:45 requires
drwxrwxr-x 3 www-data web 4.0K Apr 18 09:33 store
-rw-rw-r-- 1 nico web 0 Apr 18 09:33 undo.bookmarks
-rw-rw-r-- 1 nico web 7 Apr 18 09:33 undo.branch
-rw-rw-r-- 1 nico web 38 Apr 18 09:33 undo.desc
-rw-rw-r-- 1 nico web 40 Apr 18 09:33 undo.dirstate
And here is the .hg/requires file content:
revlogv1
store
fncache
dotencode
Here is the output of the hg clone command:
running ssh nico@www.there.com "hg -R /var/dev/projects/my_hg_project serve --stdio"
sending hello command
sending between command
nico@www.there.com's password:
remote: 145
remote: capabilities: lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch stream unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024
remote: 1
destination directory: my_hg_project
abort: Operation not permitted: /media/data/workspaces/my_workspace/my_hg_project/.hg/requires
I tried many things such as chmod and chown... I'm not a Linux expert, so I googled my error message, but there are not many results.
Does anyone have an idea about this?
Thank you very much in advance.
You're right to be thinking about file permissions, because that's the root cause, but I'm a little confused. Are you showing the file permissions on the server, but seeing that message on your workstation? Where exactly are you trying to clone from and to? Are you cloning over http or ssh? Which user are you running the clone command as?
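A few commands that usually help answer those questions (a sketch only; the workspace path is taken from the error message above). "Operation not permitted" on a freshly created .hg/requires often points at the destination filesystem, for example an NTFS/FAT or network mount that refuses chmod, rather than at the remote repository:
id                                            # which user and groups Eclipse/hg run as
ls -ld /media/data/workspaces/my_workspace    # who owns the destination workspace
mount | grep /media/data                      # what filesystem and options the target is mounted with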