I am following what's suggested in this article to change the iptables rules in order to allow incoming connections. For some reason, the qemu hook does not run. Before adding any VM-name checks to run the actual script, I simply tried to write to a file with echo 'some output' > someweirdfilename so I could later check for the file's existence. It looks like the hook is not executed at all. I made sure libvirtd.service was restarted, restarted the guest as well, and eventually tried a complete reboot. All attempts gave the same result. I am running libvirt 7.6.0 on Fedora 35. Does anyone have any suggestions for troubleshooting?
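For reference, a minimal diagnostic hook would look something like the sketch below. The /etc/libvirt/hooks/qemu path and the guest-name/operation arguments are libvirt's documented hook interface; the log path is just an example:

#!/bin/bash
# Minimal diagnostic hook at /etc/libvirt/hooks/qemu.
# libvirt invokes it as: qemu <guest_name> <operation> <sub_operation> ...
echo "$(date) guest=$1 op=$2" >> /tmp/qemu-hook.log

The script must be executable (chmod +x /etc/libvirt/hooks/qemu) and libvirtd restarted before it is picked up; on Fedora it is also worth checking the SELinux audit log if the output file never appears.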
I am a little confused about startup scripts and the command line options. I am building a small Raspberry Pi based server for my node applications. In order to provide maximum protection against power failures and flash write corruption, the root file system is read only, and that includes the home directory of my main user, where the production versions of my apps (two of them) are stored. Because the .pm2 directory there is no good for logs etc., I currently set the PM2_HOME environment variable to a place in /var (which has 512kb of unused space around it as a buffer to protect writes to it). The ecosystem.json file also reads this environment variable to determine where to place its logs.
In case I need to, I also have a secondary user with a read/write home directory in another partition (also protected by buffer space around it). This contains development versions of my application code, which, for the convenience of setting up environments etc., I also want to monitor with PM2. If I need to investigate a problem I can log in as that user and run and test the application there.
Since this is a headless box, with watchdog and kernel-panic restarts built in, I want pm2 to start during boot and, at minimum, restart the two production apps. Ideally it should also start the two development versions of the app, but I can live without that if it's impossible.
I can switch the read only root partition to read/write - indeed it does so automatically when I ssh into my production user account. It switches back to read only automatically when I log out.
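For reference, the manual equivalent of that automatic switch is typically a remount:

sudo mount -o remount,rw /
sudo mount -o remount,ro /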
So I logged in to this account to try and create a startup script. pm2 then said (unsurprisingly) that I had to run a sudo command like so:
sudo su -c "env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u pi --hp /home/pi"
The key issue for me here is the --hp switch. I went searching for some clue as to what it means. It's clearly a home directory, but it doesn't match PM2_HOME - which is set to /var/pas in my case to take it out of the read only area. I don't want to spray my home directory with files that shouldn't be there, so I am asking for some guidance here.
I found out by experiment what it does with an "ubuntu" startup script: pm2 uses the --hp value to set PM2_HOME in the generated script, by appending "/.pm2" to it.
However, there is nothing stopping you editing the script once it has been created and setting PM2_HOME to whatever you want.
So effectively it's a helper for the script, but only that and nothing more special.
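For illustration, this is roughly what the relevant lines of the generated "ubuntu" startup script look like; the exact script path and layout vary by pm2 version, so treat it as a sketch:

# pm2 derives this by appending /.pm2 to the --hp value:
export PM2_HOME="/home/pi/.pm2"
# Nothing stops you editing the generated script to point at the writable area instead:
export PM2_HOME="/var/pas"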
I have an application on the OpenShift free plan with only one gear. I want to change it to scalable and make use of all 3 free gears.
I read this blog post from OpenShift and found that there is a way to do it: I should clone my current application into a new scalable one, which will use the 2 remaining gears, and then delete the original application. Thus, the new one will have all 3 free gears.
The way that blog suggests is: rhc create-app <clone> --from-app <existing> --scaling
I get the following error: invalid option --from-app
Update
After running the command gem update rhc, I no longer get the error above, but... a new application with the given name has been created with the same starting package (Python 2.7) as the existing one, yet all the files are missing. It actually creates a blank application and not a clone of the existing one.
Update 2
Here is the structure of the folder:
.git
.openshift
wsgi
    static
    views
    application
    main.py
requirements.txt
setup.py
From what we discussed on IRC, your problem was a missing SSH configuration on your Windows machine:
Creating application xxx ... done
Waiting for your DNS name to be available ...done
Setting deployment configuration ... done
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
I've double-checked it, and it appears to be working without any problem.
The only requirements are the latest rhc client and PuTTY or any other
SSH client. I'd recommend going through this tutorial once again and double-checking everything to make sure it is all working properly.
Make sure you are using the newest version of the rhc gem ("gem update rhc") so that you have access to that feature from the command line.
The --from-app option will essentially do an 'rhc snapshot save' and 'rhc snapshot restore' (among other things), as you can see here from the source:
if from_app
  say "Setting deployment configuration ... "
  rest_app.configure({:auto_deploy => from_app.auto_deploy, :keep_deployments => from_app.keep_deployments, :deployment_branch => from_app.deployment_branch, :deployment_type => from_app.deployment_type})
  success 'done'

  snapshot_filename = temporary_snapshot_filename(from_app.name)
  save_snapshot(from_app, snapshot_filename)
  restore_snapshot(rest_app, snapshot_filename)
  File.delete(snapshot_filename) if File.exist?(snapshot_filename)

  paragraph { warn "The application '#{from_app.name}' has aliases set which were not copied. Please configure the aliases of your new application manually." } unless from_app.aliases.empty?
end
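If the clone still comes up empty, a manual fallback is to run the equivalent snapshot steps yourself. A sketch, assuming the standard snapshot options (check rhc snapshot --help on your version):

rhc snapshot save <existing> --filepath existing.tar.gz
rhc snapshot restore <clone> --filepath existing.tar.gz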
However, this will not copy over anything in your $OPENSHIFT_DATA_DIR directory, so if you're storing files there, you'll need to copy them over manually.
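A sketch of that manual copy, assuming the usual UUID@app-domain SSH addresses (shown by 'rhc app show <name>') and that $OPENSHIFT_DATA_DIR lives at app-root/data on the gear:

# Pull the data directory down from the old gear...
scp -r 'OLD_UUID@existing-app-domain.rhcloud.com:app-root/data' .
# ...and push it back up into the new gear's app-root.
scp -r data 'NEW_UUID@clone-app-domain.rhcloud.com:app-root/'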
We've upgraded Hudson to Jenkins and have a few dependencies on the "hudson" user we used to have.
Now that we have Jenkins running (it works fine), we'd like it to run as the user "hudson" in order to keep our other processes intact without having to rewrite them.
We found instructions on how to do this BEFORE installing Jenkins, but we're already past that point. Jenkins is installed and up and running. Is there a way to make Jenkins run as the user "hudson"?
We are running CentOS.
Jenkins usually runs with its own user, so there are two main issues to handle:
Make sure user 'hudson' has full access to the files of user 'jenkins' (or whatever user it was set to run as).
Start the Jenkins-daemon (or other initiator) with the 'hudson' user.
(Another approach is to change the user ID so it is actually the same user with two names.)
Good luck!
If you've installed Jenkins from RPM, there should be an /etc/sysconfig/jenkins file with a JENKINS_USER setting, defaulting to 'jenkins', which you can change to 'hudson'.
I second Gonen's comment above about making sure you change the ownership of the 'jenkins' owned files to 'hudson'. Don't forget about the /var/log/jenkins logs.
Also don't forget to restart the Jenkins service after updating the files.
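Put together, the steps might look like this sketch; JENKINS_USER and /etc/sysconfig/jenkins come from the RPM layout above, but the exact list of Jenkins-owned directories is an assumption, so verify it on your box:

# Switch the service account from 'jenkins' to 'hudson' (RPM install).
sudo sed -i 's/^JENKINS_USER=.*/JENKINS_USER="hudson"/' /etc/sysconfig/jenkins
# Hand the existing Jenkins files over to 'hudson' (directory list assumed).
sudo chown -R hudson:hudson /var/lib/jenkins /var/log/jenkins /var/cache/jenkins
# Restart so the daemon picks up the new user.
sudo service jenkins restart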
I use GVIM on Ubuntu 9.10. I'm looking for the right way to configure GVIM to be able to edit remote files (HTML, PHP, CSS), for example over ftp.
When I use :e scp://username@remotehost/./path/to/file I get: error detected while processing BufEnter Auto commands for "*": E472: Command failed.
When I open a remote file via Dolphin or Nautilus, I cannot use other files with NERDTree.
Finally, when I edit a remote file via Dolphin, the permissions change to "access denied".
So how can I use GVIM to edit remote files as if they were on my localhost?
I've found running the filesystem over ssh (by means of sshfs) a better option than having the editor handle that stuff or running the editor itself over an ssh tunnel.
So you need to
apt-get install sshfs
and then
sshfs remoteuser@remotehost:/remote/path /local/mountpoint
And that will let you edit your remote files as if they were on your local file system.
To make it even smoother you can add a line to /etc/fstab
sshfs#remoteuser@remotehost:/remote/path /local/mountpoint fuse user,noauto
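With that entry in place, the user option lets a regular (non-root) user mount it on demand:

mount /local/mountpoint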
For some reason I find that I have to use fusermount -u /local/mountpoint rather than just umount /local/mountpoint when experimenting with this. Maybe that's just my distro.
Recently I've also noted that the mounting user must be in the fuse group. So:
sudo addgroup <username> fuse
Another popular option, of course, would be to run vim (rather than gvim) inside a GNU Screen session on one machine and connect to that session via ssh from wherever you happen to be. Code along all day at work, and in the evening you ssh into your office computer, reattach to your GNU Screen session, and pick up exactly where you left off. I used to find the richer color palette to be the only thing I really missed from gvim when using vim, but even that can be fixed thanks to a fork of urxvt that lets you customize the entire 256-position color palette, not just the first 16 positions that most terminal emulators let you customize.
There is one way, and that is using the remote host's copy of gvim, with SSH forwarding the X11 client to you, like so:
user@local:~/$ ssh -X user@host
...
user@host:~/$ gvim file
The latter command should open gvim on your desktop. Of course, this relies on the remote host having X11 / GNOME / gvim installed in the first place, which might not be the solution you're looking for / an option in your case.
Note: X11 forwarding can be a security risk.
In order for netrw to work seamlessly, I believe you need to not be in compatibility mode.
Try
:set nocompatible
then
:edit scp://host/path/to/file
Try this:
:e scp://username@remotehost//path/to/file
Note that the use of // after remotehost is intentional: it gives the absolute path of your file.
:)
http://www.celsius1414.com/2009/08/19/how-to-edit-remote-files-with-local-vim/
The vim tips wiki has an article on this, Editing remote files via scp in vim.
EDIT: Key authentication is not necessary for opening files over ssh. Vim will prompt for a password.
It would be useful to note if netrw.vim was loaded by vim when it started.
:echo exists("g:loaded_netrwPlugin")
For opening files over ssh without being prompted for a password, you need your local machine's public key in the server's authorized keys. The following help section in the vim documentation explains it pretty well.
:help netrw-ssh-hack
A quick way to install your public key on the server is ssh-copy-id (if available).
ssh-copy-id user@host
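If ssh-copy-id isn't available, a manual equivalent (assuming an RSA key at the default path) is:

cat ~/.ssh/id_rsa.pub | ssh user@host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'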
And have a look at netrw documentation for network file editing over other protocols.
:help netrw
HTH.
According to the docs, BufEnter is processed after the file has been read and the buffer created, so my guess is that netrw successfully read the file, but you have a plugin that assumes the file is on the local filesystem and is trying to access it, e.g. to run ctags.
Try disabling all your plugin scripts except the default Vim ones, and then editing the file.
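One low-tech way to do that, assuming your personal plugins live in ~/.vim/plugin (netrw ships with Vim itself, so it still loads), is:

# Sideline user plugins, retry the edit, then put them back.
mv ~/.vim/plugin ~/.vim/plugin.disabled
gvim scp://username@remotehost//path/to/file
mv ~/.vim/plugin.disabled ~/.vim/plugin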
Also, try editing a directory to see if netrw can read that - you need to put the / on the end so that netrw knows it is a directory.
About your command :e scp://username@remotehost/./path/to/file : note that with netrw, scp paths are taken relative to your home directory on the remote host. To avoid home-relative pathing, drop that "."; i.e. :e scp://username@remotehost//path/to/file
To accomplish this on Windows, download and install the Dokan library and Dokan SSHFS, which are the first and last links on this page.
I didn't think you were going to be able to directly edit a remote file using GVIM running locally. However, as others have pointed out, this is definitely possible. It looks very interesting; I will check it out. I will leave the rest of my post here in case it is useful to anyone else as an alternative method. This method will work even if you don't have SSH access to the file (i.e., you only have FTP, or S3, or whatever).
You may get that effect, though, by tying GVIM into a graphical file transfer application. For example, on OS X I use CyberDuck to transfer files (FTP, SFTP, etc.). I have it configured to use GVIM as my editor, so I can just double-click on a file in the remote listing, and CyberDuck will download a copy of that remote file and open it in GVIM. When I save it in GVIM, CyberDuck uploads the file back to the remote host.
I'm sure that this functionality is not unique to CyberDuck, and is probably present in most nicer file transfer utilities.