How to persist proxy settings for Chrome on Ubuntu? - google-chrome

I've added some proxy settings in google-chrome.desktop:
sudo gedit /usr/share/applications/google-chrome.desktop
by adding them to each "Exec" entry (there are three):
Exec=/usr/bin/google-chrome-stable %U --proxy-server="localhost:3128" --proxy-bypass-list="localhost,127.0.0.1"
But with each update of Chrome (and there are a lot of them...) these settings get overwritten. Is there a way to persist the proxy settings so that they survive an update?

Current workaround: I set the proxy settings with this command:
sudo sed -i.bak '/^Exec=/ s/$/ --proxy-server="localhost:3128" --proxy-bypass-list="localhost,127.0.0.1"/' /usr/share/applications/google-chrome.desktop
Warning: This doesn't check if there's already a --proxy... declared in the processed lines. However -i.bak creates a backup of the file ;)
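If you want the same workaround with a guard against adding the flags twice, a variant of the sed call could look like this (a sketch, using the same path and flags as above; the inner /--proxy-server/! address skips Exec lines that already mention --proxy-server):
sudo sed -i.bak '/^Exec=/{/--proxy-server/!s/$/ --proxy-server="localhost:3128" --proxy-bypass-list="localhost,127.0.0.1"/;}' /usr/share/applications/google-chrome.desktop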

Related

How do I get xdebug/step-debugging working with ddev?

I've been working with ddev on my Drupal projects, and now want to use xdebug so I have step-debugging with PhpStorm (or really any IDE would be fine). But I can't seem to get it to stop on breakpoints. I tried to follow the instructions in the ddev docs, but that doesn't get me going, and I don't know what to do next. I did:
Set the 172.28.99.99 IP address as discussed there
Enabled xdebug using config.yaml xdebug_enabled: true and ddev start (and checked with phpinfo to see that xdebug was enabled.)
Put PHPStorm in "listen for debug connections" mode
Debugging xdebug in any setup can be a little troublesome, but here are the steps to take:
First, reread the docs. You may want to read the troubleshooting docs rather than this issue, since they're maintained more often.
Make sure xdebug has been enabled; it's disabled by default for performance reasons. Most people use ddev xdebug on to enable it when they want it, and ddev xdebug off when they're done with it, but it can also be enabled in .ddev/config.yaml.
Don't assume that some obscure piece of code is being executed and put a breakpoint there. Start by putting a breakpoint at the first executable line in your index.php. Oh-so-many times people think it should be stopping, but their code is not being executed.
ddev ssh into the web container. Can you ping host.docker.internal (and get responses)? If you can't, you might have an over-aggressive firewall.
In PHPStorm, disable the "listen for connections" button so it won't listen. Or just exit PHPStorm.
ddev ssh: Can telnet host.docker.internal 9003 connect? If it does, you have something else running on port 9003, probably php-fpm. Use lsof -i :9003 -sTCP:LISTEN to find out what is there and stop it, or change the xdebug port and configure PHPStorm to use the new one. Don't continue until your telnet command does not connect. (A consolidated version of these commands appears after this checklist.)
Now click the listen button on PHPStorm to start it listening for connections.
ddev ssh and try the telnet host.docker.internal 9003 again. It should connect. If not, maybe PHPStorm is not listening, or not configured to listen on port 9003?
Check to make sure that Xdebug is enabled. You can use php -i | grep Xdebug inside the container, or use any other technique you want that gives the output of phpinfo(), including Drupal's admin/reports/status/php. You should see something like "with Xdebug v2.9.6, Copyright (c) 2002-2020", and php -i | grep "xdebug.remote_enable" should give you xdebug.remote_enable: On.
Set a breakpoint in the first relevant line of the index.php of your project and then visit the site with a browser. It should stop there.
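Put together, the port-conflict part of the checklist above boils down to something like this (assuming the default Xdebug port 9003):
# On the host, with PhpStorm's listener turned off: see what, if anything, owns port 9003
lsof -i :9003 -sTCP:LISTEN
# Inside the web container: this should FAIL to connect while nothing is listening on the host
ddev ssh
telnet host.docker.internal 9003
# After clicking "listen for debug connections" in PhpStorm, the same telnet should connect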
A note from @heddn: If you want to have xdebug running only for fpm, use phpenmod -s fpm xdebug for example, instead of running enable_xdebug.
A note from @mfrieling: If you use a browser extension like XDebug Helper which sets an IDE key, that key must be the same as on the server. Since DDEV 1.10.0 "there's a real user created for you inside the web and db containers, with your username and userid", which is also used as the IDE key by default. The IDE key must be the same on the server, in the browser extension/cookie being sent, and in PhpStorm. You can change the IDE key in DDEV by creating a file .ddev/php/xdebug.ini with the following two lines (replace PHPSTORM with the value you want to use):
[XDebug]
xdebug.idekey = PHPSTORM
Your followups are welcome here!
Thanks, had the same problem and adding the file .ddev/docker-compose.xdebug.yaml fixed the issue.
However, I am running on a Mac / OSX and found these additional steps worked to discover the IP address of the internal host from inside the container:
1.) Log into the web container: ddev ssh
2.) Run ping docker.for.mac.localhost
3.) Set the returned IP address for host.docker.internal in the above yaml file.
4.) Remove and start DDEV again.
It's also worth mentioning that validating Xdebug in PhpStorm is useful to check the config.
Careful with Macs, as they may have php-fpm running. If that's the case, PhpStorm won't get the connection (as it's already taken by php-fpm).
To see if it's the case run:
lsof -i :9000 -sTCP:LISTEN
If it returns something like php-fpm, then you have this issue.
Try closing it (see PHP-FPM can't be closed).
Running it again once you have fixed it (and potentially restarted your Mac), you should see something like this:
➜ solrpoc lsof -i :9000 -sTCP:LISTEN
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
phpstorm 512 alejandro.moreno 490u IPv6 0xaf3eef0f3233a83 0t0 TCP *:cslistener (LISTEN)

Libvirt generated profiles

I'm using AppArmor as a hardening layer for libvirt-qemu. Everything is OK, but there is one thing that I can't solve systematically, so let me explain:
When I create a new qemu instance, a profile is generated from /etc/apparmor.d/libvirt/TEMPLATE.qemu to a file with a path like /etc/apparmor.d/libvirt/libvirt-81303229-df4c-4b18-b33b-277bcda81b0f, for example.
When the instance is shut off, the profile is unloaded from the kernel by AppArmor, which is fine and expected. But if I remove the instance definitively, I would expect the profile to be removed from the filesystem as well; it is not, and it stays there. After some time I have a very big mess of libvirt instance profile files.
Yes, I could write a cron job that deletes the unnecessary libvirt profile files... but is there a cleaner solution, maybe a built-in function of AppArmor?
Thanks
Are you using libvirt's undefine to delete the stopped guest? It appears that virt-aa-helper should delete the profile for an undefined domain, but I think this is a bug and you should file a ticket.
You can use the virt-aa-helper command directly to remove the files, which is probably the safest option, as it should deal with the dependencies for you.
An example command is:
$ sudo /usr/lib/libvirt/virt-aa-helper -D -u libvirt-3c3d5aa2-f581-457d-b5ab-efbf9fdd4a6e
But there may be an edge case that they need to account for: you can undefine a running instance to convert it to an ephemeral one, and you would need to take care of that case yourself.
Note: Because virt-aa-helper is intended to be run by libvirt you will have to use sudo with the command. If you do not it will silently fail and not remove the profile.
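Until that is fixed, a small cleanup script along these lines can remove leftover profiles for domains that no longer exist (a sketch, to be run as root; it assumes the profile naming shown above and the usual locations of virsh and virt-aa-helper):
#!/bin/sh
# Remove AppArmor profiles left behind for libvirt domains that have been undefined
for f in /etc/apparmor.d/libvirt/libvirt-*; do
    [ -e "$f" ] || continue                  # nothing matched the glob
    case "$f" in *.files) continue ;; esac   # skip the companion .files lists
    uuid=${f##*/libvirt-}
    # Keep the profile if virsh still knows a domain with this UUID
    if ! virsh list --all --uuid | grep -q "$uuid"; then
        /usr/lib/libvirt/virt-aa-helper -D -u "libvirt-$uuid"
    fi
done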

Gitlab with non-standard SSH port (on VM with Iptable forwarding)

My GitLab is on a virtual machine on a host server. I reach the VM with a non-standard SSH port (i.e. 766), which an iptables rule then forwards from host:766 to vm:22.
So when I create a new repo, the instructions to add a remote provide a malformed URL (as it doesn't use the 766 port). For instance, the web interface gives me this:
Malformed
git remote add origin git@git.domain.com:group/project.git
Instead of a URL containing :766/ before the group.
Wellformed
git remote add origin git@git.domain.com:766/group/project.git
So each time I create a repo, I have to do the modification manually, and the same goes for my collaborators.
How can I fix that?
In Omnibus-packaged versions you can modify that property in the /etc/gitlab/gitlab.rb file:
gitlab_rails['gitlab_shell_ssh_port'] = 766
Then, you'll need to reconfigure GitLab:
# gitlab-ctl reconfigure
Your URIs will then be correctly displayed as ssh://git@git.domain.com:766/group/project.git in the web interface.
If you configure the ssh_port correctly in config/gitlab.yml, the web pages will show the correct repo URL.
## GitLab Shell settings
gitlab_shell:
  ...
  # If you use non-standard ssh port you need to specify it
  ssh_port: 766
PS: the correct URL is:
ssh://git@git.domain.com:766/group/project.git
Edit: after the change you need to clear caches, etc.:
bundle exec rake cache:clear assets:clean assets:precompile RAILS_ENV=production
N.B.: this was tested on an old GitLab version (v5-v6), and might not be suitable for a modern instance.
You can achieve similar behavior in a 2 step process:
1. Edit: config/gitlab.yml
On the server, set the port to the one you use:
ssh_port: 766
2. Edit ~/.ssh/config
On your machine, add the following section corresponding to your gitlab:
Host sub.domain.com
Port 766
Limit
You will need to repeat this operation on each user's computer…
References
GitLab and a non-standard SSH port
Easy way to fix this issue:
ssh://git@my-server:4837/~/test.git
git clone -v ssh://git@my-server:4837/~/test.git
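If a repo was already added with the malformed remote, you can also just repoint it in place (a sketch, using the same placeholder host and port as above):
git remote set-url origin ssh://git@my-server:4837/~/test.git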
Reference URL

Cannot login to phpMyAdmin, no errors shown

I have MySQL set up correctly on my Linux computer; however, I want a better way to input data into the database than the terminal. For this reason, I downloaded phpMyAdmin. However, when I try to log in to phpMyAdmin from index.php, it doesn't do anything; it seems to just refresh the page. I am putting in the correct MySQL username and password. What is the issue?
Here is a screen shot of what it shows after I click "go".
This is a possible issue when the path for saving php_session data is not correctly set:
The directory for storing sessions does not exist, or PHP does not have sufficient rights to write to it.
To define the php_session directory, simply add the following line to php.ini:
session.save_path="/tmp/php_session/"
And give write rights to the HTTP server.
Usually, the HTTP server runs as user daemon in group daemon. If that is the case, the following commands will do it:
chown -R :daemon /tmp/php_session
chmod -R g+wr /tmp/php_session
service httpd restart
Login fails if the session folder is not writable. To check that, create a PHP file in your web directory with:
<?php
$sessionPath = 'undefined';
if (!($sessionPath = ini_get('session.save_path'))) {
    $sessionPath = isset($_ENV['TMP']) ? $_ENV['TMP'] : sys_get_temp_dir();
}
if (!is_writeable($sessionPath)) {
    echo 'Session directory "' . $sessionPath . '" is not writeable';
} else {
    echo 'Session directory "' . $sessionPath . '" is writeable';
}
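To run the check, save the snippet as a file in your web directory and request it in a browser or with curl (the file name here is just an example):
curl http://localhost/session_check.php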
If the session folder is not writable, do either
sudo setfacl -R -m u:www-data:rwx <session directory> or chmod -R 777 <session directory>
I am late to the game, but on Amazon Linux AMI I could not log in to phpMyAdmin... it just kept refreshing the login screen with no errors.
I fixed it with the command below:
sudo chmod -R 755 /var/lib/php/session
I fixed my issue on CentOS 7 with MariaDB and phpMyAdmin downloaded from the official phpMyAdmin site by adding
session.save_path = "/var/lib/php/session"
to /etc/php.ini
and
chown -R :lighttpd /var/lib/php/session
I also restarted php-fpm and lighttpd afterwards.
In my case the solution was to set an Apache setting properly:
ProxyPassReverseCookiePath
This was required, because ProxyPass and ProxyPassReverse were in use, but cookie paths are not changed automatically.
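A sketch of what that can look like in the Apache config, assuming phpMyAdmin is reverse-proxied from an internal backend under /phpmyadmin/ (the backend host and paths are placeholders):
ProxyPass        /phpmyadmin/ http://backend.internal/phpmyadmin/
ProxyPassReverse /phpmyadmin/ http://backend.internal/phpmyadmin/
# Map the cookie path the backend sets ("/") onto the public path ("/phpmyadmin/")
ProxyPassReverseCookiePath / /phpmyadmin/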
It'd be great if PHPMyAdmin had shown something like session not found or anything, when password is sent with POST.
Do you have a .htaccess file in one of the parent directories that strips off index.php from the url by doing a 301 redirect?
301 redirects discard the form data and redirect you as if you didn't submit anything. So you get returned to the login page.
So you should create a local .htaccess file in the phpmyadmin directory with a single line: RewriteEngine On. This will override the previous rewrite rule with nothing.
You may need to clear the browser cache as Chrome aggressively caches 301 redirects.
In my case the hard drive was full.
Use df -h to check the space left on your hard drive, and if you want you can free some space by using the command sudo apt-get clean, which removes installation files.
I hope this will help some future users.
I ran these commands and it worked for me:
sudo service httpd restart
sudo service mysqld stop
sudo service mysqld start
Try searching the web for installation or setup guides for phpMyAdmin. Look at two or three of these and make sure you have covered all the required steps. (If you have already done so, please include which guides you have followed in the question.)
See if it helps to edit config.inc.php (acecoder mentioned this as well).
Check if this guide is of any help.
Which distro are you on? Try searching for the name of the distro you are using together with "phpMyAdmin guide" or "phpMyAdmin setup howto".
If you encounter errors along the way, post the error text here, if it's short (or paste via a pastebin-like site if it's long).
Are you sure that mysql is running? I had the same issue after doing a database import and filling up the volume containing the mysql database. After changing various permissions and clearing sessions, I tried to restart mysql (/etc/init.d/mysql restart) and it failed because the volume was full. After increasing /var and starting mysql successfully, I was able to log into phpmyadmin just fine.
If you have an error like:
Host 'host_name' is blocked because of many connection errors.
Log in to your MySQL as root and run the flush hosts command:
1.- mysql -u root -p
2.- mysql> flush hosts;
After this I was able to log in again to phpMyAdmin.
phpMyAdmin will show errors when login fails. If it doesn't, it means that your setup has an error.
The most likely place to check is your php.ini settings. Since there doesn't seem to be an official list of phpMyAdmin-compatible settings, it's mostly trial and error.
Make sure you have enabled the stuff that needs to be enabled. Also check that you did not enable uncommon php.ini settings (like enable_post_data_reading = Off) because phpMyAdmin assumes them to be "the usual ones".
To ease debugging, start with a clean default php.ini file then tweak them line by line to see which setting is causing the error. (Don't forget that you need to restart your server after changing the php.ini file for the changes to take place.)
In my case it was due to an old Apache session.
Stop Apache, clear all pending sessions in your session.save_path directory (example: /var/lib/php/session) and restart Apache.
Make sure to set a 32-character-long random key in config.inc.php as the $cfg['blowfish_secret'] value. That solved it for me.
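That is, a line like this in config.inc.php (the value below is only a placeholder; generate your own random 32-character string):
$cfg['blowfish_secret'] = 'aVeryLongRandom32CharSecretValue'; // 32 chars, used by cookie authentication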
Didn't realize I need to restart MariaDB after modifying config.inc.php:
service mariadb restart
Otherwise, at least in my case, the changes didn't take effect. Also make sure your PHP session directory is writable by the web server (typically session.save_path = "/var/lib/php/session").

How to use GVIM to edit a remote file?

I use GVIM on Ubuntu 9.10. I'm looking for the right way to configure GVIM to be able to edit remote files (HTML, PHP, CSS), for example over ftp.
When I use :e scp://username@remotehost/./path/to/file I get: Error detected while processing BufEnter Auto commands for "*": E472: Command failed.
When I open a remote file via Dolphin or Nautilus, I cannot use other files with NERDTree.
Finally, when I edit a remote file via Dolphin, the permissions change to "access denied".
So how do I use GVIM to edit remote files as if they were on my localhost?
I've found running the filesystem over ssh (by means of sshfs) a better option than having the editor handle that stuff or running the editor itself over an ssh tunnel.
So you need to
apt-get install sshfs
and then
sshfs remoteuser@remotehost:/remote/path /local/mountpoint
And that will let you edit your remote files as if they were on your local file system.
To make it even smoother you can add a line to /etc/fstab
sshfs#remoteusername@remotehost:/remote/path /local/mountpoint fuse user,noauto
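With that fstab entry in place, the share can be mounted on demand without root and the files opened directly (same placeholder paths as above; the file name is just an example):
mount /local/mountpoint
gvim /local/mountpoint/path/to/some/file.php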
For some reason I find that I have to use fusermount -u /local/mountpoint rather than just umount /local/mountpoint when experimenting with this. Maybe that's just my distro.
Recently I've also noted that the mounting user must be in the fuse group. So:
sudo addgroup <username> fuse
Another popular option, of course, would be to run vim (rather than gvim) inside a GNU Screen session on one machine and connect to that session via ssh from wherever you happen to be. Code along all day at work, and in the evening you ssh into your office computer, reattach to your GNU Screen session and pick up exactly where you left off. I used to find the richer color palette to be the only thing I really missed from gvim when using vim, but that can actually be fixed thanks to a fork of urxvt that lets you customize the entire 256-position color palette, not just the first 16 positions that most terminal emulators will let you customize.
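That workflow boils down to something like this (the host and session name are just examples):
screen -S coding        # at the office: start a named screen session and run vim inside it
ssh user@officehost     # in the evening, from elsewhere: log into the office machine
screen -r coding        # reattach to the session and pick up where you left off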
There is one way and that is using the remote host's copy, using SSH to forward the X11 client to you, like so:
user@local:~/$ ssh -X user@host
...
user@host:~/$ gvim file
The latter command should open gvim on your desktop. Of course, this relies on the remote host having X11 / gnome / gvim installed in the first place, which might not be the solution you're looking for / an option in your case.
Note: X11 forwarding can be a security risk.
In order for netrw to work seamlessly, I believe you need to not be in compatibility mode.
Try
:set nocompatible
then
:edit scp://host/path/to/file
Try this
:e scp://username@remotehost//path/to/file
Note that the use of // after remotehost is intentional; it gives the absolute path of your file.
:)
http://www.celsius1414.com/2009/08/19/how-to-edit-remote-files-with-local-vim/
The vim tips wiki has an article on this, Editing remote files via scp in vim.
EDIT: Key authentication is not necessary for opening files over ssh. Vim will prompt for password.
It would be useful to note if netrw.vim was loaded by vim when it started.
:echo exists("g:loaded_netrwPlugin")
For opening files over ssh, you need your local machine's public key in the server's authorized keys. Following help section in vim documentation explains it pretty well.
:help netrw-ssh-hack
Quick way to export public key would be by using ssh-copy-id (if available).
ssh-copy-id user@host
And have a look at netrw documentation for network file editing over other protocols.
:help netrw
HTH.
According to the docs BufEnter is processed after the file has been read and the buffer created, so my guess is that netrw successfully read the file but you have a plugin that assumes the file is on the local filesystem and is trying to access it, e.g. to run ctags.
Try disabling all your plugin scripts except the default Vim ones, and then editing the file.
Also, try editing a directory to see if netrw can read that - you need to put the / on the end so that netrw knows it is a dir.
About your command, :e scp://username@remotehost/./path/to/file: note that with netrw, scp paths are taken relative to your home directory on that remote host. To avoid home-relative pathing, drop that "." and use a double slash; i.e. :e scp://username@remotehost//path/to/file.
To accomplish this on Windows, download/install the Dokan library and Dokan SSHFS, which are the first and last links on this page.
I didn't think you were going to be able to directly edit a remote file using GVIM running locally. However, as others have pointed out, this is definitely possible. This looks very interesting; I will check it out. I will leave the rest of my post up here, in case it is useful to anyone else, as an alternative method. This method will work even if you don't have SSH access to the file (i.e., you only have FTP, or S3, or whatever).
You may get that effect, though, by tying GVIM into a graphical file transfer application. For example, on OS X, I use CyberDuck to transfer files (FTP, SFTP, etc). Then, I have it configured to use GVIM as my editor, so I can just double-click on a file in the remote listing, and CyberDuck will download a copy of that remote file, and open it in GVIM. When I save it in GVIM, CyberDuck uploads the file back to the remote host.
I'm sure that this functionality is not unique to CyberDuck, and is probably present in most nicer file transfer utilities.