I'm running RabbitMQ v2.0.0 on a Linux machine. The mnesia base is currently the default, but within that directory Rabbit creates directories such as rabbit@ip-123.1.1.123.
The IP in the directory name is based on the inet address of the machine. These directories hold information about users, exchanges and vhosts (I think).
My question is: how can I fix/configure these directory names so that they are not based on the IP address?
To change the Mnesia directory, just set MNESIA_DIR in /etc/rabbitmq/rabbitmq.conf.
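For example, a minimal sketch (the target path is only an illustration; in RabbitMQ of this vintage the file is read as plain VAR=value shell-style assignments):
MNESIA_DIR=/var/lib/rabbitmq/mnesia/fixed_node_name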
Also, a great place to ask RabbitMQ related questions is on the rabbitmq-discuss mailing list.
It seems you can edit the script files (rabbitmq-server, rabbitmq-multi and rabbitmqctl). At the top of these scripts there is a hostname variable.
I set the hostname to localhost and restarted.
This is not the best solution, but it is good enough for my requirements. Note that the hostname must be a proper, resolvable address; it cannot be something arbitrary.
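For illustration, the top of those scripts contains something roughly like the lines below; the exact variable names vary between RabbitMQ versions, so treat this as a sketch rather than a verbatim excerpt:
HOSTNAME=localhost
NODENAME=rabbit@${HOSTNAME}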
The main problem is that your new machine has a new hostname, and the directory is named after it (just renaming the directory, as mentioned before, does not help), so we need to change the machine's hostname so that RabbitMQ keeps working with the old files.
Let "ip-0-0-0-0" be old machine name (so there should be a mnesia folder /var/lib/rabbitmq/mnsesia/ip-0-0-0-0), and new machine host
name is something like "ip-1-1-1-1", but new name doesnot matter as we will overwrite it. Execute following commands:
sudo -s
echo "127.0.0.1 ip-0-0-0-0" >> /etc/hosts
echo "ip-0-0-0-0" > /etc/hostname
reboot
After the reboot your machine will carry the old machine's hostname, and RabbitMQ should work with the old files.
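A quick sanity check after the reboot (command names assume the stock rabbitmq-server packaging):
hostname                 # should now print ip-0-0-0-0
sudo rabbitmqctl status  # the node name should again match the old mnesia directory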
I have set up a private wiki (MediaWiki 1.35.1) running on Ubuntu MATE, which is a guest OS on VMware Workstation 16. I'd like to run this wiki at two locations (A & B) that are isolated from each other (no VPN connection). I will be the only user accessing it, since it is my private wiki.
I've got the wiki set up and running at location A and will simply archive the guest and bring it up at location B as an identical copy.
Question: After I spend the day at location A (editing my wiki there), can I simply copy the entire /var/www/html/Mediawiki folder and the entire /var/lib/mysql folder (MariaDB) onto a thumb drive and dump them onto location B?
The intent is for these to be identical wikis, synchronized by me (sneakernet) with a thumb drive.
UPDATE - this is working well so far. Below is how I do it.
Stop the mysql server - sudo service mysql stop
Copy (using rsync) all new or changed files from /var/lib/mysql/ to my external share with:
sudo rsync -cavurt --delete --info=del,name,stats2 "/var/lib/mysql/" "/home/rp/shares/VM_share_ubuntu/wiki_sql_files"
Copy all new or changed files from /var/www/html/mediawiki-1.35.1 to my external share with:
sudo rsync -cavurt --delete --info=del,name,stats2 "/var/www/html/mediawiki-1.35.1/" "/home/rp/shares/VM_share_ubuntu/wiki_mediawiki_files"
Start the mysql server - sudo service mysql start
Now, copy the new/changed files to the 2nd machine:
Stop the mysql server - sudo service mysql stop
Copy in (using rsync) all new or changed mysql files with:
sudo rsync -cavurt --delete --info=del,name,stats2 "/home/rp/shares/VM_share_ubuntu/wiki_sql_files/" "/var/lib/mysql"
Copy in all new or changed mediawiki files with:
sudo rsync -cavurt --delete --info=del,name,stats2 "/home/rp/shares/VM_share_ubuntu/wiki_mediawiki_files/" "/var/www/html/mediawiki-1.35.1"
Start the mysql server - sudo service mysql start
In those rsync commands, note that the source folder must end with / and the target folder must NOT end with /. The significance of that is explained in this thread.
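A quick illustration of the trailing-slash behaviour (the /tmp/example target is just a placeholder):
rsync -av "/var/lib/mysql/" "/tmp/example"   # copies the contents of mysql/ directly into example/
rsync -av "/var/lib/mysql"  "/tmp/example"   # would create example/mysql/ inside the target instead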
UPDATE 2: If you modify the /etc/php/7.4/apache2/php.ini file on one machine, make sure to make the same update on the other machine, e.g. if you change the file upload size from the default 2M or make some other change that affects php.ini.
This would mostly work as long as you set $wgServer dynamically. Pages that use absolute URLs and are loaded from cache would link to the wrong URL, but that should be very rare - almost everything uses relative URLs.
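A minimal sketch of what "setting $wgServer dynamically" could look like in LocalSettings.php (this exact line is my assumption, not part of the original setup; it trusts the request's Host header, which is usually acceptable for a single-user private wiki):
$wgServer = isset( $_SERVER['HTTP_HOST'] ) ? 'http://' . $_SERVER['HTTP_HOST'] : 'http://localhost';
That way absolute URLs generated on either machine point at whichever host you are currently browsing.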
I am trying to reset my MySQL root password following the official reference here.
In step #2, I have to do the following:
Locate the .pid file that contains the server's process ID. The exact
location and name of this file depend on your distribution, host name,
and configuration. Common locations are /var/lib/mysql/,
/var/run/mysqld/, and /usr/local/mysql/data/. Generally, the file name
has an extension of .pid and begins with either mysqld or your
system's host name.
So I went to /var/lib/ and found the mysql folder. When I double-clicked it, I got the following pop-up window:
The folder contents could not be displayed.
You do not have the permissions necessary to view the contents of "mysql".
I am pretty sure that I am indeed the system admin. Why is this happening, and how do I fix it?
Start by working in the terminal/console as the root user.
I'm not a system expert, but this should get you somewhere:
Get into the Ubuntu terminal/console.
Switch to the root user (sudo bash).
Then follow this guide:
https://help.ubuntu.com/community/MysqlPasswordReset
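As for the "why": on most distributions the MySQL/MariaDB data directory is owned by the mysql user and is not world-readable, so a file manager running as your normal desktop user cannot open it; being the system admin only means you may use sudo, not that your session runs as root. From a root shell you can look for the .pid file directly, e.g.:
sudo ls -l /var/lib/mysql/ /var/run/mysqld/
(Both paths are just the common locations the documentation mentions; check whichever exists on your system.)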
I have installed XAMPP on OS X Lion.
Because I want to serve one of my development folders, I have added a virtual host to /Applications/XAMPP/xamppfiles/etc/extra/httpd-vhosts.conf:
<VirtualHost *:80>
ServerAdmin email@gmail.com
DocumentRoot "/Users/myosxUsername/Documents/dir/dir/htdocs"
ServerName qmh
ErrorLog "logs/qmh-error_log"
CustomLog "logs/qmh-access_log" common
</VirtualHost>
and also added an entry to the hosts file
127.0.0.1 qmh
Because of a permission issue with the server accessing the directory /Users/myosxUsername/Documents/dir/dir/htdocs, I have also changed the user in httpd.conf to myosxUsername:
User myosxusername
Group admin
# previous setting below
# User nobody
# Group nobody
After those changes virtual hosts work fine.
The problem is that when I now use phpMyAdmin to create a new database, I get the error message:
db_create.php: Missing parameter: new_db
If I change the user back to:
User nobody
Group nobody
then phpMyAdmin works fine, but my virtual host directory cannot be accessed due to the permission issue.
I assume I somehow have to tell Apache not to use the new user for MySQL access? Your help is appreciated. Thanks.
See item 2.8 from the phpMyAdmin FAQ (http://wiki.phpmyadmin.net/pma/FAQ_2.8):
If the directories set in the php.ini directives session.save_path and upload_tmp_dir don't exist, are read-only or are not accessible (e.g. due to open_basedir restrictions), this error will occur. PHP installed from a package (e.g. an rpm) might set the permissions on these directories for an assumed user (e.g. 'apache'); users of other web servers, e.g. lighttpd, may need to change the ownership of these directories (e.g. to 'lighttpd').
On Windows, if PHP is using directories for session.save_path and upload_tmp_dir that are somewhere within the main "Temp" directory, you must create those directories yourself; unlike other Windows programs, PHP will not create them itself.
If you are using Hardened-PHP (the Suhosin patch) with the ini directive varfilter.max_request_variables set to the default (200) or another low value, you could get this error if your table has a high number of columns. Adjust this setting accordingly (thanks to Klaus Dorninger for the hint).
In config.inc.php, try to leave the $cfg['PmaAbsoluteUri'] directive empty. See also FAQ 4.7.
Maybe you have a broken PHP installation or you need to upgrade your Zend Optimizer. See http://bugs.php.net/31134.
In the php.ini directive arg_separator.input, a value of ";" will cause this error. Replace it with "&;".
(tip from https://serverfault.com/questions/385465/phpmyadmin-missing-parameter)
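A quick way to check the first point from a shell (the php binary location and the reported paths depend on your XAMPP installation, so treat this as a sketch):
/Applications/XAMPP/xamppfiles/bin/php -i | grep -E 'session.save_path|upload_tmp_dir'
Then verify that the directories it reports exist and are writable by the user Apache runs as (myosxusername after your change).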
I have hgweb.wsgi set up on an Ubuntu server under Apache 2. Furthermore, I have basic authentication using the Apache htpasswd approach. This all works nicely. However, we want to control what each user has access to, and ACL seems to be the best approach. So inside the repo's .hg folder I've created an hgrc and modified it according to the documentation for getting ACL up and running (I've also enabled the extension). The problem is I get no indication that the hgrc is used at all. If I add [ui] debug = true I still get nothing from the remote client. Sadly, I'm not quite sure how to go about debugging this, so any help would be much appreciated.
To make sure that a .hg/hgrc file in a repository is being consulted, add something noticeable to the [web] section, like:
[web]
description = Got this from the hgrc
style = coal
name = RENAMED
If you don't see those in the web interface your .hg/hgrc isn't being consulted, and the most common reason for that is permissions. Remember that the .hg/hgrc has to be owned by a user or group that is trusted by the webserver user (usually apache or www-data or similar). If Apache is running under the user apache, then chown the .hg/hgrc file over to apache for ownership; root won't do, and the htpasswd user is irrelevant.
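For example, if the webserver runs as www-data (check with something like ps aux | grep -E 'apache|httpd'), a hedged sketch would be:
sudo chown www-data:www-data /path/to/repo/.hg/hgrc
where /path/to/repo is a placeholder for the repository's location on the server.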
If that file is being consulted then you need to start poking around in the apache error logs. Turning on debug and verbose will put more messages into the apache error log, not into the remote client's output.
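Once you've confirmed the hgrc is being read, a rough sketch of the ACL wiring, loosely following the acl extension's documentation ("alice" is a placeholder user name), would be:
[extensions]
acl =
[hooks]
pretxnchangegroup.acl = python:hgext.acl.hook
[acl]
sources = serve
[acl.allow]
** = alice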
Say I have the following ssh .config file:
Host host_nickname
User xxx
HostName yyy.zz.vvv
ControlMaster auto
ControlPath ~/.ssh/%r@%h:%p
In case you are not familiar with ControlMaster or ControlPath, here is the description from the ssh_config manual:
ControlMaster:
Enables the sharing of multiple sessions over a single network connection. When set to ``yes'', ssh(1) will listen for connections on a control socket specified using the ControlPath argument. Additional sessions can connect to this socket using the same ControlPath with ControlMaster set to ``no'' (the default). These sessions will try to reuse the master instance's network connection rather than initiating new ones, but will fall back to connecting normally if the control socket does not exist, or is not listening.
In Mercurial, if you want to push or pull from a repository, you could just type the following:
hg push ssh://user@example.com/hg/
Now, my question:
I would like to ask Mercurial to push (or pull) against a repository at /path/to/repository on the server corresponding to my ssh config entry host_nickname. How do I do this?
If you look under hg help urls you'll find
ssh://[user@]host[:port]/[path][#revision]
So, assuming that /path/to/repository works from your login dir on the remote machine, then type
hg [push|pull] ssh://host_nickname/path/to/repository
This works because hg isn't doing the name resolution; ssh is, and you've specified the correspondence between host_nickname and the real HostName. Also, ControlMaster won't affect this, as that just allows multiplexing over a single ssh connection. Note, if hg isn't in your remote PATH, then you need to specify it via --remotecmd /path/to/hg.
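For example, if hg on the server lives outside the default PATH (the location below is only an assumption; use whatever which hg reports there):
hg pull --remotecmd /usr/local/bin/hg ssh://host_nickname/path/to/repository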