Changing log file location of Couchbase

To change the server log location, I followed the steps below, but the log location did not change.
Steps:
Log in as root (or use sudo) and navigate to the directory where you installed Couchbase. For example: /opt/couchbase/etc/couchbase/static_config
Edit the static_config file and change the error_logger_mf_dir variable to point to a different directory. For example: {error_logger_mf_dir, "/home/user/cb/opt/couchbase/var/lib/couchbase/logs"}
Restart the Couchbase service. After restarting the Couchbase service, all subsequent logs will be in the new directory.
But my logs are still being generated in the same (default) directory.
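For reference, the intended change would look something like the following; a minimal sketch assuming a default Linux package install (the service name and paths may differ on your system). Note that static_config entries are Erlang terms, so the quotes must be plain ASCII quotes and each entry ends with a dot:

sudo vi /opt/couchbase/etc/couchbase/static_config
# change the log directory entry to:
#   {error_logger_mf_dir, "/home/user/cb/opt/couchbase/var/lib/couchbase/logs"}.
sudo systemctl restart couchbase-server

If the logs still appear in the old location, check that the new directory exists and is writable by the user the Couchbase service runs as.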

Related

duplicate mediawiki between 2 locations

I have set up a private wiki (1.35.1) running on Ubuntu MATE, which is a guest OS on VMware Workstation 16. I'd like to run this wiki at 2 locations (A & B) that are isolated (no VPN connection). I will be the only user accessing it, since it is my private wiki.
I've got the wiki set up and running at location A and will simply archive the guest and bring it up at location B as an identical copy.
Question: After I spend the day at location A (editing my wiki there), can I simply copy the entire /var/www/html/Mediawiki folder and the entire /var/lib/mysql folder (MariaDB) onto a thumb drive and dump them onto location B?
The intent is for these to be identical wikis, synchronized by me (sneakernet) with a thumb drive.
UPDATE: this is working well so far. Below is how I do it.
Stop the mysql server - sudo service mysql stop
Copy (using rsync) all new or changed files from /var/lib/mysql/ to my external share with:
sudo rsync -cavurt --delete --info=del,name,stats2 "/var/lib/mysql/" "/home/rp/shares/VM_share_ubuntu/wiki_sql_files"
Copy all new or changed files from /var/www/html/mediawiki-1.35.1 to my external share with:
sudo rsync -cavurt --delete --info=del,name,stats2 "/var/www/html/mediawiki-1.35.1/" "/home/rp/shares/VM_share_ubuntu/wiki_mediawiki_files"
Start the mysql server - sudo service mysql start
Now, copy the new/changed files to the 2nd machine:
Stop the mysql server - sudo service mysql stop
Copy in (using rsync) all new or changed mysql files with:
sudo rsync -cavurt --delete --info=del,name,stats2 "/home/rp/shares/VM_share_ubuntu/wiki_sql_files/" "/var/lib/mysql"
Copy in all new or changed mediawiki files with:
sudo rsync -cavurt --delete --info=del,name,stats2 "/home/rp/shares/VM_share_ubuntu/wiki_mediawiki_files/" "/var/www/html/mediawiki-1.35.1"
Start the mysql server - sudo service mysql start
In those rsync commands, note that the source folder path must end with / and the target folder path must NOT end with /. The significance of that is explained in this thread.
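A quick illustration of the trailing-slash rule (the backup path here is hypothetical):

# with the trailing slash, the contents of mysql are copied into the target
rsync -a /var/lib/mysql/ /backup/mysql
# without it, the mysql directory itself is created inside the target
rsync -a /var/lib/mysql /backup/mysql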
UPDATE 2: If you modify the /etc/php/7.4/apache2/php.ini file on one machine, you will need to make the same update on the other machine, e.g. if you change the file upload size from the default 2M, or make some other change that affects php.ini.
This would mostly work as long as you set $wgServer dynamically. Pages that use absolute URLs and are loaded from cache would link to the wrong URL, but that should be very rare - almost everything uses relative URLs.
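A minimal sketch of what that could look like in LocalSettings.php, assuming MediaWiki's built-in host detection is acceptable for a single-user wiki:

# in LocalSettings.php: derive the server URL from the incoming request
$wgServer = WebRequest::detectServer();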

Neo4j: Couldn't load the external resource in centos 7

A program on another CentOS 7 server dynamically creates CSV files and sends them to the Neo4j server via Samba, into the /home/t/Desktop/temp directory, and I need to load them into Neo4j.
But Neo4j could not load the file, and I get this error:
java.sql.SQLException: Some errors occurred :
[Neo.ClientError.Statement.ExternalResourceFailed]:Couldn't load the external resource at: file:/home/t/Desktop/temp/5d8db3a4-83d3-4850-b134-7e3d24855b88.csv
I commented out the import directory line in the Neo4j config file and also added the line below:
dbms.security.allow_csv_import_from_file_urls=true
The ownership of the temp directory is nobody:nobody and its permissions are 0777.
But I still get the error.
I think Neo4j has some issues with SELinux and other security mechanisms in CentOS 7.
You can make a new top-level directory (under /), e.g. named test, and set the permissions appropriately:
sudo mkdir /test
sudo chmod 777 /test
By making it a top-level directory you don't have to worry about the permissions of intermediate directories: the neo4j service user needs execute permission on every ancestor directory, and home directories such as /home/t are often mode 700, which blocks access even when temp itself is 0777.
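After moving the file into /test, the load would look something like this (a Cypher sketch; the file name is the one from the error above, and with the import directory setting commented out, file:/// URLs resolve from the filesystem root):

LOAD CSV FROM 'file:///test/5d8db3a4-83d3-4850-b134-7e3d24855b88.csv' AS row
RETURN count(row);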
answer link:
https://unix.stackexchange.com/a/127298

How can I uninstall IPFS completely and restart everything from scratch and get a new peer id?

How can I uninstall IPFS completely, restart everything from scratch, and get a new peer ID? I tried deleting the go-ipfs folder, but I still get Error: ipfs configuration file already exists! when I run ipfs init.
The data store as well as the config are stored in a subdirectory .ipfs of your home directory, so on a UNIX-based system that is $HOME/.ipfs. You would have to delete this directory and then run ipfs init to get an empty store and a new peer ID.
Note that you can also configure the location of the store directory using the IPFS_PATH environment variable, which can be useful to get the IPFS store on a different mount point.
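A minimal sketch of the whole reset (this permanently deletes all local IPFS data and keys, so make sure nothing in the repository is still needed):

# remove the existing repository (config, datastore, keys)
rm -rf "$HOME/.ipfs"
# re-initialize; this generates a fresh peer identity
ipfs init

# optional: keep the repository on another mount point instead
export IPFS_PATH=/mnt/data/ipfs
ipfs init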

How to set htpasswd for oauth in master config for minishift (v1.11.0) (Openshift Origin)

I'm trying to activate authentication via htpasswd in my minishift 1.11.0 installation. I cannot find the master config file to set the values described in the documentation for OpenShift Origin. I've searched in the minishift VM via minishift ssh and in the minishift folders in my home folder on my Windows 7 host.
How can I activate htpasswd for minishift 1.11.0?
EDIT:
I found the master-config.yaml in the folder /var/lib/minishift/openshift.local.config/master/. I changed the content under oauthConfig as described in the OpenShift documentation:
https://docs.openshift.org/latest/install_config/configuring_authentication.html
The .htpasswd file is located in the same folder and referenced in the master config with its absolute path.
But when I stop and start minishift again, the starting process ends with the following error:
-- Starting OpenShift container ...
Starting OpenShift using container 'origin'
FAIL
Error: could not start OpenShift container "origin"
Details:
No log available from "origin" container
minishift : Error during 'cluster up' execution: Error starting the cluster.
At line:1 char:1
+ minishift start --vm-driver=virtualbox
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (Error during 'c...ng the cluster.:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
EDIT 2:
I suspect that OpenShift directly uses the htpasswd tool to verify the users' passwords. I was not able to install htpasswd in the boot2docker VM that minishift uses, so the initialization of the container fails (yum is not installed by default either).
Is it possible to install htpasswd in boot2docker? If yes, where can I get the package?
I think I have found the problem. While experimenting, I switched to the CentOS image for minishift with the corresponding flag at startup:
minishift start --iso-url=centos
When I wanted to patch the master configuration with minishift openshift config set, it failed and rolled back. Searching the logs (with minishift logs) turned up this line:
error: Invalid MasterConfig /var/lib/origin/openshift.local.config/master/master-config.yaml
oauthConfig.identityProvider[0].provider.file: Invalid value: "/var/lib/minishift/openshift.local.config/master/.htpasswd": could not read file: stat /var/lib/minishift/openshift.local.config/master/.htpasswd: no such file or directory
OpenShift couldn't find the HTPasswd file, because for OpenShift the master-config.yaml file lies in
/var/lib/origin/openshift.local.config/master
and not in
/var/lib/minishift/openshift.local.config/master
as I had written in the config file. The latter is the path as seen by the minishift VM itself (e.g. when using minishift ssh), but the OpenShift instance that runs inside it sees only the former. I only had to update the master config file to the right file path.
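For reference, the relevant part of master-config.yaml would look roughly like this (a sketch based on the OpenShift Origin documentation; the provider name is illustrative):

oauthConfig:
  identityProviders:
  - name: htpasswd_auth   # illustrative name
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /var/lib/origin/openshift.local.config/master/.htpasswd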
I haven't checked whether this also solves the problem for the boot2docker ISO, but I think this must have been the problem. And htpasswd really doesn't need to be installed in the VM for this to work; you just need the file with your users and passwords reachable by the VM.
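The user file itself can be created on any machine that has the Apache htpasswd utility, for example (the user name is illustrative):

# -c creates the file, -B uses bcrypt hashing
htpasswd -c -B .htpasswd developer
# then copy .htpasswd into /var/lib/minishift/openshift.local.config/master/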
PS: I also noticed a strange side effect. One user was already defined when I changed to HTPasswd. I also defined that user in the password file, but when trying to log in with this username via the web console, I got an error that the user could not be created. All other usernames work correctly. Maybe I have to delete the user from some internal user directory before adding them to HTPasswd.

mysql folder inaccessible on Ubuntu

I am trying to reset my MySQL root password following the official reference here.
In step #2, I have to do the following:
Locate the .pid file that contains the server's process ID. The exact location and name of this file depend on your distribution, host name, and configuration. Common locations are /var/lib/mysql/, /var/run/mysqld/, and /usr/local/mysql/data/. Generally, the file name has an extension of .pid and begins with either mysqld or your system's host name.
So I went to /var/lib/ and found the mysql folder. When I double-clicked it, I got the following pop-up window:
The folder contents could not be displayed.
You do not have the permissions necessary to view the contents of "mysql".
I am pretty sure that I am the system admin. Why is this happening, and how do I fix it?
Start by working in the terminal/console as the root user. The /var/lib/mysql folder is owned by the mysql user and is not world-readable, which is why your desktop user cannot open it even as an administrator; root can.
I'm not a system expert, but this should get you somewhere:
Get into the Ubuntu terminal/console
Switch to the root user (sudo bash)
Then follow this one:
https://help.ubuntu.com/community/MysqlPasswordReset
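For the step quoted above, locating the .pid file as root could look like this (the exact file name on your machine may differ):

sudo bash
# look in the common locations for a mysqld or <hostname> .pid file
ls -l /var/lib/mysql/*.pid /var/run/mysqld/*.pid 2>/dev/null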