Does anybody know if I could mount Google Drive (the virtual drive) from a Windows 10 system in WSL2?
Thank you
sudo mount -t drvfs G: /mnt/g
The first time you do it, run sudo mkdir /mnt/g first.
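If you want the mount to come back automatically in new WSL sessions, an /etc/fstab entry should work too (a sketch, assuming the default WSL behavior of processing /etc/fstab at startup):
G: /mnt/g drvfs defaults 0 0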
Related
Seeking advice on deploying my WordPress site on a new GCP VM.
The previous VM is down; it was WordPress Certified by Bitnami, click-to-deploy on Debian.
The VM got suspended, and after being reinstated it is not accessible through SSH.
So I created a new Ubuntu VM and mounted the Debian disk to take the files over to the new VM.
I copied the WordPress folders to the new VM and used a new database.
I got the plugins and theme, but no pages, photos, products, or settings.
Now I need to transfer the database to MySQL on Ubuntu.
The path of the files is:
mounted/opt/bitnami/mysql/data/bitnami_wordpress
My new VM's MySQL path is:
/var/lib/mysql
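A minimal sketch of one way to move the data, assuming the old disk is still mounted under /mounted and the two MySQL versions are close enough for a raw data-directory copy (InnoDB tables generally can't be moved one database at a time, so the ibdata files have to travel along with the bitnami_wordpress folder); if the versions differ, start the old Bitnami stack instead and use mysqldump:
sudo systemctl stop mysql
sudo mv /var/lib/mysql /var/lib/mysql.bak    # keep the fresh data dir as a fallback
sudo cp -a /mounted/opt/bitnami/mysql/data /var/lib/mysql
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql
Note this also replaces the MySQL system tables, so the Bitnami credentials apply afterwards.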
WordPress migration to new GCP VM instance: MySQL not running after migration
My site got an unauthorized redirect. I edited the theme PHP files, and then GCP SSH went dead?!
I followed the same steps again as in my question:
Create a new VM, WordPress Certified by Bitnami
Stop the two VM instances
gcloud config set project my-pro
gcloud beta compute instances detach-disk old --disk old --zone namezone
gcloud beta compute instances attach-disk new --disk old --zone namezone
Start the new VM instance
sudo mkdir /old-disk
sudo mount /dev/sdb1 /old-disk/
sudo /opt/bitnami/ctlscript.sh stop
sudo cp -r /old-disk/opt/bitnami /opt
sudo umount /dev/sdb1
Stop the two VM instances
gcloud beta compute instances detach-disk new --disk old --zone namezone
gcloud beta compute instances attach-disk old --disk old --zone namezone --boot
Kindly advise, as the VM is down and I need to restore it ASAP.
#########################
Thanks echo, resolved after executing
sudo tail -n40 /opt/bitnami/mysql/data/mysqld.log
which showed that the permissions were not sufficient:
[ERROR] [MY-010958] [Server] Could not open log file.
[ERROR] [MY-010041] [Server] Can't init tc log
[ERROR] [MY-010119] [Server] Aborting
So I executed:
sudo chmod 777 /opt/bitnami/mysql/data    # temporarily open the directory so mysqld can start
sudo chown mysql:root -R /opt/bitnami/mysql/data
sudo find /opt/bitnami/mysql/data -type d -exec chmod 750 {} \;    # then tighten directories back down
sudo find /opt/bitnami/mysql/data -type f -exec chmod 640 {} \;    # and files
sudo /opt/bitnami/ctlscript.sh start
Bitnami Engineer here.
Every time a user wants to migrate their WordPress content to a new instance, we suggest using the All-in-One WP Migration plugin. You simply need to export the data from your current instance
and import it later in the new instance you launch.
If you had any other customizations in the instance (SSL certificates, redirections, ...), you will need to apply those changes in the new instance as well.
You can learn more about that here.
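If you prefer the command line over a plugin, WP-CLI can do a similar export/import (a sketch, not the Bitnami-recommended path; it assumes wp-cli is installed and is run from the WordPress directory on each instance):
wp db export backup.sql    # on the old instance
# copy backup.sql to the new instance, then:
wp db import backup.sql    # on the new instance
Note this only moves the database; wp-content (themes, plugins, uploads) still has to be copied separately, which is why the plugin route is simpler.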
I have created a Linux VM instance in Google Compute Engine. I installed the JDK and Tomcat with the following commands using SSH.
sudo apt-get install default-jdk
sudo apt-get install tomcat8
I have a WAR file on my local machine. How can I move the WAR file from the local machine to the Compute Engine VM and run it on the Tomcat server?
I would recommend using Storage Buckets to store files. This way you can copy the same files to different VMs.
To use Google Cloud Storage from the command line, you first need to install gsutil.
To copy from your local machine to a bucket:
gsutil cp *.txt gs://my-bucket
To copy from a bucket to a VM, connect to your VM and run:
gsutil cp gs://my-bucket/*.txt .
More info about cp with gsutil at this link.
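For the WAR file in the question specifically, the same pattern would look something like this (a sketch; my-bucket and myapp.war are placeholders, and the webapps path assumes the Debian/Ubuntu tomcat8 package layout):
gsutil cp myapp.war gs://my-bucket
# then on the VM:
gsutil cp gs://my-bucket/myapp.war .
sudo cp myapp.war /var/lib/tomcat8/webapps/
sudo systemctl restart tomcat8
Tomcat auto-deploys WAR files dropped into webapps/, so the app should come up at http://VM-IP:8080/myapp after the restart.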
Is it possible to use docker-machine with a Google persistent disk?
I have a machine running via the docker-machine Google driver. I then need to be able to run docker run -v "<path to google disk>" from the terminal / docker-machine.
That's an interesting use case. There isn't a Volume Plugin to do that at the moment. But I may look into it (I just experimented with writing a Volume Plugin for Google Cloud Storage).
However, you should be able to mount the disk on the Docker Machine itself, and then reference it as you would with any other filesystem directories.
E.g.,
Attach a disk to the instance
Format and mount (e.g. mount to /mnt/mydisk)
Run docker run -ti -v /mnt/mydisk:/data busybox /bin/sh
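A sketch of those three steps end to end (disk and instance names are placeholders; the device path can vary, so check with lsblk after attaching):
gcloud compute disks create mydisk --size 200GB --zone us-central1-a
gcloud compute instances attach-disk my-docker-machine --disk mydisk --zone us-central1-a
# then on the instance:
sudo mkfs.ext4 -F /dev/sdb    # assumes the new disk appeared as /dev/sdb
sudo mkdir -p /mnt/mydisk
sudo mount /dev/sdb /mnt/mydisk
docker run -ti -v /mnt/mydisk:/data busybox /bin/sh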
I have an AWS instance running CentOS 6.5. It has been updated, secured, and setup for web hosting (LAMP). I attached an EBS volume to the instance and mounted it under /data.
Two questions:
How can I get MySQL to use the /data directory as its database storage location? (I don't want to run the program from the /data directory, just put the data files there.)
How can I do the same for my website? I plan on running a WordPress site, and its current location is the /var/www/html directory. I want to change this to /data/site.
I want to keep the website files and database on a separate volume: /data. If my instance were to become corrupt or inaccessible, I could attach the EBS volume to a new instance.
I have read dozens of tutorials and articles on how to move MySQL to a different directory, but nothing is working; MySQL refuses to start afterwards. Can I keep MySQL installed as-is, but have it read/write the database in a different directory like /data, which is a mounted EBS volume, or is this not possible at all on Linux?
Here are some of the tutorials and articles I been following/testing with:
aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1
spruce.it/noise/setting-up-a-proper-lamp-stack-on-aws-ec2-ebs/
EDIT:
This is what I am doing.
Create a new instance using this ami: https://aws.amazon.com/marketplace/pp/B00IOYDTV6?ref=cns_srchrow
Once the instance is up, I run updates using: sudo yum update -y
Once updated, I set it up as a LAMP web server using these instructions: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
In addition to the above steps, I allow port 80 tcp connections on the built-in firewall. I run these commands: sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT and sudo service iptables save
Once this is done, I test my site at http://IP-ADDRESS (this shows me the Apache Test Page)
Once LAMP is installed, I install the MySQL server by running: sudo yum install mysql-server
After that is installed, I proceed to the "To secure the MySQL server" instructions on the previous Amazon document.
Next, I install PHPMyAdmin using these two tutorials: http://tecadmin.net/installing-apache-mysql-php-on-centos-redhat/# and http://tecadmin.net/how-to-install-phpmyadmin-on-centos-using-yum/
At this point, I have a fully functioning web server. Now, I want to use the AWS EBS volume to store all the databases and website files. First, I attach the newly created AWS EBS volume. I use this tutorial to do this: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
THIS IS WHERE THE PROBLEMS START.
Using the information in this tutorial (aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1), MySQL refuses to start; the service just says FAILED.
So one thing you can do is the following, which avoids copying all the directories. You need to make sure that all permissions are set up correctly for it to work:
MySQL data dir:
mv /var/lib/mysql /var/lib/mysql.orig
mkdir -p /<your-new-ebs-mountpoint>/var/lib/mysql
chown mysql.mysql /<your-new-ebs-mountpoint>/var/lib/mysql
chmod 700 /<your-new-ebs-mountpoint>/var/lib/mysql
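One step worth making explicit: after the move, MySQL will still look for its data in /var/lib/mysql, so either symlink it to the new location (mirroring what the config and log sections below do) or point datadir at it in the relocated my.cnf:
ln -s /<your-new-ebs-mountpoint>/var/lib/mysql /var/lib/mysql
# or, in my.cnf under [mysqld]:
# datadir=/<your-new-ebs-mountpoint>/var/lib/mysql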
etc configs:
mkdir -p /<your-new-ebs-mountpoint>/etc
cp /etc/my.cnf /<your-new-ebs-mountpoint>/etc/my.cnf
mv /etc/my.cnf /etc/my.cnf.orig
ln -s /<your-new-ebs-mountpoint>/etc/my.cnf /etc/my.cnf
logs:
mkdir -p /<your-new-ebs-mountpoint>/var/log
mv /var/log/mysqld.log /var/log/mysqld.log.orig
touch /<your-new-ebs-mountpoint>/var/log/mysqld.log
chown mysql.mysql /<your-new-ebs-mountpoint>/var/log/mysqld.log
chmod 640 /<your-new-ebs-mountpoint>/var/log/mysqld.log
ln -s /<your-new-ebs-mountpoint>/var/log/mysqld.log /var/log/mysqld.log
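After wiring those up, starting the service is the quickest sanity check (CentOS 6 syntax, matching the instance in the question):
sudo service mysqld start
mysql -u root -p -e 'SHOW DATABASES;'    # confirms the server is answering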
I have been following the directions on setting up a MySql database on EC2 here: http://aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1
I ran into a problem when I run:
$ sudo mkfs.xfs /dev/sdh
Cannot stat /dev/sdh: No such file or directory
What could the cause be?
Edit:
I ran the following based on @karudzo's advice:
$ cd /dev
$ sudo /sbin/MAKEDEV sdh
$ sudo mkfs.xfs /dev/sdh
mkfs.xfs: cannot open /dev/sdh: No such device or address
That means that /dev/sdh doesn't exist yet. Try this: http://getsatisfaction.com/cohesiveft/topics/attaching_a_ebs_volume_to_ec2_instance_at_dev_sdh_doesnt_appear_to_attach
You can't create ad-hoc devices for your instances. In the Amazon console you'll need to:
1) Create an EBS volume
2) Attach that volume to your instance at device location /dev/sdh
Then you'll be able to run mkfs.xfs on the block device.
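For reference, the same two console steps from the AWS CLI would look roughly like this (the volume and instance IDs are made up; on some newer AMIs the kernel exposes the device as /dev/xvdh rather than /dev/sdh):
aws ec2 create-volume --size 10 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdh
# then, on the instance:
sudo mkfs.xfs /dev/sdh    # or /dev/xvdh, depending on the AMI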