Since Docker images take up a lot of space, I would like to attach an external hard drive to my 10GB Ubuntu VM instance. I've added a blank disk and attached it, but I end up with this message when I run "fdisk -l":
Disk /dev/sdb doesn't contain a valid partition table.
How do I create an external NTFS drive and mount it to my filesystem?
Just as on any other Ubuntu instance, once you've attached an unformatted or unsuitably formatted drive you need to partition, format and mount it; one good set of instructions is at https://help.ubuntu.com/community/InstallingANewHardDrive .
To do it manually, run, as root (sudo bash for example):
$ apt-get install ntfsprogs # on newer Ubuntu releases these tools are provided by the ntfs-3g package
$ df -k # just to check nothing is mounted on /dev/sdb...
$ # umount /dev/sdb if df -k shows something mounted there
$ fdisk /dev/sdb # to create/fix the partition table, see http://linux.die.net/man/8/fdisk
$ # if you need a tutorial, http://www.howtogeek.com/106873/how-to-use-fdisk-to-manage-partitions-on-linux/
$ mkfs.ntfs -f /dev/sdb1 # if you're in a hurry, or
$ # mkfs.ntfs /dev/sdb1 # if you have all the time in the world
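If it helps, here is a rough end-to-end sketch of the manual route; /mnt/disk is a hypothetical mount point, and the commands assume the new disk really is /dev/sdb:

fdisk /dev/sdb                 # then: n (new partition), p (primary), 1, accept defaults, w (write)
mkfs.ntfs -f /dev/sdb1         # quick-format the new partition
mkdir -p /mnt/disk             # hypothetical mount point
mount -t ntfs-3g /dev/sdb1 /mnt/disk
echo '/dev/sdb1 /mnt/disk ntfs-3g defaults 0 0' >> /etc/fstab   # optional: remount at boot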
Incidentally, this is a system administration question, not a software development one, so you might be happier asking it over at serverfault -- we monitor the google-cloud-platform tag there, too.
Two side issues -- (1) why NTFS? You're unlikely to be using this PD with Windows, so a native Linux file system might be preferable... (2) what does this have to do with google-app-engine? Did you mean to tag this google-compute-engine instead...?
Related
I followed the tutorial Google has on YouTube for creating a custom image for Compute Engine using VirtualBox, at the following link:
https://www.youtube.com/watch?v=YlcR6ZLebTM
I have succeeded in creating a custom image and importing it to Google Compute Engine.
But when I try to follow this document to attach a new persistent disk:
https://cloud.google.com/compute/docs/disks/persistent-disks#attachdiskcreation
The document mentions a command-line tool:
/usr/share/google/safe_format_and_mount
but the folder /usr/share/google does not exist in my custom image.
How can I install it, or is there another way to mount a new persistent disk in a GCE instance?
The /usr/share/google/safe_format_and_mount command comes with the Google Compute Engine image packages. You can see the source code here.
You can either install the packages or run these commands:
1- Determine the device location of your new persistent disk: ls -l /dev/disk/by-id/google-*. Let's suppose it's /dev/sdb
2- sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 -F /dev/sdb
3- sudo mount -o discard,defaults /dev/sdb <destination_folder>
Run df -h or mount to check whether your disk is already mounted on the destination folder.
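If you also want the disk to be remounted after a reboot, a common addition is an /etc/fstab entry keyed on the filesystem UUID; the mount point below is just a placeholder for your destination folder:

sudo blkid /dev/sdb   # note the UUID of the new filesystem
echo "UUID=<uuid-from-blkid> /mnt/disks/data ext4 discard,defaults,nofail 0 2" | sudo tee -a /etc/fstab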
I have a GCE instance that I have customised and uploaded various applications to (such as PHP apps running under Apache). I now want to duplicate this instance - i.e. everything on it.
I originally thought clone might do this but I had a play around with it and it only seems to clone the instance config and not anything customised on it.
I've been googling it and it looks like what I need to do is create an image and use this image on a new instance or clone?
Is that correct?
If so, are there any good step-by-step guides out there for doing this?
I had a look at the Google page on images and it talks about having to terminate the instance to do this. I'm a bit wary of this. Maybe it's just the language used in the docs, but I don't want to lose my existing instance.
Also, will everything be stored on the image?
So, for example, will the following all make it onto the image?
MySQL - config & database schemas & data?
Apache - All installed apps under /var/www/html
PHP - php.ini, etc...
All other server configs/modifications?
You can create a snapshot of the source instance, then create a new instance using that snapshot as its disk. It will replicate the server very quickly. For other attached disks, you have to create a new disk and copy the files over the network (scp, rsync, etc.).
In the Web Console, create a snapshot, then click on the snapshot and on the CREATE INSTANCE button. You can customize the settings and then click where it says:
Equivalent REST or command line
and copy the command line; this will be your template.
From this, you can create a BASH script (clone_instance.sh). I did something like this:
#!/bin/bash -e
snapshot="my-snapshot-name"
gcloud_account="ACCOUNTNUMBER-compute@developer.gserviceaccount.com"
# clone 10 machines
for machine in 01 02 03 04 05 06 07 08 09 10
do
  gcloud compute --project "myProject" disks create "webscrape-${machine}" \
    --size "220" --zone "us-east1-d" --source-snapshot "${snapshot}" \
    --type "pd-standard"
  gcloud compute --project "myProject" instances create "webscrape-${machine}" \
    --zone "us-east1-d" --machine-type "n1-highmem-4" --network "default" \
    --maintenance-policy "MIGRATE" \
    --service-account "${gcloud_account}" \
    --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
    --tags "http-server","https-server" \
    --disk "name=webscrape-${machine},device-name=webscrape-${machine},mode=rw,boot=yes,auto-delete=yes"
done
Now, in your terminal, you can execute your script:
bash clone_instance.sh
In case you have other disks attached, the best way to handle them without actually unmounting them is to change how they are referenced in /etc/fstab.
If you use the UUID in fstab and use the same disks from snapshots (which will have the same UUIDs) then you can do the cloning without unmounting anything.
Just change each disk entry in fstab to use its UUID, like this:
UUID=[UUID_VALUE] [MNT_DIR] ext4 discard,defaults,[NOFAIL] 0 2
You can get the UUID with
sudo blkid /dev/[DEVICE_ID]
If you're unsure about your DEVICE_ID, you can use
sudo lsblk
to get the list of device IDs used by your system.
It's 2021 and this is now very simple:
Click the VM Instance you want to clone
Click "Create Machine Image" at the top
From Machine Images on the left, open your new image and click "Create VM Instance"
This will clone the machine specs and data.
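If you prefer the command line, the rough gcloud equivalent looks like this; the names and zone are placeholders, and the exact flags are worth checking against the current gcloud docs:

gcloud compute machine-images create my-machine-image \
    --source-instance=source-vm --source-instance-zone=us-central1-a
gcloud compute instances create cloned-vm \
    --zone=us-central1-a --source-machine-image=my-machine-image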
As was mentioned, if the source instance has a secondary disk attached, it is not possible to ssh into the new instance.
I had to take a snapshot of a production instance, so I couldn't unmount the secondary disk without causing disruption.
I was able to fix the problem by creating a disk from the snapshot, mounting it on another instance, and removing any reference to the secondary disk, i.e., removing its entry from /etc/fstab.
Once I had done that, I was able to use the disk as boot disk in a new instance, and ssh to it.
You can use the GCP Import VM option to import this machine back into the project.
I created a binary executable from a bash script on a Linux server using SHC. The binary works fine on Linux machines, but fails on my Mac. How can I convert my bash file to a binary executable that is able to run everywhere (Ubuntu, CentOS, Mac, Cygwin)?
shc -v -r -T -f ir16fetcher.sh
mv ir16fetcher.sh.x ir16fetcher
Shebang of my bash script
#!/bin/bash
On Linux machines
./ir16installer
USAGE : ir16fetcher <servername/ip address> [the n th latest build - optional. Default 1]
EXAMPLE: ir16fetcher jagger 2
EXAMPLE: ir16fetcher 167.116.6.155
REQUIRE: Please make sure conf file in installation folder ~/IRinstall/ir16 & ~/IRinstall/irmanager
On my Mac
./ir16installer
-bash: ./ir16installer: cannot execute binary file
I think it's not gonna work
"The compiled binary will still be dependent on the shell
specified in the first line of the shell code (i.e.
#!/bin/sh), thus shc does not create completely independent
binaries."
From http://www.datsi.fi.upm.es/~frosal/sources/shc.html
You will have to do this for every architecture and operating system you need to support. In any case, there don't really seem to be any benefits to using this method for distribution. It adds dependencies and complicates delivery, and I'm pretty sure whatever obfuscation the "shc" compiler implements is easily reversed.
If the goal here is to "hide" your source code, and then have the "hidden" copy of the code be executable on the Unix OSes you listed, then encryption is really your only option.
I say this because encryption tools are available on every base Unix install. For your purposes, this is a very good thing, as you won't have to download or configure anything additional. They're just there, as part of the natural installation of the OS. One such tool is called openssl.
To Encrypt your file/script with openssl:
echo precious-content | openssl aes-128-cbc -a -salt -k mypassword
U2FsdGVkX1+K6tvItr9eEI4yC4nZPK8b6o4fc0DR/Vzh7HqpE96se8Fu/BhM314z
To Decrypt your file/script with openssl:
echo U2FsdGVkX1+K6tvItr9eEI4yC4nZPK8b6o4fc0DR/Vzh7HqpE96se8Fu/BhM314z | openssl aes-128-cbc -a -d -salt -k mypassword
precious-content
Now, to get openssl to do what you want it to do automatically without having to spend hours of your own time figuring out a way, you can paste your script to a site like www.EnScryption.com. This site will generate an "executable" version of your code for you, which you can then run on any Mac, Ubuntu, RedHat, CentOS box.
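If you'd rather wire this up yourself, here is a minimal sketch of the same idea; myscript.sh and the hard-coded password are purely illustrative, and in practice you would prompt for the password instead:

# encrypt the script once, on your machine
openssl aes-128-cbc -a -salt -k mypassword -in myscript.sh -out myscript.sh.enc

# later, on any box with openssl: decrypt to stdout and feed it to bash
openssl aes-128-cbc -a -d -salt -k mypassword -in myscript.sh.enc | bash -s -- "$@"

Keep in mind that anyone who can read the wrapper can also read the password, so this hides the code from casual inspection only.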
I have deployed Jenkins on my CentOS machine. Jenkins was working well for 3 days, but yesterday I got a "Disk space is too low. Only 1.019GB left." problem.
How can I solve this problem? It took my master offline for hours.
You can easily change the threshold from the Jenkins UI (my version is 1.651.3):
(screenshot of the node disk space threshold settings)
Update: How to ensure high disk space
This feature is meant to prevent working on slaves with low free disk space. Lowering the threshold would not solve the fact that some jobs do not properly clean up after they finish.
Depending on what you're building:
Make sure you understand what the disk output of your build is; if possible, restrict the output to happen only in the job workspace. Use the workspace cleanup plugin to clean up the workspace as a post-build step.
If the process must write some data to external folders, clean them up manually in post-build steps.
Alternative 1 - provision a new slave per job (use spot slaves; there are many plugins that integrate with different cloud providers to provision machines on the fly, on demand).
Alternative 2 - run the build inside a container. Everything will be discarded once the build is finished (see the sketch below).
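For the container alternative, the point is simply that all build writes happen inside a throwaway container; a minimal sketch, where the image and build command are placeholders for whatever your job actually uses:

# run the build in a disposable container; --rm deletes the container (and its writable layer) afterwards
docker run --rm -v "$PWD":/workspace -w /workspace maven:3-jdk-8 mvn -B clean package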
Besides the above solutions, there is a more "COMMON" way: directly delete the largest space consumers from the Linux machine. You can follow the steps below:
Log in to the Jenkins machine (e.g. via PuTTY)
cd to the Jenkins installation path
Use ls -lart to list hidden folders too; normally the Jenkins installation is placed in the .jenkins/ folder:
[xxxxx ~]$ ls -lart
drwxrwxr-x 12 xxxx 4096 Feb 8 02:08 .jenkins/
List the folder sizes
Use df -h to show disk space usage at a high level
du -sh ./*/ lists the total disk usage of each subfolder in the current path.
du -a /etc/ | sort -n -r | head -n 10 lists the top 10 files and directories eating disk space in /etc/
Delete old builds or other large folders
Normally the ./jobs/ folder or ./workspace/ folder is the largest. Go inside and delete what you need to (DO NOT delete the entire folder).
rm -rf theFolderToDelete
You can limit the loss of disk space by discarding old builds. There's a checkbox for this in the project configuration.
This is actually a legitimate question, so I don't understand the downvotes; perhaps it belongs on Super User or Server Fault. This is a soft warning threshold, not a hard limit where the disk is out of space.
For Hudson, see where to configure hudson node disk temp space thresholds - this is talking about the host, not the nodes.
Jenkins is the same. The conclusion is that for many small projects the system property called hudson.diagnosis.HudsonHomeDiskUsageChecker.freeSpaceThreshold could be decreased (an example of setting it is shown after the disclaimer below).
That said, I haven't tested it, and there is a disclaimer:
No compatibility guarantee
In general, these switches are often experimental in nature, and subject to change without notice. If you find some of those useful, please file a ticket to promote it to the official feature.
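For reference, a system property like this is passed to the Jenkins JVM; the sketch below assumes the value is in bytes and that the file locations match your install, so double-check both against your Jenkins version:

# standalone: pass the property directly to the JVM (here ~512 MB, assumed to be in bytes)
java -Dhudson.diagnosis.HudsonHomeDiskUsageChecker.freeSpaceThreshold=536870912 -jar jenkins.war

# packaged install: add the same -D option to JAVA_ARGS in /etc/default/jenkins (Debian/Ubuntu)
# or to JENKINS_JAVA_OPTIONS in /etc/sysconfig/jenkins (CentOS/RHEL), then restart Jenkins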
I got the same issue. My Jenkins version is 2.3 and its UI is slightly different. Putting it here so that it may help someone. Increasing both disk space thresholds to 5GB fixed the issue.
I have a cleanup job with the following build steps. You can schedule it @daily or @weekly.
Execute system Groovy script build step to clean up old builds:
import jenkins.model.Jenkins
import hudson.model.Job

BUILDS_TO_KEEP = 5

for (job in Jenkins.instance.items) {
    println job.name
    def recent = job.builds.limit(BUILDS_TO_KEEP)
    for (build in job.builds) {
        if (!recent.contains(build)) {
            println "Preparing to delete: " + build
            build.delete()
        }
    }
}
You'd need to have the Groovy plugin installed.
Execute shell build step to clean cache directories:
rm -r ~/.gradle/
rm -r ~/.m2/
echo "Disk space"
du -h -s /
To check the free space as a Jenkins job:
Parameters
FREE_SPACE: Needed free space in GB.
Job
#!/usr/bin/env bash
free_space="$(df -Ph . | awk 'NR==2 {print $4}')"
if [[ "${free_space}" = *G* ]]; then
  free_space_gb=${free_space/[^0-9]*/}
  if [[ ${free_space_gb} -lt ${FREE_SPACE} ]]; then
    echo "Warning! Low space: ${free_space}"
    exit 2
  fi
else
  echo "Warning! Unknown: ${free_space}"
  exit 1
fi
echo "Free space: ${free_space}"
Plugins
Set build description
Post-Build Actions
Regular expression: Free space: (.*)
Description: Free space: \1
Regular expression for failed builds: Warning! (.*)
Description for failed builds: \1
For people who do not know where the configs are, download the tmpcleaner plugin from
https://updates.jenkins-ci.org/download/plugins/tmpcleaner/
You will get an .hpi file. Go to Manage Jenkins -> Manage Plugins -> Advanced, upload the .hpi file there, and restart Jenkins.
You can immediately see a difference if you go to Manage Nodes.
Since my Jenkins was installed on a Debian server, I did not understand most of the answers related to this, since I cannot find an /etc/default folder or jenkins file.
If someone knows where the /tmp folder is or how to configure it for Debian, do let me know in the comments.
I have a large amount of text data I need to import into MySQL. I'm doing this on a MacBook and don't have enough space for it, so I want to store it on an external hard drive (I'm not really concerned about speed at this point; this is just for testing).
What's the best way to do it?
Install MySQL on the external hard drive (is this possible on a Mac?)
Install MySQL on the laptop's hard drive and have the tables on the external (how?)
One simple hack is to create a symbolic link replacing your current MySQL database file location, pointing to the external disk. Google "symbolic link".
Sample usage would be: after you shut down MySQL, rename the old MySQL db folder to something else, and create the symbolic link using the ln command like below
ln -s [EXTERNAL DRIVE PATH] [MYSQL DB FOLDER PATH]
Then move all the previous content of the mysql db folder to the new location.
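A concrete sketch of that on a Mac might look like the following; the paths assume the MySQL.com installer layout (/usr/local/mysql/data) and an external volume named ExternalHD, so adjust both to your setup:

# stop MySQL first (System Preferences pane, or:)
sudo /usr/local/mysql/support-files/mysql.server stop

# move the existing data to the external drive and link it back
sudo mv /usr/local/mysql/data /Volumes/ExternalHD/mysql-data
sudo ln -s /Volumes/ExternalHD/mysql-data /usr/local/mysql/data
sudo chown -R _mysql:_mysql /Volumes/ExternalHD/mysql-data

sudo /usr/local/mysql/support-files/mysql.server start

Note that the external drive needs a filesystem that supports Unix ownership and permissions (e.g. HFS+/APFS), or MySQL will refuse to use it.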
Open /etc/mysql/my.cnf and find the value of the datadir. Alternatively, you can find this out in the mysql monitor with
mysql> select @@datadir;
Stop mysql
sudo systemctl stop mysql
Copy the data from there to your external drive
sudo rsync -av /var/lib/mysql /mnt/myHDD/somedir/mysql
Modify the location of the datadir in my.cnf.
Start mysql again
sudo systemctl start mysql
Verify that everything is still fine and remove the original data dir.
This page contains a more extensive guide, but all the additional issues it warns about were not relevant for me on my Raspberry Pi, i.e. I skipped them and it worked.
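For the "modify the datadir" step, the my.cnf change is just one line; the path below reuses the example destination above, but note that rsync without a trailing slash on the source copies the mysql directory itself into the destination, so check whether your data actually landed in .../mysql or .../mysql/mysql:

# in /etc/mysql/my.cnf (or a file under /etc/mysql/mysql.conf.d/), in the [mysqld] section:
[mysqld]
datadir=/mnt/myHDD/somedir/mysql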
For the second option, a tablespace might do the trick:
http://dev.mysql.com/doc/refman/5.1/en/create-tablespace.html
User user658991's answer is halfway there.
After adding the soft link, you will need to add the following lines to /etc/apparmor.d/usr.sbin.mysqld, beneath the two lines for the old mysql folder:
/path/to/mysql/folder/on/the/external/ r,
/path/to/mysql/folder/on/the/external/** rwk,
Without these two lines, MySQL fails to start, complaining:
Can't create test file /path/to/mysql/folder/on/the/external/hostname.lower-test
Can't create test file /path/to/mysql/folder/on/the/external/hostname.lower-test
mysqld: Can't change dir to '/path/to/mysql/folder/on/the/external/' (Errcode: 13)
Restart apparmor for the changes to take effect.
sudo invoke-rc.d apparmor restart
With this, MySQL starts normally.