I have a qemu qcow2 disk snapshot dev.img that is based off a backing file dev.bak.
How can I merge both into a standalone devplus.img, while leaving dev.bak as it is?
I got some help from the qemu mailing list:
First copy the original base file to your standalone image file:
cp dev.bak devplus.img
Then "rebase" the image file that was backed off the original file so that it uses the new file:
qemu-img rebase -b devplus.img dev.img
Then you can commit the changes in the dev file back into the new base:
qemu-img commit dev.img
Now you can use devplus.img as a standalone image file and get rid of dev.img if you wish, leaving the original dev.bak intact and not corrupting any other images that were based off it.
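As a quick sanity check (not part of the original mailing-list recipe), you can confirm that the result is standalone; the output of the following command should contain no "backing file" line:
qemu-img info devplus.img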
qemu-img convert -O qcow2 dev.img devplus.img
convert follows the backing chain, so it reads dev.img together with dev.bak and writes a flattened, standalone devplus.img, leaving both original files untouched.
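If you want to see what convert is about to flatten, you can inspect the chain first (an optional check using a standard qemu-img info option):
qemu-img info --backing-chain dev.img
This prints one info block per image in the chain (dev.img, then dev.bak), which convert collapses into the single output file.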
I have been downloading the file using gsutil, and the process crashed.
The documentation on gsutil is located at:
https://cloud.google.com/storage/docs/gsutil_install#redhat
The file location is described at: https://genebass.org/downloads
How can I resume the file download instead of starting from scratch?
I have been looking for answers to similar questions, but those address slightly different problems. For example:
GSutil resume download using tracker files
As mentioned in the GCP docs for the gsutil cp command:
gsutil automatically performs a resumable upload whenever you use the cp command to upload an object that is larger than 8 MiB. You do not need to specify any special command line options to make this happen. [. . .] Similarly, gsutil automatically performs resumable downloads (using standard HTTP Range GET operations) whenever you use the cp command, unless the destination is a stream. In this case, a partially downloaded temporary file will be visible in the destination directory. Upon completion, the original file is deleted and overwritten with the downloaded contents.
If you're also using gsutil in large production tasks, you may find useful information on Scripting Production Transfers.
Alternatively, you can achieve resumable download from Google Cloud Storage using the Range header (just take note of the HTTP specification threshold).
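For example, with curl (a rough sketch: BUCKET and OBJECT are placeholders, and the Authorization header is only needed for non-public objects; curl -C - sends the Range header for you, continuing from the size of the partially downloaded local file):
curl -C - -o large_file.part \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.googleapis.com/BUCKET/OBJECT"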
I'm not sure which command you're using (cp or rsync), but either way gsutil will fortunately take care of resuming downloads for you.
From the docs for gsutil cp:
gsutil automatically resumes interrupted downloads and interrupted resumable uploads, except when performing streaming transfers.
So, if you're using gsutil cp, it will automatically resume the partially downloaded files without starting them over. However, resuming with cp will also re-download the files that were already completed. To avoid this, use the -n flag so the files you've already downloaded are skipped, something like:
gsutil -m cp -n -r gs://ukbb-exome-public/300k/results/variant_results.mt .
If instead you're using gsutil rsync, then it will simply resume downloading.
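For reference, an rsync equivalent of the command above would look something like this (the local destination directory is just an example); rsync skips files that are already complete and identical, and continues with the rest:
gsutil -m rsync -r gs://ukbb-exome-public/300k/results/variant_results.mt ./variant_results.mt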
I have created the script in GUI mode and am running it in non-GUI mode. It generates the CSV file, but it does not add the header row to the CSV file. How can I add it?
Just add -f to your non-GUI command (run from the bin folder). Here is the command:
jmeter -f -n -t"templates\Lucene_search1.jmx" -l "templates\Lucene_Search_Results_Data1.csv" -e -o"C:\Starting Python\apache-jmeter-5.0\HTML_Reports"
Make sure to add the following line to the user.properties file:
jmeter.save.saveservice.print_field_names=true
Also be aware that if you're appending results to an existing file, JMeter will not add field names to it. You will either need to delete the existing results file and re-run the test, or redirect the output into a new file and then manually copy the header and the new results from the new file into the old one.
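If you go the second route, the copy step can be scripted. A minimal sketch, assuming a Unix-like shell (for example Git Bash on Windows) and hypothetical file names old_results.csv (the header-less file) and new_results.csv (the fresh run with a header):
head -n 1 new_results.csv > combined_results.csv     # take the header once from the new file
cat old_results.csv >> combined_results.csv          # old rows, which have no header line
tail -n +2 new_results.csv >> combined_results.csv   # new rows, skipping their header line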
References:
Results file configuration
Apache JMeter Properties Customization Guide
I have a shell script on Linux whose output is generated in .csv format.
At the end of the script I convert the .csv to .gz to reduce the space used on my machine.
The generated file comes in this format: Output_04-07-2015.csv
The command I have written to compress it is:
gzip Output_*.csv
But I am facing an issue: if the compressed file already exists, the script should create a new file with the reported timestamp instead.
Can anyone help me with this?
If all you want is to just overwrite the file if it already exists, gzip has a -f flag for it.
gzip -f Output_*.csv
The -f flag forces gzip to create the compressed file, overwriting any existing .gz file with the same name.
Have a look at the man pages by typing man gzip, or this link, for many other options.
If instead you want to handle it more elegantly, you can do so in the shell script itself, for example by appending a timestamp to the archive name when one already exists; the exact syntax differs depending on which shell you use (bash, csh, etc.). A sketch is shown below.
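A minimal bash sketch (assuming bash and the Output_*.csv naming from the question; the time format is just an example): if the default archive name is already taken, the current time is appended so the existing .gz is left untouched:
for f in Output_*.csv; do
  target="$f.gz"
  if [ -e "$target" ]; then
    # an archive with this name already exists, so add the current time to the new one
    target="${f%.csv}_$(date +%H-%M-%S).csv.gz"
  fi
  gzip -c "$f" > "$target" && rm "$f"   # compress to the chosen name, then remove the original
done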
I am having difficulty importing large datasets into Couchbase. I have experience doing this very quickly with Redis via the command line, but I have not seen anything comparable for Couchbase.
I have tried using the PHP SDK, and it imports about 500 documents per second. I have also tried the cbdocloader script in the Couchbase bin folder, but it seems to want each document in its own JSON file. It is a lot of work to create all these files and then load them. Is there some other import process I am missing? If cbdocloader is the only way to load data fast, is it possible to put multiple documents into one JSON file?
Take the file that has all the JSON documents in it and zip up the file:
zip somefile.zip somefile.json
Place the zip file(s) into a directory. I used ~/json_files/ in my home directory.
Then load the file or files by the following command:
cbdocloader -u Administrator -p s3kre7Pa55 -b MyBucketToLoad -n 127.0.0.1:8091 -s 1000 \
~/json_files/somefile.zip
Note: '-s 1000' is the memory size. You'll need to adjust this value for your bucket.
If successful you'll see output stating how many documents were loaded, success, etc.
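One optional way to double-check the result (assuming the same credentials and host as above, and a standard Couchbase REST endpoint) is to ask for the bucket's statistics:
curl -s -u Administrator:s3kre7Pa55 http://127.0.0.1:8091/pools/default/buckets/MyBucketToLoad
The returned JSON includes a basicStats.itemCount field, which should match the number of documents you loaded.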
Here is a brief script to load up a lot of .zip files in a given directory:
#!/bin/bash
# Run cbdocloader on every .zip archive in JSON_Dir
JSON_Dir=~/json_files
for ZipFile in "$JSON_Dir"/*.zip ; do
  /Applications/Couchbase\ Server.app/Contents/Resources/couchbase-core/bin/cbdocloader \
    -u Administrator -p s3kre7Pa55 -b MyBucketToLoad \
    -n 127.0.0.1:8091 -s 1000 "$ZipFile"
done
UPDATED: Keep in mind this script will only work if your data is formatted correctly and each document is under the maximum single-document size of 20MB (the limit applies to each document extracted from the zip, not to the zip file itself).
I have created a blog post describing bulk loading from a single file as well and it is listed here:
Bulk Loading Documents Into Couchbase
I create a snapshot of my qcow2 image file like this:
qemu-img snapshot -c before_update Server_sda_qcow2.img
After I have updated the system and everything is working well, I want to
write the snapshot back to the base file and delete the snapshot.
I tried
qemu-img snapshot -a before_update Server_sda_qcow2.img
But it seems that it won't work.
How can I achieve this with qemu-img?
I solved the problem myself:
In my case I only needed to delete the snapshot:
qemu-img snapshot -d before_update Server_sda_qcow2.img
And when I want to bring the base file back to the snapshot, I have to do:
qemu-img snapshot -a before_update Server_sda_qcow2.img
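To see which internal snapshots the image currently contains (useful before and after these commands), you can list them:
qemu-img snapshot -l Server_sda_qcow2.img
After the -d step above, before_update should no longer appear in that list.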