I can't install the VirtualBox Guest Additions in the latest build of Google Chrome OS. When I run the installer, I get the following error:
Verifying archive integrity... All good.
Uncompressing VirtualBox 4.1.8 Guest Additions for Linux.........
VirtualBox Guest Additions installer
mkdir: cannot create directory `/opt/VBoxGuestAdditions-4.1.8': Read-only file system
tar: /opt/VBoxGuestAdditions-4.1.8: Cannot chdir: No such file or directory
tar: Error is not recoverable: exiting now
What do I do now? Can I mount the filesystem in read-write mode? Does the Lime build support the guest additions? I'm using the Vanilla build.
Host OS: Mac OS X Lion (10.7)
Guest OS: Google Chrome OS Vanilla from http://chromeos.hexxeh.net/
VirtualBox version: 4.1.8
So someone on the VirtualBox IRC channel irc://chat.freenode.net/#vbox told me that the guest additions won't work on Chrome OS.
After trying the suggestion from @sarnold, running mount -oremount,rw /, the installer reported Unable to determine your Linux distribution
If you want to try remounting the filesystem as read-write, the command is:
mount -oremount,rw /
But there might be a good reason for / to be mounted read-only. I doubt the VirtualBox guest tools care where they are installed, so if you just unpack the archive using tar or ar or whatever is necessary, you can probably install them somewhere that is mounted read-write and configure them appropriately.
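For the unpack-it-elsewhere route, something like this might work (a sketch only: the installer file name and the writable target directory are assumptions that depend on your setup):
sh ./VBoxLinuxAdditions.run --noexec --target /home/chronos/vbox-additions
ls /home/chronos/vbox-additions
This relies on the .run file being a makeself archive, which accepts --noexec and --target to unpack the contents without running the bundled install script.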
If you don't mind using dev mode: I was able to run a Parrot OS VM with QEMU and KVM on a Pixelbook. I used the change kernel flags script from the crouton repo, then installed the qemu and kvm packages normally in my Debian 9 crouton chroot. virt-manager doesn't work, but I can create a hard drive image from the CLI and boot a VM from the CLI, and it all works, albeit a bit slowly even with KVM, probably because even a Pixelbook has low resources compared to a normal laptop.
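A rough sketch of that CLI workflow (the ISO name, memory, and disk size here are illustrative, not the exact values used):
qemu-img create -f qcow2 parrot.qcow2 20G
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 -hda parrot.qcow2 -cdrom parrot-security.iso -boot d
The first command creates the hard drive image; the second boots the VM from the ISO with KVM acceleration enabled.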
I've installed qemu and libvirt on macOS Big Sur with M1 using MacPorts. When I tried to install libvirt with Homebrew, it was broken (it asked for /proc/cpuinfo, lol). libvirt started as a daemon and I've got libvirt-sock in the Unix-socket directory, but there is no virtqemud-sock.
Actions (all with sudo):
port install qemu
port install libvirt
virsh -c qemu:///system
Result:
error: unable to connect to socket «/opt/local/var/run/libvirt/virtqemud-sock»: No such file or directory
Libvirt has two ways of running: the monolithic daemon (libvirtd) is the traditional approach, and the modular daemons (virt${XXXX}d for varying ${XXXX}) are the new way. The libvirt client here is attempting to connect to the modular daemon virtqemud, but it seems like you've started the monolithic libvirtd.
More guidance can be found at
https://libvirt.org/daemons.html
https://libvirt.org/uri.html#mode-parameter
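As a sketch based on the mode parameter described in the second link, you can force the client to talk to the monolithic libvirtd socket instead of looking for virtqemud:
virsh -c 'qemu:///system?mode=legacy' list --all
Alternatively, start the modular virtqemud daemon so the default connection path works; whether MacPorts installs and launches it (and where its binary lives) is something you'd need to check.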
I am trying to use qemu on a Windows machine to host Android x86. I am using the following command to start qemu:
qemu-system-x86_64.exe -vga std -m 2048 -smp 2 -soundhw ac97 -net nic,model=e1000 -net user -cdrom android-x86_64-8.1-r1.iso -hda android.img -accel haxm
I am having a problem enabling either whpx or haxm and no matter what I do the result is the same: qemu complains that
-machine accel=haxm: No accelerator found. The same for whpx.
I made sure that Intel virtualisation (VT-x) is enabled in the BIOS, that both Windows Hypervisor Platform and Hyper-V are installed from Turn Windows Features On or Off, and I installed HAXM using the Visual Studio 2017 installer, the Android Studio installer, and the standalone installer downloaded straight from Intel's webpage. Nothing worked.
What I find amusing is that Android Studio and VS both were able to run their emulators just fine with either haxm or whpx enabled. It's just qemu that is stubborn.
What else should I do to be able to use either one of those? If I omit the -accel option, qemu starts just fine but the performance is horrible.
Note that I did not have multiple versions of HAXM installed at the same time, nor did I have Hyper-V enabled when trying to use haxm, and vice versa.
The option to enable HAXM is -accel hax not -accel haxm
-machine accel=haxm: No accelerator found means QEMU doesn't know about the requested accelerator.
If HAXM itself were not working, the error would be something like this:
Failed to open the HAX device!
Open HAX device failed
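For reference, the command from the question with only the accelerator option corrected (use -accel whpx instead if you want the Windows Hypervisor Platform):
qemu-system-x86_64.exe -vga std -m 2048 -smp 2 -soundhw ac97 -net nic,model=e1000 -net user -cdrom android-x86_64-8.1-r1.iso -hda android.img -accel hax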
Environment:
NAME="Red Hat Enterprise Linux Atomic Host"
VERSION="7.3"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Atomic Host"
VARIANT_ID=atomic.host
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux Atomic Host 7.3"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:atomic-host"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION=7.3
OpenShift-Ansible Version:
3.6.11-1
This is an openshift-ansible setup with Atomic hosts, so OpenShift itself is containerized.
Question:
Has anyone configured OpenShift for MS Active Directory using OpenShift-Ansible? I found this reference, but it implies that the OpenShift master node service runs under systemd:
http://www.greenreedtech.com/openshift-origin-active-directory-integration/
Any suggestions?
(Unfortunately, I don't have the ability to test this, but) the OpenShift documentation says that
The installation process creates relevant systemd units which can be used to start, stop, and poll services using normal systemctl commands.
So I'd expect that the command systemctl restart origin-master should work (except that in your case it will be atomic-openshift-master).
It also says that
configuration files are placed in the same locations during containerized installation as RPM based installations
so I'd expect that this instruction would work.
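A hedged sketch of what that would look like on the master host; the unit name follows the quoted documentation, and the config path is the usual OpenShift 3.x location, which I'm assuming also applies to the containerized install:
vi /etc/origin/master/master-config.yaml
sudo systemctl restart atomic-openshift-master
sudo systemctl status atomic-openshift-master
The first command is where the identity provider configuration from the linked article would go; the other two restart and check the containerized master via its systemd unit.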
While using the VMware Tools distributed with VMware Player 12, I always got the error 'unable to start vmware-tools.service execution aborted', no matter what I tried (I even reinstalled the guest OS). Finally I gave up and tried the open-vm-tools distributed with Fedora. It simply works by typing a single command, like this:
vmhgfs-fuse /mnt/hgfs
and then I found my shared folder name in hgfs!
Amazing!
If you're still puzzled by the official VMware Tools, and driven crazy by all kinds of weird issues it causes, maybe you could try this approach.
Just one step to share your files between the host OS and the guest OS.
PS: The way to set up the shared folder is omitted.
As it turns out, Fedora 24 already ships with an open-source version of VMware Tools, called open-vm-tools (and open-vm-tools-desktop). One way you can access shared folders is by opening a terminal and typing the following (as root or with sudo):
sudo vmhgfs-fuse /mnt/hgfs
Then, your shared folders should be accessible in the hgfs folder.
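If you want non-root users to be able to see the share, or want to be explicit about the source, a variant like this should also work (a sketch using standard vmhgfs-fuse/FUSE options):
sudo vmhgfs-fuse .host:/ /mnt/hgfs -o allow_other
ls /mnt/hgfs
Here .host:/ names all shared folders exported by the host, and allow_other lets users other than the one who mounted it access the files.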
I'm trying to use cuda to accelerate tensorflow. I'm running tensorflow using the docker image.
Firstly, when I launch the GPU image, the LD_LIBRARY_PATH environment variable points to directories that don't exist:
~# echo $LD_LIBRARY_PATH
/usr/local/nvidia/lib:/usr/local/nvidia/lib64:
root@d578acbbc2cd:~# ls /usr/local/
bin cuda cuda-7.0 etc games include lib man sbin share src
There's no nvidia directory there. When I try to run the convolutional.py demo, it can't initialise the cuda support:
# python models/image/mnist/convolutional.py
Succesfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Succesfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Succesfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Succesfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 8
modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.2.0-23-generic/modules.dep.bin'
E tensorflow/stream_executor/cuda/cuda_driver.cc:466] failed call to cuInit: CUDA_ERROR_UNKNOWN
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:98] retrieving CUDA diagnostic information for host: d578acbbc2cd
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:106] hostname: d578acbbc2cd
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:131] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:242] driver version file contents: """NVRM version: NVIDIA UNIX x86_64 Kernel Module 352.68 Tue Dec 1 17:24:11 PST 2015
GCC version: gcc version 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2)
"""
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:135] kernel reported version is: 352.68
I tensorflow/core/common_runtime/gpu/gpu_init.cc:112] DMA:
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 8
It then goes on to train using cpu only.
# find /usr -name libcuda.so
/usr/lib/x86_64-linux-gnu/libcuda.so
So in the Docker image, there's only that one libcuda.so under /usr/lib/x86_64-linux-gnu and no NVIDIA driver files. In the host Ubuntu 15.10 session, I have libcuda.so installed:
$ find /usr -name libcuda.so
/usr/lib/x86_64-linux-gnu/libcuda.so
/usr/lib/i386-linux-gnu/libcuda.so
/usr/local/cuda-7.5/targets/x86_64-linux/lib/stubs/libcuda.so
So these seem to be stubs ... not sure why.
Is there some trick to getting this to work?
Try rebuilding the Docker image directly from the Tensorflow repository (i.e. don't rely on the image on the container registry) and use https://github.com/NVIDIA/nvidia-docker to run the container (the Docker command described in the Tensorflow documentation is not portable).
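A sketch of that approach; the Dockerfile location inside the TensorFlow repository has moved between releases, so treat the exact paths as assumptions:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
docker build -t tensorflow-gpu -f tensorflow/tools/docker/Dockerfile.gpu .
nvidia-docker run -it tensorflow-gpu /bin/bash
Running the container through nvidia-docker is what maps the host's NVIDIA driver libraries (including libcuda.so) and device nodes into the container, which a plain docker run does not do.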
I had a similar problem, though not in Docker. The libcuda.so in /usr/local/cuda/lib64/stubs was a broken symlink. When I searched for libcuda.so, it only turned up a file in a lib32 folder.
It seems that the problem was how I originally installed the NVIDIA device driver. At some point in the driver install process you're given the option to install the lib32 drivers. I had thought this meant in addition to lib64 drivers so I selected it. Turns out it only installs lib32 and not lib64 drivers.
I reinstalled the NVIDIA device driver, this time not selecting the lib32 'option'. Now TensorFlow finds libcuda.so.
I had the same problem with running TensorFlow on an Ubuntu machine after I upgraded my driver to 352.63 and 352.93. (I remember it worked with 346.*, but when I try to install 346.*, it installs 352.* automatically for some reason.)
I finally figured out that it was caused by a permission issue (I can run it as root). So I changed the permissions of the libcuda.so.352.63 file to be executable by anyone, and it works well now.
Hope this will be helpful to those still struggling with this issue.
I didn't try the Docker one, but I guess it's also caused by a permission setting.
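A sketch of the permission fix described above; the exact library path and version suffix depend on your driver installation:
sudo chmod a+rx /usr/lib/x86_64-linux-gnu/libcuda.so.352.63
ls -l /usr/lib/x86_64-linux-gnu/libcuda.so*
The second command just confirms which libcuda.so files exist and what permissions they ended up with.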
Try this command
sudo apt-get install nvidia-modprobe
As mentioned here:
https://github.com/tensorflow/tensorflow/issues/394
and
http://kkjkok.blogspot.in/2016_08_01_archive.html
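After installing, a quick sanity check (in the environment where TensorFlow actually runs) is to confirm that the NVIDIA device nodes exist and the driver responds:
ls -l /dev/nvidia*
nvidia-smi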
After I updated the NVIDIA driver to 378.09 on Ubuntu 14.10, I had the same error, although all the rights for the lib files were set correctly.
Thanks to @PhoenixQ, I tried to run with sudo and it worked.
After that I tried to run without sudo one more time and the error disappeared. I'm not sure what exactly happened, but maybe something was configured during the call with sudo that was not possible without sudo.
So the solution:
Try to run the same thing with sudo.
After this, try running without sudo. That worked for me.