How to set up and save qemu running options - qemu

I'm using qemu to replace bochs (since it isn't updated anymore).
In bochs, I can save the running settings into files and reload them. Furthermore, a table of running options is listed at boot.
I'm wondering if I can do the same with qemu: save running settings such as the CPU model and other options to a file and reload them the next time I run an emulation.
Is there also a complete table of running options somewhere, so I can get a full view of which options I can set?
Thanks a lot!

For this sort of UI and management of VMs you should look at a "management layer" program that sits on top of QEMU; libvirt's "virt-manager" is one common choice here. A management layer will generally allow you to define options for a VM and save them, so you can start and stop that VM without having to specify all the command line options every time. It will also configure QEMU in a more secure and performant way than you get by default, something that otherwise requires rather long QEMU command lines.
QEMU itself doesn't provide this kind of facility because its philosophy is to just be the low-level tool which runs a VM, and leave the UI and persistent-VM-management to other software which can do a better job of it.
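As an illustration of how a management layer persists the configuration, here is a minimal sketch using libvirt's command-line tools; the VM name, sizes and image paths below are made up:

# Define and install the VM once (hypothetical name, sizes and paths):
virt-install --name demo-vm --memory 2048 --vcpus 2 \
    --disk path=/var/lib/libvirt/images/demo-vm.qcow2,size=10 \
    --cdrom /path/to/installer.iso --os-variant generic
# libvirt stores the definition as XML, so later runs need no long command line:
virsh start demo-vm
virsh shutdown demo-vm
virsh dumpxml demo-vm    # inspect the saved configuration

virt-manager presents the same saved definitions graphically.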

Related

KVM/QEMU No bootable device 0003 error and solution

I am new to kvm, so maybe a lot of you will find this silly, but I would like to describe the problem I had.
When I typed
kvm
at the prompt, the 'no bootable device' error with code 0003 occurred.
This is because your command line is (implicitly) asking to create a virtual machine with no disk. The guest BIOS then looks for disks or CDROMs that it can boot from, and finds none. This is exactly the same behaviour as if you had a real hardware PC, and powered it up with no disks in it.
In general, the QEMU command line can be long and complicated, especially if you want best performance from the virtual machine. For most users it's often best to use a "management layer" program like libvirt, which takes care of these details for you, rather than trying to run QEMU directly.
The easiest fix, which let me open the virtual machine through the terminal, was just to find its disk image:
sudo find /var/lib/libvirt/images/
Choose the correct one and copy/paste the path into the command:
sudo kvm /var/lib/libvirt/images/Centos.qcow2
And that's it.
I couldn't find this in the qemu or kvm manuals, but it is that easy; the hours of config changes and bus-type experiments were not necessary.
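For reference, on Debian-style systems the kvm command is a thin wrapper around qemu-system-x86_64 -enable-kvm, so a more explicit form of the same invocation looks roughly like this (the memory size and virtio bus are assumptions, the image path is the one found above):

sudo qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/var/lib/libvirt/images/Centos.qcow2,format=qcow2,if=virtio

Dropping if=virtio falls back to the default emulated IDE disk, which is slower but needs no virtio drivers in the guest.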

How to profile CUDA code on a headless node?

I'm working on a CUDA application I'd like to profile. Up to now all I've used is the command line profiler, nvprof, which just displays summarized statistics.
I thought about using the GUI profiler, NVVP. The problem is that the remote Linux node I'm running the application on doesn't have any GUI stack (not even X.org). Moreover, even if I managed to get some X11 stack onto the remote node, keeping my own laptop alive for the whole duration of the profiling would be, well, tedious.
I tried collecting all the needed information in the following way:
nvprof --analysis-metrics -o application.nvprof ./myapplication
Then I copy the output file onto my laptop and view it in NVVP. This has three problems, though.
First of all, I don't get any file transfer information when I load the output file into NVVP. It's not shown at all in the NVVP window.
Secondly, the call graph is completely distorted. The gaps between kernel launches are at least 100x bigger than the kernel durations, which makes any dependency and flow analysis impossible.
Lastly, my application uses a lot of the GPU memory. During the profiling the device runs out of memory, which is not the case during a standalone run.
How should I properly profile my CUDA application on a headless node?
NVVP supports headless nodes as first-class citizens. Remote profiling is a major feature of NVVP.
The way this works is that NVVP runs on your local GUI-enabled host machine and invokes nvprof on the headless machine, generates the required files there, copies the files over, and opens them. All of this happens transparently and automatically. You can run further analyses from NVVP as usual and it will repeat these steps for you.
To use remote profiling, open NVVP, then File->New Session. Add a Connection instead of using Local, putting in details of the headless machine. Click on Manage... to point NVVP to the toolkit path on the remote machine. Once this one-time setup is done, enter the path to the executable and run as usual.
You can read about remote profiling in the relevant documentation.
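One likely cause of the distorted timeline and the extra memory pressure is that --analysis-metrics replays each kernel several times to collect all the metrics. A hedged sketch of splitting the collection into two passes (the output file names are made up), whether you run it by hand or let a remote session do it:

# Pass 1: timeline only, low overhead, gaps between kernels stay representative:
nvprof -o timeline.nvprof ./myapplication
# Pass 2: metrics for NVVP's guided analysis (kernels are replayed, so timing is not representative):
nvprof --analysis-metrics -o metrics.nvprof ./myapplication

Both files can then be imported into NVVP on the laptop.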

Can a TeamCity build agent be configured to only run builds with a particular parameter dependency?

I have a TeamCity build agent installed on a machine which in theory is dedicated to running dynamic security scans and I don't want it doing anything else (i.e. running the duplicates finder).
Short of either creating custom agent configuration properties and then customising each build's agent dependencies (which perhaps, strictly speaking, I should be doing anyway) or configuring the agent to only run selected configurations, is there any way to avoid this? Both of these approaches require additional configuration for every single build.
In a perfect world, I'd like to be able to tell the agent to only ever run builds which match a particular agent dependency. Is this possible or am I coming at it from the wrong direction?
I'm afraid TeamCity doesn't provide a way to specify that an agent can run only configurations with a specific property (and not run other configurations).
So there are only two ways to restrict agents: either with agent requirements, or by configuring the agent to only run selected configurations.
You could probably try to make some batch change in your build configuration properties, because all build configuration settings/properties are stored in XML files on disk.
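For completeness, the agent-requirement route from the question is a one-line change on the agent side; the property name here is made up:

# conf/buildAgent.properties on the security-scan machine (hypothetical property name):
agent.purpose=security-scans

The security scan configurations would then add an agent requirement that agent.purpose equals security-scans, while every other configuration would need a requirement that the property does not exist, which is exactly the per-build overhead the question is trying to avoid.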
In current versions of TeamCity (e.g. 8.1) you can create a pool just for your security machine, and only assign the one machine to that pool, remembering to remove it from other pools.
Then you can assign the security project to that pool. That should solve your problem.

Hudson slaves, how to access workspace

How do I configure the system to have one master and multiple slaves when building normal C code with gmake? How can slaves access the workspace from the master? I guess an NFS share is the way to go, but if that's not possible, are there any other options?
http://wiki.hudson-ci.org/display/HUDSON/Distributed+builds is there, but I cannot understand how workspace sharing is handled.
Rsync? From the master: SCM job -> done -> rsync to all slaves -> build job, and if it was done on a slave -> rsync the workspace back to the master?
Any proof-of-concept or real-life solutions?
When Hudson runs a build on a slave node, it does a checkout from source control on that node. If you want to copy other files over from the master node, or copy other items back to the master node after a build, you can use the Copy to Slave plugin.
It's surely a late answer, but may help others.
I'm currently using the "Copy Artifact plug-in" with great results.
http://wiki.hudson-ci.org/display/HUDSON/Copy+Artifact+Plugin
(https://stackoverflow.com/a/4135171/2040743)
Just one way of doing things, others exist.
Workspaces are actually not shared when distributed to multiple machines, as they exist as separate directories on each of the machines. To coordinate items, anything that needs to be distributed from one workspace to another is copied into a central repository via SCP.
This means that sometimes I have a task which needs to wait on the items landing in the central repository. To fix this, I have the task run a shell script which polls the repository via SCP for the presence of the needed items, and it errors out if the items aren't available after five minutes.
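A minimal sketch of such a polling script, assuming POSIX sh, key-based SSH, and made-up host and path names (Hudson's BUILD_NUMBER environment variable keeps the path per-build, as mentioned below):

#!/bin/sh
# Poll the central repository for this build's artifact; give up after 5 minutes.
HOST=repo.example.com
REMOTE=/srv/artifacts/build-${BUILD_NUMBER}/app.tar.gz
tries=0
while [ "$tries" -lt 30 ]; do
    if scp -q "$HOST:$REMOTE" . ; then
        exit 0                      # item has arrived, let the build continue
    fi
    tries=$((tries + 1))
    sleep 10                        # 30 attempts x 10 s = 5 minutes
done
echo "Timed out waiting for $REMOTE" >&2
exit 1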
The only downside to this is that you need to pass around a parameter (build number) to keep the builds on the same page, preventing one build from picking up a previous version built artifact. That and you have to set up a lot of SSH keys to avoid the need to pass a password in when running the SSH scripts.
Like I said, not the ideal solution, but I find it is more stable than the ssh artifact grabbing code for my particular release of Hudson (and my set of SSH servers).
One downside: the SSH servers on most Linux machines seem to really lack performance. A solution like mine tends to swamp your SSH server with a lot of connections coming in at about the same time. If you find the same happening, you can add timer delays (an easy, imperfect solution) or you can rebuild the SSH server with high-performance patches. One day I hope the high-performance patches make their way into the SSH server base code, provided they don't negatively impact SSH server security.

Bring Hudson slave nodes online at certain times

I am setting up a number of slaves to my Hudson master, grouped by labels. I would like to be able to have a set of nodes that run during the day and an additional set of nodes that are turned on during the evening.
Is this possible, either directly by hudson or via plugin or script? If so what is your recommended solution?
There is an experimental feature to schedule when each slave should be available. It is in core, but you have to set a system property to enable it. So if you start Hudson with
java -Dhudson.scheduledRetention=true -jar hudson.war
You will get an extra configuration option on each node, allowing you to specify a schedule of when that node should be used.
Let the OS (or any other scheduler) control the start and stop of a node. Hudson only uses what's available. Not sure how Hudson acts if a node dies while running a job.
Update: The feature that Michael Donohue mentioned is not experimental anymore and is available for all nodes (I use the SSH node). Works great (at least the "take only if needed" feature).
Expanding on what @Peter Schuetze said...
Unless the nodes are VMs that you want Hudson to manage (see the VMware plugin), the start and stop operations are out of Hudson's control. Depending on how you have your slaves set up, Hudson may just automatically connect when it sees the node is online, or you may need to make sure the slave runs something at startup.
You can use the Hudson API (generally HTTP POSTs to URLs on the Hudson master) to tell Hudson that nodes are going offline ahead of time. This will help avoid builds that get killed when the node goes down. Check out the HTML source on the node's page (http://hudson/computer/node_name) to see what the web interface does for the "mark offline" and "disconnect" operations.
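As a sketch of what that can look like, one plausible form is below; the exact URL and parameters vary between Hudson versions, so treat this as an assumption and verify it against the HTML source as described above:

# Hypothetical call to mark a node offline before its scheduled shutdown
# (add credentials with curl's -u option if your Hudson requires authentication):
curl -X POST "http://hudson/computer/node_name/toggleOffline?offlineMessage=scheduled+shutdown"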