virsh - difference between pool-define-as and pool-create-as - libvirt

I am not sure if this is the right forum; the libvirt page linked here. If this needs to be posted at a different place, please let me know.
What is the difference between virsh pool-define-as and pool-create-as? Reading the virsh man page, it seems you avoid having to run pool-build and pool-start when using pool-create-as. Is that the only difference? My testing indicates that both pick up existing files (in the case of pool type dir) as volumes. Am I missing anything?
Thanks,
Ashok

Objects in libvirt can be either transient or persistent. A transient object only exists for as long as it is running, while a persistent object exists all the time. Essentially with a persistent object the XML configuration is saved by libvirt in /etc/libvirt.
So in the case of storage pools, if you use 'virsh pool-define-as' you'll be creating a configuration file for a persistent storage pool. You can later start this storage pool using 'virsh pool-start', stop it with 'virsh pool-destroy' and start it again later, or even set it to auto-start on host boot.
If you want a transient storage pool, you can use 'virsh pool-create-as', which will immediately start a storage pool without saving its config on disk. This storage pool will totally disappear when you do 'virsh pool-destroy' (though the actual storage will still exist; libvirt simply won't know about it). With a transient storage pool, you obviously can't make it auto-start on boot, since libvirt doesn't know about its config.
As a general rule, most people/apps will want to use persistent pools.
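To make that concrete, a minimal sketch (the pool names and target paths are just examples):

    # Persistent pool: define it, then build, start and optionally autostart it
    virsh pool-define-as images dir --target /var/lib/libvirt/images
    virsh pool-build images
    virsh pool-start images
    virsh pool-autostart images

    # Transient pool: created and started in one step, config never saved to disk
    virsh pool-create-as scratch dir --target /var/tmp/scratch-pool
    virsh pool-destroy scratch    # pool disappears from libvirt, files stay on disk

In both cases 'virsh vol-list <pool>' should show existing files in the target directory as volumes, which matches what you observed.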

Related

What if log file size exceeds my claimed Persistent volume?

I am working on logging for my application to a Persistent Volume.
I am using OpenShift. I created storage (a Persistent Volume under the nas-thin class) and allocated 64GiB to it. I added a mount path for this PV to one of my pods, where my application is running and generating logs in a folder named "logs".
My mount path is "/logs", so anything inside this folder sits at the root of my PVC.
I am appending my logs to a single file inside the logs folder.
I tried to read about expanding PVs but couldn't understand much.
What would happen if my log file size exceeds the allocated PV size (which is 64GiB)?
That will depend on the persistent storage actually being used by the cluster. Some persistent volume providers will let you write more data than you actually defined. So you'll have to test how your storage and your application actually behave on your particular cluster.
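If you want to find out empirically, here is a rough sketch (the pod name is a placeholder; the /logs mount path is taken from your description):

    # Check how much space the mounted volume currently reports
    oc exec my-app-pod -- df -h /logs

    # Fill the volume past 64GiB with a throwaway file and watch what happens
    oc exec my-app-pod -- dd if=/dev/zero of=/logs/fill.test bs=1M count=70000
    oc exec my-app-pod -- rm /logs/fill.test

Depending on the provisioner, the dd will either fail with "No space left on device" once the volume's limit is reached, or keep writing into thin-provisioned backing storage.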
That being said, it is generally a bad idea to have container workloads log to a persistent volume. I would strongly recommend logging to STDOUT and then using an external logging system to manage your logs, instead of writing to a file.
How will you deal with multiple replicas of your application running? Do you really want to go into each container to get the log files? How will you correlate logs between different replicas?
Applications running on OpenShift / Kubernetes should not manage their logs in files but write to STDOUT.

MySQL with Node.js

I am creating a REST API that uses MySQL as the database. My confusion is: should I connect to the database on every request and release the connection at the end of the operation, or should I connect to the database at server startup, make the connection globally available, and forget about releasing it?
I would caution that neither option is quite wise.
The advantage of creating one connection for each request is that those connections can interact with your database in parallel, which is great when you have a lot of requests coming through.
The disadvantage (and the reason you might just create one connection on startup and share it) is obviously the setup cost of establishing a new connection each time.
One option to look into is connection pooling https://en.wikipedia.org/wiki/Connection_pool.
At a high level you can establish a pool of open connections on startup. When you need to make a request remove one of those connections from the pool, use it, and return it when done.
There are a number of useful Node packages that implement this abstraction, you should be able to find one if you look.
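For example, a minimal sketch with the mysql2 package (the connection details and the users table are placeholders, not anything from your setup):

    import mysql from 'mysql2/promise';

    // One pool for the whole process, created once at startup
    const pool = mysql.createPool({
      host: 'localhost',
      user: 'app',
      password: 'secret',
      database: 'mydb',
      connectionLimit: 10, // maximum simultaneous connections
    });

    // Each request borrows a connection, runs its query, and returns it to the pool
    export async function getUser(id: number) {
      const [rows] = await pool.query('SELECT * FROM users WHERE id = ?', [id]);
      return rows;
    }

pool.query() handles the borrow/return for you; if you ever need a transaction, use pool.getConnection() and release the connection in a finally block.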

Infinispan 5.3.0 - Change cluster configuration

I have a cluster with four nodes.
If I stop one node and edit its configuration file (adding a new replicated cache), will the cluster have the new replicated cache when I start the node again?
Is it necessary to change the configuration file on the other three nodes?
Regards.
a) Yes, the new replicated cache will be created on the node. However, if you have the same cache (name) with different configurations, you're asking for trouble.
b) No, the configuration on the other nodes will not change. You have to change it manually, either by stopping the nodes or by running a rolling upgrade.
You may also look into the JMX operations for starting/stopping a cache, but these do not allow you to change the configuration (I am not 100% sure whether starting a cache with an unknown name would start a new cache with the default configuration).
If you have programmatic access to the CacheManager, you can start a cache with a configuration provided programmatically.
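For example, a minimal sketch of that programmatic route (the cache name is made up; the API is the Infinispan 5.x ConfigurationBuilder):

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.EmbeddedCacheManager;

    public class AddReplicatedCache {
        // Define and start a new replicated cache on this node's running cache manager
        static void addCache(EmbeddedCacheManager cacheManager) {
            Configuration replCfg = new ConfigurationBuilder()
                    .clustering().cacheMode(CacheMode.REPL_SYNC)
                    .build();
            cacheManager.defineConfiguration("myNewReplicatedCache", replCfg);
            cacheManager.getCache("myNewReplicatedCache"); // getCache() starts the cache
        }
    }

Note that this only defines and starts the cache on the node where you run it; the other nodes still need the same definition before they will hold the replicated data.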

How to restore a VM instance from the terminated state?

I halted the Linux VM instance for a couple of days. I don't see any option to turn it back on. I hit reboot and it pops up an error; here is the detailed log:
"RESOURCE_NOT_READY: The resource 'projects/myproject/zones/us-central1-a/instances/myinstance' is not ready"
I still see the disk for the instance there and it is marked active. Is there any way to reactivate the instance?
thanks
Instances in a "TERMINATED" state cannot be restarted. If your instance is in this state, the only option is to delete the instance. Any data on scratch disks is gone.
Persistent Disks are independent of instances. If your disk is a Persistent Disk, you can create a new instance with the disk attached and continue using the disk.
This is now possible!
Please have a look at the instances.start API call https://developers.google.com/apis-explorer/#p/compute/v1/compute.instances.start
As of 2019-03-22, for a machine in the "terminated" state, you can choose to either "start" or "delete" it:
https://cloud.google.com/compute/docs/instances/instance-life-cycle
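For example, with the gcloud CLI (instance name and zone taken from the error message above; adjust for your project):

    gcloud compute instances start myinstance --zone us-central1-a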

Compute Engine Instance

I have created a Google Compute Engine instance with CentOS and installed some software there, such as Apache, Webmin, ActiveCollab, Gitolite, etc.
The problem is that the VM is always running out of memory because the RAM is too low.
How do I change the assigned RAM in Google Compute Engine?
Should I copy the VM to another one with more RAM? If so, will it copy all the contents of my CentOS installation?
Can anyone give me some advice on how to get more RAM without having to reinstall everything?
Thanks
The recommended approach for manually managed instances is to boot from a Persistent root Disk. When your instance has been booted from Persistent Disk, you can delete the instance and immediately create a new instance from the same disk with a larger machine type. This is similar to shutting down a physical machine, installing faster processors and more RAM, and starting it back up again. This doesn't work with scratch disks because they come and go with the instance.
Using Persistent Disks also enables snapshots, which allow you to take a point-in-time snapshot of the exact state of the disk and create new disks from it. You can use them as backups. Snapshots are also global resources, so you can use them to create Persistent Disks in any zone. This makes it easy to migrate your instance between zones (to prepare for a maintenance window in your current zone, for example).
Never store state on scratch disks. If the instance stops for any reason, you've lost that data. For manually configured instances, boot them from a Persistent Disk. For application data, store it on Persistent Disk, or consider using a managed service for state, like Google Cloud SQL or Google Cloud Datastore.
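As a rough sketch of that delete-and-recreate flow with the gcloud CLI (instance name, zone, machine type, and disk name are placeholders; double-check the flags against your gcloud version):

    # Optional: snapshot the boot disk first as a backup
    gcloud compute disks snapshot my-instance --zone us-central1-a --snapshot-names my-instance-backup

    # Delete the instance but keep its boot Persistent Disk
    gcloud compute instances delete my-instance --zone us-central1-a --keep-disks boot

    # Recreate the instance from the same disk with a larger machine type
    gcloud compute instances create my-instance --zone us-central1-a \
        --machine-type n1-standard-4 --disk name=my-instance,boot=yes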