It seems to me that the new Elastic Beanstalk Enhanced Health Overview is always complaining about memory usage.
Is there any real solution to bypass this?
In the context of enhanced health for Elastic Beanstalk environments, a low-memory warning indicates that the processes running on your instance are consuming too much memory. You should SSH into the instance and check the memory usage of the individual processes. If your processes genuinely need that much memory, you might want to switch to a bigger instance type with more RAM.
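If you just want a quick picture from inside the application itself, here is a minimal sketch, assuming your app happens to run on a HotSpot/OpenJDK JVM (for anything else, top or ps aux on the instance gives the same per-process view). It logs both the JVM's own heap usage and the physical memory the OS reports:

    import java.lang.management.ManagementFactory;

    public class MemoryReport {
        public static void main(String[] args) {
            // JVM-level view: heap currently in use vs. the configured maximum.
            Runtime rt = Runtime.getRuntime();
            long usedHeap = rt.totalMemory() - rt.freeMemory();
            System.out.printf("JVM heap used: %d MiB (max %d MiB)%n",
                    usedHeap / (1024 * 1024), rt.maxMemory() / (1024 * 1024));

            // Host-level view: total and free physical memory as the OS reports it.
            // The com.sun.management cast is available on HotSpot/OpenJDK JVMs.
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
            System.out.printf("Physical memory: %d MiB free of %d MiB%n",
                    os.getFreePhysicalMemorySize() / (1024 * 1024),
                    os.getTotalPhysicalMemorySize() / (1024 * 1024));
        }
    }

If the numbers show the instance is simply too small for the workload, that is the point at which moving to a larger instance type makes sense.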
Does it make sense to run OpenShift 3 / OKD on bare metal or on virtual machines?
What would be the pros and cons of each?
Would it not affect overall performance if it runs on virtual machines?
Basically, you are better off using virtual machines for efficient resource usage, and if you run them on a cloud platform supported by OpenShift you can also use auto scaling through its API.
On the other hand, if some tasks need to use the GPU and CPU aggressively, you are better off running those hosts on bare metal.
You can also mix and adjust the host types above according to your system requirements.
I hope this helps.
While configuring the IoT Agent for Ultralight 2.0 there is the possibility to set the Docker variable IOTA_REGISTRY_TYPE: whether to hold IoT device info in memory or in a database (mongodb by default). This is the documentation I'm referencing.
Firstly, I would like to have it set to memory: what would that imply?
Could the data be preserved only in some allocated part of memory within the Docker environment? Could I omit further variables in the configuration file, like IOTA_MONGO_HOST (the hostname of MongoDB, used for holding device information)?
The architecture of my system has a Raspberry Pi running the IoT Agent and a VM running the Orion Context Broker and MongoDB. Both are reachable because they see each other on the LAN. Is it necessary for MongoDB to be the same database for the IoT Agent and the Orion Context Broker if they are linked?
Is it possible to run the IoT Agent with the memory-only type of device information persistence (instead of the database type)? Will it have any effect on the whole running infrastructure besides the obvious lack of device data retention?
Firstly, I would like to have it set to memory: what would that imply?
There would be no need for a MongoDB database attached to the IoT Agent, but there would also be no persistence of provisioned devices in the event of disaster recovery.
Could the data be preserved only in some allocated part of memory within the Docker environment?
No
Could I omit further variables in the configuration file, like IOTA_MONGO_HOST (the hostname of MongoDB, used for holding device information)?
The Docker ENV parameters are merely overrides of the values found in the config.js within the enabler itself, so all of the ENV variables can be omitted if you are using the defaults.
Is it necessary for MongoDB to be the same database for the IoT Agent and the Orion Context Broker if they are linked?
The IoT Agent and Orion can run entirely separately and usually would use separate MongoDB instances. At least this would be the case in a properly architected production environment.
The Step-by-Step Tutorials lump everything together on one Docker engine for simplicity. A proper architecture has been sacrificed to keep the narrative focused on the learning goals. You don't need two MongoDB instances to handle fewer than 20 dummy devices.
When deploying to a production environment, try looking at the SmartSDK Recipes in order to scale up to a proper architecture: https://smartsdk.github.io/smartsdk-recipes/
Is it possible to run the IoT Agent with the memory-only type of device information persistence (instead of the database type)? Will it have any effect on the whole running infrastructure besides the obvious lack of device data retention?
I haven't checked this, but there may be a slight difference in performance, since memory access should be slightly faster. The trade-off is that you will lose the provisioned state of all devices if a failure occurs. If you need to invest in disaster recovery then MongoDB is the way to go; periodically back up your database so you can always return to a last-known-good state.
So I saw there is an option in Google Compute (I assume the same option exists with other cloud VM suppliers, so the question isn't specifically about Google Compute but about the underlying technology) to resize the disk without having to restart the machine, and I ask: how is this possible?
Even if it uses some sort of abstraction of the disk and they don't actually assign a physical disk to the VM, but just part of a disk (or parts of a number of disks), once the disk is created in the guest VM it has a certain size. How can that change without needing a restart? Does it utilize NFS somehow?
This is built directly into disk protocols these days. This capability has existed for a while, since disks have been virtualized since the late 1990s (either through network protocols like iSCSI / FibreChannel, or through a software-emulated version of hardware like VMware).
Like the VMware model, GCE doesn't require any additional network hops or protocols to do this; the hypervisor just exposes the virtual disk as if it is a physical device, and the guest knows that its size can change and handles that. GCE uses a virtualization-specific driver type for its disks called VirtIO SCSI, but this feature is implemented in many other driver types (across many OSes) as well.
Since a disk can be resized at any time, disk protocols need a way to tell the guest that an update has occurred. In general terms, this works as follows in most protocols:
Administrator resizes disk from hypervisor UI (or whatever storage virtualization UI they're using).
Nothing happens inside the guest until it issues an IO to the disk.
Guest OS issues an IO command to the disk, via the device driver in the guest OS.
Hypervisor emulates that IO command, notices that the disk has been resized and the guest hasn't been alerted yet, and returns a response to the guest telling it to update its view of the device.
The guest OS recognizes this response and re-queries the device size and other details via some other command.
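To watch the last two steps happen from inside a Linux guest, you can trigger the rescan yourself rather than waiting for the next I/O. The sketch below is illustrative only: it assumes a Linux guest whose resized disk is exposed as /dev/sda via a SCSI-style driver (such as virtio-scsi), and it needs root because it writes to sysfs.

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class DiskRescan {
        public static void main(String[] args) throws Exception {
            // Ask the SCSI layer to re-read the device's capacity,
            // the same re-query the guest performs after being alerted.
            // Equivalent to: echo 1 > /sys/class/block/sda/device/rescan
            Path rescan = Paths.get("/sys/class/block/sda/device/rescan");
            Files.write(rescan, "1".getBytes(StandardCharsets.UTF_8));

            // The kernel exposes the (possibly updated) size in 512-byte sectors.
            Path sizeFile = Paths.get("/sys/block/sda/size");
            long sectors = Long.parseLong(Files.readAllLines(sizeFile).get(0).trim());
            System.out.printf("/dev/sda now reports %d GiB%n",
                    sectors * 512 / (1024L * 1024 * 1024));
        }
    }

Note that the block device growing is only half the story: the partition and filesystem still have to be expanded separately before the guest can actually use the new space.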
I'm not 100% sure, but I believe the reason it's structured like this is that traditionally disks cannot send updates to the OS unless the OS requests them first. This is probably because the disk has no way to know what memory is free to write to, and even if it did, no way to synchronize access to that memory with the OS. However, those constraints are becoming less true to enable ultra-high-throughput / ultra-low-latency SSDs and NVRAM, so new disk protocols such as NVMe may do this slightly differently (I don't know).
I have been tasked with recommending the VM provisioning for an OpenShift production environment. The OpenShift installation documents don't really detail a lot of different options. I know that we want High Availability (which means multiple masters) but some of the things that I'm a bit confused by are:
separate hosts for etcd
infrastructure nodes
Do I need separate hosts/nodes for etcd? (The advantages seem to be performance related, but I would like to understand them better.)
Do I need separate hosts/nodes for the infrastructure components (registry, router, etc.) or can these just be hosted on the master nodes?
AFAIK etcd can be on the same host as the master unless you really have a big cluster and want to maintain etcd separately from the OpenShift cluster.
Running routers on dedicated nodes helps with high availability and reduces the chance of nodes running into health issues due to other container workloads running on the same machine. Applications inside the OpenShift cluster can keep running even if all masters go down (which may be rare), but router nodes need to be available all the time to serve traffic.
There are many reference architectures published by Red Hat; check out blog.openshift.com and also the official docs on redhat.com.
etcd and masters can be installed on the same node or separately. Here you can find some best practices for etcd. As you can see, it is recommended there that etcd be installed separately, and this is what I would suggest if you can "afford" more servers. If not, co-locating masters and etcd is, so to speak, symbiotic, in that masters are CPU intensive whereas etcd uses a lot of disk I/O and memory.
Regarding infrastructure deployments such as routers, the docker-registry, the EFK stack, metrics and so forth, the recommended deployment configuration (within your possibilities) is that masters are not schedulable and only worry about serving the API and controlling the nodes. You can then split your schedulable nodes into infrastructure and compute nodes:
Infrastructure nodes will only host applications used by the cluster itself or by other applications (e.g. GitLab or Nexus).
Worker/compute nodes will host business applications.
Having a multi-master installation with HA routers is of course the best solution, but then you have to decide how you want to provide this HA: with an external load balancer or with IP failover?
As #debianmaster mentioned, there are several reference architecture documents you can read, like this one here.
Is there any limit on the server for the number of requests served per second, or the number of requests served simultaneously? [In configuration, not due to RAM, CPU, etc. hardware limitations.]
Is there any limit on the number of simultaneous requests on an instance of CouchbaseClient in a Java servlet?
Is it best to create only one instance of CouchbaseClient and keep it open, or to create multiple instances and destroy them?
Is Moxi helpful with Couchbase Server 1.8.0 / Couchbase Java client 1.0.2?
I need this info to set up the application in production.
Thank you.
The memcached instance that runs behind Couchbase has a hard connection limit of 10,000 connections. Couchbase in general recommends that you increase the number of nodes to address the distribution of traffic at that level.
The client itself does not have a hardcoded limit in regards to how many connections it makes to a Couchbase cluster.
Couchbase generally recommends that you create a connection pool from your application to the cluster and just re-use those connections, rather than creating and destroying them over and over. In heavier-load applications, creating and destroying these connections over and over can get very expensive from a resource perspective.
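In a Java servlet environment, a common way to follow that advice is to create one CouchbaseClient when the web application starts and shut it down when it stops, so every servlet reuses the same client and its connections. Below is a minimal sketch against the 1.x client API; the cluster address, bucket name, password and attribute name are placeholders to adapt:

    import java.net.URI;
    import java.util.Arrays;
    import java.util.concurrent.TimeUnit;

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    import com.couchbase.client.CouchbaseClient;

    public class CouchbaseLifecycleListener implements ServletContextListener {

        // Context attribute under which the shared client is stored (name is arbitrary).
        public static final String CLIENT_ATTR = "couchbaseClient";

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            try {
                // One client for the whole web app; servlets should share it
                // instead of constructing and destroying their own instances.
                CouchbaseClient client = new CouchbaseClient(
                        Arrays.asList(URI.create("http://couchbase-host:8091/pools")),
                        "default", "");
                sce.getServletContext().setAttribute(CLIENT_ATTR, client);
            } catch (Exception e) {
                throw new RuntimeException("Could not connect to Couchbase", e);
            }
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            CouchbaseClient client =
                    (CouchbaseClient) sce.getServletContext().getAttribute(CLIENT_ATTR);
            if (client != null) {
                // Let in-flight operations finish before disconnecting.
                client.shutdown(10, TimeUnit.SECONDS);
            }
        }
    }

Register the listener in web.xml (or annotate it with @WebListener on Servlet 3.0+), and have servlets fetch the client from the ServletContext attribute rather than creating their own.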
Moxi is an integrated piece of Couchbase. However, it is generally in place as an adapter layer for client developers to use specifically, or to give legacy access to applications designed to access a memcached interface directly. If you are using the Couchbase client driver you won't need to use the Moxi interface.