Slow consumer detection in Apache Qpid, but why?

I found that Apache Qpid has a function that detects slow consumers and disconnects them.
This means that if I have slow connectivity to the queue or a long think time, I might never get service because I could burden the servers.
Has anyone performed experiments on this (the effect of slow consumers on QoS)?

Related

What is the difference between infura and geth?

I understand both methods are used for running dapps. What I don't understand is the clear-cut difference between the two, or how one is more advantageous than the other. I'm new to blockchain, so please explain in simple terms.
The difference is:
Infura runs a geth installation for you, exposing the most used, least CPU-intensive methods to you via the web.
You can install geth yourself, but you will need a server with about 500 GB of SSD storage, and it takes about a month to download the entire state.
If you are not going to do any serious monetary transfers, I recommend using Etherscan; it is more complete than Infura.
To execute transactions and/or queries against a blockchain, you need a connection.
Infura is an API gateway to the main network and some test networks. It supports a subset of the web3 interface. When you want to execute a transaction against the Ethereum blockchain, you can use Infura as your connection to the blockchain. In that case, you are not directly connected to Ethereum; Infura holds the connection. The MetaMask browser plugin works with Infura.
The alternative approach is to have an Ethereum client like geth or parity running on your own machine. In this case, the Ethereum client connects to several public nodes of the blockchain and forwards your transactions to the blockchain.
Depending on your architecture and requirements, either approach could be the best solution.
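
For illustration, here is a minimal web3.js sketch (the Infura project ID is a placeholder and the local RPC port assumes geth was started with its default HTTP endpoint, e.g. geth --http) showing that only the provider URL differs between the two approaches; the application code stays the same:

    import Web3 from 'web3';

    // Option 1: go through Infura's hosted gateway (project ID is a placeholder).
    const viaInfura = new Web3('https://mainnet.infura.io/v3/YOUR_PROJECT_ID');

    // Option 2: talk to your own geth node, assuming it exposes the default
    // JSON-RPC endpoint on localhost:8545.
    const viaLocalGeth = new Web3('http://127.0.0.1:8545');

    // The application logic is identical either way; only the provider differs.
    async function printLatestBlock(web3: Web3): Promise<void> {
      const blockNumber = await web3.eth.getBlockNumber();
      console.log(`Latest block: ${blockNumber}`);
    }

    printLatestBlock(viaInfura).catch(console.error);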

PM2 cluster mode: the ideal number of workers?

I am using PM2 to run my Node.js application.
When starting it in cluster mode with "pm2 start server -i 0", PM2 automatically spawns as many workers as you have CPU cores.
What is the ideal number of workers to run and why?
Beware of the context switch
When running multiple processes on your machine, try to make sure each CPU core is kept busy by a single application thread at a time. As a general rule, you should look to spawn N-1 application processes, where N is the number of available CPU cores. That way, each process is guaranteed a good slice of one core, and there's one spare for the kernel scheduler to run other server tasks on. Additionally, try to make sure the server is doing little or no work other than running your Node.js application, so processes don't fight for CPU.
We made a mistake where we deployed two busy node.js applications to our servers, both apps spawning N-1 processes each. The applications’ processes started vehemently competing for CPU, resulting in CPU load and usage increasing dramatically. Even though we were running these on beefy 8-core servers, we were paying a noticeable penalty due to context switching. Context switching is the behaviour whereby the CPU suspends one task in order to work on another. When context switching, the kernel must suspend all state for one process while it loads and executes state for another. After simply reducing the number of processes the applications spawned such that they each shared an equal number of cores, load dropped significantly:
https://engineering.gosquared.com/optimising-nginx-node-js-and-networking-for-heavy-workloads
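
To make the N-1 rule concrete, here is a minimal sketch using Node's built-in cluster module (the port and handler are illustrative; with PM2 itself you would instead pass the desired count explicitly, e.g. "pm2 start server -i 7" on an 8-core machine, rather than "-i 0" for all cores):

    import cluster from 'cluster';
    import http from 'http';
    import { cpus } from 'os';

    // Spawn N-1 workers, leaving one core for the kernel scheduler and other server tasks.
    const workerCount = Math.max(1, cpus().length - 1);

    if (cluster.isPrimary) {            // cluster.isMaster on older Node versions
      for (let i = 0; i < workerCount; i++) {
        cluster.fork();
      }
    } else {
      // Each worker runs its own copy of the server; the cluster module
      // distributes incoming connections across the workers.
      http.createServer((_req, res) => res.end('ok')).listen(3000);
    }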

Is it possible to patch clustered SQL Server without a BizTalk outage?

We have a BizTalk Server install backed by a clustered SQL environment for high availability.
However, whenever the SQL environment is patched there is a momentary outage as part of node failover. Consequently the host instances stop and BizTalk shuts down (if we move to CU2 the host instances will automatically restart, but this is a separate issue).
This is undesirable, as it prevents incoming web requests and breaks web service clients with open connections. As such, is there a strategy for gracefully patching SQL Server without a BizTalk outage?
It seems this is impossible. Marking this as the accepted answer until someone can pleasantly surprise me otherwise.

Couchbase 1.8.0 concurrency (number of concurrent requests supported in the Java client/server): scalability

Is there any limit on the server for the number of requests served per second, or the number of requests served simultaneously? [In configuration, not due to RAM, CPU, or other hardware limitations.]
Is there any limit on the number of simultaneous requests on an instance of CouchbaseClient in a Java servlet?
Is it best to create only one instance of CouchbaseClient and keep it open, or to create multiple instances and destroy them?
Is Moxi helpful with Couchbase 1.8.0 server / Couchbase Java client 1.0.2?
I need this info to set up the application in production.
Thank you.
The memcached instance that runs behind Couchbase has a hard connection limit of 10,000 connections. Couchbase generally recommends that you increase the number of nodes to address the distribution of traffic at that level.
The client itself does not have a hardcoded limit on how many connections it makes to a Couchbase cluster.
Couchbase generally recommends that you create a connection pool from your application to the cluster and reuse those connections, rather than creating and destroying them over and over. In heavier-load applications, repeatedly creating and destroying connections can get very expensive in resource terms.
Moxi is an integrated piece of Couchbase. However, it is generally in place as an adapter layer for client developers who specifically want to use it, or to give legacy access to applications designed to talk directly to a memcached interface. If you are using the Couchbase client driver, you won't need to use the Moxi interface.
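
As a sketch of the create-once/reuse pattern (shown here with a hypothetical connect function rather than the 1.0.2 Java API, where you would typically construct a single CouchbaseClient at application startup and share it across servlet requests):

    // Hypothetical client surface standing in for CouchbaseClient; the point
    // is the lifecycle, not the exact SDK API.
    interface BucketClient {
      get(key: string): Promise<unknown>;
      shutdown(): Promise<void>;
    }

    // Placeholder for the real SDK connection call.
    async function connectToBucket(nodes: string[], bucket: string): Promise<BucketClient> {
      return { get: async () => undefined, shutdown: async () => undefined };
    }

    // Create the client once per process and hand the same instance to every
    // request, instead of opening and tearing down connections per request.
    let shared: Promise<BucketClient> | undefined;

    export function getClient(): Promise<BucketClient> {
      if (!shared) {
        shared = connectToBucket(['http://node1:8091/pools'], 'default');
      }
      return shared;
    }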

Comet and long polling requests on DreamHost?

Is there any solution for running these kinds of operations on DreamHost or other shared hosting environments where I don't have access to tweak Apache?
You certainly can, but as long as the Apache HTTP Server doesn't provide non-blocking IO capabilities (and each polling connection has a server thread associated with it), you'll run out of memory very fast (after 2-3k connections).
If you meant Apache Tomcat, NIO is turned off by default, and you need access to the configuration files in order to change this.
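
For contrast, here is a minimal sketch of what an event-driven long-poll endpoint looks like on a server that does support non-blocking IO (shown with Node's built-in http module purely for illustration; this is not something a default shared Apache setup gives you):

    import http from 'http';

    // Pending long-poll responses are just objects held in an array; no server
    // thread is parked per connection, so many idle clients cost little memory.
    const waiting: http.ServerResponse[] = [];

    http.createServer((req, res) => {
      if (req.url === '/poll') {
        waiting.push(res);                  // hold the response open
      } else if (req.url === '/publish') {
        while (waiting.length > 0) {
          waiting.pop()!.end('new data');   // complete every held response
        }
        res.end('published');
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(3000);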