Obtaining a Pool ID to deploy OpenShift Origin

I'm trying to automate the deployment of OpenShift Origin into AWS, because it's a dependency of another product which I also need to deploy on demand. There are various solutions for this, but they all require a Pool ID at some point in the process. What is a Pool ID? I realise it's associated with a Redhat subscription, but can I script the generation of a Pool ID? And if so, is it necessary to treat it as a secret?

You can obtain the pool IDs of the available subscriptions with:
subscription-manager list --available --pool-only
If you have many subscriptions, you can filter the result with the --matches option (the filter can contain wildcards):
--matches=FILTER_STRING
lists only subscriptions or products containing the
specified expression in the subscription or product
information, varying with the list requested and the
server version (case-insensitive).
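For scripting this, you can capture a pool ID and attach it in one step; a minimal sketch, assuming the host is already registered (subscription-manager register) and that the relevant subscription name contains "OpenShift" (adjust the --matches pattern to your own subscription):
# pick the first pool whose subscription/product information matches the pattern
POOL_ID=$(subscription-manager list --available --pool-only --matches="*OpenShift*" | head -n 1)
subscription-manager attach --pool="$POOL_ID"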


How does Zookeeper manage node roles in other clusters?

My understanding is that Zookeeper is often used to solve the problem of "keeping track of which node plays a particular role" in a distributed system (e.g. master node in a DB or in a MapReduce cluster, etc).
For simplicity, say we have a DB with one master and multiple replicas and the current master node in the DB goes down. In this scenario, one would, in principle, make one of the replica nodes a new master node. At this point my understanding is:
If we didn't have Zookeeper
The application servers may not know that we have a new master node, so they would not know where to send writes unless we have some custom logic on the app server itself to detect / correct this problem.
If we have Zookeeper
Zookeeper would somehow detect this failure and update the value for the corresponding master key. Moreover, application servers can (optionally?) register hooks in Zookeeper, so Zookeeper can notify them of this failure and the app servers can update (e.g. in memory) which DB node is the new master.
My questions are:
How does Zookeeper know which node to make master? Is Zookeeper responsible for this choice?
How is this information propagated to nodes that need to interact with Zookeeper? E.g. if one of the Zookeeper nodes goes down, how would the application servers know which Zookeeper node to hit in this scenario? Does Zookeeper manage this differently from competing solutions such as etcd?
The answer to both 1. and 2. is the leader election process, which briefly works in the following way:
When a process starts in a cluster managed by ZK, the cluster enters an election state. If there is a leader, an established hierarchy already exists and the existing leader is simply verified. If there is no leader (say the master is down), ZK uses znodes with sequence flags to look for a new leader. Each node talks to its peers and sends a message containing the node's identifier (sid) and the most recent transaction it executed (zxid). These messages are called votes. When a node receives a vote it can either discard it or keep it, depending on the zxid: if the zxid is newer than its own it keeps the vote; if it is older, it discards it. If there is a tie in zxids, the vote with the highest sid wins! So there will come a time when all nodes hold the same vote, which defines the new leader by its sid. This is how ZK elects a new leader node.
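To illustrate the propagation side, here is a minimal Java sketch (not part of the answer above) of how an application server could track the current master via a watch; the znode path /db/current-master and the connect string format are assumptions for illustration:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class MasterTracker {
    private final ZooKeeper zk;
    private volatile String currentMaster;

    public MasterTracker(String connectString) throws Exception {
        // The connect string lists several ZooKeeper servers, e.g.
        // "zk1:2181,zk2:2181,zk3:2181"; the client fails over between
        // them if the server it is connected to goes down.
        this.zk = new ZooKeeper(connectString, 15000, event -> { });
        readMaster();
    }

    private void readMaster() throws Exception {
        // ZooKeeper watches are one-shot, so re-register on every read.
        byte[] data = zk.getData("/db/current-master", (WatchedEvent event) -> {
            if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
                try {
                    readMaster();   // master changed: refresh the cached value
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        }, null);
        currentMaster = new String(data);
    }

    public String getCurrentMaster() {
        return currentMaster;   // application servers send writes to this node
    }
}

For the election itself, higher-level recipes such as Apache Curator's LeaderLatch package this kind of pattern (sequential znodes plus watches) into a ready-made API.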

Move a client (agent) to another deployment group

In the past we had two different deployment groups A and B holding 3 "clients" (agents) each.
We would like to merge all clients of group B into group A, because the actual reason why we had 2 groups is gone now.
Is there a way other than removing and reconfiguring each of the clients?
Unfortunately, I haven't found any feature in the UI of our on-premise Azure Devops Server 2019 Update 1.1 yet.
A deployment group is a logical set of deployment target machines that have agents installed on each one. Deployment groups represent the physical environments; for example, "Dev", "Test", "UAT", and "Production". In effect, a deployment group is just another grouping of agents, much like an agent pool.
Likewise, agents cannot be moved directly across agent pools; the same applies to deployment groups.
So no, this is not possible. I'm afraid you will have to remove and reconfigure each of the clients.

Amazon SWF ActivityWorker - workerForCommonTaskList vs workerForHostSpecificTaskList

In Amazon SWF, what is the difference between the following kinds of ActivityWorker: workerForCommonTaskList vs workerForHostSpecificTaskList? Does it mean the workerForHostSpecificTaskList picks up tasks on a list meant to be executed on the host where the ActivityWorker is running? If so, how does one add tasks to such a list?
A task list is essentially a queue. SWF supports an unlimited number of task lists, and they are created on demand without an explicit registration. It is up to the application to decide how many workers consume from a task list. The common design pattern is to have a common task list that a worker on every host listens to, plus a host-specific task list per host (or per Mesos/Kubernetes task, or even per process instance). One of the activity tasks dispatched to the common task list returns the host-specific task list name; later activities can then be scheduled to that host-specific task list to route their execution to that particular host.
How an activity is scheduled to a specific task list is client-side-library specific. In the Java AWS Flow Framework it is done by passing an ActivitySchedulingOptions structure as an additional parameter to the activity invocation.
See the fileprocessing sample, which demonstrates such routing.
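Roughly, the routing looks like the following sketch in the Java Flow Framework. The workflow class and the generated activity client (FileProcessingActivitiesClient with download and process methods) are hypothetical placeholders rather than APIs named in the answer; the relevant part is overriding the task list through ActivitySchedulingOptions:

import com.amazonaws.services.simpleworkflow.flow.ActivitySchedulingOptions;
import com.amazonaws.services.simpleworkflow.flow.annotations.Asynchronous;
import com.amazonaws.services.simpleworkflow.flow.core.Promise;

public class RoutingWorkflowImpl {
    // Hypothetical generated activity client; its default options point at the
    // common task list that a worker on every host polls.
    private final FileProcessingActivitiesClient activities =
            new FileProcessingActivitiesClientImpl();

    public void processFile(String fileName) {
        // Dispatched to the common task list; the worker that happens to pick
        // it up returns the name of its host-specific task list.
        Promise<String> hostTaskList = activities.download(fileName);
        processOnSameHost(fileName, hostTaskList);
    }

    @Asynchronous
    private void processOnSameHost(String fileName, Promise<String> hostTaskList) {
        // Runs only after hostTaskList is ready, so get() is safe here.
        ActivitySchedulingOptions options = new ActivitySchedulingOptions();
        options.setTaskList(hostTaskList.get());
        // Scheduled on the host-specific task list, i.e. executed on that host.
        activities.process(fileName, options);
    }
}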
BTW, have you looked at Cadence, an open-source alternative to SWF that is actively developed and much more feature-rich than SWF?

Hazelcast dynamic imap configuration propagation to members

If I have multiple Hazelcast cluster members using the same IMap and I want to configure the IMap in a specific manner programmatically, do I then need to have the configuration code in all the members, or should it be enough to have the configuration code just once in one of the members?
In other words, are the MapConfigs only member specific or cluster wide?
The reason I'm asking is that the Hazelcast documentation http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#configuring-programmatically
says that
As dynamically added data structure configuration is propagated across
all cluster members, failures may occur due to conditions such as
timeout and network partition. The configuration propagation mechanism
internally retries adding the configuration whenever a membership
change is detected.
This gives me the impression that the configurations propagate.
Now if member A specifies a certain MapConfig for IMap "testMap", should member B see that config when it does
hzInstance.getConfig().findMapConfig("testMap") // or .getMapConfig("testMap")
In my testing B did not see the MapConfig done by A.
I also tried specifying mapConfig.setTimeToLiveSeconds(60) at A, and mapConfig.setTimeToLiveSeconds(10) at B.
It seemed that the items in the IMap that were owned by A were evicted in 60 seconds, while the items owned by B were evicted in 10 seconds. This supports the idea that each member needs to do the same configuration if I want consistent behaviour for the IMap.
Each member owns certain partitions of the IMap. A member's IMap configuration has effect only on its owned partitions.
So it is normal to see different TTL values of the entries of the same map in different members when they have different configurations.
As you said, all members should have the same IMap configuration to get consistent cluster-wide behavior.
Otherwise, each member will apply its own configuration to its own partitions.
But if you add a dynamic configuration as described here, then that configuration is propagated to all members and changes their behavior as well.
In brief, if you add the configuration before creating the instance, it is a local configuration. But if you add it after creating the instance, it is a dynamic configuration and propagates to all members.
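A minimal Java sketch of that difference; the map names, TTL value and single-member setup are assumptions for illustration, not taken from the question:

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class DynamicMapConfigExample {
    public static void main(String[] args) {
        // Static configuration: applied before the instance starts,
        // visible only on this member.
        Config config = new Config();
        config.addMapConfig(new MapConfig("staticMap").setTimeToLiveSeconds(60));
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        // Dynamic configuration: added after the instance is running,
        // propagated to all current and later-joining cluster members.
        hz.getConfig().addMapConfig(new MapConfig("testMap").setTimeToLiveSeconds(60));
        // Other members should now see it via getConfig().findMapConfig("testMap").
    }
}

Note that, as far as I know, a dynamically added configuration can only add a new map configuration; it does not override a configuration a member already has.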

Full repository and partial repository channels in MQ

How does the usage of cluster sender and cluster receiver channels differ between Full Repository and Partial Repository queue managers in IBM WebSphere MQ?
On the Full Repository:
- the queue manager's cluster receiver channel must point to itself; this is how other queue managers in the cluster will know how to reach the FR.
- the cluster sender channel must point to another Full Repository.
On the Partial Repository:
- the queue manager's cluster receiver channel must also point to itself; this is how other queue managers in the cluster will know how to reach it.
- the cluster sender channel must point to one of the Full Repository queue managers; this is the FR the PR will rely on for cluster object resolution.
Notes:
1. Your cluster should have 2 Full Repositories; each FR sender channel should point to the other FR.
2. Your Partial Repositories should be configured to point to one of these 2 Full Repositories; a good habit is to equally assign them between the FRs.
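For illustration, the corresponding MQSC definitions might look like the sketch below; the cluster name INVCLUS, the queue manager names FR1, FR2 and PR1, and the host names are placeholder assumptions:

On FR1 (a Full Repository):
ALTER QMGR REPOS(INVCLUS)
DEFINE CHANNEL(TO.FR1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('fr1.example.com(1414)') CLUSTER(INVCLUS)
DEFINE CHANNEL(TO.FR2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('fr2.example.com(1414)') CLUSTER(INVCLUS)

On PR1 (a Partial Repository):
DEFINE CHANNEL(TO.PR1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('pr1.example.com(1414)') CLUSTER(INVCLUS)
DEFINE CHANNEL(TO.FR1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('fr1.example.com(1414)') CLUSTER(INVCLUS)

FR2 would mirror FR1, with its CLUSRCVR pointing to itself and its CLUSSDR pointing to FR1.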
A cluster receiver definition is how other qmgrs in the cluster will talk back to that queue manager; it acts like a template for how to talk to that qmgr.
A cluster sender definition creates the initial channel for one queue manager in a cluster to find a full repository for that cluster. This is a manual cluster sender. It doesn't matter whether you are a full or partial repository, you need to have a manual sender pointing to another full repository.
Subsequent connections from one queue manager to another are made using 'auto' cluster senders. A cluster queue manager queries information about a destination it needs to connect to (e.g. one hosting a queue that is the destination for a message) from the full repository. The information retrieved is based on the cluster receiver of the destination, hence my comment that a clusrcvr is the 'template' for connections to that queue manager.