Error storing cluster namespace secret (E0025) when trying to bind a service to a cluster

I am following the tutorial "Creating Kubernetes clusters in IBM Bluemix Container Service", but when I try to bind a service to my cluster I get:
$ bx cs cluster-service-bind kub_cluster myns cloudant
FAILED
Error storing cluster namespace secret (E0025)
Incident ID: ebdbdd0d-5d6a-4373-8e54-b7dd84733a29
I have a worker node:
$ bx cs workers kub_cluster
This lists one worker in State 'normal' and Status 'Ready'.
I tried different services (Message Hub and Cloudant) and different names for the namespace; these are services I already have. Does anyone know how to get around this?

I was able to test this out following the same guide. I used the Tone Analyzer service. For testing I used the default namespace.
Are you able to see the namespace you are using when you list the available Kubernetes namespaces? The option "myns" will need to be a Kubernetes namespace.
$ kubectl get namespaces
This should print the default namespace as well as the system namespaces, plus any namespaces you created.
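For example, on a fresh cluster the output will look something like this (names and ages will vary):
$ kubectl get namespaces
NAME          STATUS    AGE
default       Active    7d
kube-system   Active    7d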
Earlier in the guide a namespace is set up for the Docker registry; it is possible that you are using that registry namespace here.
Other instances of this issue appear to be related to the status of the cluster. It looks like your cluster has an available node (normal and ready), so it should be able to store the secret in an available namespace.

You might be missing the specific namespace in your cluster.
You can create one by calling:
kubectl create namespace <your namespace>
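If the namespace was indeed missing, creating it and re-running the bind with the names from the question would look like:
kubectl create namespace myns
bx cs cluster-service-bind kub_cluster myns cloudant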

Related

Hazelcast embedded cache members not joining in OpenShift with multiple pods

I'm trying to expose a Hazelcast cache with an embedded setup in my service. It works fine locally, and it is even able to add members when I run the application on multiple ports. Now the same service is deployed in OCP (OpenShift) with 2 instances, but the Hazelcast members are not added to each other, so the cache is not updated across the pods. Below is the config code I used for Hazelcast.
import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MaxSizeConfig;
import org.springframework.context.annotation.Bean;

// Embedded Hazelcast configuration for the "mycache" map
@Bean
public Config hazelcastConfig() {
    return new Config().setInstanceName("hazelcast-instance")
            .addMapConfig(new MapConfig().setName("mycache")
                    .setMaxSizeConfig(new MaxSizeConfig(300, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
                    .setEvictionPolicy(EvictionPolicy.LRU)
                    .setTimeToLiveSeconds(2000));
}
Please let me know if any additional configuration needs to be added so that the members can cluster in OpenShift.
Please read the following resources:
Hazelcast Kubernetes/OpenShift plugin documentation
Hazelcast Kubernetes Code Samples
Hazelcast OpenShift Code Sample
This should solve all your issues. The simplest configuration you need to add (assuming you added the RBAC) is the following one:
config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getKubernetesConfig().setEnabled(true);
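For instance, merged into the bean from the question, a minimal sketch could look like this (assuming Hazelcast 3.12+ with built-in Kubernetes discovery, or the hazelcast-kubernetes plugin on the classpath, and the RBAC from the documentation already applied):
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import org.springframework.context.annotation.Bean;

@Bean
public Config hazelcastConfig() {
    Config config = new Config().setInstanceName("hazelcast-instance");
    // ... map config from the question goes here ...
    JoinConfig join = config.getNetworkConfig().getJoin();
    join.getMulticastConfig().setEnabled(false);  // multicast does not work across OpenShift pods
    join.getKubernetesConfig().setEnabled(true);  // discover members through the Kubernetes service API
    return config;
}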

How to set livy.server.session.timeout on EMR cluster bootstrap?

I am creating an EMR cluster and using a Jupyter notebook to run some Spark tasks.
My tasks die after approximately 1 hour of execution, and the error is:
An error was encountered:
Invalid status code '400' from https://xxx.xx.x.xxx:18888/sessions/0/statements/20 with error payload: "requirement failed: Session isn't active."
My understanding is that this is related to the Livy config livy.server.session.timeout, but I don't know how to set it in the bootstrap of the cluster (I need to do it at bootstrap because the cluster is created with no SSH access).
Thanks a lot in advance
On EMR, livy-conf is the classification for the properties of Livy's livy.conf file. When creating an EMR cluster, choose advanced options with Livy selected as an application to install, and pass this EMR configuration in the Enter Configuration field:
[{"classification": "livy-conf", "properties": {"livy.server.session.timeout": "5h"}}]
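If you create the cluster from the AWS CLI instead of the console, the same classification can be passed with --configurations; a sketch, where the name, release label, and instance settings are placeholders (note that the CLI/API form of the JSON capitalizes Classification and Properties, while the console field uses the lowercase form shown above):
aws emr create-cluster \
  --name "jupyter-spark" \
  --release-label emr-5.30.0 \
  --applications Name=Hadoop Name=Spark Name=Livy \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --configurations '[{"Classification":"livy-conf","Properties":{"livy.server.session.timeout":"5h"}}]'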
On EMR, Livy is located under /etc/livy/, and so the config file is at /etc/livy/conf/livy.conf.
To verify this:
Create an EMR cluster with a known EC2 key pair, Livy, and the above config.
Using the EC2 key pair, log in to the EC2 master node associated with the cluster: ssh -i some-ec2-key-pair.pem hadoop@ec2-00-00-00-0.ca-region-n.compute.amazonaws.com
Navigate to /etc/livy/conf, open livy.conf, and see the updated value of livy.server.session.timeout.
If you don't want the Livy session to go down at all, then set the property livy.server.session.timeout-check to false in /etc/livy/conf/livy.conf.
Another way to do this, if you don't want to recreate the cluster, is:
Go to /etc/livy/conf/livy.conf and set the livy.server.session.timeout property to the value you would like.
After that, run sudo restart livy-server for the configuration to take effect.
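A sketch of that in-place change, using the property and value from above (on newer EMR releases the restart command is sudo systemctl restart livy-server instead):
echo "livy.server.session.timeout = 5h" | sudo tee -a /etc/livy/conf/livy.conf
# or, to disable the idle check entirely, as mentioned above:
# echo "livy.server.session.timeout-check = false" | sudo tee -a /etc/livy/conf/livy.conf
sudo restart livy-server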

How to run several IPFS nodes on a single machine?

For testing, I want to be able to run several IPFS nodes on a single machine.
This is the scenario:
I am building small services on top of the IPFS core library, following the "Making your own IPFS service" guide. When I try to put the client and the server on the same machine (note that each of them creates its own IPFS node), I get the following:
panic: cannot acquire lock: Lock FcntlFlock of /Users/long/.ipfs/repo.lock failed: resource temporarily unavailable
Usually, when you start with IPFS, you use ipfs init, which creates a new node. The default data and config for that particular node are stored at ~/.ipfs. Here is how you can create a new node and configure it so it can run alongside your default node.
1. Create a new node
For a new node you have to run ipfs init again, pointing it at a different repo path. For instance:
IPFS_PATH=~/.ipfs2 ipfs init
This will create a new node at ~/.ipfs2 (not using the default path).
2. Change Address Configs
As both of your nodes now bind to the same ports, you need to change the port configuration so both nodes can run side by side. For this, open ~/.ipfs2/config and find Addresses:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
}
Change it to, for example, the following:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5002",
"Gateway": "/ip4/127.0.0.1/tcp/8081",
"Swarm": [
"/ip4/0.0.0.0/tcp/4002",
"/ip6/::/tcp/4002"
]
}
With this, you should be able to run both nodes, .ipfs and .ipfs2, on a single machine.
Notes:
Whenever you use .ipfs2, you need to set the env variable IPFS_PATH=~/.ipfs2.
In your example you need to change either your client or your server node from ~/.ipfs to ~/.ipfs2.
You can also start the daemon on the second node using IPFS_PATH=~/.ipfs2 ipfs daemon &
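The same port changes can also be made from the command line instead of editing the config file by hand; a sketch, using the second repo from above:
IPFS_PATH=~/.ipfs2 ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
IPFS_PATH=~/.ipfs2 ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8081
IPFS_PATH=~/.ipfs2 ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4002", "/ip6/::/tcp/4002"]'
IPFS_PATH=~/.ipfs2 ipfs daemon &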
Hello, I use .ipfs2. After running the two daemons at the same time, I can indeed open localhost:5001/webui, but opening the second one at localhost:5002/webui gives an error, as shown in the attachment.
Here are some ways I've used to create multiple nodes/peer IDs.
I use Windows 10.
1st node: go-ipfs (latest version)
2nd node: Siderus Orion IPFS (connects to an Orion node, not local) -- https://orion.siderus.io/
Use VirtualBox to run a minimal Ubuntu installation. (You can set up as many as you want.)
Repeat the process and you have 4 nodes, or as many as you want.
https://discuss.ipfs.io/t/ipfs-manager-download-install-manage-debug-your-ipfs-node/3534 is another GUI that installs and lets you manage all IPFS commands without the command line. Its author released it a few days ago and it looks well worth a look.
Disclaimer: I am not a coder or computer professional. Just a huge fan of IPFS! I hope we can raise awareness and change the world.

Enable autoscaling on GKE cluster creation

I am trying to create an autoscaled container cluster on GKE.
When I use the "--enable-autoscaling" option (as the documentation indicates here: https://cloud.google.com/container-engine/docs/clusters/operations#create_a_cluster_with_autoscaling):
$ gcloud container clusters create mycluster --zone $GOOGLE_ZONE --num-nodes=3 --enable-autoscaling --min-nodes=2 --max-nodes=5
but the MIG (Managed Instance Group) is not displayed as 'autoscaled', as shown by both the web interface and the result of the following command:
$ gcloud compute instance-groups managed list
NAME SIZE TARGET_SIZE AUTOSCALED
gke-mycluster... 3 3 no
Why?
Then I tried the other way indicated in the Kubernetes docs (http://kubernetes.io/docs/admin/cluster-management/#cluster-autoscaling) but got an error, apparently caused by the '=true':
$ gcloud container clusters create mytestcluster --zone=$GOOGLE_ZONE --enable-autoscaling=true --min-nodes=2 --max-nodes=5 --num-nodes=3
usage: gcloud container clusters update NAME [optional flags]
ERROR: (gcloud.container.clusters.update) argument --enable-autoscaling: ignored explicit argument 'true'
Is the doc wrong on this?
Here are my gcloud version results:
$ gcloud version
Google Cloud SDK 120.0.0
beta 2016.01.12
bq 2.0.24
bq-nix 2.0.24
core 2016.07.29
core-nix 2016.03.28
gcloud
gsutil 4.20
gsutil-nix 4.18
kubectl
kubectl-linux-x86_64 1.3.3
One last detail: the autoscaler seems to be 'on' in the description of the cluster:
$ gcloud container clusters describe mycluster | grep auto -A 3
- autoscaling:
enabled: true
maxNodeCount: 5
minNodeCount: 2
Any idea what explains this behaviour?
Kubernetes cluster autoscaling does not use the Managed Instance Group autoscaler. It runs a cluster-autoscaler controller on the Kubernetes master that uses Kubernetes-specific signals to scale your nodes. The code is in the autoscaler repo if you want more info.
I've also sent out a PR to fix the invalid flag usage in the autoscaling docs. Thanks for catching that!
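In the meantime, dropping the '=true' makes the second command from the question work as intended:
gcloud container clusters create mytestcluster --zone=$GOOGLE_ZONE --enable-autoscaling --min-nodes=2 --max-nodes=5 --num-nodes=3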

Questions on starting a Locator using the snappydata/bin ./snappy-shell script

SnappyData v. 0.5
Here's the command I used to start a Locator:
ubuntu@ip-172-31-8-115:/snappydata-0.5-bin/bin$ ./snappy-shell locator start
Starting SnappyData Locator using peer discovery on:
0.0.0.0[10334]
Starting DRDA server for SnappyData at address localhost/127.0.0.1[1527]
Logs generated in /snappydata-0.5-bin/bin/snappylocator.log
SnappyData Locator pid: 9352 status: running
It looks like it starts the DRDA server locally, with no outside interface for a client to connect to. So, I cannot reach my SnappyData Locator using this JDBC URL from an outside client host (e.g. my SquirrelSQL editor).
This does not connect:
jdbc:snappydata://MY-AWS-PUBLIC-IP-HERE:1527/
What property do I pass to my ./snappy-shell locator start command to get the DRDA server to start on a public IP address instead of "localhost/127.0.0.1"?
Use the -client-bind-address and -client-port options. For the locator, also use the -peer-discovery-address and -peer-discovery-port options to specify the bind address for other locators/servers/leads (which is what they pass in their -locators=<address>:<port> setting):
snappy-shell locator start -peer-discovery-address=<internal IP for peers> -client-bind-address=<public IP for clients>
See the output of snappy-shell locator --help for commonly used options.
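Applied to the EC2 host from the question, a sketch (on EC2 the public IP is not bound to the network interface, so binding the client address to 0.0.0.0 and then connecting via the public IP is the usual approach; the internal IP here is inferred from the shell prompt above):
./snappy-shell locator start -peer-discovery-address=172.31.8.115 -client-bind-address=0.0.0.0 -client-port=1527
Clients can then use jdbc:snappydata://MY-AWS-PUBLIC-IP-HERE:1527/ as in the question.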
For SnappyData releases, you may find it much easier to use the global configuration for all of the locators, servers, and leads. Check "configuring the cluster".
This allows specifying all options for all JVMs of the cluster in conf/locators, conf/leads, and conf/servers, then starting everything with snappy-start-all.sh, checking status with snappy-status-all.sh, and stopping everything with snappy-stop-all.sh.
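A sketch of what conf/locators might contain for this setup, assuming the format of one line per locator JVM with the hostname followed by its startup options (values illustrative):
ip-172-31-8-115 -peer-discovery-address=172.31.8.115 -client-bind-address=0.0.0.0 -client-port=1527
snappy-start-all.sh then brings this up together with whatever is defined in conf/servers and conf/leads.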
On a related note, we at SnappyData Inc. are developing scripts to let users quickly launch a SnappyData cluster on AWS.
If you want to try it out, the steps below will guide you. We would love to hear your feedback on this.
Download its development branch: git clone https://github.com/SnappyDataInc/snappydata.git -b SNAP-864 (you don't need to clone the repo for this, but I could not find a way to attach the scripts here).
Go to the ec2 directory: cd snappydata/cluster/ec2
Run snappy-ec2: ./snappy-ec2 -k ec2-keypair-name -i /path/to/keypair/private/key/file launch your-cluster-name
See this README for more details.