get-config key change when upgrading to Amazon Linux 2 in Elastic Beanstalk

I am upgrading my Elastic Beanstalk deployment to Amazon Linux 2. Several .ebextensions scripts are failing in the new deployment. It appears that the usage of get-config has changed.
Old script variables:
EB_APP_USER=$(/opt/elasticbeanstalk/bin/get-config container -k app_user)
EB_APP_DEPLOY_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir)
EB_APP_PID_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_pid_dir)
EB_SCRIPT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k script_dir)
EB_SUPPORT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k support_dir)
I was able to find replacements for the first two at https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/custom-platforms-scripts.html
New script variables:
EB_APP_USER=$(/opt/elasticbeanstalk/bin/get-config platformconfig -k AppUser)
EB_APP_DEPLOY_DIR=$(/opt/elasticbeanstalk/bin/get-config platformconfig -k AppDeployDir)
I have not been able to find replacements for:
app_pid_dir
script_dir
support_dir
Anyone know what I should use?

This is not a very satisfying answer, but it appears the remaining three keys are no longer supported. I have seen others hardcoding those values as:
EB_APP_PID_DIR="/var/pids"
EB_SUPPORT_DIR="/opt/elasticbeanstalk/support"
When I rewrote my scripts, I no longer needed script_dir.
The post where I got these values was about setting up Sidekiq: https://forums.aws.amazon.com/thread.jspa?threadID=330819
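For reference, here is a minimal sketch of how the variable block from the question might look on Amazon Linux 2, combining the documented platformconfig keys with the hardcoded paths above (the hardcoded values are community-reported, not documented keys):
EB_APP_USER=$(/opt/elasticbeanstalk/bin/get-config platformconfig -k AppUser)
EB_APP_DEPLOY_DIR=$(/opt/elasticbeanstalk/bin/get-config platformconfig -k AppDeployDir)
# No get-config equivalents on Amazon Linux 2; hardcoded per the forum thread above
EB_APP_PID_DIR="/var/pids"
EB_SUPPORT_DIR="/opt/elasticbeanstalk/support"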

smbclient --authentication-file "session setup failed: NT_STATUS_INVALID_PARAMETER" and "SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY"

(I have CentOS 7 with samba-client.x86_64 4.6.2-8.el7 against a Windows Server 2008 machine that is in an AD domain controlled by a separate Windows Server 2008 AD domain controller.)
Started with this:
smbclient -W my.domain -U myuser //svr.my.domain/fred mypassword -c list
... which worked great. Then I decided to move the domain, user, and password into a file and use -A as described in the smbclient manpage. File windows-credentials, content:
username=myuser
domain=my.domain
password=mypassword
... with command line:
smbclient -A windows-credentials //svr.my.domain/fred -c list
... did not work, and gave this error:
SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY
session setup failed: NT_STATUS_NO_MEMORY
... an hour on the internet suggested lots of people have had this trouble, and just about every thread had a different accepted answer, none of which worked for me. I tried various combinations of their answers, in particular https://askubuntu.com/questions/1008992/ubuntu-17-10-to-access-windows-files-shares-within-workplace-it, and ended up with the following.
I created a separate my.smb.conf with just:
[global]
# seems to get rid of
# SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY
client use spnego = no
# seems to get rid of
# session setup failed: NT_STATUS_NO_MEMORY
client ntlmv2 auth = no
... and used:
smbclient -s my.smb.conf -A windows-credentials //svr.my.domain/fred -c list
... and it looks like it works, but I'm not really sure, as there seems to be credential caching and a complete lack of information on how this stuff works or is supposed to work.
Can anyone actually explain any of this? Even if not, perhaps yet another answer to this problem will help someone somewhere.
This appears to be specific to Windows 2008. Attaching to Windows Server 2016 works without the modified smb.conf file. I have been unable to locate any real details.
If smbclient keeps giving you problems, you can also mount the SMB share and use it like a local folder:
mount -t cifs //<ip>/<share folder>$ /mnt -o user=<user>,pass=<password>,domain=<workdomain>
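If you would rather not put the password on the command line, mount.cifs can also read a credentials file (same username=/password=/domain= format as the smbclient file above) via the credentials= option; a sketch, with a placeholder path:
mount -t cifs //<ip>/<share folder>$ /mnt -o credentials=/path/to/windows-credentials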

Questions on starting a Locator using the snappydata/bin ./snappy-shell script

SnappyData v. 0.5
Here's the command I used to start a Locator:
ubuntu#ip-172-31-8-115:/snappydata-0.5-bin/bin$ ./snappy-shell locator start
Starting SnappyData Locator using peer discovery on: 0.0.0.0[10334]
Starting DRDA server for SnappyData at address localhost/127.0.0.1[1527]
Logs generated in /snappydata-0.5-bin/bin/snappylocator.log
SnappyData Locator pid: 9352 status: running
It looks like it starts the DRDA server locally, with no outside interface for a client to connect to. So, I cannot reach my SnappyData Locator using this JDBC URL from an outside client host (e.g. my SquirrelSQL editor).
This does not connect:
jdbc:snappydata://MY-AWS-PUBLIC-IP-HERE:1527/
What property do I pass to my ./snappy-shell locator start command to get the DRDA server to start on a public IP address instead of "localhost/127.0.0.1"?
Use the -client-bind-address and -client-port options. For the locator, also use the -peer-discovery-address and -peer-discovery-port options to specify the bind address used by other locators/servers/leads (i.e. the address they pass in their -locators=<address>:<port> setting):
snappy-shell locator start -peer-discovery-address=<internal IP for peers> -client-bind-address=<public IP for clients>
See the output of snappy-shell locator --help for commonly used options.
For SnappyData releases, you may find it much easier to use the global configuration for all of the locators, servers, and leads; see the documentation on configuring the cluster.
This lets you specify all options for every JVM of the cluster in conf/locators, conf/leads, and conf/servers, then start everything with snappy-start-all.sh, check status with snappy-status-all.sh, and stop everything with snappy-stop-all.sh, as sketched below.
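A minimal sketch of what those conf files might contain for a one-locator, one-server, one-lead cluster (the hostnames are placeholders, not taken from the question):
# conf/locators -- one line per locator host, followed by its options
locator-host -peer-discovery-port=10334 -client-bind-address=locator-host -client-port=1527
# conf/servers -- one line per data server host
server-host -locators=locator-host:10334 -client-bind-address=server-host
# conf/leads -- one line per lead host
lead-host -locators=locator-host:10334
With those in place, snappy-start-all.sh brings the whole cluster up in one step.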
On a related note, we at SnappyData Inc. are developing scripts to let users quickly launch a SnappyData cluster on AWS.
If you want to try it out, the steps below will guide you. We would love to hear your feedback.
Download the development branch: git clone https://github.com/SnappyDataInc/snappydata.git -b SNAP-864 (you don't need to clone the repo just for this, but I could not find a way to attach the scripts here).
Go to the ec2 directory: cd snappydata/cluster/ec2
Run snappy-ec2: ./snappy-ec2 -k ec2-keypair-name -i /path/to/keypair/private/key/file launch your-cluster-name
See this README for more details.

Restarting a MySQL server managed by Ambari

I have a scenario where I need to change several parameters of a Hadoop cluster managed by Ambari to document the performance of a particular application. The change in the configs entails a restart of the affected components.
I am using the Ambari REST API to achieve this. I figured out how to do this for all Hadoop service components, but I am not sure whether the API provides a way to restart the MySQL server that Hive uses.
I have the following questions:
Is it the case that a mere stop and start of mysqld on the appropriate machine is enough to ensure that the required configuration changes are recognized by Ambari and the application?
I chose the 'New MySQL database' option while installing Hive via Ambari. Does this mean that restarts are reflected in Ambari only when they are carried out from the Ambari UI?
Your inputs would be highly appreciated.
Thanks!
I found a solution to the problem. I used the following commands against the Ambari REST API to change configurations and restart services from the backend.
Log in to the host on which the Ambari server is running and use the provided configs.sh script as described below.
Modifying configuration files
#!/bin/bash
# Thin wrapper around Ambari's configs.sh to set a single property.
CLUSTER_NAME=$1
CONFIG_FILE=$2
PROPERTY_NAME=$3
PROPERTY_VALUE=$4
/var/lib/ambari-server/resources/scripts/configs.sh -port <ambari-server-port> set localhost "$CLUSTER_NAME" "$CONFIG_FILE" "$PROPERTY_NAME" "$PROPERTY_VALUE"
where CONFIG_FILE can take values like tez-site, mapred-site, hadoop-site, hive-site, etc., and PROPERTY_NAME and PROPERTY_VALUE should be set to values relevant to the specified CONFIG_FILE.
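For example, if the wrapper above is saved as change_config.sh (a name chosen here purely for illustration), setting a Hive property on a cluster named c1 might look like:
./change_config.sh c1 hive-site hive.exec.parallel true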
Restarting host components
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '
{
  "RequestInfo": {
    "command": "RESTART",
    "context": "Restart MySQL server used by Hive Metastore on node3.cluster.com and HDFS client on node1.cluster.com",
    "operation_level": {
      "level": "HOST",
      "cluster_name": "c1"
    }
  },
  "Requests/resource_filters": [
    {
      "service_name": "HIVE",
      "component_name": "MYSQL_SERVER",
      "hosts": "node3.cluster.com"
    },
    {
      "service_name": "HDFS",
      "component_name": "HDFS_CLIENT",
      "hosts": "node1.cluster.com"
    }
  ]
}' http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests
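The POST above returns a request id in its response body; if I remember the Ambari REST API correctly, you can then poll the progress of that restart with a GET on the same requests collection (the id 42 below is just a placeholder):
curl -u admin:admin -H 'X-Requested-By: ambari' http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests/42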
Reference Links:
Restarting components
Modifying configurations
Hope this helps!

Problems accessing IBM Containers at UK Data Center

Note: this question is about the Bluemix Containers service; it is not generic to Docker.
I have a Linux environment with the cf and ice tools installed, working correctly against the US South data center. I changed the login parameters to the UK data center and now, although it logs in to the Containers service correctly, every command fails with a 404.
Command failed with container cloud service
404 Not Found: Requested route ('api-ice.eu-gb.bluemix.net') does not exist.
I did the login following the documentation:
ice login -a https://api.eu-gb.bluemix.net -H https://api-ice.eu-gb.bluemix.net/v2/containers -R registry.eu-gb.bluemix.net
And as I said the login is successful.
Try this for London:
ice login -H containers-api.eu-gb.bluemix.net -R registry.eu-gb.bluemix.net -a api.eu-gb.bluemix.net
For US South:
ice login -H containers-api.ng.bluemix.net -R registry.ng.bluemix.net -a api.ng.bluemix.net
api-ice.eu-gb.bluemix.net is expected to throw a 404. When we closed our public beta, we changed our API server to use the containers-api.{domain} pattern (while temporarily leaving api-ice.ng.bluemix.net available for folks needing to migrate from the beta).
We are currently updating the docs. Thanks for pointing this out.
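Once logged in against the new endpoint, a simple read-only command is an easy way to confirm the API host is reachable. Assuming your ice version still supports the docker-style subcommands, something like:
ice ps
should list your containers instead of returning a 404.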

"compute_cluster_for_hadoop start" doesn't work with gcutil 1.10.0

Update: It looks like ./compute_cluster_for_hadoop.py works with gcutil 1.9.1 but not with gcutil 1.10.0. So what I am really asking is how to fix compute_cluster_for_hadoop to work with the current gcutil.
After using compute_cluster_for_hadoop.py successfully for weeks, starting a cluster now hangs, even after I set up the cluster again.
Here are some things I noticed. First, when I run "compute_cluster_for_hadoop.py start ...", I now get the following prompt that I have to manually answer "yes" to (before, it never required any user input):
The authenticity of host 'google-compute-engine-instance;project=cspp53013;zone=us-central1-a;instance=hm;id=9724559583598617300 (8.35.196.11)' can't be established.
ECDSA key fingerprint is 02:2b:ea:7d:48:27:7d:1b:e2:2a:d4:44:d0:07:95:b4.
Are you sure you want to continue connecting (yes/no)?
It then proceeds for a while. Right after it finishes installing the deb packages, it prints the following:
Processing triggers for ca-certificates ...
Updating certificates in /etc/ssl/certs... 0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d.... done. done.
And then it hangs no matter how long I wait.
Any idea what could have changed or how to fix it?
Thanks,
Mike
OK, I figured out the answer. We just have to add
--ssh_arg "-o StrictHostKeyChecking=no"
to line 151 of gce_cluster.py, and it works with both gcutil 1.9.1 and gcutil 1.10.0.
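For context, that flag is simply passed through to gcutil's ssh wrapper, which accepts --ssh_arg for extra options to forward to ssh. Run by hand, the equivalent would look roughly like this (hm is the instance name from the log output above):
gcutil ssh --ssh_arg "-o StrictHostKeyChecking=no" hm
This avoids the interactive host-key prompt, which is presumably what the scripted ssh calls were hanging on.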