Hadoop Metrics Issue - hadoop2

I am provisioning a Hadoop cluster with an Ambari local repository. My Ambari server runs on one machine and the cluster nodes run on remote machines. I am able to create the cluster successfully.
However, I am not getting any metrics. Can you suggest what needs to be done?

This might be a proxy issue; see the proxy-setup instructions in the Ambari manual. If so, the fix is to set -Dhttp.nonProxyHosts to the list of your remote nodes and restart the Ambari Server.
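As a rough sketch of that fix (assuming the default Ambari Server paths; the hostnames below are placeholders for your own nodes):
# Assumed default location of the Ambari Server environment file; adjust for your install.
sudo vi /var/lib/ambari-server/ambari-env.sh
# Append the flag to the existing AMBARI_JVM_ARGS line, listing your remote nodes, e.g.:
#   export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Dhttp.nonProxyHosts=node1.example.com|node2.example.com"
# Then restart the server so the new JVM argument takes effect.
sudo ambari-server restart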

Related

Can AWS Aurora Serverless Clusters be configured via AWS Explorer in DataGrip?

I'm currently having issues setting up the AWS Explorer plugin in DataGrip to recognise the Aurora Serverless Clusters (MySQL). I have set up credentials from IAM in the credentials file, and can access other AWS services (if I select the dropdown "Schemas", for example, I can see the list of schemas in my org) but clicking the RDS dropdown shows "empty", and doesn't even show the list of database engines. I have tried connecting with secrets manager and using the correct secret for the DB cluster but no luck. When I try and add the database cluster as a data source, it just hangs on "Introspecting" and then the endpoint for that cluster.
I found this issue on the aws-toolkit-jetbrains GitHub repository: https://github.com/aws/aws-toolkit-jetbrains/issues/2124
which mentions that it could be a driver problem. I have tried switching to the MySQL driver, but that hasn't fixed it. DataGrip also seems to strongly encourage using the recommended Aurora MySQL driver.
Is this a bug in DataGrip, or in AWS Explorer, or am I missing something obvious? Do I need to enable SSL CAs to give AWS Explorer the correct permissions?
Thanks!
EDIT: I have gone through the prerequisites listed on the AWS docs:
I have installed the AWS CLI and AWS SAM CLI
I have installed Docker (but I haven't set up any containers - I think this is only needed if I'm running localhost?)
I'm running Windows 10.
Aurora Serverless can't be accessed from the internet. From the docs:
You must create your Aurora Serverless DB cluster in an Amazon Virtual Private Cloud (Amazon VPC). Aurora Serverless DB clusters are accessible only from an Amazon VPC and can't use a public IP address.
Thus, you need to set up a VPN or some kind of proxy (e.g. an SSH tunnel through a bastion host) to be able to connect to Aurora Serverless from outside of AWS.
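As an illustrative sketch (the bastion host, key path and cluster endpoint below are placeholders; it assumes an EC2 bastion in the same VPC as the cluster):
# Forward local port 3307 to the Aurora Serverless endpoint through the bastion host.
ssh -i ~/.ssh/bastion-key.pem -N -L 3307:my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com:3306 ec2-user@bastion.example.com
# Then point DataGrip at localhost:3307 with the cluster's normal MySQL credentials.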

Openshift OKD 4.5 on VMware

I am getting a connection timeout when running the command during bootstrap.
Any suggestions on the networking configuration, in case I am missing something?
It says the Kubernetes API call is timing out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often the problem lies with a faulty DNS / DHCP / networking setup. Consider deploying a separate VM into the network to check that everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IPs of the machines that have already been deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
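A minimal sketch of that workflow (the IP is a placeholder; OKD's Fedora CoreOS nodes accept SSH as the core user with the key from your install-config):
# SSH to the bootstrap node (or a master) using the key you provided at install time.
ssh core@192.0.2.10
# List the running containers and inspect the logs of a suspect one.
sudo crictl ps
sudo crictl logs <container-id>
# On the bootstrap node itself, the bootstrap progress is logged by bootkube.
journalctl -b -f -u bootkube.service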

Connecting to MySQL server via kubernetes pod using Intellij

We have a MySQL server that is running on AWS using AWS RDS service and some Kubernetes pods which run some services that connect to this MySQL instance.
I have been using IntelliJ IDEA (2020.1) to connect to these MySQL servers for quite some time. However, we recently changed the policy for connecting to these instances, and now it is only possible to reach the MySQL servers from the Kubernetes pods. Hence, I now need to log in to these pods and then query MySQL using the command-line MySQL client.
Is there any way I can still use IntelliJ to connect to these MySQL instances, rather than having to log in to the pods, for example by using SSH tunnelling or something similar?
Yes, setting up an SSH tunnel is relatively straightforward, but the setup depends on your VPC and EC2 configuration. There are a lot of how-tos on the net, e.g.: https://medium.com/#michalisantoniou6/connect-to-an-aws-rds-using-an-ssh-tunnel-22f3bd597924
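If there is no bastion host to tunnel through and only the pods are allowed to reach RDS, an alternative sketch (not from the original answer; the proxy image and RDS endpoint below are assumptions/placeholders) is to proxy through the cluster itself:
# Run a throwaway TCP proxy pod that forwards connections to the RDS endpoint.
kubectl run mysql-proxy --image=alpine/socat -- tcp-listen:3306,fork,reuseaddr tcp-connect:mydb.abc123.us-east-1.rds.amazonaws.com:3306
# Forward a local port to the proxy pod, then point IntelliJ at localhost:3307.
kubectl port-forward pod/mysql-proxy 3307:3306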

Is mysql/mongodb cluster suitable for installation on kubernetes?

I have been testing a MongoDB shard installed on Kubernetes via Helm, but I found that those Helm charts do not really produce a proper MongoDB shard cluster. The charts correctly create Pods with names like mongos-1, mongod-server-1 and mongod-shard-1, which looks like a correct shard cluster configuration, but the appropriate mongos and mongod server instances are not created on the corresponding Pods. They just create a plain mongod instance on each Pod, with no connection between them. Do I need to add scripts that execute commands similar to rs.addShard(config)? I encountered the same problem when installing a MySQL cluster using Helm.
What I want to know is: is it generally inappropriate to install a MySQL/MongoDB cluster on Kubernetes? Should the database be installed independently, or deployed on Kubernetes?
Yes, you can deploy MongoDB instances on Kubernetes clusters.
Use a standalone instance for testing and development, and a replica set for production-like deployments.
Also, to make things easier, you can use the MongoDB Enterprise Kubernetes Operator:
The Operator enables easy deploys of MongoDB into Kubernetes clusters,
using our management, monitoring and backup platforms, Ops Manager and
Cloud Manager. By installing this integration, you will be able to
deploy MongoDB instances with a single simple command.
This guide references the official MongoDB documentation with the necessary details regarding:
Install Kubernetes Operator
Deploy Standalone
Deploy Replica Set
Deploy Sharded Cluster
Edit Deployment
Kubernetes Resource Specification
Troubleshooting Kubernetes Operator
Known Issues for Kubernetes Operator
So it covers basically everything you need to know on this topic.
Please let me know if that helped.
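For orientation, here is a minimal sketch of a replica-set resource for the Enterprise Operator (the names are placeholders and field names may differ between operator versions, so check the Deploy Replica Set page referenced above):
# Illustrative only; assumes the operator is already installed in the mongodb namespace.
kubectl apply -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
  namespace: mongodb
spec:
  type: ReplicaSet
  members: 3
  version: "4.2.2"
  opsManager:
    configMapRef:
      name: my-project
  credentials: my-credentials
  persistent: true
EOF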

Can we set up a database on Amazon EC2 similar to how XAMPP is configured on my local system

Can you install MySQL directly on an AWS Elastic Compute Cloud (EC2) instance? I can't afford a separate RDS instance at the moment.
My website is already set up on AWS EC2, and now I want to try out some features that need a database. I need to set up the database to run on the EC2 instance's localhost and connect it to my website to store my user data.
First, you need to separate XAMPP from MySQL in your thinking: XAMPP is only a tool for your local development. You can set up a database on the Elastic Compute Cloud (EC2) instance similarly to how your XAMPP stack is configured locally.
Here are the official docs on how to install a full LAMP stack on an EC2 instance running the Amazon Linux AMI - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
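As a rough sketch (assuming an Amazon Linux 2 instance; package names differ on other distributions), installing and starting a MySQL-compatible server looks like this:
# Install the MariaDB server (the MySQL-compatible package on Amazon Linux 2).
sudo yum install -y mariadb-server
# Start it now and on every boot, then lock down the default installation.
sudo systemctl start mariadb
sudo systemctl enable mariadb
sudo mysql_secure_installation
# A website running on the same instance can then connect to MySQL at localhost:3306.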