Can't list Hadoop filesystem - exception

I'm trying to run Hadoop on a Banana Pi and I don't understand why it isn't working.
I get an exception when running this very basic command:
root@bananapi:/opt/hadoop# hadoop fs -ls
14/12/17 10:27:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From bananapi/10.0.2.150 to localhost:9000 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
root@bananapi:/opt/hadoop#
I have installed the following JDK:
root@bananapi:/opt/hadoop# java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-2~deb7u1)
OpenJDK Zero VM (build 24.65-b04, mixed mode)
My /etc/hosts file looks like this:
root@bananapi:/opt/hadoop# cat /etc/hosts
127.0.0.1 localhost
10.0.2.150 bananapi # wlan0
10.0.2.119 bananapi # eth0
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
The services are also running (jps):
root@bananapi:/opt/hadoop# jps
3732 NodeManager
3317 NameNode
3644 ResourceManager
3401 DataNode
3513 SecondaryNameNode
3860 Jps
What am I doing wrong, or why are the commands not working?
P.S. I have also tried running hdfs dfs -ls.
UPDATE
My configuration in core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
I also can't format the NameNode:
hadoop namenode -format
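Since jps shows a NameNode process but the connection to localhost:9000 is refused, one quick check (a sketch; the log location and file name assume the default layout under /opt/hadoop and may differ on your install) is whether anything is actually listening on port 9000, and what the NameNode log says:
netstat -tlnp | grep 9000                                        # is anything listening on the NameNode port?
tail -n 50 /opt/hadoop/logs/hadoop-root-namenode-bananapi.log    # look for bind or startup errors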

Related

How to connect QEMU qmp-shell to a VM via unix socket?

I followed this tutorial to connect qmp-shell to a QEMU VM instance.
1. Start QMP on a unix socket
# qemu-system-aarch64 -M virt -qmp unix:./qmp-sock,server,wait=off
2. Run the script
# qmp-shell ./qmp-sock
3. You should get the following prompt
(QEMU)
But step 2 gives the error below:
ERROR: Couldn't connect to ./qmp-sock: Failed to establish connection: [Errno 2] No such file or directory
What could be wrong?
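The message means qmp-shell cannot find the socket file itself, which typically happens when the two commands are run from different working directories. A quick check (a sketch, assuming the setup from the steps above):
ls -l ./qmp-sock                        # does the socket exist in the current directory?
qmp-shell /absolute/path/to/qmp-sock    # or pass the socket's absolute path instead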

Connect SpringBoot to MySQL hosted in the cloud requires SSL

I can successfully use MySQL Workbench to do full CRUD on a Bluemix-hosted MySQL Compose service.
I then built a simple microservice with Spring Boot on my local laptop using Apache Derby... that was successful.
My next step was to use the MySQL Compose service hosted in Bluemix.
I edited application.properties and ran into this error:
"PKIX path building failed: ...."
"SunCertPathBuilderException: unable to find valid certification path to request target"
application.properties file
spring.jpa.hibernate.ddl-auto=create
spring.jpa.database-platform=org.hibernate.dialect.MySQLDialect
spring.datasource.url=jdbc:mysql://somedomain:port/compose?useSSL=true?requireSSL=true
spring.datasource.username=myname
spring.datasource.password=mypassword
Bluemix provided me with these credentials in JSON:
{
"db_type": "mysql",
"name": "bmix-dal-yp-xxxxxxx-",
"uri_cli": "mysql -u myname -p --host somedomain.com --port 5555 --ssl-mode=REQUIRED",
"ca_certificate_base64": "LS0tLS1CRUd......",
"deployment_id": "58fexxxxxxxxxxx",
"uri": "mysql://myname:mypassword#somedomain.com:55555/compose"
}
Am I supposed to use the CA certificate somewhere in my application.properties?
Do I need to enable SSL on the embedded Tomcat server that runs by default with Spring Boot?
How can I configure my Spring Boot application to connect to my cloud provider's MySQL instance over SSL, using the JSON they provided?
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
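A side note before the fix below (a syntax observation, separate from the certificate error): in a Connector/J JDBC URL, parameters after the first are joined with &, so the datasource line would normally be written as:
spring.datasource.url=jdbc:mysql://somedomain:port/compose?useSSL=true&requireSSL=true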
Add the following to your pom.xml (or equivalent):
...
<repositories>
<repository>
<id>jcenter</id>
<url>http://jcenter.bintray.com</url>
<snapshots>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
<checksumPolicy>warn</checksumPolicy>
</snapshots>
<releases>
<enabled>true</enabled>
<checksumPolicy>warn</checksumPolicy>
</releases>
</repository>
</repositories>
...
<dependency>
<groupId>com.orange.clara.cloud.boot.ssl-truststore-gen</groupId>
<artifactId>spring-boot-ssl-truststore-gen</artifactId>
<version>2.0.21</version>
</dependency>
...
Add the following to your manifest.yml
env:
  # Add the certificate from VCAP_SERVICES ca_certificate_base64
  # You need to base64 decode the certificate and add it below
  # E.g. echo '<<ca_certificate_base64>>' | base64 -D
  TRUSTED_CA_CERTIFICATE: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
For more information, see https://github.com/orange-cloudfoundry/spring-boot-ssl-truststore-gen
Also see a minimal app here: https://github.com/snowch/hello-spring-cloud/tree/8b9728a826dcc1995a7ccb19a852ac8face21147
What follows is my first answer - it did not work. Ignore this section.
One option is to import the cert into a Java truststore file, pack that file into the Java application, and specify its path via the JAVA_OPTS environment variable; the truststore file can be placed under the resources directory. This can be used for single applications:
By using the 'cf set-env' command:
cf set-env <app> JAVA_OPTS '-Djavax.net.ssl.trustStore=classpath:resources/config/truststore -Djavax.net.ssl.trustStorePassword=changeit'
or, by using manifest.yml:
applications:
- name: java-app
  ...
  env:
    JAVA_OPTS: '-Djavax.net.ssl.trustStore=classpath:resources/config/truststore -Djavax.net.ssl.trustStorePassword=changeit'
Note that the certificate in the ca_certificate_base64 field is base64-encoded, so you will need to decode it before adding it to your truststore, e.g.:
Decode the certificate:
echo '<<ca_certificate_base64>>' | base64 -D > ca_certificate.pem
Create a truststore:
keytool -import -trustcacerts -file ca_certificate.pem -alias compose_cert -keystore resources/config/truststore -storepass changeit -noprompt
Note that the keystore location (resources/config/truststore) and the storepass (changeit) are set in the JAVA_OPTS.
There are a few different options you can try. See this documentation for more information: https://discuss.pivotal.io/hc/en-us/articles/223454928-How-to-tell-application-containers-running-Java-apps-to-trust-self-signed-certs-or-a-private-or-internal-CA

Hadoop 2.3.0 Issue: -ls: For input string: "false"

I'm getting errors running simple hadoop fs commands. I'm on a Mac running OS X 10.10.5, and I've configured Hadoop as a standalone cluster.
$ hadoop fs -ls
2015-09-26 06:59:20,531 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-ls: For input string: "false"
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
$ hadoop fs -ls /
2015-09-26 07:26:16,629 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-ls: For input string: "false"
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
$ hadoop fs -mkdir /user/hadoop
2015-09-26 07:01:05,356 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-mkdir: For input string: "false"
Usage: hadoop fs [generic options] -mkdir [-p] <path> ...
I'm running a standalone hadoop 2.3.0 on OS X 10.10.5.
$ hadoop version
Hadoop 2.3.0
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1567123
Compiled by jenkins on 2014-02-11T13:40Z
Compiled with protoc 2.5.0
From source with checksum dfe46336fbc6a044bc124392ec06b85
This command was run using /Users/davidlaxer/hadoop-2.3.0/share/hadoop/common/hadoop-common-2.3.0.jar
$ java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
I tried to fix the warning (so far without success). I suspect the warning is an unrelated issue:
WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ env | grep HADOOP
HADOOP_HOME=/Users/davidlaxer/hadoop-2.3.0
HADOOP_COMMON_LIB_NATIVE_DIR=/Users/davidlaxer/hadoop-2.3.0/lib/native
HADOOP_CONF_DIR=/Users/davidlaxer/hadoop-2.3.0/etc/hadoop/conf
HADOOP_OPTS=-Djava.library.path=/Users/davidlaxer/hadoop-2.3.0/lib
Here are my Hadoop config files, which are in /Users/davidlaxer/hadoop-2.3.0/etc/hadoop/conf:
$ cat hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>false</value>
</property>
</configuration>
$ cat core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
$ cat mapred_site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
$ cat yarn_site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
$ jps
99664 NameNode
4997 Jps
2202 ZeppelinServer
2283 RemoteInterpreterServer
2158 JupyterScala
99006 SecondaryNameNode
Same issue with hadoop 2.6.1 (downloaded as binary):
$ bin/hadoop version
Hadoop 2.6.1
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r b4d876d837b830405ccdb6af94742f99d49f9c04
Compiled by jenkins on 2015-09-16T21:07Z
Compiled with protoc 2.5.0
From source with checksum ba9a9397365e3ec2f1b3691b52627f
This command was run using /Users/davidlaxer/hadoop-2.6.1/share/hadoop/common/hadoop-common-2.6.1.jar
$ bin/hadoop fs -ls /
2015-09-26 07:44:55,977 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-ls: For input string: "false"
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
The issue was in the config file hdfs-site.xml.
I had dfs.replication set to false; the value must parse as an integer, so I changed it to 0:
<configuration>
<property>
<name>dfs.replication</name>
<value>0</value>
</property>
</configuration>
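For reference (a sketch, not part of the original fix): the underlying requirement is just that dfs.replication parses as an integer; on a typical single-node setup it would usually be set to 1:
$ cat hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>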

Installing MySQL using chef-solo

I have a VM to train myself on Chef Solo.
I installed it and configured a kitchen, and configured my own VM as the only node in the kitchen.
I used Librarian to download the mysql cookbook and updated the run list.
What is the command to use to install MySQL on my node?
Thanks,
Liora
You can use the following command:
chef-solo -j JSON_ATTRIBS -c CONFIG
where
JSON_ATTRIBS is the JSON attributes file for the VM, which also contains the run list to execute
CONFIG is the chef-solo configuration file (solo.rb), which points chef-solo at your cookbook path and other settings
More help can be found using chef-solo --help
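For example (a sketch only; the file names, the cookbook path, and the recipe name are illustrative and depend on how your kitchen is laid out and which mysql cookbook version Librarian fetched):
$ cat solo.rb
# tell chef-solo where Librarian placed the cookbooks
cookbook_path ["/home/liora/kitchen/cookbooks"]
$ cat node.json
{ "run_list": [ "recipe[mysql::server]" ] }
$ chef-solo -c solo.rb -j node.json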

hadoop examples not running on amazon ec2

I am using hadoop-1.0.4 on Amazon EC2 with 3 Ubuntu 12.10 instances, 1 master and 2 slaves, installed directly under the ~ directory.
start-all.sh and stop-all.sh now run OK, but when I run jps on the master or slaves it prints nothing. Then I tested the Hadoop examples:
~/hadoop$ bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 10000
It shows
Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:1879)
at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
However, I have already run chmod -R 777 on the tmp folders.
~/hadoop$ sudo bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 10000
With sudo, it produces
13/05/12 03:58:11 WARN conf.Configuration: DEPRECATED: hadoop-site.xml
found in the classpath. Usage of hadoop-site.xml is deprecated.
Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to
override properties of core-default.xml, mapred-default.xml
and hdfs-default.xml respectively
Number of Maps = 10
Samples per Map = 10000
13/05/12 03:58:12 WARN fs.FileSystem: "54.235.101.85:50001" is a deprecated
filesystem name. Use "hdfs://54.235.101.85:50001/" instead.
13/05/12 03:58:13 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 0 time(s).
13/05/12 03:58:14 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 1 time(s).
13/05/12 03:58:15 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 2 time(s).
Then it fails to connect. So what is the problem? Should I run the examples with sudo? Thanks a lot.
I think the problem is that 54.235.101.85 is a public IP address. Use ifconfig on all the nodes to get a list of IP addresses and check for addresses beginning with 10.x.x.x, 172.x.x.x, or 192.x.x.x. If you find any, modify your configuration files on all the nodes accordingly.
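For example (a sketch; the 10.x.x.x address is hypothetical and should be replaced by the master's private address reported by ifconfig), the filesystem name in the master's core-site.xml (or the deprecated hadoop-site.xml that the warning mentions) would become something like:
$ cat ~/hadoop/conf/core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://10.0.0.10:50001</value>
</property>
</configuration>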