Can't start datanode and name node. - hadoop2

When I run all.sh, I get this error:
Incorrect configuration: namenode address
dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not
configured. Stopping namenodes on []
I looked through all the .xml files and can't seem to find any problem. The XML configuration files are below.
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
</property>
</configuration>
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value> hdfs://localhost:9000</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.Shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
Thank you.
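A frequently reported cause of "Incorrect configuration: namenode address ... is not configured" is stray whitespace inside the fs.default.name value (note the space before hdfs:// in the core-site.xml above) — the value may not be trimmed, so the namenode address fails to resolve. A sketch of a corrected core-site.xml, assuming localhost:9000 is the intended address:

```xml
<!-- core-site.xml: no whitespace inside the <value> element -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
```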

Related

Hadoop UI Browse Directory error

When I open the Hadoop UI in a browser, I get this error:
Path does not exist on HDFS or WebHDFS is disabled. Please check your path or enable WebHDFS
Can you tell me what I am missing and how I can fix this error?
My config:
hdfs-site.xml
<configuration>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///opt/volume/datanode</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///opt/volume/namenode</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
By default it should auto-select path = '/', but it does not.

Get java.net.UnknownHostException when use hadoop-ha?

I get an exception when I execute the command sudo -u hdfs hdfs balancer -threshold 5.
Here is the Exception.
RuntimeException: java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
Here is my core-site.xml.
<property>
<name>fs.defaultFS</name>
<value>hdfs://nameservice1</value>
</property>
Here is my hdfs-site.xml.
<property>
<name>dfs.nameservices</name>
<value>nameservice1</value>
</property>
<property>
<name>dfs.ha.namenodes.nameservice1</name>
<value>nn1,nn2</value>
</property>
Can someone help me?
I ran into this problem when setting up HA. The problem was that I had set dfs.client.failover.proxy.provider.mycluster exactly as written in the reference documentation. When I replaced mycluster with my actual nameservice name, everything worked!
Reference: https://issues.apache.org/jira/browse/HDFS-12109
You can try adding the port number in core-site.xml:
<property>
<name>fs.defaultFS</name>
<value>hdfs://nameservice1:9000</value>
</property>
And make sure your machine's /etc/hosts file has an entry for nameservice1.
For example (assuming your machine's IP is 192.168.30.102):
127.0.0.1 localhost
192.168.30.102 nameservice1
Also make sure the client failover proxy provider is configured for your nameservice:
<property>
<name>dfs.client.failover.proxy.provider.nameservice1</name>
<value>
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
</value>
</property>
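For the logical nameservice to resolve, the HDFS client configuration also needs the per-NameNode RPC addresses alongside dfs.nameservices and dfs.ha.namenodes. A hedged sketch (the hostnames nn1-host and nn2-host are placeholders, not from the question):

```xml
<property>
<name>dfs.namenode.rpc-address.nameservice1.nn1</name>
<value>nn1-host:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nameservice1.nn2</name>
<value>nn2-host:8020</value>
</property>
```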

hive-metastore is not able to start in the cloudera manager installation process

We are installing Cloudera CDH4 on Ubuntu 12.04 LTS. During installation we are stuck at the hive-metastore start step. We have configured the metastore with MySQL, as recommended in the download documentation.
It gives us the following error:
/usr/lib/hive/conf$ sudo service hive-metastore status
* Hive Metastore is dead and pid file exists
In the log file its showing the following error:
ERROR metastore.HiveMetaStore (HiveMetaStore.java:main(4153)) - Metastore Thrift Server threw an exception...
org.apache.thrift.transport.TTransportException: No keytab specified
Following is our hive-site.xml file:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://my-local-system-ip:3306/metastore?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.apache.derby.jdbc.EmbeddedDriver</value>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>my-password</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoStartMechanism</name>
<value>SchemaTable</value>
</property>
<property>
<name>hive.aux.jars.path</name>
<value>file:///usr/share/java/mysql-connector-java.jar</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://<FQDN>:9083</value>
</property>
<property>
<name>hive.support.concurrency</name>
<description>Enable Hive's Table Lock Manager Service</description>
<value>true</value>
</property>
<property>
<name>hive.metastore.local</name>
<description>Enable Hive's Table Lock Manager Service</description>
<value>false</value>
</property>
<property>
<name>hive.server2.authentication</name>
<value>KERBEROS</value>
</property>
<property>
<name>hive.server2.authentication.kerberos.principal</name>
<value>hive/_HOST#<my-domain-name></value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10001</value>
<description>TCP port number to listen on, default 10000</description>
</property>
<property>
<name>hive.server2.authentication.kerberos.keytab</name>
<value>/etc/hive/conf/hive.keytab</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<description>Zookeeper quorum used by Hive's Table Lock Manager</description>
<value>FQDN</value>
</property>
<property>
<name>hive.metastore.sasl.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.zookeeper.client.port</name>
<value>2181</value>
<description>
The port at which the clients will connect.
</description>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>hive.server2.thrift.sasl.qop</name>
<value>auth</value>
<description>Sasl QOP value; one of 'auth', 'auth-int' and 'auth-conf'</description>
</property>
<property>
<name>hive.metastore.client.socket.timeout</name>
<value>3600</value>
<description>MetaStore Client socket timeout in seconds</description>
</property>
Our main focus is to install Impala. If we use the default Derby database, the Hive metastore works perfectly, but when we start impala-shell it shows Not Connected. What can we do to rectify this?
Can anybody help us with this error?
I think the issue is that you're missing the following parameter:
<property>
<name>hive.metastore.kerberos.keytab.file</name>
<value>/etc/hive/conf/hive.keytab</value>
<description>The path to the Kerberos Keytab file containing the metastore thrift server's service principal.</description>
</property>
I see you do have hive.server2.authentication.kerberos.keytab, but it appears this is not enough.
Replace "my-domain-name" in the hive.server2.authentication.kerberos.principal property with your domain name; that is the third part which is missing in the hive principal. (Note also that the separator before the realm in a Kerberos principal should be @, i.e. hive/_HOST@<my-domain-name>, not #.)
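Since hive.metastore.sasl.enabled is true, the metastore side typically needs its own principal configured as well as the keytab. A sketch, assuming the same keytab path as elsewhere in the question and a placeholder realm:

```xml
<property>
<name>hive.metastore.kerberos.principal</name>
<value>hive/_HOST@YOUR-REALM.COM</value>
</property>
<property>
<name>hive.metastore.kerberos.keytab.file</name>
<value>/etc/hive/conf/hive.keytab</value>
</property>
```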

hadoop/name is in an inconsistent state: storage directory does not exist or is not accessible

At first I could not get the jobtracker and tasktrackers to run, so I replaced all IPs like 10.112.57.243 with hdmaster in the xml files, and changed mapred.job.tracker to an hdfs:// URI. Later I formatted the namenode while Hadoop was running, and it turned into a disaster. I found the error message in the title in the logs. I then tried removing everything in /tmp and the HDFS tmp directory and restarting, but the problem persists. How can I get rid of this error and get the namenode running again? Thanks a lot.
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://hdmaster:50000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/ubuntu/hadoop/tmp</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.</description>
</property>
</configuration>
hadoop-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/ubuntu/hadoop/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://hdmaster:50000</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>hdfs://hdmaster:50001</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>hdfs://hdmaster:50001</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>/home/ubuntu/hadoop/system</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/home/ubuntu/hadoop/var</value>
</property>
</configuration>
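One thing worth checking: the configs above rely on hadoop.tmp.dir for the namenode storage location. Setting the name and data directories explicitly to paths that exist and are writable by the hadoop user can rule out the "storage directory does not exist or is not accessible" error. A sketch for hdfs-site.xml using the Hadoop 1.x property names (the paths are assumptions, chosen to match the layout in the question):

```xml
<property>
<name>dfs.name.dir</name>
<value>/home/ubuntu/hadoop/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/ubuntu/hadoop/data</value>
</property>
```

After stopping the cluster, reformatting the namenode will recreate the storage directory under the configured path.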

Adding mySql to eclipse project with maven tomcat plugin

I have a Spring MVC project downloaded from the web and fully working. I'm using Maven and the Maven Tomcat plugin to manage dependencies and to run the webapp in the built-in Tomcat. I'm trying to add MySQL support to my project. Since I'm new to Maven and the Maven Tomcat plugin, I don't know how to do this. Before I tried to add MySQL, everything was working and I was able to launch my web app simply by executing the tomcat:run Maven goal.
Now, when I execute tomcat:run, I get a
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
Here is what I've already done after some reading around the web:
I added dependencies for the MySQL driver (and Hibernate annotations too, since I want to use it) in my pom.xml, and specified the dependency for the Tomcat plugin:
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.9</version>
</dependency>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>tomcat-maven-plugin</artifactId>
<version>1.1</version>
<configuration>
<mode>context</mode>
</configuration>
<dependencies>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.9</version>
</dependency>
</dependencies>
</plugin>
You can also notice a tag specifying that a context.xml file should be used. But I don't know where to put this file. I read it should be generated automatically in tomcat/conf, but it's not present. So I added it manually with this content:
<?xml version="1.0" encoding="UTF-8"?>
<Context>
<Resource name="jdbc/mkyongdb" auth="Container" type="javax.sql.DataSource"
maxActive="50" maxIdle="30" maxWait="10000"
username="root" password="password"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/mkyongdb"/>
</Context>
Then in web.xml, located in tomcat/conf, I added:
<resource-ref>
<description>MySQL Datasource example</description>
<res-ref-name>jdbc/mkyongdb</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<res-auth>Container</res-auth>
</resource-ref>
I placed the same content in src/main/webapp/META-INF/context.xml and in src/main/webapp/WEB-INF/web.xml.
With all this configuration, the error mentioned above doesn't appear. But if I try to use Hibernate by adding
<bean id="dataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://localhost:3306/mkyongdb" />
<property name="username" value="root" />
<property name="password" value="password" />
</bean>
<bean
id="sessionFactory"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean" >
<property name="dataSource" >
<ref bean="dataSource" />
</property>
<property name="hibernateProperties" >
<props>
<prop key="hibernate.hbm2ddl.auto" >create-drop</prop>
<prop key="hibernate.dialect" >org.hibernate.dialect.MySQLDialect</prop>
<prop key="hibernate.show_sql" >true</prop>
</props>
</property>
<property name="annotatedClasses" >
<list>
<value>org.mose.grouporganizer.entity.AccelerometerFeatures</value>
</list>
</property>
</bean>
then I get the communications link failure. What am I missing?
If it's needed, I can add the full stack trace.
If your application works fine and you want to use MySQL as your database, then you should add the MySQL driver to your pom.xml and change the Hibernate configuration. That's it.
First, upgrade to the latest Tomcat Maven plugin, which is now hosted at Apache.
See http://tomcat.apache.org/maven-plugin-2.0/
Regarding the context, use:
<plugin>
<groupId>org.apache.tomcat.maven</groupId>
<artifactId>tomcat7-maven-plugin</artifactId>
<configuration>
<contextFile>path to your context file</contextFile>
</configuration>
</plugin>
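Putting the pieces together, a minimal plugin section might look like this (the contextFile path assumes the context.xml is kept under src/main/webapp/META-INF, as in the question; adjust to wherever your file actually lives):

```xml
<plugin>
<groupId>org.apache.tomcat.maven</groupId>
<artifactId>tomcat7-maven-plugin</artifactId>
<version>2.0</version>
<configuration>
<contextFile>src/main/webapp/META-INF/context.xml</contextFile>
</configuration>
</plugin>
```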