I have installed the PredictionIO engine from the following link, using the first method. Now I want to run the engine using MySQL as a data source.
So I have configured the pio-env.sh file as shown below:
#!/usr/bin/env bash
#
# Copy this file as pio-env.sh and edit it for your site's configuration.
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# PredictionIO Main Configuration
#
# This section controls core behavior of PredictionIO. It is very likely that
# you need to change these to fit your site.
# SPARK_HOME: Apache Spark is a hard dependency and must be configured.
# SPARK_HOME=$PIO_HOME/vendors/spark-2.0.2-bin-hadoop2.7
SPARK_HOME=$PIO_HOME/vendors/spark-2.1.1-bin-hadoop2.6
POSTGRES_JDBC_DRIVER=$PIO_HOME/lib/postgresql-42.0.0.jar
MYSQL_JDBC_DRIVER=$PIO_HOME/lib/mysql-connector-java-8.0.11.jar
# PredictionIO Storage Configuration
#
# This section controls programs that make use of PredictionIO's built-in
# storage facilities. Default values are shown below.
#
# For more information on storage configuration please refer to
# http://predictionio.apache.org/system/anotherdatastore/
# Storage Repositories
# Default is to use PostgreSQL
PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=MYSQL
PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=MYSQL
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=MYSQL
# Storage Data Sources
# PostgreSQL Default Settings
# Please change "pio" to your database name in PIO_STORAGE_SOURCES_PGSQL_URL
# Please change PIO_STORAGE_SOURCES_PGSQL_USERNAME and
# PIO_STORAGE_SOURCES_PGSQL_PASSWORD accordingly
PIO_STORAGE_SOURCES_PGSQL_TYPE=jdbc
PIO_STORAGE_SOURCES_PGSQL_URL=jdbc:postgresql://localhost/pio
PIO_STORAGE_SOURCES_PGSQL_USERNAME=pio
PIO_STORAGE_SOURCES_PGSQL_PASSWORD=pio
# MySQL Example
PIO_STORAGE_SOURCES_MYSQL_TYPE=jdbc
PIO_STORAGE_SOURCES_MYSQL_URL=jdbc:mysql://localhost:3306/pio
PIO_STORAGE_SOURCES_MYSQL_USERNAME=root
PIO_STORAGE_SOURCES_MYSQL_PASSWORD=root
I have also placed the mysql-connector-java jar in the $PIO_HOME/lib directory.
However, when I run the pio status command, I get the following error:
[INFO] [Management$] PredictionIO 0.12.1 is installed at /home/oodles/predictionio/PredictionIO-0.12.1
[INFO] [Management$] Inspecting Apache Spark...
[INFO] [Management$] Apache Spark is installed at /home/oodles/predictionio/PredictionIO-0.12.1/vendors/spark-2.1.1-bin-hadoop2.6
[INFO] [Management$] Apache Spark 2.1.1 detected (meets minimum requirement of 1.3.0)
[INFO] [Management$] Inspecting storage backend connections...
[INFO] [Storage$] Verifying Meta Data Backend (Source: MYSQL)...
[ERROR] [Management$] Unable to connect to all storage backends successfully.
The following shows the error message from the storage backend.
No suitable driver found for jdbc:mysql://localhost:3306/pio (java.sql.SQLException)
Dumping configuration of initialized storage backend sources.
Please make sure they are correct.
Source Name: MYSQL; Type: jdbc; Configuration: PASSWORD -> root, URL -> jdbc:mysql://localhost:3306/pio, TYPE -> jdbc, USERNAME -> root
Can someone please help me out with this?
Change jdbc:mysql://localhost:3306/pio to jdbc:mysql://localhost/pio in the pio-env.sh file.
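That is, the MySQL source line in pio-env.sh would read as follows (the driver then falls back to the default port 3306 on its own):
PIO_STORAGE_SOURCES_MYSQL_URL=jdbc:mysql://localhost/pio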
I have had this same issue, and it happens because the JDBC driver is not actually installed, even though pio-env.sh pointing at a jar file makes you think it is. What you want to do is go to https://dev.mysql.com/downloads/connector/j/, choose the "Platform Independent" option, and click the download button. The Oracle website will try to make you sign up for something, but don't: go to the bottom of the page where it says "No thanks, just start my download", right-click, and save that link. You can then use the wget command to fetch the file and extract the archive in your Linux/Ubuntu environment, then copy the jar into $PIO_HOME/lib. I am a noob in Linux/Unix environments, but this worked for me with PIO.
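For reference, a minimal sketch of that download-and-install flow; the version number and download URL here are only examples, so take the actual link from the download page:
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.11.tar.gz
tar -xzf mysql-connector-java-8.0.11.tar.gz   # the "unzip" step for a .tar.gz archive
cp mysql-connector-java-8.0.11/mysql-connector-java-8.0.11.jar $PIO_HOME/lib/   # where pio-env.sh points
After that, pio status should be able to find the driver.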
I am trying to set up the Data Visualization extension to use data from a CSV file for the sensors, based on this example:
https://forge.autodesk.com/en/docs/dataviz/v1/developers_guide/advanced_topics/csv_adapter/
So the CSV data I am trying to use is the default Hyperion-1.csv in the folder server\gateways\csv. Do I need to add/change some other settings as well?
It is showing the following error in the Chrome console:
I have these settings for the CSV in the .env file.
And these in devices.json in the server\gateways\synthetic-data folder.
I've just taken the following steps to enable the CSV data adapter, and it seemed to work fine:
Clone the repo: git clone https://github.com/Autodesk-Forge/forge-dataviz-iot-reference-app
Install dependencies: npm install
Create a copy of server/env_template and rename it to server/.env
Modify the contents of server/.env, commenting out all the initial env. variables, uncommenting the CSV-related env. vars, and setting their corresponding values:
# FORGE_CLIENT_ID=
# FORGE_CLIENT_SECRET=
# FORGE_ENV=AutodeskProduction
# FORGE_API_URL=https://developer.api.autodesk.com
# FORGE_CALLBACK_URL=http://localhost:9000/oauth/callback
#
# FORGE_BUCKET=
# ENV=local
# ADAPTER_TYPE=local
## Please uncomment the following part if you want to connect to Azure IoTHub and Time Series Insights
## Connect to Azure IoTHub and Time Series Insights
# ADAPTER_TYPE=azure
# AZURE_IOT_HUB_CONNECTION_STRING=
# AZURE_TSI_ENV=
#
## Azure Service Principle
# AZURE_CLIENT_ID=
# AZURE_APPLICATION_SECRET=
# AZURE_TENANT_ID=
# AZURE_SUBSCRIPTION_ID=
#
## Path to Device Model configuration File
# DEVICE_MODEL_JSON=
## End - Connect to Azure IoTHub and Time Series Insights
## Please uncomment the following part if you want to use a CSV file as the time series provider
ADAPTER_TYPE=csv
CSV_MODEL_JSON=server/gateways/synthetic-data/device-models.json
CSV_DEVICE_JSON=server/gateways/synthetic-data/devices.json
CSV_FOLDER=server/gateways/csv/
CSV_DATA_START=2011-02-01T08:00:00.000Z
CSV_DATA_END=2011-02-20T13:51:10.511Z
CSV_DELIMITER="\t"
CSV_LINE_BREAK="\n"
CSV_TIMESTAMP_COLUMN="time"
CSV_FILE_EXTENSION=".csv"
## End - Please uncomment the following part if you want to use a CSV file as the time series provider
Run the app with ENV set to "local": ENV=local npm run dev
After these steps the app runs successfully; however, you'll still get some errors because the server/gateways/csv folder only contains data for a single sensor (Hyperion-1).
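If you just want those errors to go away while experimenting, one option (an untested sketch, assuming the adapter looks up one CSV file per device ID listed in devices.json) is to clone the sample data for the other sensors:
cp server/gateways/csv/Hyperion-1.csv server/gateways/csv/Hyperion-2.csv   # repeat for each device ID in devices.json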
Btw. I've been working on an alternative DataViz sample app that aims to be simpler and easier to reuse: https://github.com/petrbroz/forge-iot-extensions-demo (which uses https://github.com/petrbroz/forge-iot-extensions under the hood).
I have an issue configuring the POET engine to pick up the /etc/sawtooth/poet_engine_log_config.toml file.
Has anyone tried to change the logging format of the Sawtooth POET engine on Ubuntu?
The Sawtooth documentation (Log Configuration) describes how logging can be configured in /etc/sawtooth/, but even if I create the log file there, POET seems to ignore it. All the other services pick up their respective files, e.g. intkey, rest_api, validator, etc.
It so happens that the expected file name is poet-engine-log-config.toml (with hyphens), not poet_engine_log_config.toml. See line 83 of the code.
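So a rename along these lines should make POET pick the configuration up (then restart the engine however you run it):
sudo mv /etc/sawtooth/poet_engine_log_config.toml /etc/sawtooth/poet-engine-log-config.toml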
Does anyone know how to enable Kerberos with Apache Drill? Is it possible? I can't seem to find any documentation on it, or any questions/answers floating around with that information. I am currently running a CDH cluster.
I am getting this error when trying to use HDFS with Drill:
Error: PERMISSION ERROR: SIMPLE authentication is not enabled.
Available:[TOKEN, KERBEROS]
HDFS + Kerberos integration isn't currently supported / tested / documented. Vote on this ticket to track when it becomes available:
https://issues.apache.org/jira/browse/DRILL-3584
The Drill team doesn't provide any documentation about how to enable Kerberos, and they haven't tested Kerberos with Drill, but Drill engineering does believe that it should work.
In order to gain access to the cluster once it is Kerberized, you must configure certain files.
Make an HDFS superuser account as indicated in this Cloudera doc. On the main node, run:
sudo kadmin.local
In addition, add an 'hdfs' principal with this command (where LOCALDOMAIN is your Kerberos realm):
addprinc hdfs@LOCALDOMAIN
In order to enable authentication with Kerberos, we also need to copy the file hadoop-yarn-api.jar into Drill's classpath, for example:
cp /opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/hadoop/client/hadoop-yarn-api.jar ~/apache-drill/jars/
The above step and the three that follow must be performed on each node of the cluster where Apache Drill is installed.
Next, Drill's conf/core-site.xml file should be edited to contain the following snippet of XML. You might have to copy this file from /etc/hadoop/conf.cloudera.yarn/core-site.xml or a similar path.
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
After this step, you will also need to add the following XML snippet to Drill's core-site.xml file. In this instance, hdfs/_HOST@LOCALDOMAIN is my principal property. The property can be found in the hdfs-site.xml file.
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@LOCALDOMAIN</value>
</property>
All that is left to do is create an 'hdfs' Kerberos ticket for the user we're logged in as:
kinit hdfs -- hdfs is the superuser
Then start up each of the drillbits:
/opt/apachedrillfolder/bin/drillbit.sh start
So now Drill has both the configuration and the authority to use our Kerberized HDFS store. Give it a shot by opening a Drill prompt (drill-conf) and trying a query.
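For a quick sanity check, a sketch (assuming a dfs storage plugin configured to point at the Kerberized HDFS; the file path is hypothetical):
klist                                   # confirm the hdfs ticket is in place
/opt/apachedrillfolder/bin/drill-conf   # open the Drill prompt
Then at the prompt: SELECT * FROM dfs.`/tmp/sample.csv` LIMIT 10;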
I have an issue with my Gentoo. I tried to install BIND, but every time I try to install it I get an error message.
Here is what happens in my Konsole:
emerge --ask net-dns/bind
* IMPORTANT: 3 config files in '/etc/portage' need updating.
* See the CONFIGURATION FILES section of the emerge
* man page to learn how to update config files.
These are the packages that would be merged, in order:
Calculating dependencies... done!
[ebuild R ] dev-libs/openssl-1.0.1g USE="-bindist*"
[ebuild N ] net-dns/bind-9.9.4_p2 USE="berkdb dlz gost ipv6 ldap odbc ssl -caps -doc -filter-aaaa -fixed-rrset -geoip -gssapi -idn -mysql -postgres -python -rpz -rrl -sdb-ldap (-selinux) -static-libs -threads -urandom -xml"
!!! Multiple package instances within a single package slot have been pulled
!!! into the dependency graph, resulting in a slot conflict:
dev-libs/openssl:0
(dev-libs/openssl-1.0.1g::gentoo, ebuild scheduled for merge) pulled in by
>=dev-libs/openssl-1.0.0:0[-bindist] required by (net-dns/bind-9.9.4_p2::gentoo, ebuild scheduled for merge)
dev-libs/openssl:0[-bindist] required by (net-dns/bind-9.9.4_p2::gentoo, ebuild scheduled for merge)
(dev-libs/openssl-1.0.1g::gentoo, installed) pulled in by
>=dev-libs/openssl-0.9.6d:0[bindist] required by (net-misc/openssh-5.9_p1-r4::gentoo, installed)
It may be possible to solve this problem by using package.mask to
prevent one of those packages from being selected. However, it is also
possible that conflicting dependencies exist such that they are
impossible to satisfy simultaneously. If such a conflict exists in
the dependencies of two different packages, then those packages can
not be installed simultaneously. You may want to try a larger value of
the --backtrack option, such as --backtrack=30, in order to see if
that will solve this conflict automatically.
For more information, see MASKED PACKAGES section in the emerge man
page or refer to the Gentoo Handbook.
!!! The following installed packages are masked:
- media-libs/mesa-9.0::gentoo (masked by: package.mask)
/usr/portage/profiles/package.mask:
# Chí-Thanh Christopher Nguyễn <chithanh@gentoo.org> (26 Mar 2014)
# Affected by multiple vulnerabilities, #445916, #471098 and #472280
For more information, see the MASKED PACKAGES section in the emerge
man page or refer to the Gentoo Handbook.
Can anyone show me how to resolve this issue on my Gentoo? I have a hard time installing anything.
UPDATED
emerge --ask net-dns/bind
* IMPORTANT: 3 config files in '/etc/portage' need updating.
* See the CONFIGURATION FILES section of the emerge
* man page to learn how to update config files.
These are the packages that would be merged, in order:
Calculating dependencies... done!
[ebuild R ] dev-libs/openssl-1.0.1g USE="-bindist*"
[ebuild N ] net-dns/bind-9.9.4_p2 USE="berkdb dlz gost ipv6 ldap odbc ssl -caps -doc -filter-aaaa -fixed-rrset -geoip -gssapi -idn -mysql -postgres -python -rpz -rrl -sdb-ldap (-selinux) -static-libs -threads -urandom -xml"
The following USE changes are necessary to proceed:
see "package.use" in the portage(5) man page for more details)
# required by net-dns/bind-9.9.4_p2[ssl]
# required by net-dns/bind (argument)
=dev-libs/openssl-1.0.1g -bindist
Use --autounmask-write to write changes to config files (honoring
CONFIG_PROTECT). Carefully examine the list of proposed changes,
paying special attention to mask or keyword changes that may expose
experimental or unstable packages.
!!! The following installed packages are masked:
- media-libs/mesa-9.0::gentoo (masked by: package.mask)
/usr/portage/profiles/package.mask:
# Chí-Thanh Christopher Nguyễn <chithanh@gentoo.org> (26 Mar 2014)
# Affected by multiple vulnerabilities, #445916, #471098 and #472280
For more information, see the MASKED PACKAGES section in the emerge
man page or refer to the Gentoo Handbook.
Two steps to solve this problem:
1. Create /etc/portage/package.use/bind with:
net-dns/bind -ipv6 dlz
dev-libs/openssl -bindist
net-misc/openssh -bindist
2. Recompile OpenSSL and OpenSSH, then install BIND:
emerge -Uav dev-libs/openssl net-misc/openssh
emerge -av net-dns/bind
The USE flags for bind:
equery uses bind -i
[ Legend : U - final flag setting for installation]
[ : I - package is installed with flag ]
[ Colors : set, unset ]
* Found these USE flags for net-dns/bind-9.10.2_p2:
U I
+ + berkdb : Add support for sys-libs/db (Berkeley DB
for MySQL)
+ + caps : Use Linux capabilities library to control
privilege
+ + dlz : Enables dynamic loaded zones, 3rd party
extension
- - doc : Add extra documentation (API, Javadoc,
etc). It is recommended to enable per
package instead of globally
- - filter-aaaa : Enable filtering of AAAA records over IPv4
- - fixed-rrset : Enables fixed rrset-order option
- - geoip : Add geoip support for country and city
lookup based on IPs
- - gost : Enables gost OpenSSL engine support
- - gssapi : Enable gssapi support
- - idn : Enable support for Internationalized Domain
Names
- - ipv6 : Add support for IP version 6
- - json : Enable JSON statistics channel
- - ldap : Add LDAP support (Lightweight Directory
Access Protocol)
- - mysql : Add mySQL Database support
- - nslint : Build and install the nslint util
- - odbc : Add ODBC Support (Open DataBase
Connectivity)
- - postgres : Add support for the postgresql database
- - python : Add optional support/bindings for the
Python language
+ + python_targets_python2_7 : Build with Python 2.7
+ + python_targets_python3_3 : Build with Python 3.3
- - python_targets_python3_4 : Build with Python 3.4
- - rpz : Enable response policy rewriting (rpz)
- - seccomp : Enable seccomp for system call filtering
+ + ssl : Add support for Secure Socket Layer
connections
- - static-libs : Build static versions of dynamic libraries
as well
+ + threads : Add threads support for various packages.
Usually pthreads
- - urandom : Use /dev/urandom instead of /dev/random
- - xml : Add support for XML files
Maybe this helps:
# vi /etc/portage/package.use
and add this line (this line was changed):
dev-libs/openssl -bindist
I have no other ideas if this doesn't work, sorry :(
Maybe you can get help from the Gentoo forums.
Good luck.
emerge net-dns/bind --autounmask-write
etc-update
emerge net-dns/bind
This removes bindist from the USE flags.
Just to help other people having the same error: you need to add the line under "# required by" to your package.use file.
echo "=dev-libs/openssl-1.0.1g -bindist" >> /etc/portage/package.use/zz-autounmask
or
nano -w /etc/portage/package.use/zz-autounmask
and then manually copy the line into the file.
Replace "=dev-libs/openssl-1.0.1g -bindist" with what's required to be added to your package.use
I have the following errors while trying to run HBase in pseudo-distributed mode:
Error:KeeperErrorCode = NoNode for /hbase/backup-masters/VirtualBox,43390,137692277602
Error:KeeperErrorCode = NodeExists for /hbase/online-snapshot/acquired
Error:KeeperErrorCode = NoNode for /hbase/online-snapshot
Error:KeeperErrorCode = NoNode for /hbase/root-region-server
Error:KeeperErrorCode = NoNode for /hbase/table92/-ROOT-
2013-08-19 16:38:34,281 WARN
and the following exceptions:
org.apache.hadoop.hbase.client.RetriesExhaustedException
org.apache.zookeeper.server.NIOServerCnxn:caught end of stream exception
hbase-site.xml:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:54310/hbase</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>hbase.zookeeper.distributed</name>
<value>true</value>
</property>
</configuration>
The hbase-env.sh looks like this:
#
#/**
# * Copyright 2007 The Apache Software Foundation
# *
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements. See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership. The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License. You may obtain a copy of the License at
# *
# * http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */
# Set environment variables here.
# This script sets variables multiple times over the course of starting an hbase process,
# so try to keep things idempotent unless you want to take an even deeper look
# into the startup scripts (bin/hbase, etc.)
# The java implementation to use. Java 1.6 required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
# Extra Java CLASSPATH elements. Optional.
#export HBASE_CLASSPATH=/home/hduser/Desktop/hbase-0.94.10
# The maximum amount of heap to use, in MB. Default is 1000.
export HBASE_HEAPSIZE=400
# Extra Java runtime options.
# Below are what we set by default. May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
# Uncomment one of the below three options to enable java garbage collection logging for the server-side processes.
# This enables basic gc logging to the .out file.
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"
# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"
# Uncomment one of the below three options to enable java garbage collection logging for the client processes.
# This enables basic gc logging to the .out file.
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"
# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"
# Uncomment below if you intend to use the EXPERIMENTAL off heap cache.
# export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize="
# Set hbase.offheapcache.percentage in hbase-site.xml to a nonzero value.
# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
#
# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
# File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.
export HBASE_REGIONSERVERS=/home/hduser/Desktop/hbase-0.94.10/conf/regionservers
# File naming hosts on which backup HMaster will run. $HBASE_HOME/conf/backup-masters by default.
# export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters
# Extra ssh options. Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"
# Where log files are stored. $HBASE_HOME/logs by default.
export HBASE_LOG_DIR=/home/hduser/Desktop/hbase-0.94.10/logs
# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"
# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER
# The scheduling priority for daemon processes. See 'man nice'.
# export HBASE_NICENESS=10
# The directory where pid files are stored. /tmp by default.
export HBASE_PID_DIR=/home/hduser/Desktop/hbase-0.94.10/pids
# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1
# Tell HBase whether it should manage its own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true
My /etc/hosts has the following content:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
127.0.1.1 VirtualBox
# The following lines are desirable for IPv6 capable hosts
#::1 ip6-localhost ip6-loopback
#fe00::0 ip6-localnet
#ff00::0 ip6-mcastprefix
#ff02::1 ip6-allnodes
#ff02::2 ip6-allrouters
What configuration do you have in your /etc/hosts?
Make 127.0.1.1 point to localhost on the HBase client.
You are facing this problem because of these IPs: 127.0.0.1 and 127.0.1.1.
Try to configure a real IP, with the proper gateway, subnet mask, etc., in the /etc/network/interfaces file (on Ubuntu).
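For example, a minimal /etc/hosts along these lines, with the 127.0.1.1 line removed (192.168.1.10 is just a placeholder; use the machine's real LAN address):
127.0.0.1    localhost.localdomain localhost
192.168.1.10 VirtualBox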