Sqoop error: database does not exist even though it exists - MySQL

I have 3 nodes: namenode1, datanode1 and datanode2.
Sqoop and MySQL are installed on namenode1.
I can see the database test in the list of databases:
hadoop@namenode1:/usr/local/sqoop/lib$ sqoop list-databases --connect jdbc:mysql://localhost/ --username root --password hadoop
18/08/25 19:49:58 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
18/08/25 19:49:58 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/08/25 19:49:58 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
metastore
mysql
performance_schema
test
But when I run the import, it fails:
hadoop@namenode1:/usr/local/sqoop/lib$ sqoop import --connect jdbc:mysql://localhost:3306/test --username root --password hadoop --table student;
18/08/25 19:43:37 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
18/08/25 19:43:37 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/08/25 19:43:37 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/08/25 19:43:37 INFO tool.CodeGenTool: Beginning code generation
18/08/25 19:43:38 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `student` AS t LIMIT 1
18/08/25 19:43:38 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `student` AS t LIMIT 1
18/08/25 19:43:38 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop
Note: /tmp/sqoop-hadoop/compile/2a56cd695f49348fad38af086755acd8/student.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/08/25 19:43:41 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/2a56cd695f49348fad38af086755acd8/student.jar
18/08/25 19:43:41 WARN manager.MySQLManager: It looks like you are importing from mysql.
18/08/25 19:43:42 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
18/08/25 19:43:42 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
18/08/25 19:43:42 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
18/08/25 19:43:42 INFO mapreduce.ImportJobBase: Beginning import of student
18/08/25 19:43:42 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/08/25 19:43:43 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/08/25 19:43:43 INFO client.RMProxy: Connecting to ResourceManager at namenode1/192.168.1.2:8050
18/08/25 19:43:46 INFO db.DBInputFormat: Using read commited transaction isolation
18/08/25 19:43:46 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `student`
18/08/25 19:43:46 INFO db.IntegerSplitter: Split size: 0; Num splits: 4 from: 1 to: 2
18/08/25 19:43:46 INFO mapreduce.JobSubmitter: number of splits:2
18/08/25 19:43:47 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
18/08/25 19:43:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1535200795373_0015
18/08/25 19:43:48 INFO impl.YarnClientImpl: Submitted application application_1535200795373_0015
18/08/25 19:43:48 INFO mapreduce.Job: The url to track the job: http://namenode1:8088/proxy/application_1535200795373_0015/
18/08/25 19:43:48 INFO mapreduce.Job: Running job: job_1535200795373_0015
18/08/25 19:43:57 INFO mapreduce.Job: Job job_1535200795373_0015 running in uber mode : false
18/08/25 19:43:57 INFO mapreduce.Job: map 0% reduce 0%
18/08/25 19:44:03 INFO mapreduce.Job: Task Id : attempt_1535200795373_0015_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'test'
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:170)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:161)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:755)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)
Caused by: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'test'
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:223)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:168)
... 10 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'test'
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4237)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4169)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:928)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1750)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1290)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2493)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2526)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2311)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:834)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
I have granted privileges:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'127.0.0.1';
FLUSH PRIVILEGES;
but I am not sure why this error occurs when the database is present.

MySQL needs to be installed on all nodes.
When we run a MySQL import on a distributed platform, Sqoop expects the MySQL connection to be resolvable on every node, because each map task opens its own connection to the database; that is the reason why we need to install it on all nodes. Hope this explains the answer.
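One way to test this without installing MySQL everywhere (an assumption on my part, consistent with the reasoning above): point the JDBC URL at the namenode1 hostname instead of localhost, so every map task reaches the single MySQL instance. A minimal sketch, assuming namenode1 resolves from datanode1 and datanode2 and MySQL accepts remote connections (which the 'root'@'%' grant above already permits):
sqoop import --connect jdbc:mysql://namenode1:3306/test --username root -P --table student
With localhost in the URL, each map task launched on a datanode connects to its own node, where no test database exists, which matches the error surfacing only inside the map attempts.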

Related

Unable to import data using sqoop

I want to import data from MySQL to remote Hive using Sqoop. I have installed Sqoop on a middleware machine. When I run this command:
sqoop import --driver com.mysql.jdbc.Driver --connect jdbc:mysql://192.168.2.146:3306/fir --username root -P -m 1 --table beard_size_list --connect jdbc:hive2://192.168.2.141:10000/efir --username oracle -P -m 1 --hive-table lnd_beard_size_list --hive-import;
Is this command correct? Can I import data from remote MySQL to remote Hive this way?
When I ran this command, it kept on trying to connect to the resource manager:
17/11/01 10:54:05 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.1.0-129
Enter password:
17/11/01 10:54:10 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
17/11/01 10:54:10 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
17/11/01 10:54:10 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
17/11/01 10:54:10 INFO manager.SqlManager: Using default fetchSize of 1000
17/11/01 10:54:10 INFO tool.CodeGenTool: Beginning code generation
17/11/01 10:54:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM beard_size_list AS t WHERE 1=0
17/11/01 10:54:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM beard_size_list AS t WHERE 1=0
17/11/01 10:54:11 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.6.1.0-129/hadoop-mapreduce
Note: /tmp/sqoop-oracle/compile/d93080265a09913fbfe9e06e92d314a3/beard_size_list.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/11/01 10:54:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-oracle/compile/d93080265a09913fbfe9e06e92d314a3/beard_size_list.jar
17/11/01 10:54:15 INFO mapreduce.ImportJobBase: Beginning import of beard_size_list
17/11/01 10:54:15 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/11/01 10:54:15 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM beard_size_list AS t WHERE 1=0
17/11/01 10:54:17 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/11/01 10:54:17 INFO client.RMProxy: Connecting to ResourceManager at hortonworksn2.com/192.168.2.191:8050
17/11/01 10:54:17 INFO client.AHSProxy: Connecting to Application History server at hortonworksn2.com/192.168.2.191:10200
17/11/01 10:54:19 INFO ipc.Client: Retrying connect to server: hortonworksn2.com/192.168.2.191:8050. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
17/11/01 10:54:20 INFO ipc.Client: Retrying connect to server: hortonworksn2.com/192.168.2.191:8050. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
17/11/01 10:54:21 INFO ipc.Client: Retrying connect to server: hortonworksn2.com/192.168.2.191:8050. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
17/11/01 10:54:22 INFO ipc.Client: Retrying connect to server: hortonworksn2.com/192.168.2.191:8050. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
17/11/01 10:54:23 INFO ipc.Client: Retrying connect to server: hortonworksn2.com/192.168.2.191:8050. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
The port it is trying to connect to is 8050, but the actual port is 8033. How can I fix this?
Try the command below:
sqoop import --driver com.mysql.jdbc.Driver --connect jdbc:mysql://192.168.2.146:3306/fir --username root -P -m 1 --table beard_size_list ;
Please check that the property below is set correctly in yarn-site.xml:
<property>
  <name>yarn.resourcemanager.address</name>
  <value>192.168.2.191:8033</value>
</property>
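To double-check what value the cluster actually carries, one can grep the active configuration file (the path below is typical for an HDP install and may differ on your cluster):
grep -A1 'yarn.resourcemanager.address' /etc/hadoop/conf/yarn-site.xml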
Why have you added the --connect statement twice in your command? Try the code below:
sqoop import \
  --driver com.mysql.jdbc.Driver \
  --connect jdbc:mysql://192.168.2.146:3306/fir \
  --username root -P -m 1 \
  --split-by beard_size_list_table_primary_key \
  --table beard_size_list \
  --target-dir /user/data/raw/beard_size_list \
  --fields-terminated-by "," \
  --hive-import \
  --create-hive-table \
  --hive-table dbschema.beard_size_list
Note:
--create-hive-table determines whether the job will fail if a Hive table of that name already exists. It will work in this case; otherwise you have to create a Hive external table and set the target-dir path, as sketched below.
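For the external-table alternative mentioned in the note, a rough sketch via the Hive CLI; the column list is hypothetical, since the real schema of beard_size_list is not shown, while the delimiter and location match the import above:
hive -e "CREATE EXTERNAL TABLE dbschema.beard_size_list (
  id INT,              -- hypothetical columns; replace with the real schema
  beard_size STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/data/raw/beard_size_list';"
With the external table in place, you would run the import with the same --target-dir and drop the --hive-import and --create-hive-table flags, since the external table already reads that directory.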

Importing data into HDFS using Sqoop hangs

I am following this tutorial: http://hadooped.blogspot.fr/2013/05/apache-sqoop-for-data-integration.html. I have installed the Hadoop services (HDFS, Hive, Sqoop, Hue, ...) using Cloudera Manager.
I am using Ubuntu 12.04 LTS.
When trying to import data from MySQL to HDFS, the MapReduce job takes infinite time without returning any error, even though the imported table has only 4 columns and 10 rows.
This is what I do:
sqoop import --connect jdbc:mysql://localhost/employees --username hadoop --password password --table departments -m 1 --target-dir /user/sqoop2/sqoop-mysql/department
Warning: /opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/02/23 17:49:09 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.5.2
16/02/23 17:49:09 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/02/23 17:49:10 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
16/02/23 17:49:10 INFO tool.CodeGenTool: Beginning code generation
16/02/23 17:49:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments` AS t LIMIT 1
16/02/23 17:49:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments` AS t LIMIT 1
16/02/23 17:49:11 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/6bdeb198a0c249392703e3fc0070cb64/departments.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/02/23 17:49:19 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/6bdeb198a0c249392703e3fc0070cb64/departments.jar
16/02/23 17:49:19 WARN manager.MySQLManager: It looks like you are importing from mysql.
16/02/23 17:49:19 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
16/02/23 17:49:19 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
16/02/23 17:49:19 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
16/02/23 17:49:19 INFO mapreduce.ImportJobBase: Beginning import of departments
16/02/23 17:49:20 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
16/02/23 17:49:24 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
16/02/23 17:49:24 INFO client.RMProxy: Connecting to ResourceManager at hadoopUser/10.0.2.15:8032
16/02/23 17:49:31 INFO db.DBInputFormat: Using read commited transaction isolation
16/02/23 17:49:31 INFO mapreduce.JobSubmitter: number of splits:1
16/02/23 17:49:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456236806433_0004
16/02/23 17:49:34 INFO impl.YarnClientImpl: Submitted application application_1456236806433_0004
16/02/23 17:49:34 INFO mapreduce.Job: The url to track the job: http://hadoopUser:8088/proxy/application_1456236806433_0004/
16/02/23 17:49:34 INFO mapreduce.Job: Running job: job_1456236806433_0004
[job status screenshot]
Regards,
The MapReduce job is not getting launched. You need to run a test wordcount job on the cluster to check whether MapReduce works at all.
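A quick way to run that check, assuming the examples jar that ships with CDH parcels (the jar path and HDFS paths below are illustrative and may differ on your layout):
hdfs dfs -mkdir -p /tmp/wc-in
echo "hello hadoop hello sqoop" | hdfs dfs -put - /tmp/wc-in/sample.txt
hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount /tmp/wc-in /tmp/wc-out
hdfs dfs -cat /tmp/wc-out/part-r-00000
If this job also sits at 0% forever, the problem is YARN resource configuration rather than Sqoop.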

Facing an issue while executing Sqoop import command in Hadoop 2.6.0

I am using Mac OS X for the Hadoop stack and MySQL as its database. I am trying to execute a Sqoop import command:
sqoop import --connect jdbc:mysql://127.0.0.1/emp --table employee --username root --password reality --m 1 --target-dir /sqoop_import
But I am facing the issue below while executing it. Even in /etc/hosts, localhost is mapped to 127.0.0.1 (see the host file screenshot). I have tried pinging localhost and it works, but the "host is down" error still prevails. Please help.
2016-02-06 17:42:38,267 INFO [main] mapreduce.Job: Job job_1454152643692_0010 failed with state FAILED due to: Application application_1454152643692_0010 failed 2 times due to Error launching appattempt_1454152643692_0010_000002. Got exception: java.io.IOException: Failed on local exception: java.net.SocketException: Host is down; Host Details : local host is: "Mohits-MacBook-Pro.local/192.168.0.103"; destination host is: "192.168.0.105":38183;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Host is down
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 9 more
. Failing the application.
2016-02-06 17:42:38,304 INFO [main] mapreduce.Job: Counters: 0
2016-02-06 17:42:38,321 WARN [main] mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-02-06 17:42:38,326 INFO [main] mapreduce.ImportJobBase: Transferred 0 bytes in 125.7138 seconds (0 bytes/sec)
2016-02-06 17:42:38,349 WARN [main] mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-02-06 17:42:38,350 INFO [main] mapreduce.ImportJobBase: Retrieved 0 records.
2016-02-06 17:42:38,351 ERROR [main] tool.ImportTool: Error during import: Import job failed!
I see that you are having network issues. The log above says that the local host translates to Mohits-MacBook-Pro.local/192.168.0.103 on your Unix system, and the destination it is trying to connect to is "192.168.0.105":38183. Go to your Unix system, open the /etc/hosts file, and make sure localhost maps to 127.0.0.1.
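For illustration, an /etc/hosts along those lines; the hostname entry is inferred from the log, and your file may carry additional entries:
127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost
127.0.0.1       Mohits-MacBook-Pro.local
Pinning the machine's own hostname to 127.0.0.1 stops the daemons from chasing the DHCP-assigned LAN addresses (192.168.0.x) that appear in the error.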

ERROR: When running sqoop import command on master node

I have configured a Hadoop multi-node cluster. When I try to import a table from a MySQL database to Hive using Sqoop on the master node, it throws the following error:
sqoop import --connect jdbc:mysql://master:3306/mysql --username root --password admin --table payment --hive-import -- --null-string '\\N' --null-non-string '\\N'
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: $HADOOP_HOME is deprecated.
14/04/14 16:17:32 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/04/14 16:17:32 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
14/04/14 16:17:32 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
14/04/14 16:17:32 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/04/14 16:17:32 INFO tool.CodeGenTool: Beginning code generation
14/04/14 16:17:33 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `payment` AS t LIMIT 1
14/04/14 16:17:33 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `payment` AS t LIMIT 1
14/04/14 16:17:33 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop
Note: /tmp/sqoop-hduser/compile/e31d2917f0d797c58258a17ed005633c/payment.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/04/14 16:17:35 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hduser/compile/e31d2917f0d797c58258a17ed005633c/payment.jar
14/04/14 16:17:35 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/04/14 16:17:35 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/04/14 16:17:35 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/04/14 16:17:35 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/04/14 16:17:35 INFO mapreduce.ImportJobBase: Beginning import of payment
14/04/14 16:17:36 INFO ipc.Client: Retrying connect to server: master/10.10.3.74:54311. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/14 16:17:37 INFO ipc.Client: Retrying connect to server: master/10.10.3.74:54311. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/14 16:17:38 INFO ipc.Client: Retrying connect to server: master/10.10.3.74:54311. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/14 16:17:39 INFO ipc.Client: Retrying connect to server: master/10.10.3.74:54311. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/14 16:17:41 INFO ipc.Client: Retrying connect to server: master/10.10.3.74:54311. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/14 16:17:42 INFO ipc.Client: Retrying connect to server: master/10.10.3.74:54311. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/14 16:17:43 INFO ipc.Client: Retrying connect to server: master/10.10.3.74:54311. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/14 16:17:44 INFO ipc.Client: Retrying connect to server: master/10.10.3.74:54311. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/14 16:17:45 INFO ipc.Client: Retrying connect to server: master/10.10.3.74:54311. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/14 16:17:46 INFO ipc.Client: Retrying connect to server: master/10.10.3.74:54311. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/14 16:17:46 ERROR security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.net.ConnectException: Call to master/10.10.3.74:54311 failed on connection exception: java.net.ConnectException: Connection refused
14/04/14 16:17:46 ERROR tool.ImportTool: Encountered IOException running import job: java.net.ConnectException: Call to master/10.10.3.74:54311 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
at org.apache.hadoop.ipc.Client.call(Client.java:1118)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at org.apache.hadoop.mapred.$Proxy1.getProtocolVersion(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at org.apache.hadoop.mapred.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
at org.apache.hadoop.mapred.JobClient.createProxy(JobClient.java:559)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:498)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:479)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:563)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:561)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:549)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:600)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:413)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:601)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)
at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
at org.apache.hadoop.ipc.Client.call(Client.java:1093)
... 33 more
I tried giving permissions to that folder, but it's not working.
Versions of the technologies:
hadoop-1.2.1
Hive-0.11
sqoop-1.4.4
Please help me, and let me know if you want more info.
Thanks in advance.
You should not give your password on the command line; try it without the password (use -P so Sqoop prompts for it instead).
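For example, the same import with -P, as the warning in the log itself suggests; everything else is unchanged from the question:
sqoop import --connect jdbc:mysql://master:3306/mysql --username root -P --table payment --hive-import -- --null-string '\\N' --null-non-string '\\N'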

Datastax hadoop InvalidRequestException(why:You have not logged in)

I have installed DataStax Enterprise "dse-4.0.1", but when I try to run the demo from http://www.datastax.com/docs/datastax_enterprise2.0/sqoop/sqoop_demo, I get the error below. Can anybody please help me with this issue? The log file is attached for your reference.
[root@chbslx0624 bin]# ./dse sqoop import --connect jdbc:mysql://127.0.0.1/npa_nxx_demo --username root --password poc123 --table npa_nxx --target-dir /npa_nxx
14/04/14 10:44:14 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/04/14 10:44:14 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/04/14 10:44:14 INFO tool.CodeGenTool: Beginning code generation
14/04/14 10:44:14 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `npa_nxx` AS t LIMIT 1
14/04/14 10:44:14 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `npa_nxx` AS t LIMIT 1
14/04/14 10:44:14 INFO orm.CompilationManager: HADOOP_HOME is /opt/cassandra/dse-4.0.1/resources/hadoop/bin/..
Note: /tmp/sqoop-root/compile/b0fc8093d30c07f252da42678679e461/npa_nxx.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/04/14 10:44:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/b0fc8093d30c07f252da42678679e461/npa_nxx.jar
14/04/14 10:44:15 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/04/14 10:44:15 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/04/14 10:44:15 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/04/14 10:44:15 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/04/14 10:44:15 INFO mapreduce.ImportJobBase: Beginning import of npa_nxx
14/04/14 10:44:17 INFO snitch.Workload: Setting my workload to Cassandra
14/04/14 10:44:18 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:java.io.IOException: InvalidRequestException(why:You have not logged in)
14/04/14 10:44:18 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: InvalidRequestException(why:You have not logged in)
at com.datastax.bdp.util.CassandraProxyClient.initialize(CassandraProxyClient.java:453)
at com.datastax.bdp.util.CassandraProxyClient.<init>(CassandraProxyClient.java:376)
at com.datastax.bdp.util.CassandraProxyClient.newProxyConnection(CassandraProxyClient.java:259)
at com.datastax.bdp.util.CassandraProxyClient.newProxyConnection(CassandraProxyClient.java:306)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.initialize(CassandraFileSystemThriftStore.java:230)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystem.initialize(CassandraFileSystem.java:73)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:97)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:856)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:141)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:202)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:475)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:108)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:403)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
Caused by: InvalidRequestException(why:You have not logged in)
at org.apache.cassandra.thrift.Cassandra$describe_keyspaces_result$describe_keyspaces_resultStandardScheme.read(Cassandra.java:31961)
at org.apache.cassandra.thrift.Cassandra$describe_keyspaces_result$describe_keyspaces_resultStandardScheme.read(Cassandra.java:31928)
at org.apache.cassandra.thrift.Cassandra$describe_keyspaces_result.read(Cassandra.java:31870)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_keyspaces(Cassandra.java:1181)
at org.apache.cassandra.thrift.Cassandra$Client.describe_keyspaces(Cassandra.java:1169)
at com.datastax.bdp.util.CassandraProxyClient.initialize(CassandraProxyClient.java:425)
... 32 more
It looks like you have another locally running Cassandra node with username/password authentication enabled. You can follow the latest documentation: start DSE with -t and then run the import script.
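As a rough sketch of that suggestion (the flags are from my recollection of the DSE 4.x docs, so treat them as assumptions; the credentials are placeholders):
./dse cassandra -t
./dse -u cassandra -p cassandra sqoop import --connect jdbc:mysql://127.0.0.1/npa_nxx_demo --username root --password poc123 --table npa_nxx --target-dir /npa_nxx
The -t flag starts the node in analytics (Hadoop) mode, and -u/-p supply the Cassandra credentials that the CFS client needs in order to log in.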