Sqoop export failed - Export job failed - MySQL

Hi, I am trying to run a Sqoop export. There is an empty table called employee in MySQL, owned by the user newuser in the database db. I have created a CSV file that matches the datatypes of the employee table in MySQL, uploaded it to HDFS at /sqoop/export/emp.csv, and verified that it is present there. Then I ran the export command as
sqoop export --connect jdbc:mysql://localhost:3306/db --username newuser --table employee --export-dir /sqoop/export/emp.csv --driver com.mysql.jdbc.Driver
The map phase reaches 100% and then the job fails with "Export job failed", as shown below:
20/06/26 15:18:37 INFO mapreduce.Job: map 100% reduce 0%
20/06/26 15:18:38 INFO mapreduce.Job: Job job_1593163228066_0003 failed with state FAILED due to: Task failed task_1593163228066_0003_m_000003
Job failed as tasks failed. failedMaps:1 failedReduces:0
20/06/26 15:18:38 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=1
Killed map tasks=3
Launched map tasks=4
Data-local map tasks=4
Total time spent by all maps in occupied slots (ms)=23002
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=23002
Total vcore-milliseconds taken by all map tasks=23002
Total megabyte-milliseconds taken by all map tasks=23554048
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
20/06/26 15:18:38 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
20/06/26 15:18:38 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 22.6316 seconds (0 bytes/sec)
20/06/26 15:18:38 INFO mapreduce.ExportJobBase: Exported 0 records.
20/06/26 15:18:38 ERROR mapreduce.ExportJobBase: Export job failed!
20/06/26 15:18:38 ERROR tool.ExportTool: Error during export:
Export job failed!
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
So, what might the problem be, and how can I fix it? Thanks.
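The output above only shows that the export failed; the actual root cause lands in the logs of the failed map task. As a minimal sketch, assuming YARN log aggregation is enabled (the application ID is derived from the job ID in your output):

# Fetch the aggregated logs for the failed job and look for the root cause.
yarn logs -applicationId application_1593163228066_0003 | grep -B 2 -A 12 'Caused by'

In exports like this, the underlying error is very often a parse failure caused by a delimiter or datatype mismatch between the CSV file and the table, as in the related question below.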

Related

Unable to export Hive table to MySQL

I am trying to export a Hive table, whose data is stored tab-delimited in HDFS, to a MySQL database, but the job fails every time after the mapper phase.
I have referred to many links and resources and cross-checked my export command (export directory, table name, and other factors). The schemas of the two tables are also the same, but I still have no idea why the jobs fail every time.
Schema in Hive:
display_id int
employment_type string
edu_qualification string
marital_status string
job_type string
working_hours_per_week int
country string
salary string
Schema in MySQL:
display_id int
employment_type varchar(100)
edu_qualification varchar(100)
marital_status varchar(100)
job_type varchar(100)
working_hours_per_week int
country varchar(100)
salary varchar(100)
Command used for exporting table
sqoop export \
--connect <<jdbcURL>> \
--username root \
--password **** \
--table census_table \
--export-dir <<hdfs_dir>> \
--input-fields-terminated-by '\t' \
--columns "display_id,employment_type,edu_qualification,marital_status,job_type,working_hours_per_week,country,salary" \
--num-mappers 1
Sample data in file
39 StateGov BachelorDegree Unmarried Clerical 45 US <=55K
Logs of the export operation:
Warning: HBASE_HOME and HBASE_VERSION not set.
Warning: HCAT_HOME not set
Warning: HCATALOG_HOME does not exist HCatalog imports will fail.
Please set HCATALOG_HOME to the root of your HCatalog installation.
Warning: ACCUMULO_HOME not set.
Warning: ZOOKEEPER_HOME not set.
Warning: HBASE_HOME does not exist HBase imports will fail.
Please set HBASE_HOME to the root of your HBase installation.
Warning: ACCUMULO_HOME does not exist Accumulo imports will fail.
Please set ACCUMULO_HOME to the root of your Accumulo installation.
Warning: ZOOKEEPER_HOME does not exist Accumulo imports will fail.
Please set ZOOKEEPER_HOME to the root of your Zookeeper installation.
20/04/24 12:03:59 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
20/04/24 12:03:59 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
20/04/24 12:03:59 WARN sqoop.SqoopOptions: Character argument '\t' has multiple characters; only the first will be used.
20/04/24 12:04:00 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
20/04/24 12:04:00 INFO tool.CodeGenTool: Beginning code generation
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
20/04/24 12:04:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `census_table` AS t LIMIT 1
20/04/24 12:04:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `census_table` AS t LIMIT 1
20/04/24 12:04:01 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is C:\hadoop-2.10.0
Note: \tmp\sqoop-Anand\compile\8987413e3b916524daa02124c4829b87\census_table.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
20/04/24 12:04:08 INFO orm.CompilationManager: Writing jar file: \tmp\sqoop-Anand\compile\8987413e3b916524daa02124c4829b87\census_table.jar
20/04/24 12:04:08 INFO mapreduce.ExportJobBase: Beginning export of census_table
20/04/24 12:04:08 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
20/04/24 12:04:10 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
20/04/24 12:04:10 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
20/04/24 12:04:10 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
20/04/24 12:04:10 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
20/04/24 12:04:23 INFO input.FileInputFormat: Total input files to process : 1
20/04/24 12:04:23 INFO input.FileInputFormat: Total input files to process : 1
20/04/24 12:04:24 INFO mapreduce.JobSubmitter: number of splits:1
20/04/24 12:04:25 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
20/04/24 12:04:25 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1587709503815_0002
20/04/24 12:04:25 INFO conf.Configuration: resource-types.xml not found
20/04/24 12:04:25 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
20/04/24 12:04:25 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
20/04/24 12:04:25 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
20/04/24 12:04:26 INFO impl.YarnClientImpl: Submitted application application_1587709503815_0002
20/04/24 12:04:26 INFO mapreduce.Job: The url to track the job: http://Watson:8088/proxy/application_1587709503815_0002/
20/04/24 12:04:26 INFO mapreduce.Job: Running job: job_1587709503815_0002
20/04/24 12:04:38 INFO mapreduce.Job: Job job_1587709503815_0002 running in uber mode : false
20/04/24 12:04:38 INFO mapreduce.Job: map 0% reduce 0%
20/04/24 12:04:47 INFO mapreduce.Job: map 100% reduce 0%
20/04/24 12:04:48 INFO mapreduce.Job: Job job_1587709503815_0002 failed with state FAILED due to: Task failed task_1587709503815_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
20/04/24 12:04:48 INFO mapreduce.Job: Counters: 8
Job Counters
Failed map tasks=1
Launched map tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=7232
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=7232
Total vcore-milliseconds taken by all map tasks=7232
Total megabyte-milliseconds taken by all map tasks=7405568
20/04/24 12:04:48 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
20/04/24 12:04:48 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 38.2364 seconds (0 bytes/sec)
20/04/24 12:04:48 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
20/04/24 12:04:48 INFO mapreduce.ExportJobBase: Exported 0 records.
20/04/24 12:04:48 ERROR mapreduce.ExportJobBase: Export job failed!
20/04/24 12:04:48 ERROR tool.ExportTool: Error during export:
Export job failed!
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Error logs from the Hadoop application tracker:
2020-04-24 16:06:00,662 FATAL [IPC Server handler 11 on default port 51507] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1587719571504_0006_m_000000_0 - exited : java.io.IOException: Can't export data, please check failed map task logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:122)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)
Caused by: java.lang.RuntimeException: Can't parse input data: '39 StateGov BachelorDegree Unmarried Clerical 45 US <=55K'
at census_table.__loadFromFields(census_table.java:593)
at census_table.parse(census_table.java:474)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:89)
... 10 more
Caused by: java.lang.NumberFormatException: For input string: "39 StateGov BachelorDegree Unmarried Clerical 45 US <=55K"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.valueOf(Integer.java:766)
at census_table.__loadFromFields(census_table.java:527)
... 12 more
I have tried almost all possible combinations for the export command, but nothing has worked, and now I have no idea how to proceed or what I am doing wrong. Please help or suggest changes.
NOTE: I am able to import tables from MySQL, which indicates that the username, password, and jdbcURL I am using for the --connect parameter are verified and working.
Thanks
It can be failing for many reasons. Follow the tracking URL printed in your own output to see why the process is failing:
20/04/24 12:04:26 INFO mapreduce.Job: The url to track the job: http://Watson:8088/proxy/application_1587709503815_0002/
On that page, click into the logs of the failed attempt, and you can explore why the process is failing.
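In this particular trace, the telling detail is the NumberFormatException on the entire row ('39 StateGov ... <=55K'): Sqoop handed the whole line to Integer.parseInt as a single field, so the delimiter it split on never occurred in the data. Your own log hints at why: the warning "Character argument '\t' has multiple characters; only the first will be used" means Sqoop never received a real tab character (on Windows, cmd.exe does not strip single quotes, so try double quotes around \t). It is also worth confirming what the file actually uses, since Hive's default field separator is \001 (Ctrl-A) rather than a tab. A quick check, as a sketch (the HDFS path is a placeholder for your export directory):

# Print the first line with non-printing characters made visible:
# a tab shows up as ^I, Hive's default \001 delimiter as ^A.
hdfs dfs -cat /user/hive/warehouse/census_table/000000_0 | head -n 1 | cat -A

# Then pass the delimiter the file really uses, e.g. for Hive's default:
sqoop export ... --input-fields-terminated-by '\001' ...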

Error when importing a MySQL table to HDFS using Sqoop

I started learning Hadoop and am practicing on my own. Here is an issue I hit when I try to use Sqoop to import a MySQL table to HDFS:
sqoop import --connect jdbc:mysql://localhost/employees --username=root -P --table=dept_emp --warehouse-dir=dept_emp -where dept_no='d001' --m 1;
The dept_emp table has roughly 20k records.
The output is as the following:
2016-09-26 16:42:26,467 INFO [main] ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-09-26 16:42:27,470 INFO [main] ipc.Client: Retrying connect to
server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
The "Already tried x time(s)" increased from 0 to 9, and then looped again from 0 to 9, hanging there now.
Can someone shed me some light?
Thank you very much.
Please correct the syntax:
--where "dept_no='d001'"
-m 1;
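Folding those corrections into the original command, as a sketch:

sqoop import --connect jdbc:mysql://localhost/employees --username=root -P --table=dept_emp --warehouse-dir=dept_emp --where "dept_no='d001'" -m 1

Note that the syntax fix alone may not clear the hang: the repeated retries against 0.0.0.0:8032 mean the client cannot reach the YARN ResourceManager at all, so also confirm that YARN is running and that the ResourceManager address is set in yarn-site.xml.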

Importing data into HDFS using Sqoop hangs

I am following this tutorial: http://hadooped.blogspot.fr/2013/05/apache-sqoop-for-data-integration.html. I have installed the Hadoop services (HDFS, Hive, Sqoop, Hue, ...) using Cloudera Manager.
I am using Ubuntu 12.04 LTS.
When I try to import data from MySQL to HDFS, the MapReduce job runs for an infinite time without returning any error, even though the imported table has only 4 columns and 10 rows.
This is what I do:
sqoop import --connect jdbc:mysql://localhost/employees --username hadoop --password password --table departments -m 1 --target-dir /user/sqoop2/sqoop-mysql/department
Warning: /opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/02/23 17:49:09 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.5.2
16/02/23 17:49:09 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/02/23 17:49:10 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
16/02/23 17:49:10 INFO tool.CodeGenTool: Beginning code generation
16/02/23 17:49:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments` AS t LIMIT 1
16/02/23 17:49:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments` AS t LIMIT 1
16/02/23 17:49:11 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/6bdeb198a0c249392703e3fc0070cb64/departments.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/02/23 17:49:19 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/6bdeb198a0c249392703e3fc0070cb64/departments.jar
16/02/23 17:49:19 WARN manager.MySQLManager: It looks like you are importing from mysql.
16/02/23 17:49:19 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
16/02/23 17:49:19 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
16/02/23 17:49:19 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
16/02/23 17:49:19 INFO mapreduce.ImportJobBase: Beginning import of departments
16/02/23 17:49:20 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
16/02/23 17:49:24 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
16/02/23 17:49:24 INFO client.RMProxy: Connecting to ResourceManager at hadoopUser/10.0.2.15:8032
16/02/23 17:49:31 INFO db.DBInputFormat: Using read commited transaction isolation
16/02/23 17:49:31 INFO mapreduce.JobSubmitter: number of splits:1
16/02/23 17:49:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456236806433_0004
16/02/23 17:49:34 INFO impl.YarnClientImpl: Submitted application application_1456236806433_0004
16/02/23 17:49:34 INFO mapreduce.Job: The url to track the job: http://hadoopUser:8088/proxy/application_1456236806433_0004/
16/02/23 17:49:34 INFO mapreduce.Job: Running job: job_1456236806433_0004
regards,
The MapReduce job is not getting launched at all. First run a test wordcount job on the cluster to confirm that MapReduce itself works.
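A minimal sketch of that test on a CDH parcel install (the parcel path comes from the HADOOP_MAPRED_HOME line in your log; the HDFS paths are placeholders):

# Stage a small input file and run the stock wordcount example.
hdfs dfs -mkdir -p /tmp/wc-in
hdfs dfs -put /etc/hosts /tmp/wc-in
hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount /tmp/wc-in /tmp/wc-out

# If this job also sits at "Running job:" forever, YARN cannot schedule
# containers (check NodeManager status and memory/vcore settings in
# Cloudera Manager); the problem is the cluster, not Sqoop.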

Facing an issue while executing a Sqoop import command in Hadoop 2.6.0

I am using Mac OS X for my Hadoop stack, with MySQL as the database. I am trying to execute a Sqoop import command:
sqoop import --connect jdbc:mysql://127.0.0.1/emp --table employee --username root --password reality --m 1 --target-dir /sqoop_import
But I am facing the issue below while executing it. Even in /etc/hosts, localhost is mapped to 127.0.0.1. I have tried pinging localhost and it works, but the "Host is down" error still prevails. Please help.
2016-02-06 17:42:38,267 INFO [main] mapreduce.Job: Job job_1454152643692_0010 failed with state FAILED due to: Application application_1454152643692_0010 failed 2 times due to Error launching appattempt_1454152643692_0010_000002. Got exception: java.io.IOException: Failed on local exception: java.net.SocketException: Host is down; Host Details : local host is: "Mohits-MacBook-Pro.local/192.168.0.103"; destination host is: "192.168.0.105":38183;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Host is down
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 9 more
. Failing the application.
2016-02-06 17:42:38,304 INFO [main] mapreduce.Job: Counters: 0
2016-02-06 17:42:38,321 WARN [main] mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-02-06 17:42:38,326 INFO [main] mapreduce.ImportJobBase: Transferred 0 bytes in 125.7138 seconds (0 bytes/sec)
2016-02-06 17:42:38,349 WARN [main] mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-02-06 17:42:38,350 INFO [main] mapreduce.ImportJobBase: Retrieved 0 records.
2016-02-06 17:42:38,351 ERROR [main] tool.ImportTool: Error during import: Import job failed!
I see that you are having network issues. The log above shows that the local host resolves to Mohits-MacBook-Pro.local/192.168.0.103 on your Unix system, while the destination it is trying to reach is "192.168.0.105":38183. Go to your /etc/hosts file and make sure localhost is mapped to 127.0.0.1.
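As a sketch, a single-node setup usually only needs /etc/hosts entries like these (the hostname is taken from your log; adjust it to your machine):

# /etc/hosts
127.0.0.1    localhost
127.0.0.1    Mohits-MacBook-Pro.local

# Verify that the hostname now resolves locally:
ping -c 1 Mohits-MacBook-Pro.local

Keeping the cluster's hostname pinned to 127.0.0.1 stops the Hadoop daemons from binding to a Wi-Fi address like 192.168.0.x that can change or become unreachable.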

DataStax Hadoop InvalidRequestException(why:You have not logged in)

I have installed DataStax Enterprise "dse-4.0.1", but when I tried to run the demo from the link below,
http://www.datastax.com/docs/datastax_enterprise2.0/sqoop/sqoop_demo, I got the error shown here. Can anybody please help me with this issue? The log is attached for your reference.
[root@chbslx0624 bin]# ./dse sqoop import --connect jdbc:mysql://127.0.0.1/npa_nxx_demo --username root --password poc123 --table npa_nxx --target-dir /npa_nxx
14/04/14 10:44:14 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/04/14 10:44:14 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/04/14 10:44:14 INFO tool.CodeGenTool: Beginning code generation
14/04/14 10:44:14 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `npa_nxx` AS t LIMIT 1
14/04/14 10:44:14 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `npa_nxx` AS t LIMIT 1
14/04/14 10:44:14 INFO orm.CompilationManager: HADOOP_HOME is /opt/cassandra/dse-4.0.1/resources/hadoop/bin/..
Note: /tmp/sqoop-root/compile/b0fc8093d30c07f252da42678679e461/npa_nxx.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/04/14 10:44:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/b0fc8093d30c07f252da42678679e461/npa_nxx.jar
14/04/14 10:44:15 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/04/14 10:44:15 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/04/14 10:44:15 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/04/14 10:44:15 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/04/14 10:44:15 INFO mapreduce.ImportJobBase: Beginning import of npa_nxx
14/04/14 10:44:17 INFO snitch.Workload: Setting my workload to Cassandra
14/04/14 10:44:18 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:java.io.IOException: InvalidRequestException(why:You have not logged in)
14/04/14 10:44:18 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: InvalidRequestException(why:You have not logged in)
at com.datastax.bdp.util.CassandraProxyClient.initialize(CassandraProxyClient.java:453)
at com.datastax.bdp.util.CassandraProxyClient.<init>(CassandraProxyClient.java:376)
at com.datastax.bdp.util.CassandraProxyClient.newProxyConnection(CassandraProxyClient.java:259)
at com.datastax.bdp.util.CassandraProxyClient.newProxyConnection(CassandraProxyClient.java:306)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.initialize(CassandraFileSystemThriftStore.java:230)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystem.initialize(CassandraFileSystem.java:73)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:97)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:856)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:141)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:202)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:475)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:108)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:403)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
Caused by: InvalidRequestException(why:You have not logged in)
at org.apache.cassandra.thrift.Cassandra$describe_keyspaces_result$describe_keyspaces_resultStandardScheme.read(Cassandra.java:31961)
at org.apache.cassandra.thrift.Cassandra$describe_keyspaces_result$describe_keyspaces_resultStandardScheme.read(Cassandra.java:31928)
at org.apache.cassandra.thrift.Cassandra$describe_keyspaces_result.read(Cassandra.java:31870)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_keyspaces(Cassandra.java:1181)
at org.apache.cassandra.thrift.Cassandra$Client.describe_keyspaces(Cassandra.java:1169)
at com.datastax.bdp.util.CassandraProxyClient.initialize(CassandraProxyClient.java:425)
... 32 more
It looks like you have another Cassandra node running locally that has username/password authentication enabled. You can follow the latest documentation: start DSE with the -t option and then run the import script.
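A minimal sketch of that sequence for DSE 4.x (the -t flag starts the node with the Hadoop trackers enabled; the -u/-p client flags and the cassandra/cassandra credentials are assumptions to adapt to your cluster):

# Start DSE with Hadoop (job/task tracker) enabled.
./dse cassandra -t

# Re-run the import, authenticating to Cassandra since auth is enabled.
./dse -u cassandra -p cassandra sqoop import --connect jdbc:mysql://127.0.0.1/npa_nxx_demo --username root --password poc123 --table npa_nxx --target-dir /npa_nxx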