Hi, I am trying to run the sqoop export command. There is an empty table called employee in MySQL, under the username newuser, in the database db. I have created a CSV file that matches the data types of the employee table in MySQL, and I have put that file in HDFS at /sqoop/export/emp.csv. I have checked in HDFS and the file is present at that path. Then I ran the export command as
sqoop export --connect jdbc:mysql://localhost:3306/db --username newuser --table employee --export-dir /sqoop/export/emp.csv --driver com.mysql.jdbc.Driver
The map phase reaches 100% and then the job fails with "Export job failed!", as shown here:
20/06/26 15:18:37 INFO mapreduce.Job: map 100% reduce 0%
20/06/26 15:18:38 INFO mapreduce.Job: Job job_1593163228066_0003 failed with state FAILED due to: Task failed task_1593163228066_0003_m_000003
Job failed as tasks failed. failedMaps:1 failedReduces:0
20/06/26 15:18:38 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=1
Killed map tasks=3
Launched map tasks=4
Data-local map tasks=4
Total time spent by all maps in occupied slots (ms)=23002
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=23002
Total vcore-milliseconds taken by all map tasks=23002
Total megabyte-milliseconds taken by all map tasks=23554048
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
20/06/26 15:18:38 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
20/06/26 15:18:38 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 22.6316 seconds (0 bytes/sec)
20/06/26 15:18:38 INFO mapreduce.ExportJobBase: Exported 0 records.
20/06/26 15:18:38 ERROR mapreduce.ExportJobBase: Export job failed!
20/06/26 15:18:38 ERROR tool.ExportTool: Error during export:
Export job failed!
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
So, what might the problem be, and how can I fix it? Thanks
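In case it helps with diagnosis, this is how I would pull the failed map task's log (assuming YARN log aggregation is enabled), using the application id from the output above:
yarn logs -applicationId application_1593163228066_0003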
I am using Hadoop 2.7.1, Sqoop 1.4.6, and Java 1.8. All daemons are running properly. When I run sqoop import I get the error below. Can you tell me where the error is coming from and how to resolve it? Thanks in advance.
sqoop import --bindir ./ --connect jdbc:mysql://localhost/mydb --username root --password yashu123 --table shipper --m 1 --target-dir /file
Warning: /usr/local/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
2017-02-07 10:30:00,197 INFO [main] sqoop.Sqoop: Running Sqoop version: 1.4.6
2017-02-07 10:30:00,215 WARN [main] tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
2017-02-07 10:30:00,349 INFO [main] manager.MySQLManager: Preparing to use a MySQL streaming resultset.
2017-02-07 10:30:00,349 INFO [main] tool.CodeGenTool: Beginning code generation
Tue Feb 07 10:30:00 IST 2017 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2017-02-07 10:30:00,813 INFO [main] manager.SqlManager: Executing SQL statement: SELECT t.* FROM `shipper` AS t LIMIT 1
2017-02-07 10:30:00,855 INFO [main] manager.SqlManager: Executing SQL statement: SELECT t.* FROM `shipper` AS t LIMIT 1
2017-02-07 10:30:00,885 INFO [main] orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop
Note: ./shipper.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
2017-02-07 10:30:02,259 INFO [main] orm.CompilationManager: Writing jar file: ./shipper.jar
2017-02-07 10:30:02,427 WARN [main] manager.MySQLManager: It looks like you are importing from mysql.
2017-02-07 10:30:02,428 WARN [main] manager.MySQLManager: This transfer can be faster! Use the --direct
2017-02-07 10:30:02,428 WARN [main] manager.MySQLManager: option to exercise a MySQL-specific fast path.
2017-02-07 10:30:02,428 INFO [main] manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
2017-02-07 10:30:02,430 INFO [main] mapreduce.ImportJobBase: Beginning import of shipper
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2017-02-07 10:30:02,656 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-02-07 10:30:02,660 INFO [main] Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
2017-02-07 10:30:03,295 INFO [main] Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2017-02-07 10:30:03,561 INFO [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
Tue Feb 07 10:30:09 IST 2017 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2017-02-07 10:30:09,358 INFO [main] db.DBInputFormat: Using read commited transaction isolation
2017-02-07 10:30:09,580 INFO [main] mapreduce.JobSubmitter: number of splits:1
2017-02-07 10:30:09,649 INFO [main] Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
2017-02-07 10:30:09,649 INFO [main] Configuration.deprecation: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
2017-02-07 10:30:09,649 INFO [main] Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
2017-02-07 10:30:09,650 INFO [main] Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
2017-02-07 10:30:09,650 INFO [main] Configuration.deprecation: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
2017-02-07 10:30:09,650 INFO [main] Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
2017-02-07 10:30:09,650 INFO [main] Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
2017-02-07 10:30:09,650 INFO [main] Configuration.deprecation: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
2017-02-07 10:30:09,650 INFO [main] Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
2017-02-07 10:30:09,650 INFO [main] Configuration.deprecation: mapred.job.classpath.files is deprecated. Instead, use mapreduce.job.classpath.files
2017-02-07 10:30:09,651 INFO [main] Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
2017-02-07 10:30:09,651 INFO [main] Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
2017-02-07 10:30:09,651 INFO [main] Configuration.deprecation: mapred.cache.files.filesizes is deprecated. Instead, use mapreduce.job.cache.files.filesizes
2017-02-07 10:30:09,651 INFO [main] Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
2017-02-07 10:30:09,849 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1486442978038_0001
2017-02-07 10:30:10,353 INFO [main] impl.YarnClientImpl: Submitted application application_1486442978038_0001 to ResourceManager at /0.0.0.0:8032
2017-02-07 10:30:10,403 INFO [main] mapreduce.Job: The url to track the job: http://http://yasodhara-ideacentre-300S-08IHH:8088/proxy/application_1486442978038_0001/
2017-02-07 10:30:10,404 INFO [main] mapreduce.Job: Running job: job_1486442978038_0001
2017-02-07 10:30:16,612 INFO [main] mapreduce.Job: Job job_1486442978038_0001 running in uber mode : false
2017-02-07 10:30:16,614 INFO [main] mapreduce.Job: map 0% reduce 0%
2017-02-07 10:30:21,813 INFO [main] mapreduce.Job: map 100% reduce 0%
2017-02-07 10:30:21,829 INFO [main] mapreduce.Job: Job job_1486442978038_0001 completed successfully
2017-02-07 10:30:21,936 ERROR [main] tool.ImportTool: Imported Failed: No enum constant org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS
If you are trying to import data from MySQL to Hive, consider this scenario:
**MySQL Table:**
create table company
(
id int,
name varchar(20),
location varchar(20)
);
**Hive Table:**
create database acad;
use acad;
create table company
(
id int,
name string,
location string
);
This table will be stored in the default HDFS warehouse location:
/user/hive/warehouse/acad.db/company
Now, to load the data using Sqoop, you should use the command:
sqoop import --connect jdbc:mysql://localhost/b1 --username 'root' -P \
  --table 'company' --hive-import --hive-table 'company' -m 1 \
  --warehouse-dir /user/hive/warehouse/acad.db
Make sure you point --warehouse-dir at the database directory, so that when the MapReduce job runs it imports the company table from MySQL into /user/hive/warehouse/acad.db, creating a company folder which contains the output files _SUCCESS and part-m-00000.
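A quick way to verify the result, for example (the paths follow the warehouse layout above, and the query assumes the acad.company table reads from that location, as described below):
hdfs dfs -ls /user/hive/warehouse/acad.db/company
# should list _SUCCESS and part-m-00000
hive -e "SELECT * FROM acad.company LIMIT 5;"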
In general, when you create a Hive table and store values in it, the data is kept under /user/hive/warehouse/<database>.db/<table>/.
When you do the same through Sqoop, Hive is able to read the data from those files and display it in the Hive shell.
With this you should be able to successfully import data into Hive using Sqoop.
Regarding that error: it actually won't stop you from importing data into Hive from MySQL using Sqoop.
I am following this tutorial: http://hadooped.blogspot.fr/2013/05/apache-sqoop-for-data-integration.html. I have installed the Hadoop services (HDFS, Hive, Sqoop, Hue, ...) using Cloudera Manager.
I am using Ubuntu 12.04 LTS.
When I try to import data from MySQL to HDFS, the MapReduce job runs indefinitely without returning any error, even though the imported table has only 4 columns and 10 rows.
This is what I do:
sqoop import --connect jdbc:mysql://localhost/employees --username hadoop --password password --table departments -m 1 --target-dir /user/sqoop2/sqoop-mysql/department
Warning: /opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/02/23 17:49:09 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.5.2
16/02/23 17:49:09 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/02/23 17:49:10 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
16/02/23 17:49:10 INFO tool.CodeGenTool: Beginning code generation
16/02/23 17:49:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments` AS t LIMIT 1
16/02/23 17:49:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments` AS t LIMIT 1
16/02/23 17:49:11 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/6bdeb198a0c249392703e3fc0070cb64/departments.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/02/23 17:49:19 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/6bdeb198a0c249392703e3fc0070cb64/departments.jar
16/02/23 17:49:19 WARN manager.MySQLManager: It looks like you are importing from mysql.
16/02/23 17:49:19 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
16/02/23 17:49:19 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
16/02/23 17:49:19 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
16/02/23 17:49:19 INFO mapreduce.ImportJobBase: Beginning import of departments
16/02/23 17:49:20 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
16/02/23 17:49:24 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
16/02/23 17:49:24 INFO client.RMProxy: Connecting to ResourceManager at hadoopUser/10.0.2.15:8032
16/02/23 17:49:31 INFO db.DBInputFormat: Using read commited transaction isolation
16/02/23 17:49:31 INFO mapreduce.JobSubmitter: number of splits:1
16/02/23 17:49:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456236806433_0004
16/02/23 17:49:34 INFO impl.YarnClientImpl: Submitted application application_1456236806433_0004
16/02/23 17:49:34 INFO mapreduce.Job: The url to track the job: http://hadoopUser:8088/proxy/application_1456236806433_0004/
16/02/23 17:49:34 INFO mapreduce.Job: Running job: job_1456236806433_0004
[screenshot of the job status]
Regards,
The MapReduce job is not getting launched. You should first run a test wordcount job on the cluster to confirm that MapReduce itself is working.
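For example, a minimal sanity check along these lines (the examples jar path assumes the CDH parcel layout shown in the log; adjust it for your install):
# put a small text file into HDFS and run the bundled wordcount example
hdfs dfs -mkdir -p /tmp/wc-in
hdfs dfs -put /etc/hosts /tmp/wc-in/
yarn jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount /tmp/wc-in /tmp/wc-out
hdfs dfs -cat /tmp/wc-out/part-r-00000 | head
If this job also hangs at "Running job", the problem is with YARN container allocation (for example, NodeManager memory settings), not with Sqoop.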
I am using Mac OS X for the Hadoop stack and MySQL as its database. I am trying to execute a Sqoop import command:
sqoop import --connect jdbc:mysql://127.0.0.1/emp --table employee --username root --password reality --m 1 --target-dir /sqoop_import
But I am facing the issue below while executing it. Even in /etc/hosts, localhost is mapped to 127.0.0.1 [screenshot of the hosts file]. I have tried pinging localhost and it works, but the "Host is down" error still prevails. Please help.
2016-02-06 17:42:38,267 INFO [main] mapreduce.Job: Job job_1454152643692_0010 failed with state FAILED due to: Application application_1454152643692_0010 failed 2 times due to Error launching appattempt_1454152643692_0010_000002. Got exception: java.io.IOException: Failed on local exception: java.net.SocketException: Host is down; Host Details : local host is: "Mohits-MacBook-Pro.local/192.168.0.103"; destination host is: "192.168.0.105":38183;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Host is down
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 9 more
. Failing the application.
2016-02-06 17:42:38,304 INFO [main] mapreduce.Job: Counters: 0
2016-02-06 17:42:38,321 WARN [main] mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-02-06 17:42:38,326 INFO [main] mapreduce.ImportJobBase: Transferred 0 bytes in 125.7138 seconds (0 bytes/sec)
2016-02-06 17:42:38,349 WARN [main] mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-02-06 17:42:38,350 INFO [main] mapreduce.ImportJobBase: Retrieved 0 records.
2016-02-06 17:42:38,351 ERROR [main] tool.ImportTool: Error during import: Import job failed!
I see that you are having network issues. The log above says that the local host resolves to Mohits-MacBook-Pro.local/192.168.0.103 on your Unix system, and the destination it is trying to connect to is "192.168.0.105":38183. Please check the /etc/hosts file on your system and make sure localhost is mapped to 127.0.0.1 and the hostnames resolve consistently.
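For example, a minimal /etc/hosts along these lines (the hostname is taken from the log above; verify the actual entries on your machine):
127.0.0.1   localhost
127.0.0.1   Mohits-MacBook-Pro.local
After editing it, restart the Hadoop daemons so they re-register with the corrected addresses.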
I have installed DataStax Enterprise "dse-4.0.1", but when I tried to run the demo at http://www.datastax.com/docs/datastax_enterprise2.0/sqoop/sqoop_demo I got the error below. Can anybody please help me with the issue I am facing? The log is attached for your reference.
[root@chbslx0624 bin]# ./dse sqoop import --connect jdbc:mysql://127.0.0.1/npa_nxx_demo --username root --password poc123 --table npa_nxx --target-dir /npa_nxx
14/04/14 10:44:14 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/04/14 10:44:14 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/04/14 10:44:14 INFO tool.CodeGenTool: Beginning code generation
14/04/14 10:44:14 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `npa_nxx` AS t LIMIT 1
14/04/14 10:44:14 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `npa_nxx` AS t LIMIT 1
14/04/14 10:44:14 INFO orm.CompilationManager: HADOOP_HOME is /opt/cassandra/dse-4.0.1/resources/hadoop/bin/..
Note: /tmp/sqoop-root/compile/b0fc8093d30c07f252da42678679e461/npa_nxx.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/04/14 10:44:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/b0fc8093d30c07f252da42678679e461/npa_nxx.jar
14/04/14 10:44:15 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/04/14 10:44:15 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/04/14 10:44:15 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/04/14 10:44:15 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/04/14 10:44:15 INFO mapreduce.ImportJobBase: Beginning import of npa_nxx
14/04/14 10:44:17 INFO snitch.Workload: Setting my workload to Cassandra
14/04/14 10:44:18 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:java.io.IOException: InvalidRequestException(why:You have not logged in)
14/04/14 10:44:18 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: InvalidRequestException(why:You have not logged in)
at com.datastax.bdp.util.CassandraProxyClient.initialize(CassandraProxyClient.java:453)
at com.datastax.bdp.util.CassandraProxyClient.<init>(CassandraProxyClient.java:376)
at com.datastax.bdp.util.CassandraProxyClient.newProxyConnection(CassandraProxyClient.java:259)
at com.datastax.bdp.util.CassandraProxyClient.newProxyConnection(CassandraProxyClient.java:306)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.initialize(CassandraFileSystemThriftStore.java:230)
at com.datastax.bdp.hadoop.cfs.CassandraFileSystem.initialize(CassandraFileSystem.java:73)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:97)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:856)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:141)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:202)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:475)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:108)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:403)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
Caused by: InvalidRequestException(why:You have not logged in)
at org.apache.cassandra.thrift.Cassandra$describe_keyspaces_result$describe_keyspaces_resultStandardScheme.read(Cassandra.java:31961)
at org.apache.cassandra.thrift.Cassandra$describe_keyspaces_result$describe_keyspaces_resultStandardScheme.read(Cassandra.java:31928)
at org.apache.cassandra.thrift.Cassandra$describe_keyspaces_result.read(Cassandra.java:31870)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_keyspaces(Cassandra.java:1181)
at org.apache.cassandra.thrift.Cassandra$Client.describe_keyspaces(Cassandra.java:1169)
at com.datastax.bdp.util.CassandraProxyClient.initialize(CassandraProxyClient.java:425)
... 32 more
It looks like you have another locally running Cassandra node which has username/password authentication enabled. You can follow the latest documentation, start DSE with the -t option (analytics/Hadoop mode), and then run the import script.
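A rough sketch of that sequence, assuming the other node was installed as a system service (adjust to however it was actually started):
# stop the other, password-enabled Cassandra node so DSE's own node answers on 127.0.0.1
sudo service cassandra stop
# start DSE with the Hadoop trackers enabled (analytics mode)
./dse cassandra -t
# then re-run the ./dse sqoop import command from above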