I'm spinning up an AWS EMR cluster (release emr-5.31.0) with Spark 2.4.6 on board, logging into spark-shell on the master node, and following this tutorial
https://bigdataprogrammers.com/load-data-from-mysql-in-spark-using-jdbc/
to load data from my RDS MySQL instance.
I've uploaded both the connector jar (mysql-connector-java-5.1.49-bin.jar) and the script to the /home/hadoop folder.
Then I proceed as described in the tutorial and get two errors:
[hadoop@ip-172-31-* ~]$ spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
20/10/09 16:41:31 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Spark context Web UI available at http://ip-172-31-*.ec2.internal:4040
Spark context available as 'sc' (master = yarn, app id = application_1602254033216_0005).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.6-amzn-0
      /_/
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_265)
Type in expressions to have them evaluated.
Type :help for more information.
scala> :require /home/hadoop/mysql-connector-java-5.1.49-bin.jar
Added '/home/hadoop/mysql-connector-java-5.1.49-bin.jar' to classpath.
scala> :load /home/hadoop/test01.scala
Loading /home/hadoop/test01.scala...
import java.sql.{Connection, DriverManager, ResultSet}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
error: error while loading package, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/execution/package.class)' has location not matching its contents: contains package object execution
error: error while loading QueryExecution, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/execution/QueryExecution.class)' has location not matching its contents: contains class QueryExecution
error: error while loading package, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/plans/package.class)' has location not matching its contents: contains package object plans
error: error while loading LogicalPlan, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.class)' has location not matching its contents: contains class LogicalPlan
error: error while loading package, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/encoders/package.class)' has location not matching its contents: contains package object encoders
error: error while loading ExpressionEncoder, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.class)' has location not matching its contents: contains class ExpressionEncoder
error: error while loading Expression, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/expressions/Expression.class)' has location not matching its contents: contains class Expression
error: error while loading NamedExpression, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/expressions/NamedExpression.class)' has location not matching its contents: contains class NamedExpression
error: error while loading DataFrameNaFunctions, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/DataFrameNaFunctions.class)' has location not matching its contents: contains class DataFrameNaFunctions
error: error while loading DataFrameStatFunctions, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/DataFrameStatFunctions.class)' has location not matching its contents: contains class DataFrameStatFunctions
error: error while loading TypedColumn, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/TypedColumn.class)' has location not matching its contents: contains class TypedColumn
error: error while loading package, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/package.class)' has location not matching its contents: contains package object function
error: error while loading ReduceFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/ReduceFunction.class)' has location not matching its contents: contains class ReduceFunction
error: error while loading KeyValueGroupedDataset, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/KeyValueGroupedDataset.class)' has location not matching its contents: contains class KeyValueGroupedDataset
error: error while loading MapFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/MapFunction.class)' has location not matching its contents: contains class MapFunction
error: error while loading Metadata, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/types/Metadata.class)' has location not matching its contents: contains class Metadata
error: error while loading FilterFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/FilterFunction.class)' has location not matching its contents: contains class FilterFunction
error: error while loading MapPartitionsFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/MapPartitionsFunction.class)' has location not matching its contents: contains class MapPartitionsFunction
error: error while loading FlatMapFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/FlatMapFunction.class)' has location not matching its contents: contains class FlatMapFunction
error: error while loading ForeachFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/ForeachFunction.class)' has location not matching its contents: contains class ForeachFunction
error: error while loading ForeachPartitionFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/ForeachPartitionFunction.class)' has location not matching its contents: contains class ForeachPartitionFunction
error: error while loading StorageLevel, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/storage/StorageLevel.class)' has location not matching its contents: contains class StorageLevel
error: error while loading CreateViewCommand, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/execution/command/CreateViewCommand.class)' has location not matching its contents: contains class CreateViewCommand
error: error while loading DataFrameWriter, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/DataFrameWriter.class)' has location not matching its contents: contains class DataFrameWriter
error: error while loading DataStreamWriter, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/streaming/DataStreamWriter.class)' has location not matching its contents: contains class DataStreamWriter
error: error while loading SparkPlan, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/execution/SparkPlan.class)' has location not matching its contents: contains class SparkPlan
scala> :load /home/hadoop/test01.scala
Loading /home/hadoop/test01.scala...
import java.sql.{Connection, DriverManager, ResultSet}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
defined object ReadDataFromJdbc
scala> ReadDataFromJdbc.main(Array("batches"))
Started.......
Fri Oct 09 16:42:02 UTC 2020 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[Stage 0:> (0 + 1) / 1]20/10/09 16:42:04 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, ip-172-31-20-13.ec2.internal, executor 1): java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:111)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:45)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:55)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:272)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1405)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.lang.ClassLoader.findClass(ClassLoader.java:523)
at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.java:35)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.java:40)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:106)
... 25 more
[Stage 0:> (0 + 0) / 1]20/10/09 16:42:05 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
(Connectivity Failed for Table ,org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, ip-172-31-27-165.ec2.internal, executor 2): java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:111)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:45)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:55)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:272)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1405)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.lang.ClassLoader.findClass(ClassLoader.java:523)
at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.java:35)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.java:40)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:106)
... 25 more
Driver stacktrace:)
The first error appears when I load the Scala script: it loads with a batch of errors, but repeating the same :load command seems to fix it.
The second error appears once I ask for data to be loaded from MySQL: despite the fact that the MySQL JDBC connector was added to the classpath with the earlier :require command, it fails with java.lang.ClassNotFoundException: com.mysql.jdbc.Driver.
While I believe I can find some directory where Spark will pick up the JDBC jar, I'm super-confused by the errors appearing on script load - why do they appear, and how can they be fixed?
I ended up creating a bootstrap action for the cluster that copies the mysql-connector-java jar to all nodes of the cluster before Spark and Hadoop are even installed. (This targets the root cause of the second error: :require only adds the jar to the driver's REPL classpath, so the executors never see it and their tasks fail with ClassNotFoundException.)
1. Create a copymysqljar.sh script:
#!/bin/bash
# The target directories may not exist yet at bootstrap time,
# since Spark and Hadoop are installed after bootstrap actions run.
sudo mkdir -p /home/hadoop
sudo mkdir -p /usr/lib/spark/jars
sudo mkdir -p /usr/lib/hadoop/lib
# Fetch the connector from S3, then place copies where Spark and Hadoop load jars from.
aws s3 cp s3://<YOUR_BUCKET>/mysql-connector-java-5.1.49-bin.jar /home/hadoop
chmod 777 /home/hadoop/mysql-connector-java-5.1.49-bin.jar
sudo cp /home/hadoop/mysql-connector-java-5.1.49-bin.jar /usr/lib/spark/jars
sudo cp /home/hadoop/mysql-connector-java-5.1.49-bin.jar /usr/lib/hadoop/lib
2. Save copymysqljar.sh to an S3 bucket, referred to here as s3://<YOUR_BUCKET>.
3. Proceed to cluster creation in the AWS console via 'Create cluster' > 'Advanced options'.
4. During advanced configuration, on step 4, add a custom bootstrap action with s3://<YOUR_BUCKET>/copymysqljar.sh as the script.
5. Start cluster creation.
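Once the cluster is up, you can verify that the connector landed where the script puts it, for example:
# On any node; should list the copied connector jar.
ls /usr/lib/spark/jars | grep mysql-connector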
Alternatively, instead of steps 3, 4 and 5 you can do the same with the AWS command-line tools; see the official docs on custom bootstrap actions: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-bootstrap.html#CustomBootstrapCopyS3Object
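For reference, a minimal sketch of the CLI route; the cluster name, instance settings and key pair below are illustrative placeholders, not tested values:
# Create the cluster with the bootstrap action attached.
aws emr create-cluster \
  --name "spark-mysql" \
  --release-label emr-5.31.0 \
  --applications Name=Hadoop Name=Spark Name=Zeppelin \
  --bootstrap-actions Path=s3://<YOUR_BUCKET>/copymysqljar.sh \
  --instance-type m5.xlarge --instance-count 3 \
  --use-default-roles --ec2-attributes KeyName=<YOUR_KEY>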
In general, this script takes care of everything on AWS EMR 5.31 with Hadoop, Spark and Zeppelin. It might be necessary to copy the jar to other directories if other tools should connect to MySQL too.
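As an aside, for a purely interactive session the ClassNotFoundException can also be avoided without a bootstrap action by shipping the jar at shell startup, since --jars distributes it to the executors, whereas :require only touches the driver's REPL classpath:
spark-shell --jars /home/hadoop/mysql-connector-java-5.1.49-bin.jar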
I was importing data from MySQL to Hive using Sqoop:
sqoop import --connect jdbc:mysql://localhost:3306/DATASET --username root -P --table MATCHES --hive-import
and it failed with:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com.fasterxml.jackson.databind.ObjectMapper.readerFor(Ljava/lang/Class;)Lcom/fasterxml/jackson/databind/ObjectReader;
18/11/25 11:42:58 ERROR ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com.fasterxml.jackson.databind.ObjectMapper.readerFor(Ljava/lang/Class;)Lcom/fasterxml/jackson/databind/ObjectReader;
Do you have the jackson-databind jar in your Hive lib directory? Check it once; an unresolved signature like ObjectMapper.readerFor(Ljava/lang/Class;) usually means the jackson-databind version on Hive's classpath is too old to have that method.
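A quick way to check, assuming a standard layout where Hive's jars live under $HIVE_HOME/lib (an assumption; adjust the path to your install):
# List whichever jackson-databind version Hive actually loads.
ls $HIVE_HOME/lib | grep -i jackson-databind
If it is older than the one Sqoop ships, replacing it with the jar from Sqoop's lib directory is a common fix.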
sqoop export --connect jdbc:mysql://localhost:3306/hduser_db \
  --username hduser --password hduser \
  --table export --export-dir /user/hive/warehouse/three \
  --fields-terminated-by ','
17/09/13 14:10:45 INFO mapreduce.Job: map 0% reduce 0%
17/09/13 14:10:50 INFO mapreduce.Job: map 100% reduce 0%
17/09/13 14:10:51 INFO mapreduce.Job: Job job_1505199140014_0033 failed with state FAILED due to: Task failed task_1505199140014_0033_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
17/09/13 14:10:51 INFO mapreduce.Job: Counters: 8
    Job Counters
        Failed map tasks=1
        Launched map tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=2947
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=2947
        Total vcore-milliseconds taken by all map tasks=2947
        Total megabyte-milliseconds taken by all map tasks=3017728
17/09/13 14:10:51 WARN mapreduce.Counters: Group FileSystemCounters is deprecated
17/09/13 14:10:51 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 14.8875 s
17/09/13 14:10:51 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$C
17/09/13 14:10:51 INFO mapreduce.ExportJobBase: Exported 0 records.
17/09/13 14:10:51 ERROR tool.ExportTool: Error during export:
Export job failed!
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java
at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
While running the export command, the points below have to be taken care of:
1. Datatypes and column names should match between the source (HDFS data) and the destination (the table on the RDBMS).
2. You should specify all column names in the --columns parameter.
E.g.:
sqoop export --connect jdbc:mysql://localhost:3306/hduser_db \
  --username hduser \
  --password hduser \
  --table export \
  --export-dir /user/hive/warehouse/three \
  --fields-terminated-by ',' \
  --columns "column1,column2,...."
I have a CSV file in HDFS with lines like:
"2015-12-01","Augusta","46728.0","1"
I am trying to export this file to a MySQL table:
CREATE TABLE test.events_top10(
dt VARCHAR(255),
name VARCHAR(255),
summary VARCHAR(255),
row_number VARCHAR(255)
);
With the command:
sqoop export --table events_top10 --export-dir /user/hive/warehouse/result --escaped-by \" --connect ...
This command fails with error:
Error: java.io.IOException: Can't export data, please check failed map task logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.RuntimeException: Can't parse input data: '2015-12-02,Ashburn,43040.0,9'
at events_top10.__loadFromFields(events_top10.java:335)
at events_top10.parse(events_top10.java:268)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
Caused by: java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:834)
at events_top10.__loadFromFields(events_top10.java:320)
... 12 more
If I do not use the --escaped-by \" parameter, the MySQL table contains rows like this:
"2015-12-01" | "Augusta" | "46728.0" | "1"
Could you please explain how to export a CSV file to a MySQL table without the double quotes?
I have to use both --escaped-by '\\' and --enclosed-by '\"'.
So the correct command is:
sqoop export --table events_top10 --export-dir /user/hive/warehouse/result --escaped-by '\\' --enclosed-by '\"' --connect ...
For more information, please see the official Sqoop documentation.
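As I understand it, --enclosed-by '\"' tells Sqoop's generated parser that every field is wrapped in double quotes, so the quotes are stripped before the insert, while --escaped-by handles quote characters embedded inside fields. A fully spelled-out sketch of the command, with placeholder connection details that are not from the original post:
sqoop export \
  --connect jdbc:mysql://<HOST>:3306/test \
  --username <USER> -P \
  --table events_top10 \
  --export-dir /user/hive/warehouse/result \
  --escaped-by '\\' \
  --enclosed-by '\"'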
I am trying to sqoop data out of a MySQL database where I have a table with both a primary key and a last_updated field. I am essentially trying to get all the records that were recently updated and overwrite the current records in the Hive warehouse.
I have tried the following command:
sqoop job --create trainingDataUpdate -- import \
--connect jdbc:mysql://localhost:3306/analytics \
--username user \
--password-file /sqooproot.pwd \
--incremental lastmodified \
--check-column last_updated \
--last-value '2015-02-13 11:08:18' \
--table trainingDataFinal \
--merge-key id \
--direct --hive-import \
--hive-table analytics.trainingDataFinal \
--null-string '\\N' \
--null-non-string '\\N' \
--map-column-hive last_updated=TIMESTAMP
and I get the following error:
15/02/13 14:07:41 INFO hive.HiveImport: FAILED: SemanticException Line 2:17 Invalid path ''hdfs://dev.cluster.com:8020/user/hdfs/_sqoop/13140640000000520_32226_hwhjobdev_cluster.com_trainingDataFinal'': No files matching path hdfs://dev.cluster.com:8020/user/hdfs/_sqoop/13140640000000520_32226_dev.cluster.com_trainingDataFinal
15/02/13 14:07:42 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Hive exited with status 64
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:385)
at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:335)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:239)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:514)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:228)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:283)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
I thought that by including --merge-key it would be able to overwrite the old records with the new ones. Does anyone know if this is possible in Sqoop?
I don't think Sqoop can do it.
--merge-key is only used by sqoop-merge, not by import.
Also see http://sqoop.apache.org/docs/1.4.0-incubating/SqoopUserGuide.html#id1764421
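A two-step workaround using the separate merge tool, sketched with assumed HDFS paths and the codegen jar/class normally produced by sqoop codegen (all names below are placeholders, not from the original question):
# Step 1: pull only the rows updated since the last run into a delta directory.
sqoop import \
  --connect jdbc:mysql://localhost:3306/analytics \
  --username user --password-file /sqooproot.pwd \
  --table trainingDataFinal \
  --incremental lastmodified \
  --check-column last_updated \
  --last-value '2015-02-13 11:08:18' \
  --target-dir /user/hdfs/trainingDataFinal_delta
# Step 2: fold the delta onto the previous full extract, keeping the newest
# record per id; the merged directory can then back the Hive table.
sqoop merge \
  --new-data /user/hdfs/trainingDataFinal_delta \
  --onto /user/hdfs/trainingDataFinal \
  --target-dir /user/hdfs/trainingDataFinal_merged \
  --jar-file trainingDataFinal.jar \
  --class-name trainingDataFinal \
  --merge-key id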