Using JDBC to read a MySQL table in Spark Scala - warehouse path error - mysql

I am trying to read a MySQL table using Spark Scala. This is the code I tried:
val dataframe_mysql = sqlContext.read.format("jdbc")
.option("url","jdbc:mysql://xx.xx.xx.xx:xx")
.option("driver", "com.mysql.jdbc.Driver")
.option("dbtable", "schema.xxxx")
.option("user", "xxxx").option("password", "xxxxx").load()
but I am getting the following warehouse path message followed by a NullPointerException:
Warehouse path is 'file:/C:/Users/Owner/eclipse-workspace/stProject/spark-warehouse/'. Exception in thread "main" java.lang.NullPointerException at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:72) at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:113) at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:45) at
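For reference (and not a diagnosis of the NullPointerException above), a minimal Spark 2.x JDBC read against MySQL is sketched below. The host, port, database, table and credentials are placeholders, and the MySQL connector jar has to be on both the driver and executor classpath.
import org.apache.spark.sql.SparkSession
// A minimal sketch, assuming Spark 2.x and mysql-connector-java on the classpath.
// Host, port, database name, table name and credentials are placeholders.
val spark = SparkSession.builder().appName("MySQLJdbcRead").getOrCreate()
val df = spark.read.format("jdbc")
.option("url", "jdbc:mysql://db-host:3306/my_database") // database name included in the URL
.option("driver", "com.mysql.jdbc.Driver")
.option("dbtable", "my_database.my_table")
.option("user", "my_user")
.option("password", "my_password")
.load()
df.printSchema()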

Related

Apache Spark SQL get_json_object java.lang.String cannot be cast to org.apache.spark.unsafe.types.UTF8String

I am trying to read a JSON stream from an MQTT broker in Apache Spark with Structured Streaming, read some properties of the incoming JSON, and output them to the console. My code looks like this:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.get_json_object
import org.apache.spark.sql.types.StringType
val spark = SparkSession
.builder()
.appName("BahirStructuredStreaming")
.master("local[*]")
.getOrCreate()
import spark.implicits._
val topic = "temp"
val brokerUrl = "tcp://localhost:1883"
val lines = spark.readStream
.format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
.option("topic", topic).option("persistence", "memory")
.load(brokerUrl)
.toDF().withColumn("payload", $"payload".cast(StringType))
val jsonDF = lines.select(get_json_object($"payload", "$.eventDate").alias("eventDate"))
val query = jsonDF.writeStream
.format("console")
.start()
query.awaitTermination()
However, when the JSON arrives I get the following error:
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: Writing job aborted.
=== Streaming Query ===
Identifier: [id = 14d28475-d435-49be-a303-8e47e2f907e3, runId = b5bd28bb-b247-48a9-8a58-cb990edaf139]
Current Committed Offsets: {MQTTStreamSource[brokerUrl: tcp://localhost:1883, topic: temp clientId: paho7247541031496]: -1}
Current Available Offsets: {MQTTStreamSource[brokerUrl: tcp://localhost:1883, topic: temp clientId: paho7247541031496]: 0}
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
Project [get_json_object(payload#22, $.id) AS eventDate#27]
+- Project [id#10, topic#11, cast(payload#12 as string) AS payload#22, timestamp#13]
+- StreamingExecutionRelation MQTTStreamSource[brokerUrl: tcp://localhost:1883, topic: temp clientId: paho7247541031496], [id#10, topic#11, payload#12, timestamp#13]
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:300)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.spark.SparkException: Writing job aborted.
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:92)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:296)
at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3384)
at org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:2783)
at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3365)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3365)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2783)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$15(MicroBatchExecution.scala:537)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$14(MicroBatchExecution.scala:533)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:349)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:532)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:198)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:349)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
... 1 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 8, localhost, executor driver): java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.spark.unsafe.types.UTF8String
at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getUTF8String(rows.scala:46)
at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getUTF8String$(rows.scala:46)
at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getUTF8String(rows.scala:195)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:619)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$2(WriteToDataSourceV2Exec.scala:117)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:116)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.$anonfun$doExecute$2(WriteToDataSourceV2Exec.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:405)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:64)
... 34 more
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.spark.unsafe.types.UTF8String
at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getUTF8String(rows.scala:46)
at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getUTF8String$(rows.scala:46)
at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getUTF8String(rows.scala:195)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:619)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$2(WriteToDataSourceV2Exec.scala:117)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:116)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.$anonfun$doExecute$2(WriteToDataSourceV2Exec.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:405)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I am sending the JSON records using the mosquitto broker, and they look like this:
mosquitto_pub -m '{"eventDate": "2020-11-11T15:17:00.000+0200"}' -t "temp"
It seems that every string coming from the Bahir stream source provider raises this error. For instance, the following code also raises it:
spark.readStream
.format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
.option("topic", topic).option("persistence", "memory")
.load(brokerUrl)
.select("topic")
.writeStream
.format("console")
.start()
It looks like Spark does not recognize strings coming from Bahir, maybe some kind of odd String class version issue. I tried the following to make the code work:
set the Java version to 8
upgrade Spark from 2.4.0 to 2.4.7
set the Scala version to 2.11.12
use the decode function with all possible encoding combinations instead of .cast(StringType) to transform the "payload" column to a String
use the substring function on the "payload" column to recreate a compatible String.
Finally, I got working code by recreating the String with its constructor, going through a Dataset:
val lines = spark.readStream
.format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
.option("topic", topic).option("persistence", "memory")
.load(brokerUrl)
.select("payload")
.as[Array[Byte]]
.map(payload => new String(payload))
.toDF("payload")
This solution is rather ugly, but at least it works.
I believe there is nothing wrong with the code provided in the question, and I suspect a bug on the Bahir or Spark side that prevents Spark from handling Strings coming from the Bahir source.
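As a follow-up sketch (not part of the original workaround): once "payload" is a plain String column, the event date could also be extracted with from_json and an explicit schema instead of get_json_object. This assumes lines is the DataFrame built above and that the payload is the single-field JSON shown earlier.
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{StringType, StructField, StructType}
// Sketch: parse the recreated "payload" String column against an explicit schema.
val payloadSchema = StructType(Seq(StructField("eventDate", StringType)))
val jsonDF = lines
.select(from_json(col("payload"), payloadSchema).alias("json"))
.select(col("json.eventDate").alias("eventDate"))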

AWS EMR Spark exception on jdbc datasource load

I'm spinning up an AWS EMR cluster from the emr-5.31.0 image with Spark 2.4.6 on board, then logging into spark-shell on the master node and following this tutorial
https://bigdataprogrammers.com/load-data-from-mysql-in-spark-using-jdbc/
to load data from my RDS MySQL instance.
I've uploaded both the connector jar (mysql-connector-java-5.1.49-bin.jar) and the script to the /home/hadoop folder.
Then I proceed as described in the tutorial and get two errors:
[hadoop@ip-172-31-* ~]$ spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
20/10/09 16:41:31 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Spark context Web UI available at http://ip-172-31-*.ec2.internal:4040
Spark context available as 'sc' (master = yarn, app id = application_1602254033216_0005).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.4.6-amzn-0
/_/
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_265)
Type in expressions to have them evaluated.
Type :help for more information.
scala> :require /home/hadoop/mysql-connector-java-5.1.49-bin.jar
Added '/home/hadoop/mysql-connector-java-5.1.49-bin.jar' to classpath.
scala> :load /home/hadoop/test01.scala
Loading /home/hadoop/test01.scala...
import java.sql.{Connection, DriverManager, ResultSet}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
error: error while loading package, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/execution/package.class)' has location not matching its contents: contains package object execution
error: error while loading QueryExecution, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/execution/QueryExecution.class)' has location not matching its contents: contains class QueryExecution
error: error while loading package, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/plans/package.class)' has location not matching its contents: contains package object plans
error: error while loading LogicalPlan, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.class)' has location not matching its contents: contains class LogicalPlan
error: error while loading package, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/encoders/package.class)' has location not matching its contents: contains package object encoders
error: error while loading ExpressionEncoder, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.class)' has location not matching its contents: contains class ExpressionEncoder
error: error while loading Expression, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/expressions/Expression.class)' has location not matching its contents: contains class Expression
error: error while loading NamedExpression, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/catalyst/expressions/NamedExpression.class)' has location not matching its contents: contains class NamedExpression
error: error while loading DataFrameNaFunctions, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/DataFrameNaFunctions.class)' has location not matching its contents: contains class DataFrameNaFunctions
error: error while loading DataFrameStatFunctions, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/DataFrameStatFunctions.class)' has location not matching its contents: contains class DataFrameStatFunctions
error: error while loading TypedColumn, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/TypedColumn.class)' has location not matching its contents: contains class TypedColumn
error: error while loading package, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/package.class)' has location not matching its contents: contains package object function
error: error while loading ReduceFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/ReduceFunction.class)' has location not matching its contents: contains class ReduceFunction
error: error while loading KeyValueGroupedDataset, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/KeyValueGroupedDataset.class)' has location not matching its contents: contains class KeyValueGroupedDataset
error: error while loading MapFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/MapFunction.class)' has location not matching its contents: contains class MapFunction
error: error while loading Metadata, class file '/usr/lib/spark/jars/spark-catalyst_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/types/Metadata.class)' has location not matching its contents: contains class Metadata
error: error while loading FilterFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/FilterFunction.class)' has location not matching its contents: contains class FilterFunction
error: error while loading MapPartitionsFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/MapPartitionsFunction.class)' has location not matching its contents: contains class MapPartitionsFunction
error: error while loading FlatMapFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/FlatMapFunction.class)' has location not matching its contents: contains class FlatMapFunction
error: error while loading ForeachFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/ForeachFunction.class)' has location not matching its contents: contains class ForeachFunction
error: error while loading ForeachPartitionFunction, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/api/java/function/ForeachPartitionFunction.class)' has location not matching its contents: contains class ForeachPartitionFunction
error: error while loading StorageLevel, class file '/usr/lib/spark/jars/spark-core_2.11-2.4.6-amzn-0.jar(org/apache/spark/storage/StorageLevel.class)' has location not matching its contents: contains class StorageLevel
error: error while loading CreateViewCommand, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/execution/command/CreateViewCommand.class)' has location not matching its contents: contains class CreateViewCommand
error: error while loading DataFrameWriter, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/DataFrameWriter.class)' has location not matching its contents: contains class DataFrameWriter
error: error while loading DataStreamWriter, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/streaming/DataStreamWriter.class)' has location not matching its contents: contains class DataStreamWriter
error: error while loading SparkPlan, class file '/usr/lib/spark/jars/spark-sql_2.11-2.4.6-amzn-0.jar(org/apache/spark/sql/execution/SparkPlan.class)' has location not matching its contents: contains class SparkPlan
scala> :load /home/hadoop/test01.scala
Loading /home/hadoop/test01.scala...
import java.sql.{Connection, DriverManager, ResultSet}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
defined object ReadDataFromJdbc
scala> ReadDataFromJdbc.main(Array("batches"))
Started.......Fri Oct 09 16:42:02 UTC 2020 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[Stage 0:> (0 + 1) / 1]20/10/09 16:42:04 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, ip-172-31-20-13.ec2.internal, executor 1): java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:111)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:45)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:55)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:272)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1405)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.lang.ClassLoader.findClass(ClassLoader.java:523)
at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.java:35)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.java:40)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:106)
... 25 more
[Stage 0:> (0 + 0) / 1]20/10/09 16:42:05 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
(Connectivity Failed for Table ,org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, ip-172-31-27-165.ec2.internal, executor 2): java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:111)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:45)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:55)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:272)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1405)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.lang.ClassLoader.findClass(ClassLoader.java:523)
at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.java:35)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.java:40)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:106)
... 25 more
Driver stacktrace:)
The first error appears when I load the Scala script: it loads with some errors, but repeating the same :load command seems to fix it.
The second error appears once I request data to be loaded from MySQL: despite the fact that the MySQL JDBC connector was added to the classpath with a command earlier, it fails with java.lang.ClassNotFoundException: com.mysql.jdbc.Driver.
While I believe I can find some directory that Spark will search to pick up the JDBC jar, I'm completely confused by the error appearing on load of the script - why is it appearing, and how can it be fixed?
I ended up creating a bootstrap step for the cluster that copies the mysql-connector-java jar to all nodes of the cluster before Spark and Hadoop are even installed.
1. Create the copymysqljar.sh script:
#!/bin/bash
sudo mkdir -p /home/hadoop
sudo mkdir -p /usr/lib/spark/jars
sudo mkdir -p /usr/lib/hadoop/lib
aws s3 cp s3://<YOUR_BUCKET>/mysql-connector-java-5.1.49-bin.jar /home/hadoop
chmod 777 /home/hadoop/mysql-connector-java-5.1.49-bin.jar
sudo cp /home/hadoop/mysql-connector-java-5.1.49-bin.jar /usr/lib/spark/jars
sudo cp /home/hadoop/mysql-connector-java-5.1.49-bin.jar /usr/lib/hadoop/lib
2. Save copymysqljar.sh to the S3 bucket identified by s3://<YOUR_BUCKET>.
3. Proceed to cluster creation in AWS with 'Create cluster' - 'Advanced configuration'.
4. During the advanced configuration, on step 4, create a custom bootstrap action with s3://<YOUR_BUCKET>/copymysqljar.sh as the script.
5. Start cluster creation.
Alternatively, instead of steps 3, 4 and 5, you can do the same with the AWS command-line tools.
See the official docs on bootstrap actions: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-bootstrap.html#CustomBootstrapCopyS3Object
In general, this script takes care of everything for AWS EMR 5.31 with Hadoop, Spark and Zeppelin. You might need to copy the jar to other directories if other tools should connect to MySQL too.
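If you want to double-check from spark-shell that the bootstrap action really made the connector visible to the executors, one hedged way (a sketch, not part of the original steps) is to force each task to resolve the driver class; collect() will throw ClassNotFoundException if the jar is missing on a node.
// Sketch: resolve com.mysql.jdbc.Driver inside tasks to verify the executor classpath.
val resolved = spark.sparkContext
.parallelize(1 to 4, 4)
.map(_ => Class.forName("com.mysql.jdbc.Driver").getName)
.collect()
resolved.foreach(println)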

com.mysql.jdbc.Driver not found in spark2 scala

I am using Jupyter Notebook with a Scala kernel; below is my code to import a MySQL table into a DataFrame:
val sql="""select * from customer"""
val df_customer = spark.read
.format("jdbc")
.option("url", "jdbc:mysql://localhost:3306/ccfd")
.option("driver", "com.mysql.jdbc.Driver")
.option("dbtable", s"( $sql ) t")
.option("user", "root")
.option("password", "xxxxxxx")
.load()
Below is the error:
Name: java.lang.ClassNotFoundException
Message: com.mysql.jdbc.Driver
StackTrace: at scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:45)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:79)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:79)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:79)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:34)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:340)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
Can anyone share a working code snippet here? I am using Spark 2, and a session named spark is ready when I start the kernel in a new notebook.
Thank you in advance.
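The same root cause as in the EMR question above applies here: com.mysql.jdbc.Driver must be visible to the JVM that hosts the Spark driver (the notebook kernel) and to the executors. Below is a hedged sketch of the usual configuration keys; the jar path is a placeholder, and the extraClassPath settings only take effect if they are in place when the JVM is launched, so depending on your kernel you may need to pass them at kernel startup rather than inside the notebook.
import org.apache.spark.sql.SparkSession
// Sketch only: standard configuration keys for putting the MySQL connector jar
// on the driver and executor classpath. The path below is a placeholder.
val spark = SparkSession.builder()
.appName("jupyter-jdbc")
.config("spark.jars", "/path/to/mysql-connector-java-5.1.49-bin.jar")
.config("spark.driver.extraClassPath", "/path/to/mysql-connector-java-5.1.49-bin.jar")
.config("spark.executor.extraClassPath", "/path/to/mysql-connector-java-5.1.49-bin.jar")
.getOrCreate()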

Use SparkSession.sql() with JDBC

Problem:
I would like to use a JDBC connection to make a custom query using Spark.
The goal of this query is to optimize memory allocation on the workers; because of that I can't use:
ss.read
.format("jdbc")
.option("url", "jdbc:postgresql:dbserver")
.option("dbtable", "schema.tablename")
.option("user", "username")
.option("password", "password")
.load()
Currently:
I am currently trying to run:
val ss = SparkSession
.builder()
.appName(appName)
.master("local")
.config(conf)
.getOrCreate()
ss.sql("some custom query")
Configuration:
url=jdbc:mysql://127.0.0.1/database_name
driver=com.mysql.jdbc.Driver
user=user_name
password=xxxxxxxxxx
Error:
[info] Exception encountered when attempting to run a suite with class name: db.TestUserProvider *** ABORTED ***
[info] org.apache.spark.sql.AnalysisException: Table or view not found: users; line 1 pos 14
[info] at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
[info] at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:459)
[info] at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:478)
[info] at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:463)
[info] at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
[info] at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
[info] at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
[info] at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:60)
[info] at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
[info] at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
Assumption:
I guess there is a configuration error, but I can't figure out where.
Spark can read and write data to/from relational databases using the JDBC data source (like you did in your first code example).
In addition (and completely separately), Spark allows using SQL to query views that were created over data that was already loaded into a DataFrame from some source. For example:
val df = Seq(1,2,3).toDF("a") // could be any DF, loaded from file/JDBC/memory...
df.createOrReplaceTempView("my_spark_table")
spark.sql("select a from my_spark_table").show()
Only "tables" (called views, as of Spark 2.0.0) created this way can be queried using SparkSession.sql.
If your data is stored in a relational database, Spark will have to read it from there first, and only then would it be able to execute any distributed computation on the loaded copy. Bottom line - we can load the data from the table using read, create a temp view, and then query it:
ss.read
.format("jdbc")
.option("url", "jdbc:mysql://127.0.0.1/database_name")
.option("dbtable", "schema.tablename")
.option("user", "username")
.option("password", "password")
.load()
.createOrReplaceTempView("my_spark_table")
// and then you can query the view:
val df = ss.sql("select * from my_spark_table where ... ")

Spark SQL error read JSON file : java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class

I am trying to read a JSON file using Spark SQL in Java.
This is my code:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
...
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
SQLContext sqlContext = new SQLContext(jsc);
DataFrame df = sqlContext.jsonFile("~/test.json");
df.printSchema();
df.registerTempTable("test");
...
I made a simple JSON file, "test.json", to keep it simple:
{
"name": "myname"
}
When I try to run the code, I get this error message:
17/03/30 10:02:26 INFO BlockManagerMasterEndpoint: Registering block manager 10.6.86.82:36824 with 1948.2 MB RAM, BlockManagerId(driver, 10.6.86.82, 36824)
17/03/30 10:02:26 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.6.86.82, 36824)
17/03/30 10:02:26 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
Exception in thread "main" java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
at org.apache.spark.sql.sources.CaseInsensitiveMap.<init>(ddl.scala:344)
at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:219)
at org.apache.spark.sql.SQLContext.load(SQLContext.scala:697)
at org.apache.spark.sql.SQLContext.jsonFile(SQLContext.scala:572)
at org.apache.spark.sql.SQLContext.jsonFile(SQLContext.scala:553)
at sugi.kau.sparkonjava.SparkSQL.main(SparkSQL.java:32)
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 6 more
17/03/30 10:02:26 INFO SparkContext: Invoking stop() from shutdown hook
...
Thanks.
In the Spark docs, for the function jsonFile(String path):
Loads a JSON file (one object per line), returning the result as a DataFrame. (Note that jsonFile is replaced by read().json().)
So you should have one object per line, and your source file should look like this:
{"name": "myname"}
{"name": "myname2"}
.....
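With the current API (Spark 2.x), the equivalent read uses spark.read.json. A short sketch in Scala, for consistency with the rest of this post, assuming the one-object-per-line file above:
import org.apache.spark.sql.SparkSession
// Sketch: read a line-delimited JSON file with the Spark 2.x API.
val spark = SparkSession.builder().appName("ReadJson").master("local[*]").getOrCreate()
val df = spark.read.json("test.json") // replaces the deprecated SQLContext.jsonFile
df.printSchema()
df.createOrReplaceTempView("test") // replaces registerTempTable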