Gson import error in Scala

I am using the Gson library to parse JSON data. I am trying to run a program from the terminal as follows:
scala -classpath "*.jar" JsonParsing.scala
To which I am getting the following error:
JsonParsing.scala:2: error: object google is not a member of package com
import com.google.gson.Gson
I am unsure why this error occurs, since I have the Gson jar in the correct folder:
gson-2.2.2.jar
I am using import statements as follows:
import com.google.gson.Gson
import com.google.gson.JsonObject
import com.google.gson.JsonParser
Help on this error would be appreciated. Thanks.

Your dependencies do not include the Gson package. You can use:
// https://mvnrepository.com/artifact/com.google.code.gson/gson
libraryDependencies += "com.google.code.gson" % "gson" % "2.8.0"
or download the appropriate jar from http://www.java2s.com/Code/Jar/g/gson.htm
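Once the jar is on the classpath, a minimal parse with Gson 2.x might look like this (a sketch; the JSON content and field names are made up for illustration):

import com.google.gson.{JsonObject, JsonParser}

object JsonParsing {
  def main(args: Array[String]): Unit = {
    val json = """{"name": "test", "count": 3}"""
    // Parse the raw string into a JSON tree and read two fields
    val obj: JsonObject = new JsonParser().parse(json).getAsJsonObject
    println(obj.get("name").getAsString) // test
    println(obj.get("count").getAsInt)   // 3
  }
}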

Compile:
$ scalac -classpath <path_to_your_jar_files> -d classes path/to/sources/you/want/to/compile/*.scala
Execute:
$ scala -classpath classes:<path_to_your_jar_files> com.your.package.ClassYouWantToRun
This is not a good way of doing it because it's not scalable. You should be using a tool like SBT to build and run projects.
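For example, a minimal build.sbt might look like this (a sketch; the project name and versions are assumptions):

name := "json-parsing"

scalaVersion := "2.11.8"

// https://mvnrepository.com/artifact/com.google.code.gson/gson
libraryDependencies += "com.google.code.gson" % "gson" % "2.8.0"

With this in place, sbt compile and sbt run handle the classpath for you.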

Related

ModuleNotFoundError: No module named 'paddle.distributed'

I am trying to run the following code to train PaddleOCR:
import paddle
import paddle.distributed as dist
But I'm getting this error:
ModuleNotFoundError: No module named 'paddle.distributed'
Even after I have installed paddle-client.
I use this image, which works well:
docker pull paddlepaddle/paddle:2.3.0-gpu-cuda11.2-cudnn8
You can also try paddlepaddle version 2.3.1; for a quick install, refer to: https://www.paddlepaddle.org.cn/en
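Note that paddle.distributed is provided by the paddlepaddle package itself, not by paddle-client, which is a different, unrelated package. A quick install might be (a sketch; pick the CPU or GPU build appropriate for your machine):

pip install paddlepaddle==2.3.1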

How to include python mysql.connector into AWS Chalice deployment?

I am trying to deploy an AWS Lambda application that I implemented with the Chalice Python framework. My app.py connects to a MySQL server and therefore has to
import mysql.connector
But on every invocation of one of my lambda functions I get an error in the log
'Unable to import module 'app': No module named mysql.connector'
I tried adding the mysql connector to the requirements.txt file in the Chalice project:
mysql_connector==2.1.6
And if I do so, two additional folders containing several files appear in the AWS Lambda environment:
/mysql_connector-2.1.6.data
/mysql_connector-2.1.6.dist-info
But the error remains the same. How to deploy python mysql.connector with Chalice?
This finally worked for me:
import os
import sys

# The connector is installed into the .data/purelib folder, which is not
# on sys.path by default, so add it before importing.
lib_path = os.path.abspath(os.path.join(__file__, '..', 'mysql_connector-2.1.6.data', 'purelib'))
sys.path.append(lib_path)
import mysql.connector
Putting the "mysql_connector==2.1.6" into the "requirements.txt" file did install the mysql connector in lambda environment. I added the path of the package (../mysql_connector-2.1.6.data/purelib) to system path.

How to import a package from a local jar in pyspark?

I am using pyspark to do some work on a CSV file, hence I need to import a package from spark-csv_2.10-1.4.0.jar, downloaded from https://repo1.maven.org/maven2/com/databricks/spark-csv_2.11/1.4.0/spark-csv_2.11-1.4.0.jar
I downloaded the jar to my local due to proxy issue.
Can anyone tell me the right way to refer to a local jar?
Here is the code I use:
pyspark --jars /home/rx52019/data/spark-csv_2.10-1.4.0.jar
It takes me to the pyspark shell as expected; however, when I run:
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true',inferschema='true').load('hdfs://dev-icg/user/spark/routes.dat')
routes.dat is already uploaded to HDFS at hdfs://dev-icg/user/spark/routes.dat
It gives me this error:
: java.lang.NoClassDefFoundError: org/apache/commons/csv/CSVFormat
If I run:
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true',inferschema='true').load('routes.dat')
I get this error:
py4j.protocol.Py4JJavaError: An error occurred while calling o72.load.
: java.lang.NoClassDefFoundError: Could not initialize class
com.databricks.spark.csv.package$
Can anyone help to sort it out for me? Thank you very much. Any clue is appreciated.
The correct way to do this would be to add the options (for example, if you are starting a spark shell):
spark-shell --packages com.databricks:spark-csv_2.11:1.4.0 --driver-class-path /path/to/csvfilejar.jar
I have not used the Databricks CSV jar directly, but I used a Netezza connector for Spark whose documentation mentions using this option:
https://github.com/SparkTC/spark-netezza
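For pyspark specifically, a sketch of the equivalent invocation would be the following (assuming the Scala 2.10 build matching the jar above). The --packages flag lets Spark resolve the jar and its transitive dependencies, such as the commons-csv classes missing in the error above:

pyspark --packages com.databricks:spark-csv_2.10:1.4.0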

Play Framework Anorm & DB Not Resolved

I'm using Play Framework 2.2.3 for the first time, and I'm having a lot of trouble importing anorm._ and play.api.db.DB so I can set up my SQL databases.
My set-up is this:
MainController.scala
import play.api._
import play.api.mvc._
import play.api.db.DB
import anorm._
object MainController extends Controller {...}
application.conf
# db.default.driver=com.mysql.jdbc.Driver
# db.default.url="jdbc:mysql:/usr/local/path/to/database"
build.sbt
libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.27"
My first question is whether or not I'm setting up the MySQL database connection correctly. The other thing is that IntelliJ resolves the import just fine, but when I compile in my browser, I get a compilation error: object db is not a member of package play.api. Any tips?
I downloaded both Play 2.2.2 and 2.2.3, and I had this problem with both, so version problems are ruled out. I installed from the website, unzipped the file into my home folder, and used
play new app
cd app/
play
idea with-sources=yes
for my installation. I honestly just have no idea what's going on.
I got the same issue, and it was resolved by adding "com.typesafe.play" % "play-iteratees_2.10" % "2.2.3" to my Build.scala file.
Simply add this to your dependencies:
<dependency>
    <groupId>com.typesafe.play</groupId>
    <artifactId>play-jdbc_2.11</artifactId>
    <version>2.4.6</version>
</dependency>
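For a Play 2.2.x project built with sbt, the equivalent would be to pull in Play's jdbc and anorm modules (a sketch; these unqualified keys are provided by the Play sbt plugin):

// build.sbt -- jdbc provides play.api.db.DB, anorm provides anorm._
libraryDependencies ++= Seq(
  jdbc,
  anorm,
  "mysql" % "mysql-connector-java" % "5.1.27"
)

Note also that the db.default lines in the application.conf above are commented out with #; they must be uncommented, and a MySQL URL normally has the form jdbc:mysql://host/databasename rather than a file path.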

How to add javax.swing.SwingUtilities dependency to SBT?

I'm starting to develop a Scala application with Swing using SBT. I figured out that I need two dependencies for a start: scala-swing and javax.swing.SwingUtilities.
I've been searching the web, Maven repositories and GitHub, but still couldn't find where the javax.swing package went.
So far I have found javax in the Maven repositories, but javax.swing is not listed there for some reason.
I tried to add a javax dependency to my Build.scala:
val javax = "javax" % "javaee-api" % "7.0"
SBT downloaded several packages. Then I launched the terminal:
scala> import javax.swing.SwingUtilities
import javax.swing.SwingUtilities
scala> SwingUtilities.invokeLater()
<console>:9: error: not enough arguments for method invokeLater: (x$1: Runnable)Unit.
Unspecified value parameter x$1.
SwingUtilities.invokeLater()
^
That's the Scala console launched from the sbt project. So as you can see, the import was successful and the console knows about the invokeLater() method. But IntelliJ IDEA still does not; it marks javax.swing as unresolvable, even though it has downloaded the packages.
I'm completely stuck here.
javax.swing is part of any standard Java SE installation (see http://docs.oracle.com/javase/6/docs/api/ - there you have SwingUtilities), so there is no need to add a dependency. You only need the scala-swing dependency:
libraryDependencies += "org.scala-lang" % "scala-swing" % scalaVersion.value
If IntelliJ IDEA doesn't see javax.swing, you have probably not yet defined an "SDK". Go to File -> Project Structure -> Platform Settings -> SDKs. There you should have at least one entry such as "1.6" for Java 1.6 or "1.7" for Java 1.7. If not, press the "+" to add one and locate the appropriate Java home directory for the version you want to use (depends on your OS).
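As for the invokeLater() error in the question: the method requires a Runnable argument, exactly as the compiler message says. A minimal sketch using plain javax.swing (no scala-swing):

import javax.swing.{JFrame, JLabel, SwingUtilities}

object SwingDemo {
  def main(args: Array[String]): Unit = {
    // Swing UI work must run on the Event Dispatch Thread, so pass a
    // Runnable to invokeLater instead of calling it with no arguments.
    SwingUtilities.invokeLater(new Runnable {
      override def run(): Unit = {
        val frame = new JFrame("Demo")
        frame.getContentPane.add(new JLabel("Hello from the EDT"))
        frame.pack()
        frame.setVisible(true)
      }
    })
  }
}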