I am using OFBiz in my organization. I want to migrate OFBiz from Derby to MySQL.
I followed the steps from
(https://cwiki.apache.org/confluence/display/OFBIZ/How+to+migrate+OfBiz+from+Derby+to+MySQL+database) here, but got stuck at the end.
At the end, when I run the command (java -jar ofbiz.jar -install), I get an exception:
C:\Users\sagar_vinod_khanke\Sagar\Apache OFBiz\Ofbiz\13.07>java -jar ofbiz.jar -install
Exception in thread "main" org.ofbiz.base.start.StartupException: Couldn't not fetch config instance
    at org.ofbiz.base.start.Start.init(Start.java:202)
    at org.ofbiz.base.start.Start.main(Start.java:127)
Caused by: java.io.IOException: Cannot load configuration properties : org/ofbiz/base/start/-install.properties
    at org.ofbiz.base.start.Config.getPropertiesFile(Config.java:229)
    at org.ofbiz.base.start.Config.readConfig(Config.java:297)
    at org.ofbiz.base.start.Config.getInstance(Config.java:58)
    at org.ofbiz.base.start.Start.init(Start.java:200)
    ... 1 more
Can you please help me?
Don't use the - prefix with install.
See the revised Step-V below.
Step-V
1. Run the following command from command prompt:
java -jar ofbiz.jar install
2. Start OFBiz
3. Use webtools to import all data from XML:
a. Navigate to http://localhost:8080/catalog/
b. Go to Applications>WebTools
c. Go to the 'Entity XML Tools' section and click the 'XML Data Import Dir' link, then in 'Absolute directory path:' enter the full path of the directory where you exported the data in Step-II
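For reference, a minimal sketch of that sequence from the command prompt (the path is illustrative; note there is no leading dash on install):

cd C:\Users\sagar_vinod_khanke\Sagar\Apache OFBiz\Ofbiz\13.07
:: seed the freshly configured MySQL database (no "-" before install)
java -jar ofbiz.jar install
:: then start OFBiz and import the exported XML through WebTools
java -jar ofbiz.jar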
I am using JDK 1.8.0_281 and Drill 1.18.0 on an M1 Mac.
I cannot start Drill downloaded from https://drill.apache.org/download/ directly. It says
"could not find or load main class sqlline.SqlLine". So I referred to https://github.com/julianhyde/sqlline/issues/69, downloaded and compiled sqlline, and added these two lines
BINPATH=/Users/fields/Repositories/sqlline-sqlline-1.9.0/bin
exec java -cp $BINPATH/../target/sqlline-1.9.0-jar-with-dependencies.jar sqlline.SqlLine "$@"
to the beginning of drill/bin/sqlline. Then I start drill-embedded and get "no current connection" every time I enter a query.
Please help me identify the problem. Thanks a lot!
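For what it's worth, a hedged sketch of how sqlline is usually pointed at embedded Drill, keeping Drill's own jars on the classpath so its JDBC driver can load (paths and versions are illustrative, not from the original post):

# Drill's JDBC driver must be visible alongside sqlline itself,
# otherwise sqlline starts but cannot open a connection
DRILL_HOME=/path/to/apache-drill-1.18.0
exec java -cp "$BINPATH/../target/sqlline-1.9.0-jar-with-dependencies.jar:$DRILL_HOME/jars/*:$DRILL_HOME/jars/ext/*:$DRILL_HOME/jars/3rdparty/*" \
  sqlline.SqlLine -u "jdbc:drill:zk=local" "$@"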
I'm getting the error below when I start the sandbox. I think we need to use JDK 8 as per the DAML docs.
daml sandbox --scenario Main:setup .daml/dist/quickstart-0.0.1.dar
Error: Registry key 'Software\JavaSoft\Java Runtime Environment'\CurrentVersion'
has value '1.8', but '1.7' is required.
Error: could not find java.dll
Error: Could not find Java SE Runtime Environment.
daml-helper: Received ExitFailure 2 when running
Raw command: java -jar "C:\Users\santh\AppData\Roaming\daml\sdk\0.13.21\sandbox/sandbox.jar" --scenario Main:setup .daml/dist/quickstart-0.0.1.dar
During my initial setup, I faced a similar issue. It may be caused by an existing Java installation on your machine.
You can try one of the options below to fix it.
Option 1:
1. Check if you have more than one version of Java on your machine.
2. If so, uninstall them all and do a clean installation. Make sure your environment Path variable(s) get set properly.
or
Option 2:
1. Download the latest JDK in zip format, e.g. "jdk-12.0.2_windows-x64_bin.zip".
2. Extract it manually to your local drive.
3. Manually update your environment Path variables (ref. https://javatutorial.net/set-java-home-windows-10), as sketched below.
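A hedged sketch for a Windows command prompt (the JDK path is hypothetical):

:: point JAVA_HOME at the extracted JDK (hypothetical path)
setx JAVA_HOME "C:\Java\jdk-12.0.2"
:: add %JAVA_HOME%\bin to Path via System Properties > Environment Variables,
:: then open a new prompt and verify which Java wins:
where java
java -version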
I hope this helps you fix your problem.
Cheers,
Augustine
I'm running an EMR cluster with Spark on AWS.
The Spark version is 1.6.
When running the following command:
proxy = sqlContext.read.load("/user/zeppelin/ProxyRaw.csv",
format="com.databricks.spark.csv",
header="true",
inferSchema="true")
I get the following error:
Py4JJavaError: An error occurred while calling o162.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at
http://spark-packages.org
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
How can I solve this? I assume I should add a package, but how do I install it, and where?
There are many ways to add packages in Zeppelin:
One of them is to change the conf/zeppelin-env.sh configuration file, adding the package you need (e.g. com.databricks:spark-csv_2.10:1.4.0 in your case) to the submit options, since Zeppelin uses the spark-submit command under the hood:
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.4.0"
But let's say that you don't actually have access to that configuration. You can then use dynamic dependency loading via the %dep interpreter (deprecated):
%dep
z.load("com.databricks:spark-csv_2.10:1.4.0")
This will require that you load the dependencies before launching or restarting the interpreter.
Another way is to add the dependency you need via the interpreter dependency manager, as described in the following link: Dependency Management for Interpreter.
Well,
first you need to download the CSV lib from the Maven repository:
https://mvnrepository.com/artifact/com.databricks/spark-csv_2.10/1.5.0
Check the Scala version that you are using: either 2.10 or 2.11.
When you call spark-shell, spark-submit, or pyspark (or even Zeppelin), you need to add the --jars option with the path to your lib.
Like this:
pyspark --jars /path/to/jar/spark-csv_2.10-1.5.0.jar
Then you can call it as you did above.
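Alternatively, if the machine has internet access, the --packages flag pulls the library from Maven instead of pointing at a local jar (a sketch; match the version to your Scala build):

pyspark --packages com.databricks:spark-csv_2.10:1.5.0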
You can see another closely related issue here: How to add third party java jars for use in pyspark
I am trying to perform a Sqoop export on HDP sandbox 2.1 via Oozie. When I run the Oozie job I get the following Java runtime exception.
>>> Invoking Sqoop command line now >>>
7598 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR
has not been set in the environment. Cannot check for additional
configuration.
7714 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version:
1.4.4.2.1.1.0-385
7760 [main] WARN org.apache.sqoop.SqoopOptions - Character argument
'\t' has multiple characters; only the first will be used.
7791 [main] WARN org.apache.sqoop.ConnFactory - $SQOOP_CONF_DIR has
not been set in the environment. Cannot check for additional
configuration.
7904 [main] INFO org.apache.sqoop.manager.MySQLManager - Preparing
to use a MySQL streaming resultset.
7905 [main] INFO org.apache.sqoop.tool.CodeGenTool - Beginning code
generation
7946 [main] ERROR org.apache.sqoop.Sqoop - Got exception running
Sqoop: java.lang.RuntimeException: Could not load db driver class:
com.mysql.jdbc.Driver Intercepting System.exit(1)
I have copied the JDBC driver file "mysql-connector-java.jar" to Oozie's shared library folder, which I believe is "/usr/lib/oozie/share/lib/sqoop/". I have restarted my sandbox and tried to perform the export with Oozie again, and I still get the same error.
The export works perfectly fine when I run it directly via Sqoop, so I presume Oozie needs its own set of drivers.
My question is: which Oozie directory am I supposed to copy my JDBC drivers to?
If you think I'm doing something wrong or you need further information, please let me know.
Thank you for your time.
Normally for Oozie the sharelib directory is /user/oozie/share/lib/ on HDFS, where "oozie" is the name of the user used to start the Oozie server. I don't know what that is in the case of HDP sandbox 2.1, but you can use the ps command to figure that out.
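For example, a quick check (output will vary by environment):

ps -ef | grep -i oozie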
And for jars needed by the sqoop action, I think you should copy the jar to the /user/oozie/share/lib/sqoop/ folder on HDFS.
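A minimal sketch, assuming that sharelib path (verify it on your sandbox first):

# copy the driver into the Sqoop sharelib on HDFS
hadoop fs -put mysql-connector-java.jar /user/oozie/share/lib/sqoop/
# Oozie caches the sharelib, so restart the Oozie server afterwards
# (newer Oozie releases also support: oozie admin -sharelibupdate)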
Possible Duplicate:
Manifest.MF issue with MSSQLSERVER 2008 and Groovy
I have a very simple Groovy script with two database connections:
One db connection to Oracle
Another db connection to SQLServer
Problem
When I run the program through the GGTS editor (the Groovy and Grails version of SpringSource Tool Suite), the two queries run and return results fine. But when I run the program from the command line, from the project folder, as follows:
groovy -cp lib\jtds-1.3.0.jar lib\ojdbc6-11g.jar src\Starter.groovy
I get the following error:
C:\workspace-ggts\Test>groovy -cp lib\jtds-1.3.0.jar lib\ojdbc6-11g.jar src\Starter.groovy
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
C:\workspace-ggts\Test\lib\ojdbc6-11g.jar: 1: unexpected char: 0x3 # line 1, column 3.
PK♥ ßî∟9 ♦ META-INF/■╩ ♥ ☻ PK♥♦ ßî∟9 ¶ M
ETA-INF/MANIFEST.MF?æ┴N├0►D∩æ≥☼½₧α►7)R[rúΘÑá☻R½^æq6─òcç╡SΦ▀π4◄ → ─╒3;π}╗µ
Z▬h]┤C▓╥Φ¶↕▬ç┴¬¬§V¿↔w■╤ï:7ö┬♥qí►2C╡íôtf▌Jº0♣│╧ƒ┼öφ9
^
1 error
What I have Tried
I have tried using the jTDS driver to connect to SQL Server, as I thought the problem was the sqljdbc4.jar from the Microsoft site, based on this same problem reported differently here.
I have tried using semicolons to separate the classpath dependencies, and still got the same error.
I have upgraded the Java version to 1.7. The Groovy version is 2.0.5.
From the IDE it runs fine, but from the command line I get the error.
If I comment out the code for one of the database accesses (connection, query, println of the result set), leaving my Groovy script with only one database connection, the program runs fine from the command line. For example:
This
groovy -cp lib\jtds-1.3.0.jar src\Starter.groovy
or this:
groovy -cp lib\ojdbc6-11g.jar src\Starter.groovy
does work. As soon as I add the code and the jar to the classpath for that second database access, I get the error reported above.
I am out of ideas.
Files in your classpath need to be separated with a semicolon on Windows. On unix-like platforms like Linux or OS X, the separator is a colon. Groovy is treating the second jar file as the script, and the script name as the first command-line parameter.
Try this:
groovy -cp lib\jtds-1.3.0.jar;lib\ojdbc6-11g.jar src\Starter.groovy
Do you get a different error with that?
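For completeness, on Linux or OS X the same command would use the colon separator:

groovy -cp lib/jtds-1.3.0.jar:lib/ojdbc6-11g.jar src/Starter.groovy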