I am using Apache Drill (latest version, 1.9) on Windows 10.
I want to start Drill in distributed mode.
I have configured the ZooKeeper zoo.cfg file:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=F:/zookeepertest/data
clientPort=2181
server.1=192.589.XX.01:2888:3888
server.1=192.565.XX.02:2888:3888
And inside the Drill folder, drill-override.conf:
drill.exec: {
cluster-id: "test",
zk.connect: "192.589.XX.01:2181,192.565.XX.02:2181"
}
And my ZooKeeper is running.
Now when I try to start Drill using this command:
sqlline.bat -u "jdbc:drill:zk=192.589.XX.01:2181"
It throws the following error:
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. (state=,code=0)
java.sql.SQLException: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:161)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:70)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:143)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:245)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:154)
... 18 more
Caused by: java.io.IOException: Failure to connect to the zookeeper cluster service within the allotted time of 10000 milliseconds.
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:123)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:243)
... 19 more
Can anyone tell me how to start Drill in distributed mode on Windows?
From the Drill documentation:
To start Drill in distributed mode, use drillbit.sh, not sqlline, i.e.
drillbit.sh start
Since drillbit.sh is a shell script (not a Windows batch file), you'll need a third-party shell scripting tool such as Cygwin, or, since you're using Windows 10, you can also enable Bash on Ubuntu on Windows.
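As a rough sketch (the install path is illustrative and the ZooKeeper addresses are the ones from the question), starting the drillbit from a Cygwin or Bash shell and then connecting from a Windows command prompt might look like:
cd /path/to/apache-drill-1.9.0
bin/drillbit.sh start
sqlline.bat -u "jdbc:drill:zk=192.589.XX.01:2181,192.565.XX.02:2181"
Once a drillbit has registered itself in ZooKeeper, the sqlline connection should stop failing with the "Failure setting up ZK for client" error.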
Related
I am testing Apache Drill with a two server cluster.
Let's say their external IPs are:
1.1.1.1
2.2.2.2
I first set up ZooKeeper to run on both, and when I run the status command I get a positive response:
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: leader
The way I set up my zoo.cfg to get it working was like this:
Server 1:
// other default values omitted
clientPort=2181
server.1=0.0.0.0:2888:3888
server.2=2.2.2.2:2888:3888
Server 2:
// other default values omitted
clientPort=2181
server.1=1.1.1.1:2888:3888
server.2=0.0.0.0:2888:3888
Next I wanted to get Drill running with this cluster, so I modified the drill-override.conf file on the two servers as follows:
Server 1:
drill.exec: {
cluster-id: "test",
zk.connect: "1.1.1.1:2181,2.2.2.2:2181"
}
Server 2:
drill.exec: {
cluster-id: "test",
zk.connect: "2.2.2.2:2181,1.1.1.1:2181"
}
I can start a drillbit on both servers, and when I run status I get this response on both servers:
drillbit is running.
But when I then try to open the console via bin/drill-conf I get this stack trace:
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. (state=,code=0)
java.sql.SQLException: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:159)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:64)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at net.hydromatic.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:126)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:208)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:151)
... 18 more
Caused by: java.io.IOException: Failure to connect to the zookeeper cluster service within the allotted time of 10000 milliseconds.
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:123)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:206)
... 19 more
apache drill 1.7.0
"start your sql engine"
Why would drill fail to connect to the ZK cluster, which is running just fine?
All ports are open between these two boxes.
Prerequisites
Prerequisites for starting Drill in distributed mode:
(Required) Running Oracle JDK version 7
(Required) Running a ZooKeeper quorum
(Recommended) Running a Hadoop cluster
(Recommended) Using DNS
Configuration
Assuming your server IP addresses are:
Server 1 - 1.1.1.1
Server 2 - 2.2.2.2
Put the same configuration in zoo.cfg on both Server 1 and Server 2:
clientPort=2181
server.1=1.1.1.1:2888:3888
server.2=2.2.2.2:2888:3888
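Note (an addition, not part of the original answer): with a multi-server ZooKeeper quorum, each server also needs a myid file in its dataDir whose contents match its server.N id, for example:
echo 1 > /path/to/zookeeper/data/myid   (on Server 1)
echo 2 > /path/to/zookeeper/data/myid   (on Server 2)
The path here is illustrative; use whatever dataDir is set in your zoo.cfg.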
Similarly, put the same configuration in drill-override.conf on both servers:
drill.exec: {
cluster-id: "test",
zk.connect: "1.1.1.1:2181,2.2.2.2:2181"
}
Starting Drill
Start a drillbit on all the cluster nodes using
bin/drillbit.sh start
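You can confirm each drillbit is up with the standard status subcommand, for example:
bin/drillbit.sh status
It should report "drillbit is running." on every node.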
Using Drill
Web UI:
Open the web UI using any node's address. For example:
1.1.1.1:8047
Via Shell:
Run the bin/drill-localhost command and the Drill shell will appear.
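If you are connecting from a machine that is not itself running a drillbit, you can instead point sqlline at the ZooKeeper quorum explicitly (the addresses below reuse the ones from this setup):
bin/sqlline -u "jdbc:drill:zk=1.1.1.1:2181,2.2.2.2:2181"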
Verify Installation
From the Drill shell or the web UI, run
SELECT * FROM sys.drillbits;
Drill lists information about the Drillbits that are running.
Stopping Drill
Run the command
bin/drillbit.sh stop
SnappyData v0.5
I am logged into Ubuntu as a non-root user, 'foo'.
SnappyData directory/install is owned by 'foo' user and 'foo' group.
I am starting ALL nodes (locator, lead, server) with the script here:
SNAPPY_HOME/sbin/snappy-start-all.sh
Locator starts.
Server starts.
Lead dies with this error.
16/07/21 23:12:26.883 UTC serverConnector INFO JobFileDAO: rootDir is /tmp/spark-jobserver/filedao/data
16/07/21 23:12:26.888 UTC serverConnector ERROR JobServer$: Unable to start Spark JobServer: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at spark.jobserver.JobServer$.start(JobServer.scala:69)
at io.snappydata.impl.LeadImpl.startAddOnServices(LeadImpl.scala:283)
at io.snappydata.impl.LeadImpl$.invokeLeadStartAddonService(LeadImpl.scala:360)
at io.snappydata.ToolsCallbackImpl$.invokeLeadStartAddonService(ToolsCallbackImpl.scala:28)
at org.apache.spark.sql.SnappyContext$.invokeServices(SnappyContext.scala:1362)
at org.apache.spark.sql.SnappyContext$.initGlobalSnappyContext(SnappyContext.scala:1340)
at org.apache.spark.sql.SnappyContext.<init>(SnappyContext.scala:104)
at org.apache.spark.sql.SnappyContext.<init>(SnappyContext.scala:95)
at org.apache.spark.sql.SnappyContext$.newSnappyContext(SnappyContext.scala:1221)
at org.apache.spark.sql.SnappyContext$.apply(SnappyContext.scala:1249)
at org.apache.spark.scheduler.SnappyTaskSchedulerImpl.postStartHook(SnappyTaskSchedulerImpl.scala:25)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:601)
at io.snappydata.impl.LeadImpl.start(LeadImpl.scala:129)
at io.snappydata.impl.ServerImpl.start(ServerImpl.scala:32)
at io.snappydata.tools.LeaderLauncher.startServerVM(LeaderLauncher.scala:91)
at com.pivotal.gemfirexd.tools.internal.GfxdServerLauncher.connect(GfxdServerLauncher.java:174)
at com.gemstone.gemfire.internal.cache.CacheServerLauncher$AsyncServerLauncher.run(CacheServerLauncher.java:1003)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: /tmp/spark-jobserver/filedao/data/jars.data (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at spark.jobserver.io.JobFileDAO.init(JobFileDAO.scala:90)
at spark.jobserver.io.JobFileDAO.<init>(JobFileDAO.scala:30)
... 22 more
16/07/21 23:12:26.891 UTC Distributed system shutdown hook INFO snappystore: VM is exiting - shutting down distributed system
Do I need to be a different user to start the Lead node? Use 'sudo'? Configure a property to tell Spark to use a directory 'foo' has permission to? Create this directory myself ahead of time?
It seems that the current owner of /tmp/spark-jobserver is some other user. Check the permissions on that directory and delete it.
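A minimal sketch of that check and cleanup (standard Linux commands; run as a user allowed to remove the directory):
ls -ld /tmp/spark-jobserver
sudo rm -rf /tmp/spark-jobserver
After that, restarting the lead with sbin/snappy-start-all.sh should recreate the directory owned by the 'foo' user.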
If multiple users will be running leads on the same machine, you can configure the job-server directories to be elsewhere, as mentioned here. The relevant properties can be found in the application.conf source. This is probably more trouble than it's worth, so for now it will be easier to just ensure a single user starts the lead nodes on a machine.
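For illustration only, the file DAO root directory is controlled by a property along these lines (the key is taken from the spark-jobserver application.conf and the path is a placeholder; verify both against your version):
spark.jobserver {
  filedao {
    rootdir = /home/foo/snappy-jobserver/filedao/data
  }
}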
We shall be fixing the default to be inside the work/ directory in the next release (SNAP-69).
Trying to run a simple HDFS query failed with:
[ms#cosmosmaster-gi ~]$ hadoop fs -ls /user/ms/def_serv/def_servpath
Java HotSpot(TM) 64-Bit Server VM warning: Insufficient space for shared memory file:
/tmp/hsperfdata_ms/21066
Try using the -Djava.io.tmpdir= option to select an alternate temp location.
Exception in thread "main" java.lang.NoClassDefFoundError: ___/tmp/hsperfdata_ms/21078
Caused by: java.lang.ClassNotFoundException: ___.tmp.hsperfdata_ms.21078
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: ___/tmp/hsperfdata_ms/21078. Program will exit.
Any idea how to fix that or increase the quota?
Thanks!
ms
Your quota has not been exceeded (see command below), but this was a problem with the cluster. It should be fixed now.
$ hadoop fs -dus /user/ms
hdfs://cosmosmaster-gi/user/ms 90731
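As a side note, in case it recurs: the HotSpot warning points at the node's local /tmp filling up rather than an HDFS quota, so, as the message itself suggests, a hedged workaround is to point the client JVM at a different temp directory (the path below is just an example):
mkdir -p /home/ms/tmp
export HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/home/ms/tmp"
hadoop fs -ls /user/ms/def_serv/def_servpath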
I'm trying to deploy a WAR on CloudBees Glassfish4 server. I've followed the instructions at the bottom of http://developer.cloudbees.com/bin/view/RUN/Glassfish4 to include the jar in the META-INF/lib directory.
When I deploy with:
bees app:deploy target/app.war -a myDomain/app -t glassfish4-full
I get the error:
ERROR: Server.InternalError - java.lang.IllegalArgumentException: Platform error -
plugin_setup_error: glassfish4-full 1 [main] INFO com.cloudbees.clickstack.glassfish.Setup - Setup clickstack com.cloudbees.clickstack:glassfish-clickstack:4-full-1.0.2 - 2013-12-12T13:06:29.572+0100, current dir /mnt/genapp/apps/1cabb3f9/.
[main] INFO com.cloudbees.clickstack.glassfish.Setup - Setup: Environment{,
appUser='app_1cabb3f9',
appId='1cabb3f9',
appPort=8336,
appDir=/var/genapp/apps/1cabb3f9,
logDir=/var/genapp/apps/1cabb3f9/.genapp/log,
genappDir=/var/genapp/apps/1cabb3f9/.genapp,
controlDir=/var/genapp/apps/1cabb3f9/.genapp/control,
clickstackDir=/mnt/genapp-tmp/genapp-remote-plugin-1389871636905879,
packageDir=/mnt/genapp-tmp/stax-genapp-1389871636.236927/app,
}, com.cloudbees.clickstack.domain.metadata.Metadata@385cbbb1
Exception in thread "main" java.lang.Exception: Exception deploying on 10.159.35.35
at com.cloudbees.clickstack.glassfish.Setup.main(Setup.java:147)
Caused by: java.lang.IllegalArgumentException
at com.sun.nio.zipfs.ZipPath.relativize(ZipPath.java:238)
at com.cloudbees.clickstack.util.Files2$3.visitFile(Files2.java:188)
at com.cloudbees.clickstack.util.Files2$3.visitFile(Files2.java:184)
at java.nio.file.FileTreeWalker.walk(Unknown Source)
at java.nio.file.FileTreeWalker.walk(Unknown Source)
at java.nio.file.FileTreeWalker.walk(Unknown Source)
at java.nio.file.Files.walkFileTree(Unknown Source)
at java.nio.file.Files.walkFileTree(Unknown Source)
at com.cloudbees.clickstack.util.Files2.unzipSubDirectoryIfExists(Files2.java:184)
at com.cloudbees.clickstack.util.ApplicationUtils.extractContainerExtraLibs(ApplicationUtils.java:49)
at com.cloudbees.clickstack.glassfish.Setup.installApplication(Setup.java:259)
at com.cloudbees.clickstack.glassfish.Setup.setup(Setup.java:154)
at com.cloudbees.clickstack.glassfish.Setup.main(Setup.java:139)
I got a reply from CloudBees support.
The documentation at http://developer.cloudbees.com/bin/view/RUN/Glassfish4 was wrong; you don't need to include the MySQL connector in your project.
As I replied to you on our support platform. We fixed the bug on both "glassfish4-full" and "glassfish4" (web profile) ClickStacks.
Sorry for the inconvenience,
Cyrille
Clickstacks release notes:
https://github.com/CloudBees-community/glassfish4-clickstack/releases/tag/v4-web-1.0.1
https://github.com/CloudBees-community/glassfish4-clickstack/releases/tag/v4-full-1.0.3
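For completeness, a rough sketch of redeploying after dropping the bundled connector (a Maven WAR layout is assumed here; adjust the path to wherever the jar actually lives in your project):
rm src/main/webapp/META-INF/lib/mysql-connector-java-*.jar
mvn package
bees app:deploy target/app.war -a myDomain/app -t glassfish4-full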
I am trying to install a slave on Windows XP on a different machine from the master. I tried clicking on New Node and it only gives me the option of choosing dumb slave. Is it supposed to be like that? Anyway, I fill out the node name as "Test", select launch using JNLP, and hit save. Now when I click on that node I download the slave-agent.jnlp and slave.jar files from that screen, open a command prompt, and enter java -jar slave.jar -jnlpUrl http://servername:portnumber/hudson/computer/Test/slave-agent.jnlp, but I get the following Java errors:
java.net.ConnectException: Connection timed out: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(Unknown Source)
at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
at java.net.PlainSocketImpl.connect(Unknown Source)
at java.net.SocksSocketImpl.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at java.net.Socket.<init>(Unknown Source)
at java.net.Socket.<init>(Unknown Source)
at hudson.remoting.Engine.connect(Engine.java:265)
at hudson.remoting.Engine.run(Engine.java:185)
When I try to run it from the web interface it cannot connect, and it looks like it is trying to use a port number like 59870 instead of the real port number to connect to the host. Does anyone know why I can't install the slave agent?
Found it. You need to open that port on the machine the master is on.
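For example, if the JNLP port the agent is told to use turns out to be 59870 and the master host runs a recent version of Windows, a firewall rule along these lines (the port number and rule name are illustrative; older Windows versions use a different netsh syntax) would open it:
netsh advfirewall firewall add rule name="Hudson JNLP agent" dir=in action=allow protocol=TCP localport=59870
In Hudson/Jenkins you can also pin the JNLP slave agent port to a fixed value in the global security settings, so the firewall rule only has to cover one known port.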