snappystore - VM is exiting - shutting down distributed system

snappystore - VM is exiting - shutting down distributed system
org.apache.spark.SparkContext - Invoking stop() from shutdown hook
o.e.jetty.server.ServerConnector - Stopped ServerConnector@244e619a{HTTP/1.1}{0.0.0.0:4040}
ERROR o.a.spark.scheduler.LiveListenerBus - Listener SparkContextListener threw an exception
com.pivotal.gemfirexd.internal.impl.jdbc.EmbedSQLException: GemFireXD system shutdown.
at com.pivotal.gemfirexd.internal.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:124)
at com.pivotal.gemfirexd.internal.impl.jdbc.Util.newEmbedSQLException(Util.java:110)
at com.pivotal.gemfirexd.internal.impl.jdbc.Util.newEmbedSQLException(Util.java:122)
at com.pivotal.gemfirexd.internal.engine.fabricservice.FabricServiceImpl.stopNoSync(FabricServiceImpl.java:402)
at com.pivotal.gemfirexd.internal.engine.fabricservice.FabricServiceImpl.stop(FabricServiceImpl.java:374)
at io.snappydata.util.ServiceUtils$.invokeStopFabricServer(ServiceUtils.scala:81)
at org.apache.spark.sql.SnappyContext$.org$apache$spark$sql$SnappyContext$$stopSnappyContext(SnappyContext.scala:1091)
at org.apache.spark.sql.SnappyContext$SparkContextListener.onApplicationEnd(SnappyContext.scala:1043)
at org.apache.spark.scheduler.SparkListenerBus$class.doPostEvent(SparkListenerBus.scala:57)
at org.apache.spark.scheduler.LiveListenerBus.doPostEvent(LiveListenerBus.scala:36)
at org.apache.spark.scheduler.LiveListenerBus.doPostEvent(LiveListenerBus.scala:36)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:63)
at org.apache.spark.scheduler.LiveListenerBus.postToAll(LiveListenerBus.scala:36)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:94)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1305)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
Caused by: com.gemstone.gemfire.distributed.DistributedSystemDisconnectedException: No connection to the distributed system
... 17 common frames omitted
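The trace above shows a shutdown race: the JVM shutdown hook stops the distributed system first, and SnappyContext's SparkContextListener then tries to stop the fabric server a second time from onApplicationEnd, failing with DistributedSystemDisconnectedException. Below is a minimal, generic Java sketch of guarding a stop action so that a second caller becomes a no-op; the names here are illustrative, not SnappyData's API.

import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative only: a generic guard against the double-shutdown race seen
// above, where a JVM shutdown hook and an application-end listener both try
// to stop the same embedded service.
public class GuardedShutdown {
    private static final AtomicBoolean stopped = new AtomicBoolean(false);

    // Called from both the shutdown hook and the application-end listener.
    static void stopOnce(Runnable stopAction) {
        // compareAndSet ensures only the first caller runs the stop logic;
        // later callers (e.g. a listener firing after the hook) return quietly.
        if (stopped.compareAndSet(false, true)) {
            try {
                stopAction.run();
            } catch (RuntimeException e) {
                // The distributed system may already be gone; treat an
                // "already disconnected" failure during shutdown as benign.
                System.err.println("Ignoring error during shutdown: " + e);
            }
        }
    }

    public static void main(String[] args) {
        Runnable stopFabric = () -> System.out.println("stopping fabric service");
        Runtime.getRuntime().addShutdownHook(new Thread(() -> stopOnce(stopFabric)));
        stopOnce(stopFabric); // the later invocation via the hook becomes a no-op
    }
}

In SnappyData itself the fix would belong in the listener rather than user code; this sketch only illustrates the pattern.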

Related

Apache Karaf stuck while shutting down with Pax-Exam

I was running integration tests with Pax-Exam and Karaf. The tests executed successfully, but while shutting down Karaf it got stuck on the output below and never resumed:
Pax-Exam = 4.11
Karaf = 4.2
[main] DEBUG o.ops4j.store.intern.TemporaryStore - Exit store(): 66cf6a516d0d1a670e78bd6b0be97f3da2a380b3
[main] DEBUG o.o.p.e.c.remote.RBCRemoteTarget - Preparing and Installing bundle (from stream )..
[main] DEBUG o.o.p.e.r.c.RemoteBundleContextClient - Packing probe into memory for true RMI. Hopefully things will fill in..
[main] DEBUG o.o.p.e.c.remote.RBCRemoteTarget - Installed bundle (from stream) as ID: 86
[main] DEBUG o.o.p.e.c.remote.RBCRemoteTarget - call [[TestAddress:PaxExam-bc970a6c-c656-4aa6-9300-35ded2bcde50 root:PaxExam-f6737e31-8f28-43e0-847e-1f3f49649233]]
[main] DEBUG o.o.p.e.k.c.i.KarafTestContainer - Shutting down the test container (Pax Runner)
The following is the JConsole output for the blocked thread:
Name: main
State: BLOCKED on java.lang.Object@d53a0bb owned by: KarafJavaRunner
Total blocked: 106 Total waited: 105
Stack trace:
org.ops4j.pax.exam.karaf.container.internal.runner.InternalRunner.shutdown(InternalRunner.java:71)
org.ops4j.pax.exam.karaf.container.internal.runner.KarafJavaRunner.shutdown(KarafJavaRunner.java:120)
- locked org.ops4j.pax.exam.karaf.container.internal.runner.KarafJavaRunner@279baf5b
org.ops4j.pax.exam.karaf.container.internal.KarafTestContainer.stop(KarafTestContainer.java:600)
- locked org.ops4j.pax.exam.karaf.container.internal.KarafTestContainer@25dcfa62
org.ops4j.pax.exam.spi.reactors.AllConfinedStagedReactor.invoke(AllConfinedStagedReactor.java:87)
org.ops4j.pax.exam.junit.impl.ProbeRunner$2.evaluate(ProbeRunner.java:267)
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.ops4j.pax.exam.junit.impl.ProbeRunner.run(ProbeRunner.java:98)
org.ops4j.pax.exam.junit.PaxExam.run(PaxExam.java:93)
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Update:
One more thing I observed: if I forcefully shut it down and then run "mvn clean install", I get the following error and have to wait before it will run again:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-clean-plugin:2.5:clean (default-clean) on project osgi-unit-tests-sample: Failed to clean project: Failed to delete C:\Users\..\target\paxexam\e266ddcb-5fed-4997-8178-3d4944251418\system\org\apache\felix\org.apache.felix.framework\5.6.10\org.apache.felix.framework-5.6.10.jar -> [Help 1]
Update2:
After exiting the prompt it is still running:
C:\Program Files\Java\jdk1.8.0_162\bin>jps -l
1552 sun.tools.jps.Jps
4144
1420 org.apache.karaf.main.Main
C:\Program Files\Java\jdk1.8.0_162\bin>jps -l 1420
RMI Registry not available at 1420:1099
Exception creating connection to: 1420; nested exception is:
java.net.SocketException: Network is unreachable: connect
Update3:
If I kill this process, Pax resumes and reports successful execution of the tests. In fact, all tests already pass before the shutdown; it is just not able to shut down.
TASKKILL /F /PID 10692
Now I have no clue how to handle this locking issue.
Update4:
Name: main
State: WAITING on org.apache.felix.framework.util.ThreadGate@b3d26d8
Total blocked: 6 Total waited: 7
Stack trace:
java.lang.Object.wait(Native Method)
org.apache.felix.framework.util.ThreadGate.await(ThreadGate.java:79)
org.apache.felix.framework.Felix.waitForStop(Felix.java:1075)
org.apache.karaf.main.Main.awaitShutdown(Main.java:640)
org.apache.karaf.main.Main.main(Main.java:188)
Name: FelixDispatchQueue
State: WAITING on java.util.ArrayList@3276dd18
Total blocked: 353 Total waited: 342
Stack trace:
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:502)
org.apache.felix.framework.EventDispatcher.run(EventDispatcher.java:1122)
org.apache.felix.framework.EventDispatcher.access$000(EventDispatcher.java:54)
org.apache.felix.framework.EventDispatcher$1.run(EventDispatcher.java:102)
java.lang.Thread.run(Thread.java:748)
Update5:
After spending a lot of time on it, I finally realized that it gets stuck because of the bundles below; if I don't add them, it works fine:
wrappedBundle( maven("org.ops4j.pax.tinybundles", "tinybundles").versionAsInProject() ), //2.1.0
wrappedBundle( maven("biz.aQute.bnd", "bndlib").versionAsInProject() )//2.4.0
Regards,
I resolved the issue by changing the versions of the following jars:
maven("org.ops4j.pax.tinybundles", "tinybundles") from 2.1.0 to 3.0.0
maven("biz.aQute.bnd", "bndlib") from 2.4.0 to 3.5.0

How do I debug HikariCP losing connections?

I use HikariCP with Play 2.6.10. The application will run fine for days, and then all of our connections will leak. I have leakDetectionThreshold turned on, so we get stack traces for leaked connections like:
2018-01-24 06:29:00,857 - [WARN] - from com.zaxxer.hikari.pool.ProxyLeakTask in HikariPool-2 housekeeper
Connection leak detection triggered for com.mysql.jdbc.JDBC4Connection@65cd084 on thread pool-1-thread-1, stack trace follows
java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:85)
at play.api.db.DefaultDatabase.getConnection(Databases.scala:142)
at play.api.db.DefaultDatabase.withConnection(Databases.scala:152)
at play.api.db.DefaultDatabase.withConnection(Databases.scala:148)
at models.summaries.ActionSummary$.listByStripe(ActionSummary.scala:137)
I only use connections through Play's withConnection, so they should be returned to the pool automatically. A thread dump taken while the application is in the broken state shows that all threads inside a withConnection block are stuck on...
"application-akka.mysql-context-122" #32142 prio=5 os_prio=0 tid=0x00007fca7812a
000 nid=0x28e6 waiting on condition [0x00007fca7541c000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for (a java.util.concurrent.Sync
hronousQueue$TransferQueue)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215
)
at java.util.concurrent.SynchronousQueue$TransferQueue.awaitFulfill(Sync
hronousQueue.java:764)
at java.util.concurrent.SynchronousQueue$TransferQueue.transfer(Synchron
ousQueue.java:695)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at com.zaxxer.hikari.util.ConcurrentBag.borrow(ConcurrentBag.java:157)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:165)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:147)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:85)
at play.api.db.DefaultDatabase.getConnection(Databases.scala:142)
at play.api.db.DefaultDatabase.withConnection(Databases.scala:152)
at play.api.db.DefaultDatabase.withConnection(Databases.scala:148)
...waiting for a connection to become available, which should mean that nothing is currently holding a connection. I have no idea how any connection could possibly be leaked, but apparently all of them have. We see log lines like:
2018-01-24 06:19:21,297 - [DEBUG] - from com.zaxxer.hikari.pool.HikariPool in application-akka.mysql-context-129
HikariPool-2 - Timeout failure stats (total=10, active=10, idle=0, waiting=15)
The only unusual thing we are doing is calling setNetworkTimeout on each connection we obtain, sometimes with a timeout as low as 10 seconds. This is done to ensure that queries fail fast if we lose connection to the DB.
I'm not sure what to do next to debug this. It looks like a potential issue between Hikari and Play, or something broken in the interaction between MySQL and setNetworkTimeout.
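For reference, a minimal sketch of the setup described above (the JDBC URL and pool size are assumptions, not taken from the original application): Hikari with leakDetectionThreshold enabled, plus the per-connection setNetworkTimeout call the question suspects.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the configuration described in the question.
public class PoolSetup {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/app"); // assumed URL
        config.setMaximumPoolSize(10);            // matches "total=10" in the log
        config.setLeakDetectionThreshold(30_000); // log a stack trace for any
                                                  // connection held over 30s

        ExecutorService netTimeoutExecutor = Executors.newSingleThreadExecutor();
        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection()) {
            // The question sets a low network timeout on each borrowed
            // connection so queries fail fast when the DB drops. If that
            // timeout fires mid-query, the driver closes the socket, and how
            // the pool sees the connection afterwards is worth inspecting.
            conn.setNetworkTimeout(netTimeoutExecutor, 10_000);
            // ... run queries; try-with-resources returns conn to the pool
        } finally {
            netTimeoutExecutor.shutdown();
        }
    }
}

If the hang reproduces, comparing the pool's timeout failure stats with and without the setNetworkTimeout call would isolate whether the low timeout is what strands the ten active connections.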

Cassandra down after high traffic

I have a problem with Cassandra: when traffic goes high, Cassandra crashes.
Here's what I got in system.log:
WARN 11:54:35 JNA link failure, one or more native method will be unavailable.
WARN 11:54:35 jemalloc shared library could not be preloaded to speed up memory allocations
WARN 11:54:35 JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
INFO 11:54:35 Initializing SIGAR library
INFO 11:54:35 Checked OS settings and found them configured for optimal performance.
INFO 11:54:35 Initializing system.schema_triggers
ERROR 11:54:36 Exiting due to error while processing commit log during initialization.
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at org.apache.cassandra.db.commitlog.CommitLogDescriptor.writeHeader(CommitLogDescriptor.java:87) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:153) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.MemoryMappedSegment.<init>(MemoryMappedSegment.java:47) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.CommitLogSegment.createSegment(CommitLogSegment.java:121) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:122) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) [apache-cassandra-2.2.4.jar:2.2.4]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
and in debug.log:
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,586 SliceQueryPager.java:92 - Querying next page of slice query; new filter: SliceQueryFilter [reversed=false, slices=[[, ]], count=5000, toGroup = 2]
WARN [SharedPool-Worker-2] 2017-05-25 12:54:18,658 SliceQueryFilter.java:307 - Read 2129 live and 27677 tombstone cells in RestCommSMSC.SLOT_MESSAGES_TABLE_2017_05_25 for key: 549031460 (see tombstone_warn_threshold). 5000 columns were requested, slices=[-]
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,808 AbstractQueryPager.java:95 - Fetched 2129 live rows
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,808 AbstractQueryPager.java:112 - Got result (2129) smaller than page size (5000), considering pager exhausted
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,808 AbstractQueryPager.java:133 - Remaining rows to page: 2147481518
INFO [main] 2017-05-25 12:54:34,826 YamlConfigurationLoader.java:92 - Loading settings from file:/opt/SMGS/apache-cassandra-2.2.4/conf/cassandra.yaml
INFO [main] 2017-05-25 12:54:34,923 YamlConfigurationLoader.java:135 - Node configuration
[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer;
auto_snapshot=true; batch_size_fail_threshold_in_kb=50;
batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_user_defined_functions=false; endpoint_snitch=SimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=all; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=50000; read_request_timeout_in_ms=10000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=50000; role_manager=CassandraRoleManager; roles_validity_in_ms=2000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=127.0.0.1}]}]; server_encryption_options<REDACTED>;snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=5000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; windows_timer_interval=1; write_request_timeout_in_ms=2000]
DEBUG [main] 2017-05-25 12:54:34,958 DatabaseDescriptor.java:296 - Syncing log with a period of 10000
INFO [main] 2017-05-25 12:54:34,958 DatabaseDescriptor.java:304 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO [main] 2017-05-25 12:54:35,110 DatabaseDescriptor.java:409 - Global memtable on-heap threshold is enabled at 1991MB
INFO [main] 2017-05-25 12:54:35,110 DatabaseDescriptor.java:413 - Global memtable off-heap threshold is enabled at 1991MB
I don't know if this problem is related to the commit logs or not. Anyway, in cassandra.yaml I'm setting:
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
You can start Cassandra with this command:
cd /ASMSC02/apache-cassandra-2.0.11/
nohup bin/cassandra
Regards,
Hafiz
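One observation from the debug.log above: the slice queries are reading ~27k tombstones per 5000-row page on SLOT_MESSAGES_TABLE_2017_05_25, which is a classic overload trigger under high traffic. A hedged sketch with the DataStax Java driver (driver 3.x, the contact point, and the key column name are all assumptions) of issuing that read with a much smaller page size:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.SimpleStatement;

// Sketch only: re-runs the kind of slice query seen in debug.log with a much
// smaller page size, so each page touches fewer tombstones. Table and key are
// taken from the log line; quoting and column name may need adjusting.
public class SmallPageRead {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build()) {
            SimpleStatement stmt = new SimpleStatement(
                "SELECT * FROM \"RestCommSMSC\".\"SLOT_MESSAGES_TABLE_2017_05_25\" WHERE key = 549031460");
            stmt.setFetchSize(500); // the log shows 5000 rows requested per page
            ResultSet rs = cluster.connect().execute(stmt);
            int rows = 0;
            for (Row ignored : rs) { // the driver pages transparently
                rows++;
            }
            System.out.println("read " + rows + " rows");
        }
    }
}

Clearing out expired data (or lowering gc_grace_seconds so tombstones can be purged sooner) addresses the root of the tombstone buildup; the smaller fetch size only reduces per-page pressure.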

Issue when trying to start Sonarqube services

I installed sonarqube-6.3.1 on my machine and created a MySQL database named 'sonarqubedb'. When I change the sonar.properties file to use that database, SonarQube does not start and throws an error, but if I use the default DB configuration (and NOT MySQL), it starts fine.
Could somebody please tell me what is going wrong when I use the MySQL database?
My sonar.properties file looks like:
sonar.jdbc.username=root
sonar.jdbc.password=
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonarqubedb?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
and the sonar log file, when I try to start the service, looks like:
--> Wrapper Started as Service
Launching a JVM...
WrapperManager class initialized by thread: main
Using classloader: sun.misc.Launcher$AppClassLoader@4e25154f
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
Wrapper Manager: JVM #1
Running a 64-bit JVM.
Wrapper Manager: Registering shutdown hook
Wrapper Manager: Using wrapper
Load native library. One or more attempts may fail if platform specific libraries do not exist.
Loading native library failed: wrapper-windows-x86-64.dll Cause: java.lang.UnsatisfiedLinkError: no wrapper-windows-x86-64 in java.library.path
Loaded native library: wrapper.dll
Calling native initialization method.
Initializing WrapperManager native library.
Java Executable: C:\ProgramData\Oracle\Java\javapath\java.exe
Windows version: 6.1.7600
Java Version : 1.8.0_45-b15 Java HotSpot(TM) 64-Bit Server VM
Java VM Vendor : Oracle Corporation
Control event monitor thread started.
Startup runner thread started.
WrapperManager.start(org.tanukisoftware.wrapper.WrapperSimpleApp@4f023edb, args[]) called by thread: main
Communications runner thread started.
Open socket to wrapper...Wrapper-Connection
Failed attempt to bind using local port 31000
Opened Socket from 31001 to 32000
Send a packet KEY : 4hhDEyNqmPXAiWpf
handleSocket(Socket[addr=/127.0.0.1,port=32000,localport=31001])
Received a packet LOW_LOG_LEVEL : 1
Wrapper Manager: LowLogLevel from Wrapper is 1
Received a packet PING_TIMEOUT : 0
PingTimeout from Wrapper is 0
Received a packet PROPERTIES : (Property Values)
Received a packet START : start
calling WrapperListener.start()
Waiting for WrapperListener.start runner thread to complete.
WrapperListener.start runner thread started.
WrapperSimpleApp: start(args) Will wait up to 2 seconds for the main method to complete.
WrapperSimpleApp: invoking main method
2017.04.26 14:54:12 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\temp
2017.04.26 14:54:12 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[es]: C:\Program Files\Java\jre1.8.0_45\bin\java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\temp -javaagent:C:\Program Files\Java\jre1.8.0_45\lib\management-agent.jar -cp ./lib/common/*;./lib/search/* org.sonar.search.SearchServer C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\temp\sq-process3041279828124660880properties
Send a packet START_PENDING : 5000
Send a packet START_PENDING : 5000
WrapperSimpleApp: start(args) end. Main Completed=false, exitCode=null
WrapperListener.start runner thread stopped.
returned from WrapperListener.start()
Send a packet STARTED :
Startup runner thread stopped.
Received a packet PING : ping
Send a packet PING : ok
Received a packet PING : ping
Send a packet PING : ok
2017.04.26 14:54:23 INFO app[][o.s.p.m.Monitor] Process[es] is up
2017.04.26 14:54:23 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: C:\Program Files\Java\jre1.8.0_45\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\temp -javaagent:C:\Program Files\Java\jre1.8.0_45\lib\management-agent.jar -cp ./lib/common/*;./lib/server/*;C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\lib\jdbc\mysql\mysql-connector-java-5.1.39.jar org.sonar.server.app.WebServer C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\temp\sq-process5745752416531116392properties
Received a packet PING : ping
Send a packet PING : ok
2017.04.26 14:54:28 INFO app[][o.s.p.m.Monitor] Process[es] is stopping
2017.04.26 14:54:28 ERROR app[][o.s.p.m.Monitor] Process[web] failed to start
2017.04.26 14:54:28 INFO app[][o.s.p.m.Monitor] Process[es] is stopped
Wrapper Manager: ShutdownHook started
WrapperManager.stop(0) called by thread: Wrapper-Shutdown-Hook
Send a packet STOP : 0
Received a packet STOP :
Thread, Wrapper-Shutdown-Hook, handling the shutdown process.
calling listener.stop()
WrapperSimpleApp: stop(0)
returned from listener.stop() -> 0
shutdownJVM(0) Thread:Wrapper-Shutdown-Hook
Send a packet STOPPED : 0
Closing socket.
Server daemon shut down
Wrapper Manager: ShutdownHook complete
<-- Wrapper Stopped
Thanks in Advance :)
Go to your sonarqube/logs directory. You'll find several log files, and one of them will contain the detailed error explaining why SonarQube won't start. (You'll have to scroll all the way down inside the files for the latest entries, IIRC.)
Go through this Sonar documentation; you will find some help there:
https://docs.sonarqube.org/display/SONAR/Installing+the+Server
I had the same issue.
The problem was that my MySQL version didn't meet the minimum requirements.
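One way to confirm that diagnosis (and the earlier advice about the logs) without SonarQube in the loop is to connect to the same JDBC URL directly and print the server version. This minimal sketch uses only the standard JDBC API, with the URL and credentials copied from the sonar.properties above:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

// Standalone check: connect with the same URL/credentials as sonar.properties
// and print the MySQL version, to confirm the DB is reachable and recent
// enough for SonarQube's requirements.
public class CheckSonarDb {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/sonarqubedb?useUnicode=true&characterEncoding=utf8";
        try (Connection conn = DriverManager.getConnection(url, "root", "")) {
            DatabaseMetaData meta = conn.getMetaData();
            System.out.println(meta.getDatabaseProductName() + " "
                    + meta.getDatabaseProductVersion());
        }
    }
}

If this small program fails to connect, or prints a version below the minimum listed in the requirements page linked above, the web process will keep dying the same way.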

Are the tables stored with MEMORY engine recoverable from cluster crash?

I have set up MySQL NDB Cluster 7.3.5 and the cluster was working fine.
Cluster with 4 nodes:
NodeA: SQLNode1, DataNode1
NodeB: SQLNode2, DataNode2
NodeC: Mgmt Node1
NodeD: Mgmt Node2
To test the server reboot scenario I rebooted VMware ESXi and restarted all VMs.
But the data nodes are subsequently failing to start.
Adding logs for the servers respectively:
/home/mysql/mysqlcluster_data/1/ndb_1_out.log (Data Node 1)
error: [ code: 708 line: 38848236 node: 1 count: 1 status: 32687 key: 445914048 name: 'hhmefep/def/fgvmev0000000000-elog-1398414831' ]
2014-05-13 13:16:40 [ndbd] INFO -- Failed to recreate object 505 during restart, error 708.
2014-05-13 13:16:40 [ndbd] INFO -- DBDICT (Line: 4688) 0x00000000
2014-05-13 13:16:40 [ndbd] INFO -- Error handler restarting system
2014-05-13 13:16:40 [ndbd] INFO -- Error handler shutdown completed - exiting
2014-05-13 13:16:40 [ndbd] ALERT -- Angel detected too many startup failures(3), not restarting again
2014-05-13 13:16:40 [ndbd] ALERT -- Node 1: Forced node shutdown completed. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
It seems that the nodes are failing to recover this table:
hhmefep.fgvmev0000000000-elog-1398414831
/home/mysql/mysqlcluster_data/2/ndb_2_out.log (Data Node 2)
2014-05-13 13:05:48 [ndbd] INFO -- Start phase 1 completed
2014-05-13 13:05:48 [ndbd] INFO -- Start phase 2 completed
2014-05-13 13:05:48 [ndbd] INFO -- Start phase 3 completed
2014-05-13 13:05:51 [ndbd] INFO -- Node 1 disconnected
2014-05-13 13:05:51 [ndbd] INFO -- QMGR (Line: 3308) 0x00000000
2014-05-13 13:05:51 [ndbd] INFO -- Error handler restarting system
2014-05-13 13:05:51 [ndbd] INFO -- Error handler shutdown completed - exiting
2014-05-13 13:05:51 [ndbd] ALERT -- Angel detected too many startup failures(3), not restarting again
2014-05-13 13:05:51 [ndbd] ALERT -- Node 2: Forced node shutdown completed. Occured during startphase 4. Caused by error 2308: 'Another node failed during system restart, please investigate error(s) on other node(s)(Restart error). Temporary error, restart node'.
It seems that data node 2 is trying to sync with data node 1 but has been forcefully shut down by the management node.
(Mgmt Node)
ndb_mgm> Node 1: Forced node shutdown completed, restarting. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
Node 1: Forced node shutdown completed, restarting. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
Node 1: Forced node shutdown completed. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
Node 2: Forced node shutdown completed, restarting. Occured during startphase 4. Caused by error 2308: 'Another node failed during system restart, please investigate error(s) on other node(s)(Restart error). Temporary error, restart node'.
Node 2: Forced node shutdown completed, restarting. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
ndb_mgm> Node 2: Forced node shutdown completed. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
Please help me on this since it is very frustrating.
Per the MySQL memory engine page:
The MEMORY storage engine (formerly known as HEAP) creates
special-purpose tables with contents that are stored in memory.
Because the data is vulnerable to crashes, hardware issues, or power
outages, only use these tables as temporary work areas or read-only
caches for data pulled from other tables.
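A small hedged demo of that behavior over JDBC (the connection URL, credentials, and table name are placeholders): a MEMORY table's definition survives a mysqld restart, but its rows do not.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Demonstrates the quoted doc's point: a MEMORY table's schema is persisted,
// but its rows live only in RAM, so they are gone after a server restart.
public class MemoryEngineDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "root", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS scratch (id INT) ENGINE=MEMORY");
            st.execute("INSERT INTO scratch VALUES (1)");
            // Restart mysqld here, then re-run only the query below:
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM scratch")) {
                rs.next();
                // Prints 1 before a restart, 0 after: schema kept, data lost.
                System.out.println("rows = " + rs.getLong(1));
            }
        }
    }
}

Note that this is about the MEMORY engine specifically; NDB data nodes persist their in-memory data through local checkpoints and redo logs, which is a separate recovery mechanism from the MEMORY engine's.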