Are tables stored with the MEMORY engine recoverable from a cluster crash? - mysql

I have set up MySQL NDB Cluster 7.3.5 and the cluster was working fine.
Cluster with 4 nodes:
NodeA : SQLNode1, DataNode1
NodeB : SQLNode2, DataNode2
NodeC : Mgmt Node1
NodeD : Mgmt Node2
To test the server reboot scenario, I rebooted the VMware ESXi host and restarted all VMs.
But the data nodes are subsequently failing to start.
Adding logs for the respective servers:
/home/mysql/mysqlcluster_data/1/ndb_1_out.log (Data Node 1)
error: [ code: 708 line: 38848236 node: 1 count: 1 status: 32687 key: 445914048 name: 'hhmefep/def/fgvmev0000000000-elog-1398414831' ]
2014-05-13 13:16:40 [ndbd] INFO -- Failed to recreate object 505 during restart, error 708.
2014-05-13 13:16:40 [ndbd] INFO -- DBDICT (Line: 4688) 0x00000000
2014-05-13 13:16:40 [ndbd] INFO -- Error handler restarting system
2014-05-13 13:16:40 [ndbd] INFO -- Error handler shutdown completed - exiting
2014-05-13 13:16:40 [ndbd] ALERT -- Angel detected too many startup failures(3), not restarting again
2014-05-13 13:16:40 [ndbd] ALERT -- Node 1: Forced node shutdown completed. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
It seems that the nodes are failing to recover this table:
hhmefep.fgvmev0000000000-elog-1398414831
/home/mysql/mysqlcluster_data/2/ndb_2_out.log (Data Node 2)
2014-05-13 13:05:48 [ndbd] INFO -- Start phase 1 completed
2014-05-13 13:05:48 [ndbd] INFO -- Start phase 2 completed
2014-05-13 13:05:48 [ndbd] INFO -- Start phase 3 completed
2014-05-13 13:05:51 [ndbd] INFO -- Node 1 disconnected
2014-05-13 13:05:51 [ndbd] INFO -- QMGR (Line: 3308) 0x00000000
2014-05-13 13:05:51 [ndbd] INFO -- Error handler restarting system
2014-05-13 13:05:51 [ndbd] INFO -- Error handler shutdown completed - exiting
2014-05-13 13:05:51 [ndbd] ALERT -- Angel detected too many startup failures(3), not restarting again
2014-05-13 13:05:51 [ndbd] ALERT -- Node 2: Forced node shutdown completed. Occured during startphase 4. Caused by error 2308: 'Another node failed during system restart, please investigate error(s) on other node(s)(Restart error). Temporary error, restart node'.
It seems that data node 2 is trying to sync with data node 1 but has been forcefully shut down by the management node.
(Mgmt Node)
ndb_mgm> Node 1: Forced node shutdown completed, restarting. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
Node 1: Forced node shutdown completed, restarting. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
Node 1: Forced node shutdown completed. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
Node 2: Forced node shutdown completed, restarting. Occured during startphase 4. Caused by error 2308: 'Another node failed during system restart, please investigate error(s) on other node(s)(Restart error). Temporary error, restart node'.
Node 2: Forced node shutdown completed, restarting. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
ndb_mgm> Node 2: Forced node shutdown completed. Occured during startphase 4. Caused by error 2355: 'Failure to restore schema(Resource configuration error). Permanent error, external action needed'.
Please help me with this; it is very frustrating.

Per the MySQL MEMORY engine documentation:
The MEMORY storage engine (formerly known as HEAP) creates
special-purpose tables with contents that are stored in memory.
Because the data is vulnerable to crashes, hardware issues, or power
outages, only use these tables as temporary work areas or read-only
caches for data pulled from other tables.
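So for plain MEMORY tables the answer is no: the table definition survives a restart, but the rows do not. A minimal sketch to confirm this (assumptions: a local server you can log in to and a test database; the mem_demo table name is made up):

mysql -e "CREATE TABLE test.mem_demo (id INT) ENGINE=MEMORY;"
mysql -e "INSERT INTO test.mem_demo VALUES (1), (2);"
sudo service mysql restart
mysql -e "SELECT COUNT(*) FROM test.mem_demo;"   # returns 0: definition survives, rows do not

NDB tables are a different story: in-memory NDB data is normally recovered from the redo log and local checkpoints on a system restart. Here, though, error 2355 is a permanent schema-restore failure classified as a 'Resource configuration error', so it may be worth checking schema-object limits such as MaxNoOfAttributes and MaxNoOfTables in config.ini. Failing that, a last resort (assuming a recent backup taken with the ndb_mgm START BACKUP command) is to wipe the data nodes with ndbd --initial and restore the backup with ndb_restore.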

Related

Cannot set configuration in Elastic Beanstalk

I have 4 Elastic Beanstalk deployments: 3 are Corretto 8 and the other one is Corretto 11.
On the Corretto 8 deployments, I can set new configuration without issue. On the Corretto 11 instance, however, any attempt to set a new configuration fails and causes a rollback.
The Corretto versions might not be the problem, but it's the only difference I can see. All 4 apps are Spring Boot apps that run as web servers (i.e. embedded Tomcat with exposed web ports). I am trying to set the exact same configuration name and value, and it only fails on the one instance.
The configuration I'm trying to set is pretty simple:
VALIDATE_RENEWALS = true
Even just trying to set DEBUG = true causes a failure and rollback.
I don't see a lot of information from the console about what's failing. Here is the event log:
2020-03-16 13:55:17 UTC-0600 INFO The environment was reverted to the previous configuration setting.
2020-03-16 13:54:45 UTC-0600 ERROR During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
2020-03-16 13:54:45 UTC-0600 ERROR Failed to deploy configuration.
2020-03-16 13:54:45 UTC-0600 ERROR Unsuccessful command execution on instance id(s) 'i-00553f4ac36afd327'. Aborting the operation.
2020-03-16 13:54:45 UTC-0600 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2020-03-16 13:54:45 UTC-0600 ERROR [Instance: i-00553f4ac36afd327] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
2020-03-16 13:54:20 UTC-0600 INFO Updating environment XXX's configuration settings.
2020-03-16 13:54:15 UTC-0600 INFO Environment update is starting.
I've also downloaded the full set of logs for the instance and don't see anything obvious. The app stdout doesn't have any errors or exceptions; it just starts normally and then gets terminated. None of the other log files have messages around the times above, so I'm really not sure what else I can look at.
Edit
The times don't line up, but I do see this in the eb-engine.log file:
2020/03/16 17:54:38.508634 [INFO] checking whether command is applicable to this instance...
2020/03/16 17:54:38.508658 [INFO] this command is applicable to the instance, thus instance should execute command
2020/03/16 17:54:38.508665 [INFO] check whether this is an enhanced env...
2020/03/16 17:54:38.508794 [INFO] Executing instruction: StageJavaApplication
2020/03/16 17:54:38.508858 [ERROR] GetArchivedFileType with file /opt/elasticbeanstalk/deployment/app_source_bundle failed with error open /opt/elasticbeanstalk/deployment/app_source_bundle: no such file or directory
2020/03/16 17:54:38.508868 [ERROR] An error occurred during execution of command [config-deploy] - [StageJavaApplication]. Stop running the command. Error: staging java app failed with error GetArchivedFileType with file /opt/elasticbeanstalk/deployment/app_source_bundle failed with error open /opt/elasticbeanstalk/deployment/app_source_bundle: no such file or directory
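For anyone hitting the same StageJavaApplication error: the deploy aborted because /opt/elasticbeanstalk/deployment/app_source_bundle was missing on the instance, and the event log's own advice is to re-deploy the appropriate application version. A hedged sketch of doing that with the AWS CLI (the environment and version names are placeholders; it assumes the AWS CLI is configured):

aws elasticbeanstalk update-environment \
    --environment-name my-corretto11-env \
    --version-label known-good-version
# once the bundle has been re-staged on the instance, retry setting
# VALIDATE_RENEWALS from the console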

Cassandra down after high traffic

I have a problem with Cassandra: when the traffic goes high, Cassandra crashes.
Here's what I got in system.log:
WARN 11:54:35 JNA link failure, one or more native method will be unavailable.
WARN 11:54:35 jemalloc shared library could not be preloaded to speed up memory allocations
WARN 11:54:35 JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
INFO 11:54:35 Initializing SIGAR library
INFO 11:54:35 Checked OS settings and found them configured for optimal performance.
INFO 11:54:35 Initializing system.schema_triggers
ERROR 11:54:36 Exiting due to error while processing commit log during initialization.
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at org.apache.cassandra.db.commitlog.CommitLogDescriptor.writeHeader(CommitLogDescriptor.java:87) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:153) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.MemoryMappedSegment.<init>(MemoryMappedSegment.java:47) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.CommitLogSegment.createSegment(CommitLogSegment.java:121) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:122) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) [apache-cassandra-2.2.4.jar:2.2.4]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
and in debug.log
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,586 SliceQueryPager.java:92 - Querying next page of slice query; new filter: SliceQueryFilter [reversed=false, slices=[[, ]], count=5000, toGroup = 2]
WARN [SharedPool-Worker-2] 2017-05-25 12:54:18,658 SliceQueryFilter.java:307 - Read 2129 live and 27677 tombstone cells in RestCommSMSC.SLOT_MESSAGES_TABLE_2017_05_25 for key: 549031460 (see tombstone_warn_threshold). 5000 columns were requested, slices=[-]
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,808 AbstractQueryPager.java:95 - Fetched 2129 live rows
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,808 AbstractQueryPager.java:112 - Got result (2129) smaller than page size (5000), considering pager exhausted
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,808 AbstractQueryPager.java:133 - Remaining rows to page: 2147481518
INFO [main] 2017-05-25 12:54:34,826 YamlConfigurationLoader.java:92 - Loading settings from file:/opt/SMGS/apache-cassandra-2.2.4/conf/cassandra.yaml
INFO [main] 2017-05-25 12:54:34,923 YamlConfigurationLoader.java:135 - Node configuration
[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer;
auto_snapshot=true; batch_size_fail_threshold_in_kb=50;
batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_user_defined_functions=false; endpoint_snitch=SimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=all; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=50000; read_request_timeout_in_ms=10000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=50000; role_manager=CassandraRoleManager; roles_validity_in_ms=2000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=127.0.0.1}]}]; server_encryption_options<REDACTED>;snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=5000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; windows_timer_interval=1; write_request_timeout_in_ms=2000]
DEBUG [main] 2017-05-25 12:54:34,958 DatabaseDescriptor.java:296 - Syncing log with a period of 10000
INFO [main] 2017-05-25 12:54:34,958 DatabaseDescriptor.java:304 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO [main] 2017-05-25 12:54:35,110 DatabaseDescriptor.java:409 - Global memtable on-heap threshold is enabled at 1991MB
INFO [main] 2017-05-25 12:54:35,110 DatabaseDescriptor.java:413 - Global memtable off-heap threshold is enabled at 1991MB
I don't know whether this problem is related to the commit logs or not; anyway, in cassandra.yaml I'm setting:
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
You can start your Cassandra with this command:
cd /ASMSC02/apache-cassandra-2.0.11/
nohup bin/cassandra
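If the InternalError during commit-log processing keeps recurring on startup, one hedged way past it is to move the possibly-corrupt segments aside before starting (assumption: the default commitlog_directory path below; note that moving segments discards any writes not yet flushed to SSTables):

# stop Cassandra first (kill the pid if it was started with nohup)
mkdir -p /var/lib/cassandra/commitlog.bak
mv /var/lib/cassandra/commitlog/CommitLog-*.log /var/lib/cassandra/commitlog.bak/
cd /ASMSC02/apache-cassandra-2.0.11/
nohup bin/cassandra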
regards,
Hafiz

Issue when trying to start Sonarqube services

I installed sonarqube-6.3.1 on my machine and created a database in MySQL named 'sonarqubedb'. When I change the sonar.properties file to use that database, SonarQube does not start and throws an error, but if I use the default DB configuration (and NOT MySQL), I am able to start it.
Could somebody please tell me what is going wrong when I use the MySQL DB?
My sonar.properties file looks like this:
sonar.jdbc.username=root
sonar.jdbc.password=
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonarqubedb?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
and the Sonar log file when I try to start the service looks like this:
--> Wrapper Started as Service
Launching a JVM...
WrapperManager class initialized by thread: main Using classloader: sun.misc.Launcher$AppClassLoader@4e25154f
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
Wrapper Manager: JVM #1
Running a 64-bit JVM.
Wrapper Manager: Registering shutdown hook
Wrapper Manager: Using wrapper
Load native library. One or more attempts may fail if platform specific libraries do not exist.
Loading native library failed: wrapper-windows-x86-64.dll Cause: java.lang.UnsatisfiedLinkError: no wrapper-windows-x86-64 in java.library.path
Loaded native library: wrapper.dll
Calling native initialization method.
Initializing WrapperManager native library.
Java Executable: C:\ProgramData\Oracle\Java\javapath\java.exe
Windows version: 6.1.7600
Java Version : 1.8.0_45-b15 Java HotSpot(TM) 64-Bit Server VM
Java VM Vendor : Oracle Corporation
Control event monitor thread started.
Startup runner thread started.
WrapperManager.start(org.tanukisoftware.wrapper.WrapperSimpleApp@4f023edb, args[]) called by thread: main
Communications runner thread started.
Open socket to wrapper...Wrapper-Connection
Failed attempt to bind using local port 31000
Opened Socket from 31001 to 32000
Send a packet KEY : 4hhDEyNqmPXAiWpf
handleSocket(Socket[addr=/127.0.0.1,port=32000,localport=31001])
Received a packet LOW_LOG_LEVEL : 1
Wrapper Manager: LowLogLevel from Wrapper is 1
Received a packet PING_TIMEOUT : 0
PingTimeout from Wrapper is 0
Received a packet PROPERTIES : (Property Values)
Received a packet START : start
calling WrapperListener.start()
Waiting for WrapperListener.start runner thread to complete.
WrapperListener.start runner thread started.
WrapperSimpleApp: start(args) Will wait up to 2 seconds for the main method to complete.
WrapperSimpleApp: invoking main method
2017.04.26 14:54:12 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\temp
2017.04.26 14:54:12 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[es]: C:\Program Files\Java\jre1.8.0_45\bin\java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\temp -javaagent:C:\Program Files\Java\jre1.8.0_45\lib\management-agent.jar -cp ./lib/common/*;./lib/search/* org.sonar.search.SearchServer C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\temp\sq-process3041279828124660880properties
Send a packet START_PENDING : 5000
Send a packet START_PENDING : 5000
WrapperSimpleApp: start(args) end. Main Completed=false, exitCode=null
WrapperListener.start runner thread stopped.
returned from WrapperListener.start()
Send a packet STARTED :
Startup runner thread stopped.
Received a packet PING : ping
Send a packet PING : ok
Received a packet PING : ping
Send a packet PING : ok
2017.04.26 14:54:23 INFO app[][o.s.p.m.Monitor] Process[es] is up
2017.04.26 14:54:23 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: C:\Program Files\Java\jre1.8.0_45\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\temp -javaagent:C:\Program Files\Java\jre1.8.0_45\lib\management-agent.jar -cp ./lib/common/*;./lib/server/*;C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\lib\jdbc\mysql\mysql-connector-java-5.1.39.jar org.sonar.server.app.WebServer C:\Program Files\Sonar\sonarqube-6.3.1\sonarqube-6.3.1\temp\sq-process5745752416531116392properties
Received a packet PING : ping
Send a packet PING : ok
2017.04.26 14:54:28 INFO app[][o.s.p.m.Monitor] Process[es] is stopping
2017.04.26 14:54:28 ERROR app[][o.s.p.m.Monitor] Process[web] failed to start
2017.04.26 14:54:28 INFO app[][o.s.p.m.Monitor] Process[es] is stopped
Wrapper Manager: ShutdownHook started
WrapperManager.stop(0) called by thread: Wrapper-Shutdown-Hook
Send a packet STOP : 0
Received a packet STOP :
Thread, Wrapper-Shutdown-Hook, handling the shutdown process.
calling listener.stop()
WrapperSimpleApp: stop(0)
returned from listener.stop() -> 0
shutdownJVM(0) Thread:Wrapper-Shutdown-Hook
Send a packet STOPPED : 0
Closing socket.
Server daemon shut down
Wrapper Manager: ShutdownHook complete
<-- Wrapper Stopped
Thanks in Advance :)
Go to your sonarqube/logs directory. You'll find several log files, and one of them will contain the detailed error on why SonarQube won't start. (You'll have to scroll all the way down inside the files for the latest information, IIRC.)
Go through this SonarQube documentation; it should help:
https://docs.sonarqube.org/display/SONAR/Installing+the+Server
I had the same issue.
The problem was that my MySQL version didn't meet the minimum requirements.
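If the MySQL version checks out, it is also worth confirming the database and grants themselves. A hedged sketch of a typical MySQL setup for SonarQube (the database name is the poster's; 'sonar' and 'secret' are placeholder credentials):

mysql -u root -e "SELECT VERSION();"   # SonarQube 6.x expects MySQL 5.6 or newer
mysql -u root -e "CREATE DATABASE sonarqubedb CHARACTER SET utf8 COLLATE utf8_general_ci;"
mysql -u root -e "GRANT ALL ON sonarqubedb.* TO 'sonar'@'localhost' IDENTIFIED BY 'secret';"
mysql -u root -e "FLUSH PRIVILEGES;"

Whatever user you grant must match sonar.jdbc.username in sonar.properties; the poster is using root with an empty password, which some MySQL installs disallow for TCP connections.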

Troubleshoot DCHQ Host

I have been running DCHQ.io (On-Prem) for a few months now with no major issue.
My Container Hosts environment looks like this:
DCHQ-VM-Host
RAM: 14gb / CPU 4 / 100gb Storage
10.21.38.165
host0
This is where DCHQ resides
Docker-VM-1
RAM: 8gb / CPU 2
10.21.36.201
host1
Where LB containers are hosted
Docker-Metal-2
RAM 96gb / CPU 12
10.21.39.71
host2
Where APP containers are hosted
Docker-Metal-4
RAM 96gb / CPU 12
10.21.38.170
host4
Where DB containers are hosted
Today, while attempting to deploy the 3-Tier Java (ApacheHTTP – Tomcat – MySQL) application template for a POC at work, host-4 went offline.
A couple of days ago I converted host-1 from a bare-metal machine to a VM. Therefore, I removed that host from DCHQ and added it back using the same name (host-1), but this time as a VM on a different ESX server. I'm not sure if this has something to do with host-4 throwing the error below. As a result, no template involving host-4 can be deployed, so as a workaround I'm using host-1 and host-2 to deploy.
I have tried restarting host-4, deactivating/activating host-4 from within the DCHQ UI, and restarting the agent on host-4, but to no avail.
My last resort is to remove and reinstall the client on host-4, but I wanted to post it here first. I have also emailed DCHQ support about this issue.
The DCHQ log shows the following error:
2016-01-29 15:56:31.026 INFO 1217 --- [pool-3-thread-2] c.d.a.o.impl.ImagePullQueueProcessor : Processing pull req
2016-01-29 15:56:31.028 INFO 1217 --- [pool-3-thread-2] c.d.a.o.impl.TemplateOperationsImpl : Received pull request for image [mysql:latest] registry [null]
2016-01-29 15:56:31.028 INFO 1217 --- [pool-3-thread-2] c.d.a.o.impl.DockerClientBuilderUtil : Using public repo since username [null] or password is empty
2016-01-29 15:56:31.043 ERROR 1217 --- [pool-15-thread-1] c.g.d.core.async.ResultCallbackTemplate : Error during callback
org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:10624 [/127.0.0.1] failed: Connection refused
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:71)
at com.github.dockerjava.jaxrs.connector.ApacheConnector.apply(ApacheConnector.java:443)
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:246)
at org.glassfish.jersey.client.JerseyInvocation$2.call(JerseyInvocation.java:683)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:424)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:679)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:435)
at org.glassfish.jersey.client.JerseyInvocation$Builder.post(JerseyInvocation.java:338)
at com.github.dockerjava.jaxrs.async.POSTCallbackNotifier.response(POSTCallbackNotifier.java:29)
at com.github.dockerjava.jaxrs.async.AbstractCallbackNotifier.call(AbstractCallbackNotifier.java:45)
at com.github.dockerjava.jaxrs.async.AbstractCallbackNotifier.call(AbstractCallbackNotifier.java:22)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 25 common frames omitted
2016-01-29 15:56:31.044 ERROR 1217 --- [pool-3-thread-2] c.d.a.o.impl.TemplateOperationsImpl : Error pulling image [mysql] response logs []
2016-01-29 15:56:31.046 INFO 1217 --- [pool-3-thread-2] c.d.a.o.impl.ImagePullQueueProcessor : Finished processing pull req
2016-01-29 15:58:39.687 WARN 1217 --- [pool-3-thread-1] c.d.a.o.impl.SysInfoMonitorService : org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:10624 [/127.0.0.1] failed: Connection refused
2016-01-29 15:58:39.688 ERROR 1217 --- [pool-3-thread-2] c.d.a.o.impl.MachineOperationsImpl : org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:10624 [/127.0.0.1] failed: Connection refused
Thank you in advance,
Rod
Thanks for reporting this issue. Feel free to report this on our issue tracker.
https://github.com/dchqinc/dchq-on-premise-issue-tracker/issues
Please make sure that the information in the application.properties file has not changed. Make sure that the server key matches whatever the DCHQ UI shows for host-4.
vi /opt/dchq/config/application.properties
You can then restart the agent:
service dchq stop
ps -ef | grep dchq
## forcefully kill any DCHQ process that may not have stopped otherwise using kill -9
service dchq start
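If the agent still fails after a restart, a quick check from host-4 itself can help narrow things down. A hedged sketch, assuming (as the stack trace suggests) the agent talks to a local Docker Remote API endpoint on 127.0.0.1:10624:

ss -tlnp | grep 10624                      # is anything listening on that port?
curl -s http://127.0.0.1:10624/version     # Docker Remote API version endpoint

A 'Connection refused' here would point at the local Docker/agent endpoint on host-4 rather than at DCHQ itself.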
Lastly -- can you please test the connection from the UI? This will help us narrow down the issue.
For any help deploying Docker Compose applications, please refer to our documentation: http://dchq.co/docker-compose.html

Error Unable to save measure for metric pdf-data on component :Sonar Exception?

I have a Jenkins job that runs SonarRunner on a Maven project composed of several modules. I'm using a MySQL DB for Sonar. The build fails, showing:
[ERROR] Unable to save measure for metric [pdf-data] on component [com.XX:xxx-parent]
[ERROR] Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.3.1:sonar (default-cli) on project XXX-parent: Unable to save measure for metric [pdf-data] on component [com.XX:XXX-parent]:
[ERROR] ### Error updating database. Cause: com.mysql.jdbc.PacketTooBigException: Packet for query is too large (3676748 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
[ERROR] ### The error may involve org.sonar.api.database.model.MeasureMapper.insertData-Inline
[ERROR] ### The error occurred while setting parameters
[ERROR] ### SQL: INSERT INTO measure_data (measure_id, snapshot_id, data) VALUES (?, ?, ?)
[ERROR] ### Cause: com.mysql.jdbc.PacketTooBigException: Packet for query is too large (3676748 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.3.1:sonar (default-cli) on project XXX-parent: Unable to save measure for metric [pdf-data] on component [com.XXX:XXX-parent]
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
I encountered this error as well. The MySQL database that your SonarQube is using only allows LOB contents up to 1 MB by default, and the content it is trying to save is larger than that, as stated in the error message.
com.mysql.jdbc.PacketTooBigException: Packet for query is too large (3676748 > 1048576).
The solution is to increase the max_allowed_packet setting for the MySQL instance that SonarQube is using.
You can change this value on the server by setting the max_allowed_packet' variable.
Instructions can be found here.
Start the server with the desired setting, e.g. mysqld --max_allowed_packet=16M for a 16 MB maximum.
Or, more permanently, modify the config file (my.cnf on Linux systems, possibly my.ini on Windows) by adding the setting under the [mysqld] section.
[mysqld]
max_allowed_packet=16M
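While testing, the same variable can also be raised at runtime without editing the config file. A hedged sketch (requires the SUPER privilege; it affects new connections only and does not survive a server restart):

mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 16777216;"   # 16 MB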