Airflow webserver starting - gunicorn workers shutting down

I am running Airflow 1.8 on CentOS 7 in Docker, and the webserver never becomes reachable in the browser. I installed Airflow via pip2.7. The Flower UI displays fine, airflow initdb ran successfully against a Postgres metadata database with a Redis broker, I am using the CeleryExecutor, running on ECS, and running as root. The webserver is started with airflow webserver on the default port 8080.
Does anyone know the causes of / solutions for the gunicorn workers exiting, per the log shown below? Specifically, it seems to be this line:
ERROR - [0 / 0] some workers seem to have died and gunicorndid not restart them as expected
Whole log...
[2018-04-13 20:05:01,161] {db.py:287} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
Done.
[2018-04-13 20:05:02,358] {__init__.py:57} INFO - Using executor CeleryExecutor
/usr/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
[2018-04-13 20:05:03,363] [1] {models.py:167} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2018-04-13 20:05:04,488] {__init__.py:57} INFO - Using executor CeleryExecutor
[2018-04-13 20:05:04 +0000] [18] [INFO] Starting gunicorn 19.3.0
[2018-04-13 20:05:04 +0000] [18] [INFO] Listening at: http://0.0.0.0:8080 (18)
[2018-04-13 20:05:04 +0000] [18] [INFO] Using worker: sync
[2018-04-13 20:05:04 +0000] [24] [INFO] Booting worker with pid: 24
[2018-04-13 20:05:05 +0000] [25] [INFO] Booting worker with pid: 25
/usr/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2018-04-13 20:05:05 +0000] [26] [INFO] Booting worker with pid: 26
[2018-04-13 20:05:05 +0000] [27] [INFO] Booting worker with pid: 27
/usr/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
/usr/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
/usr/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2018-04-13 20:05:06,461] [24] {models.py:167} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2018-04-13 20:05:07,873] [1] {cli.py:723} ERROR - [0 / 0] some workers seem to have died and gunicorndid not restart them as expected
[2018-04-13 20:05:08,271] [27] {models.py:167} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2018-04-13 20:05:08,271] [25] {models.py:167} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2018-04-13 20:05:08,271] [26] {models.py:167} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2018-04-13 20:05:09 +0000] [25] [INFO] Parent changed, shutting down: <Worker 25>
[2018-04-13 20:05:09 +0000] [25] [INFO] Worker exiting (pid: 25)
[2018-04-13 20:05:09 +0000] [26] [INFO] Parent changed, shutting down: <Worker 26>
[2018-04-13 20:05:09 +0000] [26] [INFO] Worker exiting (pid: 26)
[2018-04-13 20:05:09 +0000] [27] [INFO] Parent changed, shutting down: <Worker 27>
[2018-04-13 20:05:09 +0000] [27] [INFO] Worker exiting (pid: 27)
I swear I had this working not long ago; I don't know what happened. Here is the list of pip packages I installed:
airflow (1.8.0)
alembic (0.8.10)
amqp (2.2.2)
asn1crypto (0.24.0)
awscli (1.15.4)
Babel (2.5.3)
backports-abc (0.5)
billiard (3.5.0.3)
boto3 (1.7.4)
botocore (1.10.4)
celery (4.0.2)
certifi (2018.1.18)
cffi (1.11.5)
chardet (3.0.4)
click (6.7)
colorama (0.3.7)
croniter (0.3.20)
cryptography (2.2.2)
Cython (0.28.2)
dill (0.2.7.1)
docutils (0.14)
enum34 (1.1.6)
Flask (0.11.1)
Flask-Admin (1.4.1)
Flask-Cache (0.13.1)
Flask-Login (0.2.11)
flask-swagger (0.2.13)
Flask-WTF (0.12)
flower (0.9.2)
funcsigs (1.0.0)
future (0.15.2)
futures (3.2.0)
gitdb2 (2.0.3)
GitPython (2.1.9)
gunicorn (19.3.0)
idna (2.6)
ipaddress (1.0.19)
itsdangerous (0.24)
Jinja2 (2.8.1)
jmespath (0.9.3)
kombu (4.1.0)
lockfile (0.12.2)
lxml (3.8.0)
Mako (1.0.7)
Markdown (2.6.11)
MarkupSafe (1.0)
ndg-httpsclient (0.4.4)
numpy (1.14.2)
ordereddict (1.1)
pandas (0.22.0)
pip (9.0.3)
psutil (4.4.2)
psycopg2-binary (2.7.4)
pyasn1 (0.4.2)
pycparser (2.18)
Pygments (2.2.0)
pyOpenSSL (17.5.0)
python-daemon (2.1.2)
python-dateutil (2.7.2)
python-editor (1.0.3)
python-nvd3 (0.14.2)
python-slugify (1.1.4)
pytz (2018.4)
PyYAML (3.12)
redis (2.10.6)
requests (2.18.4)
rsa (3.4.2)
s3transfer (0.1.13)
setproctitle (1.1.10)
setuptools (39.0.1)
singledispatch (3.4.0.3)
six (1.11.0)
smmap2 (2.0.3)
SQLAlchemy (1.2.6)
tabulate (0.7.7)
thrift (0.9.3)
tornado (5.0.2)
Unidecode (1.0.22)
urllib3 (1.22)
vine (1.1.4)
Werkzeug (0.14.1)
wheel (0.31.0)
WTForms (2.1)
zope.deprecation (4.3.0)
UPDATE
I installed from source and am now getting this error from the webserver
[2018-04-14 00:20:48,594] {{cli.py:718}} ERROR - [0 / 0] some workers seem to have died and gunicorndid not restart them as expected
[2018-04-14 00:20:50,396] {{models.py:197}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2018-04-14 00:20:50,396] {{models.py:197}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2018-04-14 00:20:50,396] {{models.py:197}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2018-04-14 00:24:18,135] {{cli.py:725}} ERROR - No response from gunicorn master within 120 seconds
[2018-04-14 00:24:23,032] {{cli.py:726}} ERROR - Shutting down webserver
I think this is a consequence of https://issues.apache.org/jira/browse/AIRFLOW-1235, which shuts down the webserver when the gunicorn workers die. I think....
UPDATE
OK, this fixed itself somehow. I don't know how, because I did a number of things, but installing gunicorn with greenlet, eventlet, and gevent might have helped, and it could also have been something in my entrypoint, perhaps a race between executing airflow webserver right after airflow initdb. I'm leaving the question up, as I faced this with a puckel install before as well and would love to know if this is a bug others are facing and what the issue was.
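For reference, here is a minimal entrypoint sketch of the serialization I suspect was missing. It is only a sketch: the pg_isready wait loop, the POSTGRES_HOST variable, and the ports are assumptions, not my actual entrypoint.
#!/bin/bash
set -e
# Block until the Postgres metadata DB accepts connections (host is a placeholder).
until pg_isready -h "$POSTGRES_HOST" -p 5432; do
  sleep 2
done
airflow initdb                  # let initdb finish completely...
exec airflow webserver -p 8080  # ...before the webserver forks its gunicorn workers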

So, when you installed from source you got the fix for https://issues.apache.org/jira/browse/AIRFLOW-1235, which I think restarts the master and workers when a worker dies.
I've also seen my workers die when the MySQL session/connection goes bad, e.g. an exception from SQLAlchemy either about the transaction having failed due to a concurrency lock and needing to be retried (around which Airflow's models didn't have any retry logic), or an InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction. But not generally at startup.
The two times I had errors at startup were when the connection to the database could not be made due to a security group issue in AWS, and when our 3000+ DAGs took so long to get added to the DagBag that the timeout on the workers was tripped and they'd shut themselves down before the setup code was done. I would love to see this setup code improved or moved out of the workers.
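If the slow-DagBag theory applies here too, one hedged mitigation is raising the gunicorn worker timeout in airflow.cfg so workers aren't killed mid-setup. The 300 below is illustrative; 120 and 4 are the defaults visible in the startup banner in the log above.
[webserver]
# default is 4; fewer workers also means fewer concurrent DagBag fills at boot
workers = 4
# default is 120 (see "Timeout: 120" in the banner); raised so slow DagBag loads don't trip it
web_server_worker_timeout = 300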

Related

Unable to start Tessera node for Quorum

I have set up a 3-node Quorum network by following the guide, and now I am setting up the Tessera nodes for the 3 nodes. However, the Tessera nodes are not starting up.
https://docs.goquorum.com/en/latest/Getting%20Started/Creating-A-Network-From-Scratch/#tessera
Here are the logs captured:
2019-11-09 00:57:30.942 [main] INFO org.eclipse.jetty.util.log - Logging initialized @7972ms to org.eclipse.jetty.util.log.Slf4jLog
2019-11-09 00:57:31.069 [main] INFO c.quorum.tessera.server.JerseyServer - Starting http://localhost:9081
2019-11-09 00:57:31.075 [main] INFO org.eclipse.jetty.server.Server - jetty-9.4.z-SNAPSHOT; built: 2019-04-18T19:45:35.259Z; git: aa1c656c315c011c01e7b21aabb04066635b9f67; jvm 1.8.0_231-b11
2019-11-09 00:57:32.051 [main] WARN o.g.jersey.internal.inject.Providers - A provider com.quorum.tessera.thirdparty.RawTransactionResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider com.quorum.tessera.thirdparty.RawTransactionResource will be ignored.
2019-11-09 00:57:32.519 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@37c5284a{/,null,AVAILABLE}
Failed to bind to 0.0.0.0/0.0.0.0:9081
2019-11-09 00:57:32.540 [Thread-2] INFO c.quorum.tessera.server.JerseyServer - Stopping Jersey server at http://localhost:9081
2019-11-09 00:57:32.555 [Thread-2] INFO o.e.jetty.server.AbstractConnector - Stopped ServerConnector@41853299{HTTP/1.1,[http/1.1]}{0.0.0.0:9081}
2019-11-09 00:57:32.782 [Thread-2] INFO o.e.j.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@37c5284a{/,null,UNAVAILABLE}
2019-11-09 00:57:32.787 [Thread-2] INFO c.quorum.tessera.server.JerseyServer - Stopped Jersey server at http://localhost:9081
2019-11-09 00:57:32.787 [Thread-2] INFO c.quorum.tessera.server.JerseyServer - Stopping Jersey server at unix:/home/srikant/Desktop/fromscratch/new-node-1t/tm.ipc
2019-11-09 00:57:32.787 [Thread-2] INFO c.quorum.tessera.server.JerseyServer - Stopped Jersey server at unix:/home/srikant/Desktop/fromscratch/new-node-1t/tm.ipc
2019-11-09 00:57:32.788 [Thread-2] INFO c.quorum.tessera.server.JerseyServer - Stopping Jersey server at http://localhost:9001
2019-11-09 00:57:32.788 [Thread-2] INFO c.quorum.tessera.server.JerseyServer - Stopped Jersey server at http://localhost:9001
Any help would be really appreciated.
The 'Failed to bind' message implies that the port is in use.
Perhaps Tessera is already running?
You can use lsof -nP -i4TCP:9081 to check which process is using port 9081.
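If lsof does show a holder, a hedged follow-up (the PID is a placeholder for whatever lsof reports):
lsof -nP -i4TCP:9081   # the PID is in the second column of the output
kill <PID>             # escalate to kill -9 <PID> only if it ignores the TERM signal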
I came across the same issue. I restarted the VM, and then it worked.

Apache Karaf stuck while shutting down with Pax-Exam

I was running integration tests with Pax-Exam and Karaf. The tests executed successfully, but while shutting down Karaf it got stuck on the output below and never resumed.
Pax-Exam = 4.11
Karaf = 4.2
[main] DEBUG o.ops4j.store.intern.TemporaryStore - Exit store(): 66cf6a516d0d1a670e78bd6b0be97f3da2a380b3
[main] DEBUG o.o.p.e.c.remote.RBCRemoteTarget - Preparing and Installing bundle (from stream )..
[main] DEBUG o.o.p.e.r.c.RemoteBundleContextClient - Packing probe into memory for true RMI. Hopefully things will fill in..
[main] DEBUG o.o.p.e.c.remote.RBCRemoteTarget - Installed bundle (from stream) as ID: 86
[main] DEBUG o.o.p.e.c.remote.RBCRemoteTarget - call [[TestAddress:PaxExam-bc970a6c-c656-4aa6-9300-35ded2bcde50 root:PaxExam-f6737e31-8f28-43e0-847e-1f3f49649233]]
[main] DEBUG o.o.p.e.k.c.i.KarafTestContainer - Shutting down the test container (Pax Runner)
Following is the JConsole output for the blocked thread:
Name: main
State: BLOCKED on java.lang.Object@d53a0bb owned by: KarafJavaRunner
Total blocked: 106 Total waited: 105
Stack trace:
org.ops4j.pax.exam.karaf.container.internal.runner.InternalRunner.shutdown(InternalRunner.java:71)
org.ops4j.pax.exam.karaf.container.internal.runner.KarafJavaRunner.shutdown(KarafJavaRunner.java:120)
- locked org.ops4j.pax.exam.karaf.container.internal.runner.KarafJavaRunner@279baf5b
org.ops4j.pax.exam.karaf.container.internal.KarafTestContainer.stop(KarafTestContainer.java:600)
- locked org.ops4j.pax.exam.karaf.container.internal.KarafTestContainer@25dcfa62
org.ops4j.pax.exam.spi.reactors.AllConfinedStagedReactor.invoke(AllConfinedStagedReactor.java:87)
org.ops4j.pax.exam.junit.impl.ProbeRunner$2.evaluate(ProbeRunner.java:267)
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.ops4j.pax.exam.junit.impl.ProbeRunner.run(ProbeRunner.java:98)
org.ops4j.pax.exam.junit.PaxExam.run(PaxExam.java:93)
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Update:
One more thing I observed: if I forcefully shut it down and then run "mvn clean install", I get the following error and have to wait before it will run again:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-clean-plugin:2.5:clean (default-clean) on project osgi-unit-tests-sample: Failed to clean project: Failed to delete C:\Users\..\target\paxexam\e266ddcb-5fed-4997-8178-3d4944251418\system\org\apache\felix\org.apache.felix.framework\5.6.10\org.apache.felix.framework-5.6.10.jar -> [Help 1]
Update 2:
After exiting the prompt, it is still running:
C:\Program Files\Java\jdk1.8.0_162\bin>jps -l
1552 sun.tools.jps.Jps
4144
1420 org.apache.karaf.main.Main
C:\Program Files\Java\jdk1.8.0_162\bin>jps -l 1420
RMI Registry not available at 1420:1099
Exception creating connection to: 1420; nested exception is:
java.net.SocketException: Network is unreachable: connect
Update 3:
If I kill this process, Pax-Exam resumes and reports successful execution of the tests. In fact, all tests had already passed before the shutdown; it is just not able to shut down.
TASKKILL /F /PID 10692
Now I have no clue how to handle this locking issue.
Update 4:
Name: main
State: WAITING on org.apache.felix.framework.util.ThreadGate@b3d26d8
Total blocked: 6 Total waited: 7
Stack trace:
java.lang.Object.wait(Native Method)
org.apache.felix.framework.util.ThreadGate.await(ThreadGate.java:79)
org.apache.felix.framework.Felix.waitForStop(Felix.java:1075)
org.apache.karaf.main.Main.awaitShutdown(Main.java:640)
org.apache.karaf.main.Main.main(Main.java:188)
Name: FelixDispatchQueue
State: WAITING on java.util.ArrayList@3276dd18
Total blocked: 353 Total waited: 342
Stack trace:
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:502)
org.apache.felix.framework.EventDispatcher.run(EventDispatcher.java:1122)
org.apache.felix.framework.EventDispatcher.access$000(EventDispatcher.java:54)
org.apache.felix.framework.EventDispatcher$1.run(EventDispatcher.java:102)
java.lang.Thread.run(Thread.java:748)
Update 5:
After spending a lot of time, I finally realized that it gets stuck when I add the bundles below; if I don't add them, it works fine:
wrappedBundle( maven("org.ops4j.pax.tinybundles", "tinybundles").versionAsInProject() ), //2.1.0
wrappedBundle( maven("biz.aQute.bnd", "bndlib").versionAsInProject() )//2.4.0
Regards,
I resolved the issue by changing the following jar versions:
maven("org.ops4j.pax.tinybundles", "tinybundles") from 2.1.0 to 3.0.0
maven("biz.aQute.bnd", "bndlib") from 2.4.0 to 3.5.0

Cassandra down after high traffic

I have a problem with Cassandra when the traffic goes high: Cassandra crashes.
Here's what I got in system.log:
WARN 11:54:35 JNA link failure, one or more native method will be unavailable.
WARN 11:54:35 jemalloc shared library could not be preloaded to speed up memory allocations
WARN 11:54:35 JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
INFO 11:54:35 Initializing SIGAR library
INFO 11:54:35 Checked OS settings and found them configured for optimal performance.
INFO 11:54:35 Initializing system.schema_triggers
ERROR 11:54:36 Exiting due to error while processing commit log during initialization.
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at org.apache.cassandra.db.commitlog.CommitLogDescriptor.writeHeader(CommitLogDescriptor.java:87) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:153) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.MemoryMappedSegment.<init>(MemoryMappedSegment.java:47) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.CommitLogSegment.createSegment(CommitLogSegment.java:121) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:122) ~[apache-cassandra-2.2.4.jar:2.2.4]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) [apache-cassandra-2.2.4.jar:2.2.4]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
and in debug.log
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,586 SliceQueryPager.java:92 - Querying next page of slice query; new filter: SliceQueryFilter [reversed=false, slices=[[, ]], count=5000, toGroup = 2]
WARN [SharedPool-Worker-2] 2017-05-25 12:54:18,658 SliceQueryFilter.java:307 - Read 2129 live and 27677 tombstone cells in RestCommSMSC.SLOT_MESSAGES_TABLE_2017_05_25 for key: 549031460 (see tombstone_warn_threshold). 5000 columns were requested, slices=[-]
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,808 AbstractQueryPager.java:95 - Fetched 2129 live rows
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,808 AbstractQueryPager.java:112 - Got result (2129) smaller than page size (5000), considering pager exhausted
DEBUG [SharedPool-Worker-1] 2017-05-25 12:54:18,808 AbstractQueryPager.java:133 - Remaining rows to page: 2147481518
INFO [main] 2017-05-25 12:54:34,826 YamlConfigurationLoader.java:92 - Loading settings from file:/opt/SMGS/apache-cassandra-2.2.4/conf/cassandra.yaml
INFO [main] 2017-05-25 12:54:34,923 YamlConfigurationLoader.java:135 - Node configuration
[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer;
auto_snapshot=true; batch_size_fail_threshold_in_kb=50;
batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_user_defined_functions=false; endpoint_snitch=SimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=all; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=50000; read_request_timeout_in_ms=10000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=50000; role_manager=CassandraRoleManager; roles_validity_in_ms=2000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=127.0.0.1}]}]; server_encryption_options<REDACTED>;snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=5000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; windows_timer_interval=1; write_request_timeout_in_ms=2000]
DEBUG [main] 2017-05-25 12:54:34,958 DatabaseDescriptor.java:296 - Syncing log with a period of 10000
INFO [main] 2017-05-25 12:54:34,958 DatabaseDescriptor.java:304 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO [main] 2017-05-25 12:54:35,110 DatabaseDescriptor.java:409 - Global memtable on-heap threshold is enabled at 1991MB
INFO [main] 2017-05-25 12:54:35,110 DatabaseDescriptor.java:413 - Global memtable off-heap threshold is enabled at 1991MB
I don't know if this problem is related to commit logs or not; anyway, in cassandra.yaml I'm setting:
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
You can start your Cassandra with this command:
cd /ASMSC02/apache-cassandra-2.0.11/
nohup bin/cassandra
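While debugging, a hedged variant is to run in the foreground with the standard -f flag, so the InternalError lands on the console instead of in nohup.out:
cd /ASMSC02/apache-cassandra-2.0.11/
bin/cassandra -f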
Regards,
Hafiz

Jenkins error: Authorization failed to SVN in SonarQube

I am using Jenkins 2.8 with the SonarQube plugin 2.2.1. A week ago we had a problem with the Sonar server and started getting this error. To solve the problem, we decided to create a new MySQL schema and link it to the Sonar server.
We did something like this:
mysql -u root -plinux
create database sonarqube2 character set utf8;
grant all privileges on sonarqube2.* to 'sonaruser'@'localhost' identified by 'linux';
grant all privileges on sonarqube2.* to 'sonaruser'@'%' identified by 'linux';
flush privileges;
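As a quick, purely illustrative sanity check that the grants took effect for the reused user, you can run this from the mysql client:
show grants for 'sonaruser'@'localhost';
show grants for 'sonaruser'@'%';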
Note: We used the same user we had in the old database
After updating sonar.jdbc.url with the new data in sonar.properties and in the Jenkins configuration, we managed to deploy SonarQube again. Then we tried to launch a SONAR job that we had already created before, and we got this error:
[ERROR] Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.6:sonar (default-cli) on project cas: The svn blame command [svn blame --xml --non-interactive -x -w src/main/java/net/mycompany/cas/CookieRetrievingCookieGeneratorPatch.java] failed: svn: OPTIONS of 'https://my_svn_server/svn/mycompanyxf/cas/trunk/src/main/java/net/mycompany/cas/CookieRetrievingCookieGeneratorPatch.java': authorization failed: Could not authenticate to server: rejected Basic challenge (https://my_svn_server) -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.6:sonar (default-cli) on project cas: The svn blame command [svn blame --xml --non-interactive -x -w src/main/java/net/mycompany/cas/CookieRetrievingCookieGeneratorPatch.java] failed: svn: OPTIONS of 'https://my_svn_server/svn/mycompanyxf/cas/trunk/src/main/java/net/mycompany/cas/CookieRetrievingCookieGeneratorPatch.java': authorization failed: Could not authenticate to server: rejected Basic challenge (https://my_svn_server)
This is the whole stacktrace:
[INFO] --- sonar-maven-plugin:2.6:sonar (default-cli) @ cas ---
[INFO] SonarQube version: 5.0.1
[WARNING] Invalid POM for org.samba.jcifs:jcifs-ext:jar:0.9.4, transitive dependencies (if any) will not be available, enable debug logging for more details
INFO: Default locale: "es_ES", source code encoding: "UTF-8" (analysis is platform dependent)
INFO: Work directory: /u01/jenkins_home/jobs/SONAR - MC - Cas/workspace/trunk/target/sonar
INFO: SonarQube Server 5.0.1
[INFO] [13:47:05.297] Load global referentials...
[INFO] [13:47:05.751] Load global referentials done: 458 ms
[INFO] [13:47:05.771] User cache: /home/tomcat/.sonar/cache
[INFO] [13:47:05.801] Install plugins
[INFO] [13:47:07.792] Install JDBC driver
[INFO] [13:47:07.802] Create JDBC datasource for jdbc:mysql://myserver:3306/sonarqube2?useUnicode=true&characterEncoding=utf8
[INFO] [13:47:11.353] Initializing Hibernate
[ERROR] [13:47:15.591] No license for plugin views
[INFO] [13:47:16.958] Load project referentials...
[INFO] [13:47:18.003] Load project referentials done: 1045 ms
[INFO] [13:47:18.003] Load project settings
[INFO] [13:47:19.038] Loading technical debt model...
[INFO] [13:47:19.097] Loading technical debt model done: 59 ms
[INFO] [13:47:19.115] Apply project exclusions
[INFO] [13:47:19.336] ------------- Scan CAS (Central Authentication Service)
[INFO] [13:47:19.339] Load module settings
[INFO] [13:47:21.334] Loading rules...
[INFO] [13:47:23.049] Loading rules done: 1715 ms
[INFO] [13:47:23.115] Configure Maven plugins
[INFO] [13:47:23.367] No quality gate is configured.
[INFO] [13:47:29.435] Initializer FindbugsMavenInitializer...
[INFO] [13:47:29.437] Initializer FindbugsMavenInitializer done: 2 ms
[INFO] [13:47:29.437] Base dir: /u01/jenkins_home/jobs/SONAR - MC - Cas/workspace/trunk
[INFO] [13:47:29.437] Working dir: /u01/jenkins_home/jobs/SONAR - MC - Cas/workspace/trunk/target/sonar
[INFO] [13:47:29.438] Source paths: src/main/webapp, pom.xml, src/main/java
[INFO] [13:47:29.438] Test paths: src/test/java
[INFO] [13:47:29.439] Binary dirs: target/classes
[INFO] [13:47:29.439] Source encoding: UTF-8, default locale: es_ES
[INFO] [13:47:29.439] Index files
[INFO] [13:47:30.480] 36 files indexed
[INFO] [13:47:31.213] Quality profile for java: Sonar way
[INFO] [13:47:31.213] Quality profile for js: Sonar way
[INFO] [13:47:31.300] JIRA issues sensor will not run as some parameters are missing.
[INFO] [13:47:31.392] Sensor JavaSquidSensor...
[INFO] [13:47:32.089] Java Main Files AST scan...
[INFO] [13:47:32.094] 25 source files to be analyzed
[INFO] [13:47:36.733] Java Main Files AST scan done: 4643 ms
[INFO] [13:47:36.733] 25/25 source files analyzed
[INFO] [13:47:36.746] Java bytecode scan...
[INFO] [13:47:37.302] Java bytecode scan done: 556 ms
[INFO] [13:47:37.305] Java Test Files AST scan...
[INFO] [13:47:37.306] 5 source files to be analyzed
[INFO] [13:47:37.626] 5/5 source files analyzed
[INFO] [13:47:37.627] Java Test Files AST scan done: 321 ms
[INFO] [13:47:37.633] Package design analysis...
[INFO] [13:47:37.684] Package design analysis done: 51 ms
[INFO] [13:47:37.801] Sensor JavaSquidSensor done: 6409 ms
[INFO] [13:47:37.813] Sensor QProfileSensor...
[INFO] [13:47:37.819] Sensor QProfileSensor done: 6 ms
[INFO] [13:47:37.819] Sensor Maven dependencies...
[INFO] [13:47:40.023] Sensor Maven dependencies done: 2204 ms
[INFO] [13:47:40.026] Sensor JavaScriptSquidSensor...
[INFO] [13:47:40.205] 6 source files to be analyzed
[INFO] [13:47:45.590] 6/6 source files analyzed
[INFO] [13:47:48.499] Sensor JavaScriptSquidSensor done: 8473 ms
[INFO] [13:47:48.506] Sensor CoverageSensor...
[INFO] [13:47:48.507] Sensor CoverageSensor done: 1 ms
[INFO] [13:47:48.507] Sensor InitialOpenIssuesSensor...
[INFO] [13:47:48.525] Sensor InitialOpenIssuesSensor done: 18 ms
[INFO] [13:47:48.525] Sensor ProjectLinksSensor...
[INFO] [13:47:48.557] Sensor ProjectLinksSensor done: 32 ms
[INFO] [13:47:48.558] Sensor VersionEventsSensor...
[INFO] [13:47:48.600] Sensor VersionEventsSensor done: 42 ms
[INFO] [13:47:48.600] Sensor FileHashSensor...
[INFO] [13:47:48.608] Sensor FileHashSensor done: 8 ms
[INFO] [13:47:48.610] Sensor CoberturaSensor...
[INFO] [13:47:48.616] parsing /u01/jenkins_home/jobs/SONAR - MC - Cas/workspace/trunk/target/site/cobertura/coverage.xml
[INFO] [13:47:49.078] Sensor CoberturaSensor done: 468 ms
[INFO] [13:47:49.078] Sensor SCM Sensor...
[INFO] [13:47:49.089] SCM provider for this project is: svn
[INFO] [13:47:49.089] Retrieve SCM blame information...
[INFO] [13:47:49.218] 36 files to be analyzed
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:04.240s
[INFO] Finished at: Fri Aug 12 13:47:54 CEST 2016
[INFO] Final Memory: 58M/955M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.6:sonar (default-cli) on project cas: The svn blame command [svn blame --xml --non-interactive -x -w src/main/java/net/mycompany/cas/CookieRetrievingCookieGeneratorPatch.java] failed: svn: OPTIONS of 'https://my_svn_server/svn/mycompanyxf/cas/trunk/src/main/java/net/mycompany/cas/CookieRetrievingCookieGeneratorPatch.java': authorization failed: Could not authenticate to server: rejected Basic challenge (https://my_svn_server) -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.6:sonar (default-cli) on project cas: The svn blame command [svn blame --xml --non-interactive -x -w src/main/java/net/mycompany/cas/CookieRetrievingCookieGeneratorPatch.java] failed: svn: OPTIONS of 'https://my_svn_server/svn/mycompanyxf/cas/trunk/src/main/java/net/mycompany/cas/CookieRetrievingCookieGeneratorPatch.java': authorization failed: Could not authenticate to server: rejected Basic challenge (https://my_svn_server)
Should I change additional credentials in Jenkins?
Any help would be appreciated.
See the SonarQube 5.0 SCM support documentation for details on how to configure SVN authentication (which is what is failing here) using sonar.svn.username and sonar.svn.password.secured.
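For example, a sketch of passing those properties on the Maven command line (the values are placeholders; in Jenkins you would normally store the password in the secured analysis property rather than typing it inline):
mvn org.codehaus.mojo:sonar-maven-plugin:2.6:sonar -Dsonar.svn.username=<svn_user> -Dsonar.svn.password.secured=<svn_password>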
(Side note: SonarQube 5.0 is pretty old and known for performance issues; you should really upgrade to the latest/LTS version.)

Chef believes dependencies not met

I have a Chef custom recipe that uses the Opscode 'database' cookbook. I'm using Chef 11.10 and Berkshelf 3.1.3 in an OpsWorks stack and have specified the 'database' cookbook in the Berksfile. It pulls down the dependencies correctly, and I can see this in the log. Upon running setup, I get a couple of errors: one about a file not being found (and the path indeed doesn't exist), and another, a 412, about the dependency precondition for 'mysql' not being met.
I don't know enough Chef to know whether the first error would cause the second failure, but it certainly appears that the required version of the mysql cookbook is being met. Are there any known issues with this that anyone knows of? Here's the output log of the failing setup command:
[2014-09-29T07:32:17+00:00] INFO: Starting chef-zero on port 8889 with repository at repository at /opt/aws/opsworks/current
One version per cookbook
data_bags at /var/lib/aws/opsworks/data/data_bags
nodes at /var/lib/aws/opsworks/data/nodes
[2014-09-29T07:32:18+00:00] INFO: Forking chef instance to converge...
[2014-09-29T07:32:18+00:00] INFO: *** Chef 11.10.4 ***
[2014-09-29T07:32:18+00:00] INFO: Chef-client pid: 2695
[2014-09-29T07:32:19+00:00] INFO: Setting the run_list to ["opsworks_custom_cookbooks::load", "opsworks_custom_cookbooks::execute"] from JSON
[2014-09-29T07:32:19+00:00] WARN: Run List override has been provided.
[2014-09-29T07:32:19+00:00] WARN: Original Run List: [recipe[opsworks_custom_cookbooks::load], recipe[opsworks_custom_cookbooks::execute]]
[2014-09-29T07:32:19+00:00] WARN: Overridden Run List: [recipe[opsworks_custom_cookbooks::load], recipe[opsworks_custom_cookbooks::execute]]
[2014-09-29T07:32:19+00:00] INFO: Run List is [recipe[opsworks_custom_cookbooks::load], recipe[opsworks_custom_cookbooks::execute]]
[2014-09-29T07:32:19+00:00] INFO: Run List expands to [opsworks_custom_cookbooks::load, opsworks_custom_cookbooks::execute]
[2014-09-29T07:32:19+00:00] INFO: Starting Chef Run for www-prod-migration-3.localdomain
[2014-09-29T07:32:19+00:00] INFO: Running start handlers
[2014-09-29T07:32:19+00:00] INFO: Start handlers complete.
[2014-09-29T07:32:19+00:00] INFO: HTTP Request Returned 404 Not Found: Object not found: /reports/nodes/www-prod-migration-3.localdomain/runs
[2014-09-29T07:32:34+00:00] INFO: Loading cookbooks [apache2, dependencies, deploy, gem_support, mod_php5_apache2, mysql, nginx, opsworks_agent_monit, opsworks_aws_flow_ruby, opsworks_berkshelf, opsworks_bundler, opsworks_commons, opsworks_custom_cookbooks, opsworks_initial_setup, opsworks_java, opsworks_nodejs, opsworks_rubygems, packages, passenger_apache2, php, rails, ruby, scm_helper, ssh_users, unicorn]
[2014-09-29T07:32:36+00:00] INFO: Not needed with Chef 11.x (x >= 8) anymore.
[2014-09-29T07:32:36+00:00] INFO: Processing package[git] action install (opsworks_custom_cookbooks::checkout line 11)
[2014-09-29T07:32:38+00:00] INFO: Processing directory[/root/.ssh] action create (opsworks_custom_cookbooks::checkout line 8)
[2014-09-29T07:32:38+00:00] INFO: Processing file[/root/.ssh/config] action touch (opsworks_custom_cookbooks::checkout line 16)
[2014-09-29T07:32:38+00:00] INFO: file[/root/.ssh/config] updated atime and mtime to 2014-09-29 07:32:38 +0000
[2014-09-29T07:32:38+00:00] INFO: Processing execute[echo 'StrictHostKeyChecking no' > /root/.ssh/config] action run (opsworks_custom_cookbooks::checkout line 23)
[2014-09-29T07:32:38+00:00] INFO: Processing template[/root/.ssh/id_dsa] action create (opsworks_custom_cookbooks::checkout line 27)
[2014-09-29T07:32:38+00:00] INFO: Processing git[Download Custom Cookbooks] action checkout (opsworks_custom_cookbooks::checkout line 29)
[2014-09-29T07:32:38+00:00] INFO: Processing ruby_block[Move single cookbook contents into appropriate subdirectory] action run (opsworks_custom_cookbooks::checkout line 64)
[2014-09-29T07:32:38+00:00] INFO: Processing opsworks_berkshelf_runner[Install berkshelf cookbooks] action berks_install (opsworks_berkshelf::install line 54)
[2014-09-29T07:32:38+00:00] INFO: Processing directory[/opt/aws/opsworks/current/berkshelf-cookbooks] action delete (/var/lib/aws/opsworks/cache.stage1/cookbooks/opsworks_berkshelf/providers/runner.rb line 2)
[2014-09-29T07:32:38+00:00] INFO: directory[/opt/aws/opsworks/current/berkshelf-cookbooks] deleted /opt/aws/opsworks/current/berkshelf-cookbooks recursively
[2014-09-29T07:32:38+00:00] INFO: Processing ruby_block[Install the cookbooks specified in the Berksfile and their dependencies] action run (/var/lib/aws/opsworks/cache.stage1/cookbooks/opsworks_berkshelf/providers/runner.rb line 11)
[2014-09-29T07:32:39+00:00] INFO:
Resolving cookbook dependencies...
Using apt (2.6.0)
Using aws (2.4.0)
Using build-essential (2.0.6)
Using chef-sugar (2.3.0)
Using database (2.3.0) from https://github.com/opscode-cookbooks/database.git (at master)
Using mysql (5.5.3)
Using mysql-chef_gem (0.0.5)
Using openssl (2.0.0)
Using postgresql (3.4.6)
Using xfs (1.1.0)
Using yum (3.3.2)
Using yum-mysql-community (0.1.10)
Vendoring apt (2.6.0) to /opt/aws/opsworks/current/berkshelf-cookbooks/apt
Vendoring aws (2.4.0) to /opt/aws/opsworks/current/berkshelf-cookbooks/aws
Vendoring build-essential (2.0.6) to /opt/aws/opsworks/current/berkshelf-cookbooks/build-essential
Vendoring chef-sugar (2.3.0) to /opt/aws/opsworks/current/berkshelf-cookbooks/chef-sugar
Vendoring database (2.3.0) to /opt/aws/opsworks/current/berkshelf-cookbooks/database
Vendoring mysql (5.5.3) to /opt/aws/opsworks/current/berkshelf-cookbooks/mysql
Vendoring mysql-chef_gem (0.0.5) to /opt/aws/opsworks/current/berkshelf-cookbooks/mysql-chef_gem
Vendoring openssl (2.0.0) to /opt/aws/opsworks/current/berkshelf-cookbooks/openssl
Vendoring postgresql (3.4.6) to /opt/aws/opsworks/current/berkshelf-cookbooks/postgresql
Vendoring xfs (1.1.0) to /opt/aws/opsworks/current/berkshelf-cookbooks/xfs
Vendoring yum (3.3.2) to /opt/aws/opsworks/current/berkshelf-cookbooks/yum
Vendoring yum-mysql-community (0.1.10) to /opt/aws/opsworks/current/berkshelf-cookbooks/yum-mysql-community
[2014-09-29T07:32:39+00:00] INFO: ruby_block[Install the cookbooks specified in the Berksfile and their dependencies] called
[2014-09-29T07:32:39+00:00] INFO: Processing execute[ensure correct permissions of custom cookbooks] action run (opsworks_custom_cookbooks::checkout line 82)
[2014-09-29T07:32:39+00:00] INFO: execute[ensure correct permissions of custom cookbooks] ran successfully
[2014-09-29T07:32:39+00:00] INFO: Processing ruby_block[merge all cookbooks sources] action run (opsworks_custom_cookbooks::load line 12)
[2014-09-29T07:32:40+00:00] INFO: ruby_block[merge all cookbooks sources] called
[2014-09-29T07:32:40+00:00] WARN: Skipping final node save because override_runlist was given
[2014-09-29T07:32:40+00:00] INFO: Chef Run complete in 20.634821643 seconds
[2014-09-29T07:32:40+00:00] INFO: Running report handlers
[2014-09-29T07:32:40+00:00] INFO: Report handlers complete
---
[2014-09-29T07:32:42+00:00] INFO: Starting chef-zero on port 8889 with repository at repository at /opt/aws/opsworks/current
One version per cookbook
data_bags at /var/lib/aws/opsworks/data/data_bags
nodes at /var/lib/aws/opsworks/data/nodes
[2014-09-29T07:32:42+00:00] INFO: Forking chef instance to converge...
[2014-09-29T07:32:42+00:00] INFO: *** Chef 11.10.4 ***
[2014-09-29T07:32:42+00:00] INFO: Chef-client pid: 2868
[2014-09-29T07:32:44+00:00] INFO: Setting the run_list to ["opsworks_custom_cookbooks::load", "opsworks_custom_cookbooks::execute"] from JSON
[2014-09-29T07:32:44+00:00] WARN: Run List override has been provided.
[2014-09-29T07:32:44+00:00] WARN: Original Run List: [recipe[opsworks_custom_cookbooks::load], recipe[opsworks_custom_cookbooks::execute]]
[2014-09-29T07:32:44+00:00] WARN: Overridden Run List: [recipe[opsworks_initial_setup], recipe[ssh_host_keys], recipe[ssh_users], recipe[mysql::client], recipe[dependencies], recipe[ebs], recipe[opsworks_ganglia::client], recipe[opsworks_stack_state_sync], recipe[mycustom-setup::nginx], recipe[mycustom-setup::php], recipe[mycustom-setup::nfs], recipe[mycustom-setup::framework], recipe[mycustom-setup::timezone], recipe[mycustom-setup::logrotate], recipe[newrelic::default], recipe[newrelic::php-agent], recipe[database::mysql], recipe[deploy::default], recipe[mycustom-deploy::repository], recipe[mycustom-deploy::nginx-site], recipe[mycustom-deploy::php-site], recipe[test_suite], recipe[opsworks_cleanup]]
[2014-09-29T07:32:44+00:00] INFO: Run List is [recipe[opsworks_initial_setup], recipe[ssh_host_keys], recipe[ssh_users], recipe[mysql::client], recipe[dependencies], recipe[ebs], recipe[opsworks_ganglia::client], recipe[opsworks_stack_state_sync], recipe[mycustom-setup::nginx], recipe[mycustom-setup::php], recipe[mycustom-setup::nfs], recipe[mycustom-setup::framework], recipe[mycustom-setup::timezone], recipe[mycustom-setup::logrotate], recipe[newrelic::default], recipe[newrelic::php-agent], recipe[database::mysql], recipe[deploy::default], recipe[mycustom-deploy::repository], recipe[mycustom-deploy::nginx-site], recipe[mycustom-deploy::php-site], recipe[test_suite], recipe[opsworks_cleanup]]
[2014-09-29T07:32:44+00:00] INFO: Run List expands to [opsworks_initial_setup, ssh_host_keys, ssh_users, mysql::client, dependencies, ebs, opsworks_ganglia::client, opsworks_stack_state_sync, mycustom-setup::nginx, mycustom-setup::php, mycustom-setup::nfs, mycustom-setup::framework, mycustom-setup::timezone, mycustom-setup::logrotate, newrelic::default, newrelic::php-agent, database::mysql, deploy::default, mycustom-deploy::repository, mycustom-deploy::nginx-site, mycustom-deploy::php-site, test_suite, opsworks_cleanup]
[2014-09-29T07:32:44+00:00] INFO: Starting Chef Run for www-prod-migration-3.localdomain
[2014-09-29T07:32:44+00:00] INFO: Running start handlers
[2014-09-29T07:32:44+00:00] INFO: Start handlers complete.
[2014-09-29T07:32:44+00:00] INFO: HTTP Request Returned 404 Not Found: Object not found: /reports/nodes/www-prod-migration-3.localdomain/runs
[2014-09-29T07:32:54+00:00] INFO: HTTP Request Returned 412 Precondition Failed: Could not satisfy version constraints for: mysql
================================================================================
Error Resolving Cookbooks for Run List:
================================================================================
Missing Cookbooks:
------------------
Could not satisfy version constraints for: mysql
Expanded Run List:
------------------
* opsworks_initial_setup
* ssh_host_keys
* ssh_users
* mysql::client
* dependencies
* ebs
* opsworks_ganglia::client
* opsworks_stack_state_sync
* mycustom-setup::nginx
* mycustom-setup::php
* mycustom-setup::nfs
* mycustom-setup::framework
* mycustom-setup::timezone
* mycustom-setup::logrotate
* newrelic::default
* newrelic::php-agent
* database::mysql
* deploy::default
* mycustom-deploy::repository
* mycustom-deploy::nginx-site
* mycustom-deploy::php-site
* test_suite
* opsworks_cleanup
[2014-09-29T07:32:55+00:00] ERROR: Running exception handlers
[2014-09-29T07:32:55+00:00] ERROR: Exception handlers complete
[2014-09-29T07:32:55+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache.stage2/chef-stacktrace.out
[2014-09-29T07:32:55+00:00] ERROR: 412 "Precondition Failed"
[2014-09-29T07:32:55+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
here's the Berksfile:
source "https://supermarket.getchef.com"
cookbook "database" , "= 2.3.0" , git: "https://github.com/opscode-cookbooks/database.git"
here's the interesting part of the metadata for the cookbook:
long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
version '0.1'
recipe 'mycustom-deploy::cron', 'Set up cron jobs'
recipe 'mycustom::default', 'Returns a fatal error'
recipe 'mycustom::nginx-site', 'configures Nginx for the new site'
recipe 'mycustom::p4ucron', '???'
recipe 'mycustom::php-site', 'Configures php for the new site'
recipe 'mycustom::service', 'Defines services with their allowed parameters'
recipe 'mycustom::repository', '???'
%w{ amazon }.each do |os|
supports os
end
depends 'mycustom-setup'
depends 'database'
There are also some other custom cookbooks, such as 'mycustom-setup', which is a dependency for this one. I presume I should look through all of these for clashes?
Check your metadata.rb against your Berksfile cookbooks and their dependencies.
I had the same issue, but within AWS OpsWorks (effectively Chef Solo for this purpose), and the primary problem was rooted in the inclusion of a cookbook in metadata.rb. Removing that resolved the issue. The topic is covered/commented on in this post.
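For illustration, a minimal sketch of a consistent pair, assuming (as in my case) that a built-in cookbook listed in metadata.rb was the clash. The 'database' pin matches the Berksfile shown in the question; everything else is hypothetical.
# Berksfile -- Berkshelf resolves 'database' and its dependency tree
source "https://supermarket.getchef.com"
cookbook "database", "= 2.3.0", git: "https://github.com/opscode-cookbooks/database.git"

# metadata.rb -- only depend on cookbooks Berkshelf or your custom repo can supply;
# do not re-declare OpsWorks built-ins such as 'mysql' here
depends 'mycustom-setup'
depends 'database', '= 2.3.0'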