Composer hanging while updating dependencies - json

I tried updating a Laravel project I'm working on today using composer update, but it hung on "Updating dependencies (including require-dev)".
I tried things like updating Composer and running dump-autoload, but nothing seemed to work. Then I ran it in verbose mode: composer update -vvv
And I noticed it hung while reading this JSON:
Reading path/to/Composer/repo/https---packagist.org/provider-cordoval$hamcrest-php.json from cache
I tried searching for cordoval/hamcrest-php on packagist.org and couldn't find it. It isn't listed as a dependency in my composer.json either.
Searching through my vendor folder, I noticed that the mockery/mockery package I use requires hamcrest/hamcrest-php, but I can't find anything that makes any reference to cordoval.
Any idea what's wrong and how I can fix it so that I can do the update?
Here's my composer.json:
{
    "name": "laravel/laravel",
    "description": "The Laravel Framework.",
    "keywords": ["framework", "laravel"],
    "license": "MIT",
    "require": {
        "laravel/framework": "4.2.*",
        "iron-io/iron_mq": "dev-master",
        "phpunit/phpunit": "4.2.*",
        "mockery/mockery": "dev-master",
        "xethron/migrations-generator": "dev-master",
        "mailgun/mailgun-php": "dev-master"
    },
    "autoload": {
        "classmap": [
            "app/commands",
            "app/controllers",
            "app/models",
            "app/database/migrations",
            "app/database/seeds",
            "app/tests/TestCase.php"
        ]
    },
    "scripts": {
        "post-install-cmd": [
            "php artisan clear-compiled",
            "php artisan optimize"
        ],
        "post-update-cmd": [
            "php artisan clear-compiled",
            "php artisan optimize"
        ],
        "post-create-project-cmd": [
            "php artisan key:generate"
        ]
    },
    "config": {
        "preferred-install": "dist"
    },
    "minimum-stability": "stable"
}
Update
I've tried removing some of the packages from my composer.json, including the "mockery/mockery" package. The only change it made was that Composer would hang on a different file.
After leaving Composer running like that for quite a long time, it finally exited with an error such as the following:
/path/to/ComposerSetup/bin/composer: line 18: 1356 Segmentation fault php "${dir}/composer.phar" $*
Not sure what to do about that...

In my case, it was simply taking a very long time on my 8 GB RAM Mac. To check the progress and verify that it is actually working through the dependencies, run Composer in verbose mode. This approach was mentioned in the question, but I had missed it, so it's worth restating here:
composer update -vvv

So it turns out the problem was with PHP's Xdebug extension. After disabling it in my php.ini, Composer ran without any problems.
And just to note, the hang-up wasn't actually occurring while reading files from the cache. It was the step right after, where Composer was trying to resolve dependencies; it just never finished that step and never printed any output. That's why, no matter what I did, it always appeared to be stuck reading a file from the cache.
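For anyone unsure where that extension is enabled, here is a minimal sketch of finding and verifying it on Linux/macOS (paths and extension file names vary by install; on Windows, check the php -m output by eye instead of grep):
php --ini                 # shows which php.ini and conf.d files the CLI loads
php -m | grep -i xdebug   # non-empty output means Xdebug is currently loaded
# comment out the "zend_extension=...xdebug..." line in that ini file, then re-check:
php -m | grep -i xdebug   # should now print nothing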

First of all: check your firewall and proxy connections. If everything is OK but Composer is still hanging, try clearing the Composer cache:
composer clear-cache
https://getcomposer.org/doc/03-cli.md#clear-cache
Second option: if these steps do not fix it, it's possible that the system does not have enough RAM available (I faced this problem and the symptoms were the same as you describe). At this point you have two options:
a) Increase memory (virtual machines or Docker): your container or VM needs more available memory. Follow this guide: https://stackoverflow.com/a/44533437/3518053
b) Generate a swap file (Linux): try creating a swap file to provide more memory (the commands below are from "composer killed while updating"):
free -m
mkdir -p /var/_swap_
cd /var/_swap_
#Here, 1M * 2000 ~= 2GB of swap memory
dd if=/dev/zero of=swapfile bs=1M count=2000
mkswap swapfile
swapon swapfile
chmod 600 swapfile
echo "/var/_swap_/swapfile none swap sw 0 0" >> /etc/fstab
#cat /proc/meminfo
free -m

Sometimes it gets stuck because it is trying to use HTTP instead of HTTPS, so just run this:
composer config --global repo.packagist composer https://packagist.org

This worked for me.
First run the autoload command, then clear the caches and run the update:
composer dump-autoload
php artisan cache:clear
php artisan view:clear
composer update

For me the issue was with Xdebug. I was using the IDE's terminal, and the debugger was listening for incoming connections (as always). Turning the listening off (without needing to disable the extension) solved the issue.
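If you would rather not touch the IDE or php.ini at all and you are on Xdebug 3, the extension can also be switched off for just the one command. A sketch, assuming a Unix-like shell:
XDEBUG_MODE=off composer update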

This worked for me:
composer self-update

I solved it by running the command outside the VS Code terminal.

I found this in another article, and doing the following worked for me. It seemed to be a cache/download issue with the Composer package cache.
composer update -vvv
Then do the following:
Add or edit your composer.json to have these settings.
"repositories": [
{
"type": "composer",
"url": "https://packagist.org"
},
{ "packagist": false }
]

Restart your system.
I faced the same problem today. Going by the advice here, I turned off Xdebug, but it did not help. I verified all the files were present, restarted my system, and it worked.

Check whether you are running the minimum required PHP version, and compare it with the PHP version specified in the composer.json file.
Open a terminal and run:
php -v
Then cross-check it against the composer.json file; see the example below:
"require": {
    "php": "^7.1.3"
}

Check the path of the [xdebug] zend_extension = "file/path" entry in php.ini.
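A quick way to sanity-check that path on Linux/macOS (a sketch; the exact file name of the extension differs between installs):
php --ini                                           # which php.ini the CLI actually loads
php -r 'echo ini_get("extension_dir"), PHP_EOL;'    # where compiled extensions live
ls "$(php -r 'echo ini_get("extension_dir");')" | grep -i xdebug   # the real file to point zend_extension at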

I solved it by editing the php.ini file to set the CA certificate bundle required for SSL verification:
Download the file http://curl.haxx.se/ca/cacert.pem
Edit php.ini to set the path:
[openssl]
; The location of a Certificate Authority (CA) file on the local filesystem
; to use when verifying the identity of SSL/TLS peers. Most users should
; not specify a value for this directive as PHP will attempt to use the
; OS-managed cert stores in its absence. If specified, this value may still
; be overridden on a per-stream basis via the "cafile" SSL stream context
; option.
openssl.cafile=C:\web\certs\cacert.pem
curl.cainfo=C:\web\certs\cacert.pem
Try again
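To confirm PHP actually picked up the new path after the edit, a quick check from the command line (a sketch; works on any platform):
php -r "var_dump(openssl_get_cert_locations());"   # the 'ini_cafile' entry should now show your cacert.pem path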

Personally, I discovered using free that my system had 0 KB of swap. Creating a 1 GB swap file using https://linuxize.com/post/create-a-linux-swap-file/ solved the problem instantly.

My problem was solved by changing the Wi-Fi (I used my phone) and waiting (about 5 minutes).
Here is the output.
Creating a "magento/project-community-edition" project at "/tmp/exampleproject"
Installing magento/project-community-edition (2.4.5-p1)
- Installing magento/project-community-edition (2.4.5-p1): Loading from cache
Created project in /tmp/exampleproject
Loading composer repositories with package information
Warning from https://repo.packagist.org: Support for Composer 1 is deprecated and some packages will not be available. You should upgrade to Composer 2. See https://blog.packagist.com/deprecating-composer-1-support/
Info from https://repo.packagist.org: #StandWithUkraine
Updating dependencies (including require-dev)
After waiting I saw the following output:
Updating dependencies (including require-dev)
Package operations: 546 installs, 0 updates, 0 removals
- Installing laminas/laminas-dependency-plugin (2.4.0): Loading from cache
I'm not sure what the reason was, but I also ran the following commands.
To diagnose the problem you should run the following:
composer diagnose
If you get OK on each line (I had a warning, but it was not important), that means there is no problem with Composer. Try switching Wi-Fi and do not forget to wait!


Why npm run serve is throwing ERR_OSSL_EVP_UNSUPPORTED? [duplicate]

This question already has answers here:
Error message "error:0308010C:digital envelope routines::unsupported"
I'm having an issue with a Webpack build process that suddenly broke, resulting in the following error...
<s> [webpack.Progress] 10% building 0/1 entries 0/0 dependencies 0/0 modules
node:internal/crypto/hash:67
this[kHandle] = new _Hash(algorithm, xofLen);
^
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:130:10)
at BulkUpdateDecorator.hashFactory (/app/node_modules/webpack/lib/util/createHash.js:155:18)
at BulkUpdateDecorator.update (/app/node_modules/webpack/lib/util/createHash.js:46:50)
at OriginalSource.updateHash (/app/node_modules/webpack-sources/lib/OriginalSource.js:131:8)
at NormalModule._initBuildHash (/app/node_modules/webpack/lib/NormalModule.js:888:17)
at handleParseResult (/app/node_modules/webpack/lib/NormalModule.js:954:10)
at /app/node_modules/webpack/lib/NormalModule.js:1048:4
at processResult (/app/node_modules/webpack/lib/NormalModule.js:763:11)
at /app/node_modules/webpack/lib/NormalModule.js:827:5 {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}
command terminated with exit code 1
I've tried googling "ERR_OSSL_EVP_UNSUPPORTED webpack", which yielded almost no useful results, but it did highlight an issue with using MD4, as provided by OpenSSL (which is apparently deprecated?), to generate hashes.
The webpack.config.js code is as follows:
const path = require('path');
const webpack = require('webpack');
/*
 * SplitChunksPlugin is enabled by default and replaced
 * deprecated CommonsChunkPlugin. It automatically identifies modules which
 * should be splitted of chunk by heuristics using module duplication count and
 * module category (i. e. node_modules). And splits the chunks…
 *
 * It is safe to remove "splitChunks" from the generated configuration
 * and was added as an educational example.
 *
 * https://webpack.js.org/plugins/split-chunks-plugin/
 *
 */
/*
 * We've enabled TerserPlugin for you! This minifies your app
 * in order to load faster and run less javascript.
 *
 * https://github.com/webpack-contrib/terser-webpack-plugin
 *
 */
const TerserPlugin = require('terser-webpack-plugin');
module.exports = {
  mode: 'development',
  entry: './src/js/scripts.js',
  output: {
    path: path.resolve(__dirname, 'js'),
    filename: 'scripts.js'
  },
  devtool: 'source-map',
  plugins: [new webpack.ProgressPlugin()],
  module: {
    rules: []
  },
  optimization: {
    minimizer: [new TerserPlugin()],
    splitChunks: {
      cacheGroups: {
        vendors: {
          priority: -10,
          test: /[\\/]node_modules[\\/]/
        }
      },
      chunks: 'async',
      minChunks: 1,
      minSize: 30000,
      name: 'true'
    }
  }
};
How do I change the hashing algorithm used by Webpack to something else?
I was able to fix it via:
export NODE_OPTIONS=--openssl-legacy-provider
sachaw's comment to Node.js v17.0.0 - Error starting project in development mode #30078
But they say they fixed it: ijjk's comment to Node.js v17.0.0 - Error starting project in development mode #30078:
Hi, this has been updated in v11.1.3-canary.89 of Next.js, please update and give it a try!
For me, it worked only with the annotation above.
I also want to point out that npm run start works with --openssl-legacy-provider, but npm run dev won't.
It seems that there is a patch:
Node.js 17: digital envelope routines::unsupported #14532
I personally downgraded to 16-alpine.
I had this problem too. I'd accidentally been running on the latest Node.js (17.0 at time of writing), not the LTS version (14.18) which I'd meant to install. Downgrading my Node.js install to the LTS version fixed the problem for me.
There is a hashing algorithm that comes with Webpack v5.54.0+ that does not rely on OpenSSL.
To use this hash function, which relies on an npm-provided dependency instead of an operating-system-provided one, modify the webpack.config.cjs output key to include the hashFunction: "xxhash64" option.
module.exports = {
  output: {
    hashFunction: "xxhash64"
  }
};
Ryan Brownell's answer is the ideal solution if you are using Webpack v5.54.0+.
If you're using an older version of Webpack, you can still solve this by changing the hash function to one that is not deprecated. (It defaults to the ancient md4, which OpenSSL has removed support for, which is the root cause of the error.) The supported algorithms are any supported by crypto.createHash. For example, to use SHA-256:
module.exports = {
  output: {
    hashFunction: "sha256"
  }
};
Finally, if you are unable to change the Webpack configuration (e.g., if it's a transitive dependency which is running Webpack), you can enable OpenSSL's legacy provider to temporarily enable MD4 during the Webpack build. This is a last resort. Create a file openssl.cnf with this content…
openssl_conf = openssl_init
[openssl_init]
providers = provider_sect
[provider_sect]
default = default_sect
legacy = legacy_sect
[default_sect]
activate = 1
[legacy_sect]
activate = 1
…and then set the environment variable OPENSSL_CONF to the path to that file when running Webpack.
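For example, on a Unix-like shell that might look like the following (a sketch; the config path and build command are placeholders for whatever your project actually uses):
OPENSSL_CONF=/path/to/openssl.cnf npx webpack --config webpack.config.js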
It is not really my answer, but I found this workaround/hack to fix my problem with a code check-in for a GitHub project; see the bug comments here.
I ran into ERR_OSSL_EVP_UNSUPPORTED after updating with npm install.
I added the following to node_modules\react-scripts\config\webpack.config.js
const crypto = require("crypto");
const crypto_orig_createHash = crypto.createHash;
crypto.createHash = algorithm => crypto_orig_createHash(algorithm == "md4" ? "sha256" : algorithm);
I tried Ryan Brownell's solution and ended up with a different error, but this worked...
This error is mentioned in the release notes for Node.js 17.0.0, with a suggested workaround:
If you hit an ERR_OSSL_EVP_UNSUPPORTED error in your application with Node.js 17, it’s likely that your application or a module you’re using is attempting to use an algorithm or key size which is no longer allowed by default with OpenSSL 3.0. A command-line option, --openssl-legacy-provider, has been added to revert to the legacy provider as a temporary workaround for these tightened restrictions.
I ran into this issue using Laravel Mix (Webpack) and was able to fix it in the package.json file by adding NODE_OPTIONS=--openssl-legacy-provider (referenced in Jan's answer) to the beginning of the script:
package.json:
{
    "private": true,
    "scripts": {
        "production": "cross-env NODE_ENV=production NODE_OPTIONS=--openssl-legacy-provider node_modules/webpack/bin/webpack.js --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js"
    },
    "dependencies": {
        ...
    }
}
Try upgrading your Webpack version to 5.62.2.
I faced the same challenge, but you just need to downgrade Node.js to version 16.13 and everything works well. Download the LTS version, not Current, from the Downloads page.
I had the same problem with my Vue.js project and I solved it.
macOS and Linux
You should have NVM (Node Version Manager) installed. If you have never installed it before, just run this command in your terminal:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
Open your project
Open the terminal in your project
Run the command nvm install 16.13.0 (or any older version)
After the installation is completed, run nvm use 16.13.0, as in the sketch below
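Put together, the terminal session looks roughly like this (a sketch):
nvm install 16.13.0
nvm use 16.13.0
node -v    # should now report v16.13.0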
I faced the same problem in a project I developed with Next.js. For the solution, I ran the project as follows and I solved the problem.
cross-env NODE_OPTIONS='--openssl-legacy-provider' next dev
This means that you have the latest Node.js version. If you are using Docker, then you need to change the image from
FROM node
to
FROM node:14

Configuring Apache drill for Cassandra

I am trying to configure Cassandra with Drill. I used the same approach given at this link: https://drill.apache.org/docs/starting-the-web-ui/.
I used the following code for New Storage Plugin:
{
  "type": "cassandra",
  "hosts": [
    "127.0.0.1"
  ],
  "port": 9042,
  "username": "<username>",
  "password": "<password>",
  "enabled": false
}
I have attached the Screenshot here.
But I'm getting the following error:
Please retry: Error (invalid JSON mapping)
How can I resolve this?
All the code:
Git: https://github.com/yssharma/drill/tree/cassandra-storage
Patch: https://gist.github.com/yssharma/2581ae8a97c559b2677f
1. Get Drill: Let's get the Drill source
$ git clone https://github.com/apache/drill.git
2. Get the Cassandra Storage patch. Download the patch file from:
https://reviews.apache.org/r/29816/diff/raw/
3. Apply the patch on top of Drill
$ cd drill
$ git apply --check ~/Downloads/DRILL-92-CassandraStorage.patch
$ git apply ~/Downloads/DRILL-92-CassandraStorage.patch
4. Build Drill with Cassandra Storage & export distribution to /opt/drill
$ mvn clean install -DskipTests
$ mkdir /opt/drill
$ tar xvzf distribution/target/*.tar.gz --strip=1 -C /opt/drill
5. Start Sqlline.
That's it; we have finished with the Drill build and installation, and it's time we can start using Drill.
$ cd /opt/drill
$ bin/sqlline -u jdbc:drill:zk=local -n admin -p admin
(Screenshot: Drill-Sqlline)
Hit 'show schemas' to view existing schemas.
(Screenshot: Drill-Sqlline-schemas)
6. Drill Web interface
You should be able to see the Drill web interface on localhost:8047, or whatever your host/port is.
Use this as your config:
{
  "type": "cassandra",
  "config": {
    "cassandra.hosts": [
      "127.0.0.1",
      "127.0.0.2"
    ],
    "cassandra.port": 9042
  },
  "enabled": true
}
Also, if this doesn't work, know that they are working on a plugin for it now: https://github.com/apache/drill/pull/1960
I'll give an update here as well. We're doing some serious refactoring of how Drill works with storage plugins. Specifically, we're working to incorporate the Calcite adapter for Cassandra. The reason for this is that the hard part of storage plugins isn't the connection, it's the optimizations. Calcite already does query planning for Drill and has already implemented a bunch of these adapters, which means that the work of figuring out all the optimizations (AKA pushdowns) is largely done.
In the case of Cassandra/Scylla, this is particularly important because some filters should be pushed down to Cassandra, and some absolutely should not be. The adapters also include aggregate pushdowns, something which no Drill plugins currently do. Again, the point of this is that once we commit this, the connector should work VERY well with Cassandra/Scylla. We have one for ElasticSearch that is very near completion, and once that's done the Cassandra plugin is next. If you have any suggestions/comments or other feedback, please post on the pull request linked above.
** UPDATE 11 April 2021: Cassandra/Scylla Plugin Now Merged in Drill 1.19.0-SNAPSHOT **

packer ssh_private_key_file is invalid

I am trying to use the OpenStack provisioner API in packer to clone an instance. So far I have developed the script:
{
  "variables": {},
  "description": "This will create the baked vm images for any environment from dev to prod.",
  "builders": [
    {
      "type": "openstack",
      "identity_endpoint": "http://192.168.10.10:5000/v3",
      "tenant_name": "admin",
      "domain_name": "Default",
      "username": "admin",
      "password": "****************",
      "region": "RegionOne",
      "image_name": "cirros",
      "flavor": "m1.tiny",
      "insecure": "true",
      "source_image": "0f9b69ee-4e9f-4807-a7c4-6a58355c37b1",
      "communicator": "ssh",
      "ssh_keypair_name": "******************",
      "ssh_private_key_file": "~/.ssh/id_rsa",
      "ssh_username": "root"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sleep 60"
      ]
    }
  ]
}
But upon running the script using packer build script.json I get the following error:
User:packer User$ packer build script.json
openstack output will be in this color.
1 error(s) occurred:
* ssh_private_key_file is invalid: stat ~/.ssh/id_rsa: no such file or directory
My id_rsa is a file starting and ending with:
------BEGIN RSA PRIVATE KEY------
key
------END RSA PRIVATE KEY--------
This made me think it was a PEM-related file, so I found the error weird and made a pastebin of my PACKER_LOG: http://pastebin.com/sgUPRkGs
Initial analysis tells me that the only error is a missing packerconfig file. Upon googling this, the top results tell me that if Packer doesn't find one, it falls back to defaults. Is this why it is not working?
Any help would be of great assistance. Apparently there are similar problems on the GitHub issues page (https://github.com/mitchellh/packer/issues), but I don't understand some of the solutions posted or whether they apply to me.
I've tried to be as informative as I can. Happy to provide any information where I can!!
Thank you.
* ssh_private_key_file is invalid: stat ~/.ssh/id_rsa: no such file or directory
The "~" character isn't special to the operating system. It's only special to shells and certain other programs which choose to interpret it as referring to your home directory.
It appears that OpenStack doesn't treat "~" as special, and it's looking for a key file with the literal pathname "~/.ssh/id_rsa". It's failing because it can't find a key file with that literal pathname.
Update the ssh_private_key_file entry to list the actual pathname to the key file:
"ssh_private_key_file": "/home/someuser/.ssh/id_rsa",
Of course, you should also make sure that the key file actually exists at the location that you specify.
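A quick way to double-check both points at once, run on the machine where Packer runs (a sketch):
echo "$HOME/.ssh/id_rsa"     # the absolute path the shell would expand "~" to
ls -l "$HOME/.ssh/id_rsa"    # confirms the key file actually exists there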
I have to leave a post here as this just bit me... I was using a variable with ~/.ssh/id_rsa and then changed it to use the full path. When I did, I had a space at the end of the variable value being passed in from the command line via a Makefile, which was causing this error. Hope this saves someone some time.
Kenster's answer got you past your initial question, but it sounds from your comment like you were still stuck.
Per my reply to your comment, Packer doesn't seem to support supplying a passphrase, but you CAN tell it to ask the running SSH agent for a decrypted key if the correct passphrase was supplied when the key was loaded. This should allow you to use Packer to build with a passphrase-protected SSH key as long as you've loaded it into the SSH agent before attempting the build.
https://www.packer.io/docs/templates/communicator.html#ssh_agent_auth
The SSH communicator connects to the host via SSH. If you have an SSH
agent configured on the host running Packer, and SSH agent
authentication is enabled in the communicator config, Packer will
automatically forward the SSH agent to the remote host.
The SSH communicator has the following options:
ssh_agent_auth (boolean) - If true, the local SSH agent will be used
to authenticate connections to the remote host. Defaults to false.
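In practice that means enabling "ssh_agent_auth": true in the builder block and loading the key into the agent before the build. A sketch of the shell side, assuming the key from the question:
eval "$(ssh-agent -s)"     # start an agent for this shell if one isn't already running
ssh-add ~/.ssh/id_rsa      # prompts once for the key's passphrase
packer build script.json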

missing jruby gem for logstash

I have downloaded the latest logstash 1.4, and when I run it with the following config:
input {
  eventlog {
  }
}
output { stdout {} }
I get this error:
D:\logstash-1.4.0\bin>logstash agent -f simpleConfig.config -l logs.log
Sending logstash logs to agent.log.
Using milestone 2 input plugin 'eventlog'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.0/plugin-milestones {:level=>:warn}
LoadError: no such file to load -- jruby-win32ole
    require at org/jruby/RubyKernel.java:1085
    require at file:/D:/logstash-1.4.0/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
    require at file:/D:/logstash-1.4.0/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53
    require at D:/logstash-1.4.0/lib/logstash/JRUBY-6970.rb:27
    require at D:/logstash-1.4.0/vendor/bundle/jruby/1.9/gems/polyglot-0.3.4/lib/polyglot.rb:65
    register at D:/logstash-1.4.0/lib/logstash/inputs/eventlog.rb:37
    start_inputs at D:/logstash-1.4.0/lib/logstash/pipeline.rb:135
    each at org/jruby/RubyArray.java:1613
    start_inputs at D:/logstash-1.4.0/lib/logstash/pipeline.rb:134
    run at D:/logstash-1.4.0/lib/logstash/pipeline.rb:72
    execute at D:/logstash-1.4.0/lib/logstash/agent.rb:136
    run at D:\logstash-1.4.0\lib\logstash\runner.rb:190
    call at org/jruby/RubyProc.java:271
    initialize at D:/logstash-1.4.0/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/task.rb:12
I think that the jruby-win32ole package is missing, but I don't know how to add it.
Thanks in advance for your help.
I installed the logstash-contrib-1.4.x.tar.gz package.
I didn't find a download link, so I copied the Logstash download link and added "-contrib" to the filename, e.g.: https://download.elasticsearch.org/logstash/logstash/logstash-contrib-1.4.2.tar.gz
This worked fine in my case.
The installation is just unpacking the file over the Logstash home directory, overriding all files (see the sketch below). Now it works.
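A sketch of that overlay install on a Unix-like system, assuming Logstash is unpacked at /path/to/logstash-1.4.0 (on Windows you would extract the archive over D:\logstash-1.4.0 instead):
cd /path/to/logstash-1.4.0
wget https://download.elasticsearch.org/logstash/logstash/logstash-contrib-1.4.2.tar.gz
tar xvzf logstash-contrib-1.4.2.tar.gz --strip=1    # unpack over the existing install, overriding files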

Monit service name error

So I have the following in my monitrc file:
check process apache with pidfile /usr/local/apache/logs/httpd.pid
group apache
start program = "/etc/init.d/httpd start"
stop program = "/etc/init.d/httpd stop"
if failed host XXX port 80 protocol http
and request "/monit/token" then restart
if cpu is greater than 60% for 2 cycles then alert
if cpu 80% for 5 cycles then restart
if totalmem 500 MB for 5 cycles then restart
if children 250 then restart
if loadavg(5min) greater than 10 for 8 cycles then stop
if 3 restarts within 5 cycles then timeout
but I keep getting the error that:
Error: service name conflict, apache already defined '/usr/local/apache/logs/httpd.pid'
If the hostname of the server is 'apache', then the conflict is with the default rule for monitoring the system load.
Monit seems to have an implicit rule of 'check system hostname', where the hostname is the output of the hostname command.
You can override that by adding a line like:
check system newhostname
For example:
check system localhost
I saw this error when I forgot to comment out the line:
include /etc/monit/conf.d/*
in a custom /etc/monit/conf.d/myprogram.conf file, so it was recursively including that file.
By any chance do you have an entry with a host name apache beneath this entry or in a separate monit config file?
You have the same service defined more than once. Check all your monit config files for that service. This includes your monitrc and all files listed under the "Includes" section (like include /etc/monit/conf.d/*).
If you redefine "Includes" within a file in one of your "Includes" directories, you will run into recursive reference problems.
A very important thing: you need Monit 5.5.
For example, in Ubuntu 12.04 only 5.3 is available in the repo, so you need to download and install it from another repo.
The solution for me, for example:
wget http://mirrors.kernel.org/ubuntu/pool/universe/m/monit/monit_5.5.1-1_amd64.deb && sudo dpkg -i monit_5.5.1-1_amd64.deb
In my case, I simply had to restart Monit to get rid of the service name error:
sudo service monit restart
Check whether you have any conflicts for Apache defined in any of the Monit conf files under the /etc/monit.d/ directory. I accidentally added nginx to my puma.conf and ran into the same error before.