I am looking for someone who might know how to override the default install of MySQL 5.5 on the OpsWorks MySQL layer.
I have tried enabling the IUS repo and then using a custom recipe to install the mysql56u-server and mysql56u-common packages; however, all attempts so far have failed because MySQL is installed much earlier in the setup process.
I have not located the actual recipe that is selecting the packages for mysql55.
Anyone have any insight on this?
Any help much appreciated!
Looking at the mysql Cookbook provided by AWS, the recipe that installs the client (recipes/client_install.rb) includes the following:
case node[:platform]
when "redhat", "centos", "fedora", "amazon"
  package mysql_name
when "ubuntu"
  package "mysql-client"
end
The mysql_name variable is set earlier in the recipe:
mysql_name = node[:mysql][:name] || "mysql"
Looking at the attributes file (attributes/server.rb), the default values are set according to the host's OS:
if rhel7?
  default[:mysql][:name] = "mysql55-mysql"
else
  default[:mysql][:name] = "mysql"
end
You can override the name value to suit your needs:
default[:mysql][:name] = "mysql56u"
This can be achieved by provisioning your own customized attributes file in your Custom Cookbooks, or simply by using the following Custom JSON in your stack settings:
{
  "mysql": {
    "name": "mysql56u"
  }
}
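If you go the custom-cookbook route instead, a minimal sketch of such an attributes file might look like the following (the customize.rb file name and the use of normal precedence are assumptions based on the usual OpsWorks attribute-override pattern, not something stated above):
# your-cookbook/attributes/customize.rb  (hypothetical cookbook/file name)
# Override the package name that the built-in mysql cookbook will install.
# normal precedence is used so it wins over the built-in cookbook's default.
normal[:mysql][:name] = "mysql56u"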
I'm having an issue with a Webpack build process that suddenly broke, resulting in the following error...
<s> [webpack.Progress] 10% building 0/1 entries 0/0 dependencies 0/0 modules
node:internal/crypto/hash:67
this[kHandle] = new _Hash(algorithm, xofLen);
^
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:130:10)
at BulkUpdateDecorator.hashFactory (/app/node_modules/webpack/lib/util/createHash.js:155:18)
at BulkUpdateDecorator.update (/app/node_modules/webpack/lib/util/createHash.js:46:50)
at OriginalSource.updateHash (/app/node_modules/webpack-sources/lib/OriginalSource.js:131:8)
at NormalModule._initBuildHash (/app/node_modules/webpack/lib/NormalModule.js:888:17)
at handleParseResult (/app/node_modules/webpack/lib/NormalModule.js:954:10)
at /app/node_modules/webpack/lib/NormalModule.js:1048:4
at processResult (/app/node_modules/webpack/lib/NormalModule.js:763:11)
at /app/node_modules/webpack/lib/NormalModule.js:827:5 {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}
command terminated with exit code 1
I've tried googling "ERR_OSSL_EVP_UNSUPPORTED webpack", which yielded almost no useful results, but it did highlight an issue with using MD4, as provided by OpenSSL (which has apparently been deprecated?), to generate hashes.
The webpack.config.js code is as follows:
const path = require('path');
const webpack = require('webpack');

/*
 * SplitChunksPlugin is enabled by default and replaces the
 * deprecated CommonsChunkPlugin. It automatically identifies modules which
 * should be split off into chunks by heuristics using module duplication count and
 * module category (i.e. node_modules). And splits the chunks…
 *
 * It is safe to remove "splitChunks" from the generated configuration
 * and was added as an educational example.
 *
 * https://webpack.js.org/plugins/split-chunks-plugin/
 *
 */

/*
 * We've enabled TerserPlugin for you! This minifies your app
 * in order to load faster and run less javascript.
 *
 * https://github.com/webpack-contrib/terser-webpack-plugin
 *
 */
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  mode: 'development',
  entry: './src/js/scripts.js',
  output: {
    path: path.resolve(__dirname, 'js'),
    filename: 'scripts.js'
  },
  devtool: 'source-map',
  plugins: [new webpack.ProgressPlugin()],
  module: {
    rules: []
  },
  optimization: {
    minimizer: [new TerserPlugin()],
    splitChunks: {
      cacheGroups: {
        vendors: {
          priority: -10,
          test: /[\\/]node_modules[\\/]/
        }
      },
      chunks: 'async',
      minChunks: 1,
      minSize: 30000,
      name: 'true'
    }
  }
};
How do I change the hashing algorithm used by Webpack to something else?
I was able to fix it via:
export NODE_OPTIONS=--openssl-legacy-provider
sachaw's comment to Node.js v17.0.0 - Error starting project in development mode #30078
But they say they fixed it: ijjk's comment to Node.js v17.0.0 - Error starting project in development mode #30078:
Hi, this has been updated in v11.1.3-canary.89 of Next.js, please update and give it a try!
For me, it worked only with the export shown above.
I also want to point out that npm run start works with --openssl-legacy-provider, but npm run dev won't.
It seems that there is a patch:
Node.js 17: digital envelope routines::unsupported #14532
I personally downgraded to 16-alpine.
I had this problem too. I'd accidentally been running on the latest Node.js (17.0 at time of writing), not the LTS version (14.18) which I'd meant to install. Downgrading my Node.js install to the LTS version fixed the problem for me.
There is a hashing algorithm that comes with Webpack v5.54.0+ that does not rely on OpenSSL.
To use this hash function, which relies on an npm-provided dependency instead of an operating-system-provided one, modify the webpack.config.cjs output key to include the hashFunction: "xxhash64" option.
module.exports = {
  output: {
    hashFunction: "xxhash64"
  }
};
Ryan Brownell's answer is the ideal solution if you are using Webpack v5.54.0+.
If you're using an older version of Webpack, you can still solve this by changing the hash function to one that is not deprecated. (It defaults to the ancient MD4, which OpenSSL 3.0 no longer enables by default; that is the root cause of the error.) Any algorithm supported by crypto.createHash can be used. For example, to use SHA-256:
module.exports = {
  output: {
    hashFunction: "sha256"
  }
};
Finally, if you are unable to change the Webpack configuration (e.g., if it's a transitive dependency which is running Webpack), you can enable OpenSSL's legacy provider to temporarily enable MD4 during the Webpack build. This is a last resort. Create a file openssl.cnf with this content…
openssl_conf = openssl_init
[openssl_init]
providers = provider_sect
[provider_sect]
default = default_sect
legacy = legacy_sect
[default_sect]
activate = 1
[legacy_sect]
activate = 1
…and then set the environment variable OPENSSL_CONF to the path to that file when running Webpack.
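For example, assuming the file was saved as ./openssl.cnf and that the build is started with the webpack CLI (adjust the path and the command to whatever you actually run):
OPENSSL_CONF=$PWD/openssl.cnf npx webpack --config webpack.config.js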
It is not really my answer, but I found this workaround (hack) that fixed my problem while checking in code for a GitHub project... see the bug comments here.
I ran into ERR_OSSL_EVP_UNSUPPORTED after updating with npm install.
I added the following to node_modules\react-scripts\config\webpack.config.js
const crypto = require("crypto");
const crypto_orig_createHash = crypto.createHash;
// Redirect md4 hash requests to sha256, which OpenSSL 3 still enables by default.
crypto.createHash = algorithm => crypto_orig_createHash(algorithm == "md4" ? "sha256" : algorithm);
I tried Ryan Brownell's solution and ended up with a different error, but this worked...
This error is mentioned in the release notes for Node.js 17.0.0, with a suggested workaround:
If you hit an ERR_OSSL_EVP_UNSUPPORTED error in your application with Node.js 17, it’s likely that your application or a module you’re using is attempting to use an algorithm or key size which is no longer allowed by default with OpenSSL 3.0. A command-line option, --openssl-legacy-provider, has been added to revert to the legacy provider as a temporary workaround for these tightened restrictions.
I ran into this issue using Laravel Mix (Webpack) and was able to fix it in package.json by adding NODE_OPTIONS=--openssl-legacy-provider (referenced in Jan's answer) to the beginning of the script:
package.json:
{
  "private": true,
  "scripts": {
    "production": "cross-env NODE_ENV=production NODE_OPTIONS=--openssl-legacy-provider node_modules/webpack/bin/webpack.js --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js"
  },
  "dependencies": {
    ...
  }
}
Try upgrading your Webpack version to 5.62.2.
I faced the same challenge; you just need to downgrade Node.js to version 16.13 and everything works well. Download the LTS release, not Current, from the Node.js downloads page.
I had the same problem with my Vue.js project and I solved it.
macOS and Linux
You need NVM (Node Version Manager) installed. If you have never installed it before, just run this command in your terminal:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
Open your project
Open the terminal in your project
Run the command nvm install 16.13.0 or any older version
After the installation is completed, run nvm use 16.13.0
I faced the same problem in a project I developed with Next.js. I solved it by running the project as follows.
cross-env NODE_OPTIONS='--openssl-legacy-provider' next dev
This error means that you are on the latest Node.js version. If you are using Docker, then you need to change the image from
FROM node
to
FROM node:14
I am creating an EMR cluster and using a Jupyter notebook to run some Spark tasks.
My tasks die after approximately 1 hour of execution, and the error is:
An error was encountered:
Invalid status code '400' from https://xxx.xx.x.xxx:18888/sessions/0/statements/20 with error payload: "requirement failed: Session isn't active."
My understanding is that it is related to the Livy config livy.server.session.timeout, but I don't know how I can set it when bootstrapping the cluster (I need to do it at bootstrap because the cluster is created with no SSH access).
Thanks a lot in advance
On EMR, livy-conf is the classification for the properties of Livy's livy.conf file, so when creating an EMR cluster, choose the advanced options, select Livy as one of the applications to install, and pass this EMR configuration in the Enter Configuration field.
[{"classification": "livy-conf", "properties": {"livy.server.session.timeout": "5h"}}]
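If you create the cluster from the AWS CLI instead of the console, the same classification can be passed with the --configurations flag; a sketch with placeholder name, release label, and instance settings (note that the CLI expects the capitalized Classification/Properties keys):
aws emr create-cluster \
  --name "livy-timeout-demo" \
  --release-label emr-5.30.0 \
  --applications Name=Hadoop Name=Spark Name=Livy \
  --use-default-roles \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --configurations '[{"Classification":"livy-conf","Properties":{"livy.server.session.timeout":"5h"}}]'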
On EMR, the Livy binary is located at /etc/livy/, and so the config file is at /etc/livy/conf/livy.conf
To verify this,
Create an EMR cluster with a known EC2 key pair, Livy, and the above config
Using the EC2 key pair, log in to the EC2 master node associated with the cluster: ssh -i some-ec2-key-pair.pem hadoop@ec2-00-00-00-0.ca-region-n.compute.amazonaws.com
Navigate to /etc/livy/conf, open livy.conf (e.g. with vim), and check the updated value of livy.server.session.timeout
If you don't want the Livy session to go down at all, then set the property livy.server.session.timeout-check to false in /etc/livy/conf/livy.conf.
Another way to do that if you don’t want to recreate the cluster is:
Go to /etc/livy/conf/livy.conf and set the livy.server.session.timeout property to the value you would like.
After that, run sudo restart livy-server to apply the configuration.
I am trying to set up a brand new Ghost blog on a CentOS 7 server. I have Nginx, Node, and Ghost installed and have written all of the necessary configuration files. It's pretty close to working, but I wanted to use MySQL instead of SQLite, so I created a new (blank) MySQL database called "ghost_db", set up a MySQL user called "ghost", gave the user permissions on the database, and added these lines to config.js:
database: {
  client: 'mysql',
  connection: {
    host: 'localhost',
    user: 'ghost',
    password: 'mypassword',
    database: 'ghost_db',
    charset: 'utf8',
    filename: path.join(__dirname, '/content/data/ghost-dev.db')
  },
  debug: false
}, ...
When I try to start it, I get an error that suggests I use knex-migrator to initialize the database.
[john#a ghost]$ npm start
> ghost#1.18.4 start /var/www/ghost
> node index
[2017-12-10 00:08:00] ERROR
NAME: DatabaseIsNotOkError
CODE: MIGRATION_TABLE_IS_MISSING
MESSAGE: Please run knex-migrator init ...
However, some comments on Stack Exchange suggest that using knex-migrator may be unnecessary for this version of Ghost, and when I run knex-migrator, it also fails:
[john#a ghost]$ knex-migrator init
[2017-12-09 16:21:33] ERROR
NAME: RollbackError
CODE: SQLITE_ERROR
MESSAGE: delete from "migrations" where "name" = '2-create-fixtures.js' and "version" = 'init' and "currentVersion" = '1.18' - SQLITE_ERROR: no such table: migrations
...[omitted]
Error: SQLITE_ERROR: no such table: migrations
I think the problem may be that the "ghost_db" database I initially created is blank. The "ghost-dev.db" file that config.js points to seems to be for SQLite, but I get the same error message if I switch config.js back to using an SQLite database. I don't know what the "migrations" table is. I found the schema that I think Ghost expects at https://github.com/TryGhost/Ghost/blob/1.16.2/core/server/data/schema/schema.js, but I'm not sure how to use it to initialize the tables, etc., except very laboriously by hand. I'm stumped!
Knex-migrator is new in Ghost 1.0, which also uses a config.<env>.json file for configuration.
It sounds like you added your database config to a file called config.js, which was correct for Ghost < 1.0; however, it seems you were installing Ghost 1.0, and therefore your new connection details would have needed to live in config.production.json.
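For reference, a minimal config.production.json for a MySQL-backed Ghost 1.x install looks roughly like this (the url, host, and credentials are placeholders; other sections such as mail and logging are omitted):
{
  "url": "http://example.com",
  "server": {
    "host": "127.0.0.1",
    "port": 2368
  },
  "database": {
    "client": "mysql",
    "connection": {
      "host": "localhost",
      "user": "ghost",
      "password": "mypassword",
      "database": "ghost_db"
    }
  }
}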
You are correct that Ghost-CLI isn't intended for use on CentOS (it's for Ubuntu), but I'd be very surprised if it failed to install Ghost correctly. The issues with other OSes are mainly in the subtle differences around systemd, i.e. keeping Ghost running.
The answer for me was just to not create the database at all and let Ghost do it as part of ghost install.
I took an alternate approach that proved successful: installing Ghost as an npm module. The official Ghost instructions label this as an "advanced" process, but it wasn't too difficult to follow the instructions in the excellent nehalist.io and Stickleback blogs. There was also some useful guidance in the HugeServer knowledgebase. I think ultimately the problem was that the Ghost command-line interface (ghost-cli) wasn't designed for CentOS 7.
For testing, I want to be able to run several IPFS nodes on a single machine.
This is the scenario:
I am building small services on top of the IPFS core library, following the Making your own IPFS service guide. When I try to put the client and server on the same machine (note that each of them will create its own IPFS node), I get the following:
panic: cannot acquire lock: Lock FcntlFlock of /Users/long/.ipfs/repo.lock failed: resource temporarily unavailable
Usually, when you start with IPFS, you will use ipfs init, which will create a new node. The default data and config stored for that particular node are located at ~/.ipfs. Here is how you can create a new node and configure it so it can run beside your default node.
1. Create a new node
For a new node, you have to run ipfs init again, for instance like this:
IPFS_PATH=~/.ipfs2 ipfs init
This will create a new node at ~/.ipfs2 (not using the default path).
2. Change Address Configs
As both of your nodes now bind to the same ports, you need to change the port configuration so both nodes can run side by side. For this, open ~/.ipfs2/config and find Addresses:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
}
and change it, for example, to the following:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5002",
"Gateway": "/ip4/127.0.0.1/tcp/8081",
"Swarm": [
"/ip4/0.0.0.0/tcp/4002",
"/ip6/::/tcp/4002"
]
}
With this, you should be able to run both nodes, .ipfs and .ipfs2, on a single machine.
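Instead of editing the config file by hand, the same port changes can most likely also be made with the ipfs config command, for example (the ports match the example above):
IPFS_PATH=~/.ipfs2 ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
IPFS_PATH=~/.ipfs2 ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8081
IPFS_PATH=~/.ipfs2 ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4002", "/ip6/::/tcp/4002"]'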
Notes:
Whenever you use .ipfs2, you need to set the env variable IPFS_PATH=~/.ipfs2
In your example you need to change either your client or server node from ~/.ipfs to ~/.ipfs2
You can also start the daemon on the second node using IPFS_PATH=~/.ipfs2 ipfs daemon &
Hello, I used .ipfs2. After running two daemons at the same time, I can indeed open localhost:5001/webui, but opening the second one at localhost:5002/webui gives an error, as shown in the attachment.
Here are some ways I've used to create multiple nodes / peer IDs.
I use Windows 10.
1st node: go-ipfs (latest version)
2nd node: Siderus Orion IPFS (connects to the Orion node, not a local one) -- https://orion.siderus.io/
Use VirtualBox to run a minimal Ubuntu installation. (You can set up as many as you want.)
Repeat the process and you have 4 nodes, or as many as you want.
https://discuss.ipfs.io/t/ipfs-manager-download-install-manage-debug-your-ipfs-node/3534 is another GUI that installs IPFS and lets you run all the IPFS commands without the command line. The author just released it a few days ago, and it looks well worth a look.
Disclaimer: I am not a coder or computer professional, just a huge fan of IPFS! I hope we can raise awareness and change the world.
I have a scenario where I need to change several parameters of a Hadoop cluster managed by Ambari to document the performance of a particular application. The config changes entail a restart of the affected components.
I am using the Ambari REST API to achieve this. I figured out how to do this for all the Hadoop service components, but I'm not sure whether the API provides a way to restart the MySQL server that Hive uses.
I have the following questions:
Is it the case that a mere stop and start of mysqld on the appropriate machine is enough to ensure that the required configuration changes are recognized by Ambari and the application?
I chose the 'New MySQL database' option while installing Hive via Ambari. Does this mean that restarts are reflected in Ambari only when it is carried out from the Ambari UI?
Your inputs would be highly appreciated.
Thanks!
Found a solution to the problem. I used the following commands with the Ambari REST API to change configurations and restart services from the backend.
Log in to the host on which the Ambari server is running and use the provided configs.sh script as described below.
Modifying configuration files
#!/bin/bash
# Usage: <script> <cluster-name> <config-file> <property-name> <property-value>
CLUSTER_NAME=$1
CONFIG_FILE=$2
PROPERTY_NAME=$3
PROPERTY_VALUE=$4
/var/lib/ambari-server/resources/scripts/configs.sh -port <ambari-server-port> set localhost "$CLUSTER_NAME" "$CONFIG_FILE" "$PROPERTY_NAME" "$PROPERTY_VALUE"
where CONFIG_FILE can take values like tez-site, mapred-site, hadoop-site, hive-site etc. PROPERTY_NAME and PROPERTY_VALUE should be set to values relevant to the specified CONFIG_FILE.
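For example, if the script above is saved as change_config.sh (a hypothetical name), a call that sets a Hive property might look like this (cluster name, property, and value are placeholders):
./change_config.sh c1 hive-site hive.exec.reducers.bytes.per.reducer 268435456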
Restarting host components
curl -uadmin:admin -H 'X-Requested-By: ambari' -X POST -d '
{
  "RequestInfo": {
    "command": "RESTART",
    "context": "Restart MySQL server used by Hive Metastore on node3.cluster.com and HDFS client on node1.cluster.com",
    "operation_level": {
      "level": "HOST",
      "cluster_name": "c1"
    }
  },
  "Requests/resource_filters": [
    {
      "service_name": "HIVE",
      "component_name": "MYSQL_SERVER",
      "hosts": "node3.cluster.com"
    },
    {
      "service_name": "HDFS",
      "component_name": "HDFS_CLIENT",
      "hosts": "node1.cluster.com"
    }
  ]
}' http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests
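The POST above returns a request id in its response body; if you want to follow the progress of the restart, you should be able to poll that request resource with a GET (the request id 25 below is just an example):
curl -uadmin:admin -H 'X-Requested-By: ambari' http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests/25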
Reference Links:
Restarting components
Modifying configurations
Hope this helps!