Truffle Migrate deployment issue - not getting results when I migrate - ethereum

I've successfully deployed my smart contracts locally to Ganache and now want to take it to the next level by testing them on Ropsten.
For some reason, even though I've done this a million times before with other projects, when I run
truffle migrate --network ropsten
I'm not getting any sort of results, which is to say I'm not getting errors, but it's also just not succeeding. It just says:
Compiled successfully using:
- solc: 0.5.8+commit.23d335f2.Emscripten.clang
...and puts me back at the prompt line, waiting for my next command.
My DEV ENVIRONMENT is:
Operating System: Mac OS Catalina v.10.15.1
Truffle Version (truffle version): v.5.0.34
Node Version (node --version): v.10.16.3
NPM Version (npm --version): v.6.14.4
├─┬ @truffle/hdwallet-provider@1.0.35
│ └── web3@1.2.1
├─┬ truffle-hdwallet-provider@1.0.17
│ └── web3@1.2.1
└── web3@0.20.7
(Yes, I seem to have two versions of Web3 - but I don't think that's the problem...)
My truffle-config.js file looks like this:
require('dotenv').config();
const HDWalletProvider = require('truffle-hdwallet-provider');

module.exports = {
  networks: {
    ropsten: {
      provider: function () {
        return new HDWalletProvider(
          process.env.GANACHE_MNEMONIC,
          `https://ropsten.infura.io/${process.env.INFURA_API_KEY}`
        );
      },
      network_id: 3,
      from: "0xB4xxxxxxxxxxxxxxxxxxxxxxx",
      gas: 8000000,
      gasPrice: 20000000000,
      confirmations: 2, // # of confirmations to wait between deployments (default: 0)
      skipDryRun: true
    }
  }
};
My .env file has the MNEMONICs and the INFURA_API_KEY which are all valid.
Any ideas what might be going on here?

You need to get some test ETH from a faucet into your Ropsten address so that the contract migration can be executed.
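As a rough sanity check (not from the original answer): with the gas and gasPrice values from the truffle-config.js above, you can estimate the worst-case cost of one deployment, and therefore how much faucet ETH the address needs. A minimal sketch in Node:

```javascript
// Worst-case cost of one deployment = gas limit * gas price.
// Values taken from the question's truffle-config.js.
const gas = 8000000n;             // gas: 8000000
const gasPriceWei = 20000000000n; // gasPrice: 20 gwei
const maxCostWei = gas * gasPriceWei;
const maxCostEth = Number(maxCostWei) / 1e18;
console.log(`${maxCostEth} ETH`); // 0.16 ETH per full-gas-limit deployment
```

In practice a migration rarely uses the full gas limit, but the faucet balance should comfortably cover this upper bound.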

Related

Install multiple vs code extensions in CICD

My unit test launch looks like this. As you can see I have exploited CLI options to install a VSIX my CICD has already produced, and then also tried to install ms-vscode-remote.remote-ssh because I want to re-run the tests on a remote workspace.
import * as path from 'path';
import * as fs from 'fs';
import { runTests } from '@vscode/test-electron';
async function main() {
try {
// The folder containing the Extension Manifest package.json
// Passed to `--extensionDevelopmentPath`
const extensionDevelopmentPath = path.resolve(__dirname, '../../');
// The path to the extension test runner script
// Passed to --extensionTestsPath
const extensionTestsPath = path.resolve(__dirname, './suite/index');
const vsixName = fs.readdirSync(extensionDevelopmentPath)
.filter(p => path.extname(p) === ".vsix")
.sort((a, b) => a < b ? 1 : a > b ? -1 : 0)[0];
const launchArgsLocal = [
path.resolve(__dirname, '../../src/test/test-docs'),
"--install-extension",
vsixName,
"--install-extension",
"ms-vscode-remote.remote-ssh"
];
const SSH_HOST = process.argv[2];
const SSH_WORKSPACE = process.argv[3];
const launchArgsRemote = [
"--folder-uri",
`vscode-remote://ssh-remote+testuser@${SSH_HOST}${SSH_WORKSPACE}`
];
// Download VS Code, unzip it and run the integration test
await runTests({ extensionDevelopmentPath, extensionTestsPath, launchArgs: launchArgsLocal });
await runTests({ extensionDevelopmentPath, extensionTestsPath, launchArgs: launchArgsRemote });
} catch (err) {
console.error(err);
console.error('Failed to run tests');
process.exit(1);
}
}
main();
runTests downloads and installs VS Code, and passes through the parameters I supply. For the local file system all the tests pass, so the extension from the VSIX is definitely installed.
But ms-vscode-remote.remote-ssh doesn't seem to be installed - I get this error:
Cannot get canonical URI because no extension is installed to resolve ssh-remote
and then the tests fail because there's no open workspace.
This may be related to the fact that CLI installation of multiple extensions repeats the --install-extension switch. I suspect the switch name is used as a hash key.
What to do? Well, I'm not committed to any particular course of action, just to platform independence. If I knew how to do a platform-independent headless CLI installation of VS Code:latest in a GitHub Action, that would certainly do the trick. I could then use the CLI directly to install the extensions before the tests, and pass the installation path - which would also require a unified way to get the path to VS Code.
Update 2022-07-20
Having figured out how to do a platform independent headless CLI installation of VS Code:latest in a GitHub Action followed by installation of the required extensions I face new problems.
The test framework options include a path to an existing installation of VS Code. According to the interface documentation, supplying this should cause the test to use the existing installation instead of installing VS Code; this is why I thought the above installation would solve my problems.
However, the option seems to be ignored.
My latest iteration uses an extension dependency on remote-ssh to install it. There's a new problem: how to get the correct version of my extension onto the remote host. By default the remote host uses the marketplace version, which obviously won't be the version we're trying to test.
I would first try with only one --install-extension option, just to check if any extension is installed.
I would also check if the same set of commands works locally (install VSCode and its remote SSH extension)
Testing it locally (with only one extension) also allows to check if that extension has any dependencies (like Remote SSH - Editing)
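To rule out the flag-collision theory, one option is to install the extensions with the VS Code CLI yourself before calling runTests, passing one --install-extension flag per extension. The helper below is a sketch (the function name is mine); the commented-out wiring assumes the downloadAndUnzipVSCode and resolveCliArgsFromVSCodeExecutablePath helpers from @vscode/test-electron, and is untested:

```javascript
// Build CLI arguments with one "--install-extension" flag per extension,
// so no flag can overwrite another.
function buildInstallArgs(extensions) {
  return extensions.flatMap(ext => ["--install-extension", ext]);
}

console.log(buildInstallArgs(["my-ext.vsix", "ms-vscode-remote.remote-ssh"]));

// Hypothetical wiring (untested sketch):
//   const vscodeExecutablePath = await downloadAndUnzipVSCode();
//   const [cliPath, ...cliArgs] =
//     resolveCliArgsFromVSCodeExecutablePath(vscodeExecutablePath);
//   cp.spawnSync(cliPath, [...cliArgs, ...buildInstallArgs(extensions)],
//     { encoding: "utf-8", stdio: "inherit" });
```

Installing this way, then launching the tests against the already-prepared installation, sidesteps whatever runTests does with repeated switches.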

"npx hardhat accounts" not available in Hardhat latest version 2.9.9

I have installed the latest version of hardhat. It installed fine.
After setting hardhat up and installing all the required packages, when I run:
npx hardhat accounts
It gives an error:
Error HH303: Unrecognized task accounts
It seems like the 'accounts' task has been removed in the latest version of Hardhat. My question now is: how do I get the list of wallet accounts that Hardhat generates?
I have the same situation on 2022-08-16.
To get the available accounts, I use the npx hardhat node command.
The command sequence I performed was:
$ npx hardhat --version
2.10.1
$ npx hardhat accounts
Error HH303: Unrecognized task accounts
For more info go to https://hardhat.org/HH303 or run Hardhat with --show-stack-traces
$ npx hardhat node
Started HTTP and WebSocket JSON-RPC server at http://127.0.0.1:8545/
Accounts
========
WARNING: These accounts, and their private keys, are publicly known.
Any funds sent to them on Mainnet or any other live network WILL BE LOST.
Account #0: 0xf39Fd6e51a...ffFb92266 (10000 ETH)
Private Key: 0xac0974bec39a1...478cbed5efcae784d7bf4f2ff80
Account #1: 0x70997970C51812...b50e0d17dc79C8 (10000 ETH)
Private Key: 0x59c6995e998f97a5a...9dc9e86dae88c7a8412f4603b6b78690d
.
.
.
This is because the accounts task is no longer included in the latest release. Add the following to your hardhat.config.js (the task callback receives the Hardhat Runtime Environment as its second argument, and hre.ethers requires the hardhat-ethers plugin):
const { task } = require("hardhat/config");

task("accounts", "Prints the list of accounts", async (taskArgs, hre) => {
  const accounts = await hre.ethers.getSigners();
  for (const account of accounts) {
    console.log(account.address);
  }
});
I'm not sure, but I managed to fix this by migrating from hardhat-waffle to hardhat-chai-matchers as recommended: remove waffle from the config and add the chai matchers plugin.
After that, I couldn't get the accounts any other way than with "npx hardhat node" - it looks like the accounts are displayed when launching the node, if that helps!
I use yarn hardhat node to display a list of hardhat accounts on the terminal

Why npm run serve is throwing ERR_OSSL_EVP_UNSUPPORTED? [duplicate]

This question already has answers here:
Error message "error:0308010C:digital envelope routines::unsupported"
(50 answers)
Closed 12 months ago.
I'm having an issue with a Webpack build process that suddenly broke, resulting in the following error...
<s> [webpack.Progress] 10% building 0/1 entries 0/0 dependencies 0/0 modules
node:internal/crypto/hash:67
this[kHandle] = new _Hash(algorithm, xofLen);
^
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:130:10)
at BulkUpdateDecorator.hashFactory (/app/node_modules/webpack/lib/util/createHash.js:155:18)
at BulkUpdateDecorator.update (/app/node_modules/webpack/lib/util/createHash.js:46:50)
at OriginalSource.updateHash (/app/node_modules/webpack-sources/lib/OriginalSource.js:131:8)
at NormalModule._initBuildHash (/app/node_modules/webpack/lib/NormalModule.js:888:17)
at handleParseResult (/app/node_modules/webpack/lib/NormalModule.js:954:10)
at /app/node_modules/webpack/lib/NormalModule.js:1048:4
at processResult (/app/node_modules/webpack/lib/NormalModule.js:763:11)
at /app/node_modules/webpack/lib/NormalModule.js:827:5 {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}
command terminated with exit code 1
I've tried googling ERR_OSSL_EVP_UNSUPPORTED webpack which yielded almost no useful results, but it did highlight an issue using MD4 as provided by OpenSSL (which is apparently deprecated?) to generate hashes.
The webpack.config.js code is as follows:
const path = require('path');
const webpack = require('webpack');
/*
* SplitChunksPlugin is enabled by default and replaced
* deprecated CommonsChunkPlugin. It automatically identifies modules which
* should be splitted of chunk by heuristics using module duplication count and
* module category (i. e. node_modules). And splits the chunks…
*
* It is safe to remove "splitChunks" from the generated configuration
* and was added as an educational example.
*
* https://webpack.js.org/plugins/split-chunks-plugin/
*
*/
/*
* We've enabled TerserPlugin for you! This minifies your app
* in order to load faster and run less javascript.
*
* https://github.com/webpack-contrib/terser-webpack-plugin
*
*/
const TerserPlugin = require('terser-webpack-plugin');
module.exports = {
mode: 'development',
entry: './src/js/scripts.js',
output: {
path: path.resolve(__dirname, 'js'),
filename: 'scripts.js'
},
devtool: 'source-map',
plugins: [new webpack.ProgressPlugin()],
module: {
rules: []
},
optimization: {
minimizer: [new TerserPlugin()],
splitChunks: {
cacheGroups: {
vendors: {
priority: -10,
test: /[\\/]node_modules[\\/]/
}
},
chunks: 'async',
minChunks: 1,
minSize: 30000,
name: 'true'
}
}
};
How do I change the hashing algorithm used by Webpack to something else?
I was able to fix it via:
export NODE_OPTIONS=--openssl-legacy-provider
sachaw's comment to Node.js v17.0.0 - Error starting project in development mode #30078
But they say they fixed it: ijjk's comment to Node.js v17.0.0 - Error starting project in development mode #30078:
Hi, this has been updated in v11.1.3-canary.89 of Next.js, please update and give it a try!
For me, it worked only with the annotation above.
I also want to point out that npm run start works with --openssl-legacy-provider, but npm run dev won't.
It seems that there is a patch:
Node.js 17: digital envelope routines::unsupported #14532
I personally downgraded to 16-alpine.
I had this problem too. I'd accidentally been running on the latest Node.js (17.0 at time of writing), not the LTS version (14.18) which I'd meant to install. Downgrading my Node.js install to the LTS version fixed the problem for me.
There is a hashing algorithm that comes with Webpack v5.54.0+ that does not rely on OpenSSL.
To use this hash function that relies on a npm-provided dependency instead of an operating system-provided dependency, modify the webpack.config.cjs output key to include the hashFunction: "xxhash64" option.
module.exports = {
output: {
hashFunction: "xxhash64"
}
};
Ryan Brownell's answer is the ideal solution if you are using Webpack v5.54.0+.
If you're using an older version of Webpack, you can still solve this by changing the hash function to one that is not deprecated. (It defaults to the ancient MD4, which OpenSSL 3 no longer enables by default, which is the root cause of the error.) The supported algorithms are any supported by crypto.createHash. For example, to use SHA-256:
module.exports = {
output: {
hashFunction: "sha256"
}
};
Finally, if you are unable to change the Webpack configuration (e.g., if it's a transitive dependency which is running Webpack), you can enable OpenSSL's legacy provider to temporarily enable MD4 during the Webpack build. This is a last resort. Create a file openssl.cnf with this content…
openssl_conf = openssl_init
[openssl_init]
providers = provider_sect
[provider_sect]
default = default_sect
legacy = legacy_sect
[default_sect]
activate = 1
[legacy_sect]
activate = 1
…and then set the environment variable OPENSSL_CONF to the path to that file when running Webpack.
It's not really my answer, but I found this workaround/hack to fix my problem (a code check-in for a GitHub project) - see the bug comments here.
I ran into ERR_OSSL_EVP_UNSUPPORTED after updating with npm install.
I added the following to node_modules\react-scripts\config\webpack.config.js
const crypto = require("crypto");
const crypto_orig_createHash = crypto.createHash;
crypto.createHash = algorithm => crypto_orig_createHash(algorithm == "md4" ? "sha256" : algorithm);
I tried Ryan Brownell's solution and ended up with a different error, but this worked...
This error is mentioned in the release notes for Node.js 17.0.0, with a suggested workaround:
If you hit an ERR_OSSL_EVP_UNSUPPORTED error in your application with Node.js 17, it’s likely that your application or a module you’re using is attempting to use an algorithm or key size which is no longer allowed by default with OpenSSL 3.0. A command-line option, --openssl-legacy-provider, has been added to revert to the legacy provider as a temporary workaround for these tightened restrictions.
I ran into this issue using Laravel Mix (Webpack) and was able to fix it within file package.json by adding in the NODE_OPTIONS=--openssl-legacy-provider (referenced in Jan's answer) to the beginning of the script:
package.json:
{
"private": true,
"scripts": {
"production": "cross-env NODE_ENV=production NODE_OPTIONS=--openssl-legacy-provider node_modules/webpack/bin/webpack.js --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js"
},
"dependencies": {
...
}
}
Try upgrading your Webpack version to 5.62.2.
I faced the same challenge, but you just need to downgrade Node.js to version 16.13 and everything works well. Download the LTS version, not the Current one, from the Downloads page.
I had the same problem with my Vue.js project and I solved it.
macOS and Linux
You should have installed NVM (Node Version Manager). If you never had before, just run this command in your terminal:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
Open your project
Open the terminal in your project
Run the command nvm install 16.13.0 or any older version
After the installation is completed, run nvm use 16.13.0
I faced the same problem in a project I developed with Next.js. For the solution, I ran the project as follows and I solved the problem.
cross-env NODE_OPTIONS='--openssl-legacy-provider' next dev
This means that you have the latest Node.js version. If you are using Docker, then you need to change the image from
FROM node
to
FROM node:14

Heroku not recognizing updated config vars for Watir , and also not sure if config vars are pointing to right chrome files

I have a Ruby (non-Rails) & Watir web scraper working locally, however after I deployed it on Heroku, it looks like Heroku is unable to start Chrome/Chromedriver. I tried following the solutions from here: Heroku: Unable to find chromedriver when using Selenium
Here is my configuration:
args = %w[--disable-infobars --no-sandbox --disable-gpu]
options = {
binary: ENV['GOOGLE_CHROME_BIN'],
prefs: { password_manager_enable: false, credentials_enable_service: false },
args: args
}
b = Watir::Browser.new(:chrome, options: options)
Gemfile
ruby '2.7.1'
gem 'watir', '6.19.0'
gem 'xpath', '3.2.0'
gem 'google-api-client'
gem 'webdrivers'
Edit: And I set the following buildpacks:
heroku buildpacks:set https://github.com/heroku/heroku-buildpack-google-chrome
heroku buildpacks:set https://github.com/heroku/heroku-buildpack-chromedriver
And here is the error message in the Heroku log:
unknown error: Chrome failed to start: exited abnormally. (Selenium::WebDriver::Error::UnknownError)
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /app/.apt/opt/google/chrome/chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
I also ran it with the args set to --headless
args = %w[--disable-infobars --headless window-size=1600,1200 --no-sandbox --disable-gpu]
and got this message:
timed out after 30 seconds, waiting for true condition on #<Watir::Browser:0x7a2c669409432a9a url="https://client.schwab.com/login/signon/customercenterlogin.aspx" title="Login | Charles Schwab"> (Watir::Wait::TimeoutError)
I had originally set the config vars to the following as I was following Heroku: Unable to find chromedriver when using Selenium:
heroku config:set GOOGLE_CHROME_BIN=/app/.apt/opt/google/chrome/chrome
heroku config:set GOOGLE_CHROME_SHIM=/app/.apt/opt/google/chrome/chrome
But I realized it was pointing to the wrong path: when I ran find /app/ -name "*chrome*" in the Heroku console, the path I had set didn't exist, and I found the new path: /app/.apt/usr/bin/google-chrome
/app/.apt/usr/bin/google-chrome
/app/.apt/usr/bin/google-chrome-stable
/app/.apt/usr/share/gnome-control-center/default-apps/google-chrome.xml
/app/.apt/usr/share/man/man1/google-chrome-stable.1.gz
/app/.apt/usr/share/man/man1/google-chrome.1.gz
/app/.apt/usr/share/doc/google-chrome-stable
/app/.apt/usr/share/applications/google-chrome.desktop
/app/.apt/usr/share/menu/google-chrome.menu
/app/.apt/usr/share/appdata/google-chrome.appdata.xml
/app/.apt/opt/google/chrome
/app/.apt/opt/google/chrome/chrome_100_percent.pak
/app/.apt/opt/google/chrome/chrome
/app/.apt/opt/google/chrome/google-chrome
/app/.apt/opt/google/chrome/cron/google-chrome
/app/.apt/opt/google/chrome/chrome-sandbox
/app/.apt/opt/google/chrome/chrome_200_percent.pak
/app/.apt/etc/cron.daily/google-chrome
/app/.profile.d/010_google-chrome.sh
/app/.profile.d/chromedriver.sh
/app/.chromedriver
/app/.chromedriver/bin/chromedriver
/app/vendor/bundle/ruby/2.7.0/gems/webdrivers-4.6.0/lib/webdrivers/tasks/chromedriver.rake
/app/vendor/bundle/ruby/2.7.0/gems/webdrivers-4.6.0/lib/webdrivers/chrome_finder.rb
/app/vendor/bundle/ruby/2.7.0/gems/webdrivers-4.6.0/lib/webdrivers/chromedriver.rb
/app/vendor/bundle/ruby/2.7.0/gems/webdrivers-4.6.0/spec/webdrivers/chrome_finder_spec.rb
/app/vendor/bundle/ruby/2.7.0/gems/webdrivers-4.6.0/spec/webdrivers/chromedriver_spec.rb
/app/vendor/bundle/ruby/2.7.0/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/chrome.rb
/app/vendor/bundle/ruby/2.7.0/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/chrome
/app/vendor/bundle/ruby/2.7.0/gems/google-api-client-0.53.0/generated/google/apis/chromeuxreport_v1
/app/vendor/bundle/ruby/2.7.0/gems/google-api-client-0.53.0/generated/google/apis/chromeuxreport_v1.rb
So I then changed my vars to:
heroku config:set GOOGLE_CHROME_BIN=/app/.apt/usr/bin/google-chrome
heroku config:set GOOGLE_CHROME_SHIM=/app/.apt/usr/bin/google-chrome
My questions:
Is my config vars GOOGLE_CHROME_BIN & GOOGLE_CHROME_SHIM pointing to the right files now? or should they be pointing to different bin files?
It looks like Heroku is still not detecting the updated config vars, as the log still shows /app/.apt/opt/google/chrome/chrome. After updating them, I pushed the changes to Heroku (triggering a rebuild) and also restarted the dynos. Even the config vars settings in the dashboard show the updated values. What else can I do to make Heroku recognize the updated config vars?
Thanks for any help!

PM2 start script with multiple arguments (serve)

I'm trying to run serve frontend/dist -l 4000 from PM2. This is supposed to serve a Vue app on port 4000.
In my ecosystem.config.js, I have:
{
name: 'parker-frontend',
max_restarts: 5,
script: 'serve',
args: 'frontend/dist -l 4000',
instances: 1,
},
But when I do pm2 start, in the logs I have the following message:
Exposing /var/lib/jenkins/workspace/parker/frontend/dist directory on port NaN
Whereas if I run the same command: serve frontend/dist -l 4000, it runs just fine on port 4000.
After running serve frontend/dist -l 5000 I got an error in the PM2 logs.
In its call stack I found:
at Object.<anonymous> (/usr/lib/node_modules/pm2/lib/API/Serve.js:242:4)
Notice the path: /usr/lib/node_modules/pm2/lib/API/Serve.js
There is another command called serve inside pm2 itself, and it was run instead of the correct one. This is not the npm i -g serve I installed before. This is due to how Node module resolution works - it prioritizes local modules first.
To use the globally installed version (the correct one), you need to specify the exact path to your global serve.
To find out the path - on Linux, you can just do:
$ which serve
/usr/local/bin/serve
Then put the path in your ecosystem.config.js script property.
Final working ecosystem.config.js:
{
name: 'parker-frontend',
script: '/usr/local/bin/serve', // pm2 has its own 'serve' which doesn't work; make sure to use the global one
args: 'frontend/dist -l 5000',
instances: 1,
},