I wanted to configure my truffle-config.js with a provider. When I run the command "truffle migrate --network ropsten", it throws this error:
Error: Web3ProviderEngine does not support synchronous requests.
The stack trace points to:
at Object.run (C:\Users\Bruce\AppData\Roaming\npm\node_modules\truffle\build\webpack:\packages\truffle-migrate\index.js:92:1)
I have no idea what this means. I looked for the file "C:\Users\Bruce\AppData\Roaming\npm\node_modules\truffle\build\webpack:\packages\truffle-migrate\index.js:92:1", but I cannot find a webpack path under "build/". Is something wrong? I installed Truffle globally, and it runs fine with the default ganache network.
Here is my ropsten network config:
ropsten: {
  provider: () => new HDWalletProvider(
    privateKeys.split(','),
    `https://ropsten.infura.io/v3/${process.env.INFURA_API_KEY}`
  ),
  network_id: 3,        // Ropsten's id; mainnet is 1
  gas: 5500000,         // Ropsten has a lower block limit than mainnet
  gasPrice: 2500000000, // 2.5 gwei
  confirmations: 2,     // # of confirmations to wait between deployments (default: 0)
  timeoutBlocks: 200,   // # of blocks before a deployment times out (minimum/default: 50)
  skipDryRun: true      // Skip dry run before migrations? (default: false for public nets)
},
My HDWalletProvider dependency version:
"dependencies": {
"chai": "^4.2.0",
"chai-as-promised": "^7.1.1",
"dotenv": "^8.1.0",
"eslint": "^6.4.0",
"openzeppelin-solidity": "^2.3.0",
"truffle-hdwallet-provider": "^1.0.17",
"truffle-hdwallet-provider-privkey": "^0.3.0",
"web3": "^1.2.1"
},
And the migrations:
1_initial_migration.js
const Migrations = artifacts.require("Migrations");

module.exports = function(deployer) {
  deployer.deploy(Migrations);
};
2_deploy_contract.js
const Token = artifacts.require("TokenInstance");
const DeleToken = artifacts.require("DelegateToken");

module.exports = async function(deployer) {
  await deployer.deploy(Token);
  await deployer.deploy(DeleToken);
};
It just cannot compile successfully, but using the default network with ganache works fine!
You are still using the old repository, which has been deprecated. You should use the truffle monorepo package instead:
npm install @truffle/hdwallet-provider
and replace the require with:
const HDWalletProvider = require("@truffle/hdwallet-provider");
Also, you don't need truffle-hdwallet-provider-privkey; the monorepo provider accepts private keys directly.
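For reference, here is a minimal truffle-config.js sketch using the monorepo package. The PRIVATE_KEYS variable name is an assumption (the question only shows privateKeys.split(',')); adjust it to whatever your .env actually defines:

require('dotenv').config();
const HDWalletProvider = require("@truffle/hdwallet-provider");

// Assumed .env contents: PRIVATE_KEYS=key1,key2 and INFURA_API_KEY=...
const privateKeys = process.env.PRIVATE_KEYS || "";

module.exports = {
  networks: {
    ropsten: {
      // Wrapped in a function so the provider is only created when this network is selected
      provider: () => new HDWalletProvider(
        privateKeys.split(','),
        `https://ropsten.infura.io/v3/${process.env.INFURA_API_KEY}`
      ),
      network_id: 3,
      gas: 5500000,
      gasPrice: 2500000000, // 2.5 gwei
      confirmations: 2,
      timeoutBlocks: 200,
      skipDryRun: true
    }
  }
};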
I'm trying to deploy to Goerli, but my deploy script seems to ignore the --network parameter.
Here is my hardhat.config.ts:
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";
import "hardhat-gas-reporter";
import "@nomiclabs/hardhat-ethers";
import * as dotenv from 'dotenv';

dotenv.config();
const env: any = process.env;
const config: HardhatUserConfig = {
  solidity: {
    [...]
  },
  networks: {
    hardhat: {
      [...]
    },
    goerli: {
      url: 'https://goerli.infura.io/v3/',
      accounts: [env['DEPLOYER_PRIVATE_KEY']]
    },
  },
  [...]
};

export default config;
Then I run:
npx hardhat run scripts/deploy.ts --network goerli
And in my deploy.ts:
async function main() {
  const [deployer] = await ethers.getSigners();
  console.log('Using RPC ', ethers.provider.connection.url);
  console.log('Deploying from address', deployer.address);
  [...] // contract deployment code
}
However, it fails with the error "could not detect network". That makes sense, because my code also logs:
Using RPC http://localhost:8545
Deploying from address 0x3a5Bd3fBc2a17f2eECf2Cff44aef38bd7dc4fd7c
The address is correct: it corresponds to the account whose private key I provided via dotenv, so that part of the config is being read. The RPC URL, however, is wrong: Hardhat seems to be trying (and failing) to connect to my local RPC.
Why isn't Hardhat respecting the url property in the config, and why is it still trying to connect to my local instance?
Change
const config: HardhatUserConfig = {
  solidity: {
    [...]
  },
to
module.exports = {
  solidity: "0.8.4",
  networks: {
    goerli: {
      url: 'https://goerli.infura.io/v3/',
      accounts: [env['DEPLOYER_PRIVATE_KEY']]
    }
  }
};
If you want to deploy on the built-in Hardhat network instead, you can run npx hardhat node and remove the goerli entry and the const env: any = process.env; line.
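As a quick sanity check (a sketch, not part of the original answer), you can log which network Hardhat actually selected inside the deploy script; the network object from the Hardhat runtime environment reflects the --network flag:

import { ethers, network } from "hardhat";

async function main() {
  // Should print "goerli" when run with: npx hardhat run scripts/deploy.ts --network goerli
  console.log('Selected network:', network.name);
  const [deployer] = await ethers.getSigners();
  console.log('Deploying from address', deployer.address);
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});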
I am following https://docs.aws.amazon.com/lambda/latest/dg/images-create.html: I created an image from an AWS base image for Lambda and tested it locally before pushing it to ECR. When I invoke the function locally, I get the expected results, i.e. error code 0. But after pushing the image to ECR and invoking it from the AWS Lambda console, I get the following issue:
START RequestId: 55930a26-5d88-4d1f-9a5b-14599b369585 Version: $LATEST
[04:46:19] I/launcher - Running 1 instances of WebDriver
[04:46:19] I/direct - Using ChromeDriver directly...
[04:46:22] E/runner - Unable to start a WebDriver session.
[04:46:22] E/launcher - Error: NoSuchSessionError: invalid session id
at Object.throwDecodedError (/var/task/node_modules/selenium-webdriver/lib/error.js:514:15)
at parseHttpResponse (/var/task/node_modules/selenium-webdriver/lib/http.js:519:13)
at /var/task/node_modules/selenium-webdriver/lib/http.js:441:30
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at async Promise.all (index 0)
[04:46:22] E/launcher - Process exited with error code 100
2021-08-11T04:46:22.089Z 55930a26-5d88-4d1f-9a5b-14599b369585 INFO error code: 100
END RequestId: 55930a26-5d88-4d1f-9a5b-14599b369585
REPORT RequestId: 55930a26-5d88-4d1f-9a5b-14599b369585 Duration: 9465.93 ms Billed Duration: 10270 ms Memory Size: 10000 MB Max Memory Used: 314 MB Init Duration: 803.88 ms
My conf.js
exports.config = {
  directConnect: true,
  ignoreUncaughtExceptions: true,
  SELENIUM_PROMISE_MANAGER: false,
  specs: ['index.js'],
  jasmineNodeOpts: {
    defaultTimeoutInterval: 1000 * 6,
    realtimeFailure: true,
    showColors: true,
    isVerbose: true,
    includeStackTrace: true,
    displaySpecDuration: true,
    print: function () {},
  },
  capabilities: {
    browserName: 'chrome',
    acceptInsecureCerts: true,
    acceptSslCerts: true,
    chromeOptions: {
      binary: "/usr/bin/google-chrome",
      excludeSwitches: ["enable-automation"],
      useAutomationExtension: false,
      args: ["--no-sandbox", "--disable-web-security", "--headless", "--disable-dev-shm-usage", "--disable-extensions", "--disable-gpu", "--start-maximized", "--disable-infobars"]
    }
  },
  framework: "jasmine"
};
My package.json
{
  "name": "protractor",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "node_modules/.bin/protractor conf.js",
    "runlocal": "node_modules/.bin/sls invoke local --function protractor",
    "deploy": "node_modules/.bin/sls deploy"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "jasmine": "^3.8.0",
    "npm": "^6.14.14",
    "protractor": "^7.0.0",
    "protractor-beautiful-reporter": "^1.3.6"
  }
}
My Dockerfile
FROM amazon/aws-lambda-nodejs:12

# Copy test sources and dependency manifest into the image
COPY index.js conf.js handler.js package*.json screenshots ./

# Install Google Chrome
ADD install-google-chrome.sh .
RUN chmod +x install-google-chrome.sh
RUN curl https://intoli.com/install-google-chrome.sh | bash

# Install GTK/Firefox dependencies
ADD gtk-firefox.sh .
RUN chmod 755 gtk-firefox.sh
RUN ./gtk-firefox.sh

RUN npm install
RUN node ./node_modules/protractor/bin/webdriver-manager clean
RUN node ./node_modules/protractor/bin/webdriver-manager update

CMD [ "handler.runtest" ]
handler.js
'use strict';

module.exports.runtest = (event, context, callback) => {
  var npm = require('npm');
  var path = require('path');
  var childProcess = require('child_process');
  var args = ['conf.js'];

  npm.load(function() {
    var child = childProcess
      .fork(path.join(npm.root, 'protractor/bin/protractor'), args)
      .on('close', function(errorCode) {
        console.log('error code: ', errorCode);
        // Use this block to have Lambda respond when the tests are done.
        const response = {
          statusCode: 200,
          body: JSON.stringify({
            message: `Selenium Test executed on Chrome ! Child process Error Code: ${errorCode}`,
          }),
        };
        callback(null, response);
      });
    process.on('SIGINT', child.kill);
  });
};
index.js
const {by, protractor, browser} = require("protractor");

describe('Google\'s Search Functionality', function() {
  it('can find search results', async function() {
    const signin = "Sign in";
    await browser.waitForAngularEnabled(false);
    await browser.get('https://google.com/ncr');
    await browser.findElement(by.name('q')).sendKeys('BrowserStack', protractor.Key.ENTER);
    const texti = browser.findElement(by.xpath('//a[contains(@href,"sign_in")]'));
    expect(await texti.getText()).toEqual(signin);
  });
});
All these files are at the same level in the project root directory.
I have googled a lot but could not find a proper resolution. Is it some permission issue? I don't understand what makes the same Lambda container image fail in the Lambda console but not locally.
I'm using Hardhat locally and have a React frontend up and running, but I can't call the contract methods without errors. I've tried both ethers.js and web3.
Here are my code and attempts; please let me know if you see what I'm doing wrong.
I'm trying to interact with contracts deployed in the local Hardhat environment through web3, but I'm unable to get the data back from the contract. Here's the info.
I have:
var list = await contract.methods.getList();
console.log("list ", list );
which gets me
list {arguments: Array(0), call: ƒ, send: ƒ, encodeABI: ƒ, estimateGas: ƒ, …}
When I do
var list = await contract.methods.getList().call();
console.log("list ", list );
I get this error in the browser:
Returned values aren't valid, did it run Out of Gas? You might also see this error if you are not using the correct ABI for the contract you are retrieving data from, requesting data from a block number that does not exist, or querying a node which is not fully synced.
My setup, in the console:

npx hardhat node
> Started HTTP and WebSocket JSON-RPC server at http://127.0.0.1:8545/
> Accounts
> ========
> ...

npx hardhat compile
> Nothing to compile

npx hardhat run scripts/deploy.js --network hardhat
Note: in the deploy.js file, I do:
const list = await contract.getList();
console.log("list", list ); // correctly outputs ["string", "string"]
The method:
mapping(uint256 => address) internal list;
uint256 internal listCount;

function getList() public override view returns (address[] memory) {
    address[] memory assets = new address[](listCount);
    for (uint256 i = 0; i < listCount; i++) {
        assets[i] = list[i];
    }
    return assets;
}
In React App.js:

import Contract_ from './data/abi/Contract_.json'; // Contract_ is a placeholder

var contract = new web3.eth.Contract(Contract_, address_given_on_deploy);
var contractAddress = await contract.options.address; // correctly outputs the address
var list = await contract.methods.getList().call();
console.log("list", list);
As you can see, this doesn't return the values from the method. What am I doing wrong here?
In case it is relevant (and it may well be the issue), here's my config:
require("#nomiclabs/hardhat-waffle");
// openzeppelin adds
require("#nomiclabs/hardhat-ethers");
require('#openzeppelin/hardhat-upgrades');
//abi
require('hardhat-abi-exporter');
// This is a sample Hardhat task. To learn how to create your own go to
// https://hardhat.org/guides/create-task.html
task("accounts", "Prints the list of accounts", async () => {
const accounts = await ethers.getSigners();
for (const account of accounts) {
console.log(account.address);
}
});
// You need to export an object to set up your config
// Go to https://hardhat.org/config/ to learn more
/**
 * @type import('hardhat/config').HardhatUserConfig
 */
module.exports = {
  networks: {
    hardhat: {
      gas: 12000000,
      blockGasLimit: 0x1fffffffffffff,
      allowUnlimitedContractSize: true,
      timeout: 1800000,
      chainId: 1337
    }
  },
  solidity: {
    compilers: [
      {
        version: "0.8.0",
        settings: {
          optimizer: {
            enabled: true,
            runs: 1000
          }
        }
      },
      {
        version: "0.8.2",
        settings: {
          optimizer: {
            enabled: true,
            runs: 1000
          }
        }
      },
    ],
  },
  abiExporter: {
    path: './frontend/src/data/abi',
    clear: true,
    flat: true,
    only: [],
    spacing: 2
  }
}
I thought maybe I would try ethers.js, since that is what I do my testing in, but I hit the same issue.
For whatever reason, I can "get" the contracts and print the methods that belong to them, but I can't actually call the methods.
Here's my ethers.js attempt, in brief:

provider = new ethers.providers.Web3Provider(window.ethereum);
if (provider != null) {
  const _contract = new ethers.Contract(address, _Contract, provider);
  var list = await _contract.getList().call();
  console.log("list", list);
}
The error I get from this is:
Error: call revert exception (method="getList()", errorArgs=null, errorName=null, errorSignature=null, reason=null, code=CALL_EXCEPTION, version=abi/5.4.0)
I've tried numerous contracts in the protocol, and the same thing happens for each.
I need to pass the connection argument when calling lighthouse:
https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/index.js#L41
async function lighthouse(url, flags = {}, configJSON, connection) {
  // verify the url is valid and that protocol is allowed
  if (url && (!URL.isValid(url) || !URL.isProtocolAllowed(url))) {
    throw new LHError(LHError.errors.INVALID_URL);
  }

  // set logging preferences, assume quiet
  flags.logLevel = flags.logLevel || 'error';
  log.setLevel(flags.logLevel);

  const config = generateConfig(configJSON, flags);
  connection = connection || new ChromeProtocol(flags.port, flags.hostname);

  // kick off a lighthouse run
  return Runner.run(connection, {url, config});
}
And in my TestCafe suite the tests look like:

test('Run lighthouse', async t => {
  lighthouse('https://www.youtube.com', {}, {}, ????)
})

I am unable to retrieve the connection of the Chrome instance that TestCafe opened; instead it spawns a new chromeRunner.
There is an npm library called testcafe-lighthouse which helps audit web pages using TestCafe. It also has the capability to produce a detailed HTML report.
Install the plugin with:

$ yarn add -D testcafe-lighthouse
# or
$ npm install --save-dev testcafe-lighthouse
Audit with default thresholds:

import { testcafeLighthouseAudit } from 'testcafe-lighthouse';

fixture(`Audit Test`).page('http://localhost:3000/login');

test('user performs lighthouse audit', async t => {
  const currentURL = await t.eval(() => document.documentURI);
  await testcafeLighthouseAudit({
    url: currentURL,
    cdpPort: 9222,
  });
});
Audit with custom thresholds:

import { testcafeLighthouseAudit } from 'testcafe-lighthouse';

fixture(`Audit Test`).page('http://localhost:3000/login');

test('user page performance with specific thresholds', async t => {
  const currentURL = await t.eval(() => document.documentURI);
  await testcafeLighthouseAudit({
    url: currentURL,
    thresholds: {
      performance: 50,
      accessibility: 50,
      'best-practices': 50,
      seo: 50,
      pwa: 50,
    },
    cdpPort: 9222,
  });
});
You need to kick-start the test like below:

# headless mode, preferable for CI
npx testcafe chrome:headless:cdpPort=9222 test.js

# non-headless mode
npx testcafe chrome:emulation:cdpPort=9222 test.js
I hope it will help your automation journey.
I did something similar: I launch Lighthouse against the Google Chrome instance on a specific port using the CLI:

npm run testcafe -- chrome:headless:cdpPort=1234

Then I make the lighthouse function take the port as an argument:
export default async function lighthouseAudit(url, browser_port) {
  let result = await lighthouse(url, {
    port: browser_port, // Google Chrome port number
    output: 'json',
    logLevel: 'info',
  });
  return result;
};
Then you can simply run the audit like:

test(`Generate Lighthouse Result`, async t => {
  const auditResult = await lighthouseAudit('https://www.youtube.com', 1234);
});

Hopefully it helps.
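If you also want the test to assert on the outcome, note that lighthouse resolves with an object whose lhr property holds the Lighthouse result (a sketch assuming the standard result shape; the 0.5 threshold is an arbitrary example):

test(`Generate Lighthouse Result`, async t => {
  const auditResult = await lighthouseAudit('https://www.youtube.com', 1234);
  // Category scores in lhr are normalized to the 0..1 range
  const performanceScore = auditResult.lhr.categories.performance.score;
  await t.expect(performanceScore).gte(0.5);
});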
I'm trying to make a JSON call with async/await using Cloud Functions for Firebase.
Any idea how to fix the following code? My plan is Blaze.
My inspiration is https://www.valentinog.com/blog/http-requests-node-js-async-await/
DEPLOY ERROR
functions[setDetails]: Deployment error.
const getDetails = async url => {
SyntaxError: Unexpected identifier
at createScript (vm.js:56:10)
at Object.runInThisContext (vm.js:97:10)
at Module._compile (module.js:542:28)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at require (internal/module.js:20:19)
at getUserFunction (/var/tmp/worker/worker.js:378:24)
INDEX.JS
'use strict';

const functions = require('firebase-functions');
const admin = require('firebase-admin');
// Promise based HTTP client for the browser and node.js
const axios = require('axios');

admin.initializeApp(functions.config().firebase);

const url = 'https://api.xxxx.com/json?partnumber=';

const getDetails = async url => {
  try {
    const response = await axios.get(url);
    const data = response.data;
    const getDet = data.results[0].details;
    return getDet;
  } catch (error) {
    console.log(error);
    return error;
  }
};

exports.setDetails = functions.database.ref('/equipment/{pushId}').onWrite((event) => {
  const post = event.data.val();
  if (post.details) { return; }
  const number = post.number;
  const details = getDetails(url + number);
  admin.database().ref('/equipment/{pushId}').push({number: number, details: details});
});
PACKAGE.JSON
{
  "name": "look-at-details",
  "description": "bla bla bla",
  "dependencies": {
    "axios": "^0.18.0",
    "firebase-admin": "^5.9.1",
    "firebase-functions": "^0.8.1"
  },
  "scripts": {
    "serve": "firebase serve --only functions",
    "shell": "firebase experimental:functions:shell",
    "start": "npm run shell",
    "deploy": "firebase deploy --only functions",
    "logs": "firebase functions:log"
  }
}
async/await is not yet supported natively by Cloud Functions. Cloud Functions runs Node 6, whose version of JavaScript does not support async/await, so the deploy fails because the runtime doesn't recognize the async keyword.
Instead, you could initialize your project to use TypeScript, which supports async/await. The Firebase CLI will automatically transpile your code to ES6 that uses Promises to implement async/await.
Another solution: rather than using async/await, you could use a library called request-promise-any.
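As a further option (a sketch, not from the original answer): since axios already returns Promises and Node 6 understands Promises natively, getDetails can be rewritten with plain .then() chaining, leaving no async/await syntax for the Node 6 parser to reject:

const getDetails = url => {
  return axios.get(url)
    .then(response => response.data.results[0].details)
    .catch(error => {
      console.log(error);
      return error;
    });
};

// Callers must then also treat the result as a Promise, e.g.:
// getDetails(url + number).then(details => { /* push to the database here */ });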