Truffle migrate not deploying after second migration - ethereum

I'm trying to test gas expenditures of different types and implementations of common contracts (e.g. OpenZeppelin).
I am using truffle migrate to deploy these contracts, and have a ganache-cli mainnet fork running in the background on localhost, forked from an Infura node.
When I first spin up the ganache instance and then open truffle console to migrate, everything works fine: it compiles, then deploys, and I can see how much gas was used, the block timestamp, etc.
However, when I add another Solidity file (representing another contract to test) to the contracts folder, or replace the old one with a new contract, and then run truffle migrate, it only compiles. I read that truffle migrate sees that the network is up to date and therefore does not deploy. OK, fine.
So I try deleting the build folder and running migrate --reset. Same thing: no deployment.
When I reset the ganache-cli fork and also restart truffle console (along with deleting the build folder), it works again and I can see the transaction information I need from the deployment. But this is more tedious than it needs to be, and I think I'm missing something here.
Can anyone point me in the right direction here? Am I overlooking a truffle command or migrate option? Do I need to change my initial migration js file? It looks like this:
const Migrations = artifacts.require("Migrations");

module.exports = function (deployer) {
  deployer.deploy(Migrations);
};
The contract migration files are pretty much the same except with the smart contract info:
const GameItems = artifacts.require("GameItems");

module.exports = function (deployer) {
  deployer.deploy(GameItems);
};
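Since the goal is to compare gas costs, one optional tweak (a sketch, not the poster's setup; the file name and the async pattern are assumptions) is to log the deployment receipt from the migration itself:
// Hypothetical 2_deploy_game_items.js: same pattern as above, but it logs the
// deployment receipt so the gas usage shows up without extra digging in the console.
const GameItems = artifacts.require("GameItems");

module.exports = async function (deployer) {
  await deployer.deploy(GameItems);
  const instance = await GameItems.deployed();
  const receipt = await web3.eth.getTransactionReceipt(instance.transactionHash);
  console.log("GameItems deployed at", instance.address, "gas used:", receipt.gasUsed);
};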
Migrations.sol is the one that comes with truffle init:
// SPDX-License-Identifier: MIT
pragma solidity >=0.4.22 <0.9.0;

contract Migrations {
  address public owner = msg.sender;
  uint public last_completed_migration;

  modifier restricted() {
    require(
      msg.sender == owner,
      "This function is restricted to the contract's owner"
    );
    _;
  }

  function setCompleted(uint completed) public restricted {
    last_completed_migration = completed;
  }
}
This is what my truffle-config.js file looks like:
/**
* Use this file to configure your truffle project. It's seeded with some
* common settings for different networks and features like migrations,
* compilation and testing. Uncomment the ones you need or modify
* them to suit your project as necessary.
*
* More information about configuration can be found at:
*
* trufflesuite.com/docs/advanced/configuration
*
* To deploy via Infura you'll need a wallet provider (like @truffle/hdwallet-provider)
* to sign your transactions before they're sent to a remote public node. Infura accounts
* are available for free at: infura.io/register.
*
* You'll also need a mnemonic - the twelve word phrase the wallet uses to generate
* public/private key pairs. If you're publishing your code to GitHub make sure you load this
* phrase from a file you've .gitignored so it doesn't accidentally become public.
*
*/
const HDWalletProvider = require("@truffle/hdwallet-provider");
// const infuraKey = "....";
//
// const fs = require('fs');
// const mnemonic = fs.readFileSync(".secret").toString().trim();
module.exports = {
  /**
   * Networks define how you connect to your ethereum client and let you set the
   * defaults web3 uses to send transactions. If you don't specify one truffle
   * will spin up a development blockchain for you on port 9545 when you
   * run `develop` or `test`. You can ask a truffle command to use a specific
   * network from the command line, e.g
   *
   * $ truffle test --network <network-name>
   */
  networks: {
    // Useful for testing. The `development` name is special - truffle uses it by default
    // if it's defined here and no other network is specified at the command line.
    // You should run a client (like ganache-cli, geth or parity) in a separate terminal
    // tab if you use this network and you must also set the `host`, `port` and `network_id`
    // options below to some value.
    //
    development: {
      host: "127.0.0.1",  // Localhost (default: none)
      port: 8545,         // Standard Ethereum port (default: none)
      network_id: "*",    // Any network (default: none)
    },
    // Another network with more advanced options...
    // advanced: {
    //   port: 8777,             // Custom port
    //   network_id: 1342,       // Custom network
    //   gas: 8500000,           // Gas sent with each transaction (default: ~6700000)
    //   gasPrice: 20000000000,  // 20 gwei (in wei) (default: 100 gwei)
    //   from: <address>,        // Account to send txs from (default: accounts[0])
    //   websocket: true         // Enable EventEmitter interface for web3 (default: false)
    // },
    // Useful for deploying to a public network.
    // NB: It's important to wrap the provider as a function.
    rinkeby: {
      provider: () =>
        new HDWalletProvider(
          "",
          `api endpoint`
        ),
      network_id: 4,       // Rinkeby's id
      gas: 5500000,        // Ropsten has a lower block limit than mainnet
      confirmations: 2,    // # of confs to wait between deployments. (default: 0)
      timeoutBlocks: 200,  // # of blocks before a deployment times out (minimum/default: 50)
      skipDryRun: true,    // Skip dry run before migrations? (default: false for public nets)
    },
    // mainnet: {
    //   provider: () =>
    //     new HDWalletProvider(
    //       "",
    //       ""
    //     ),
    //   network_id: 1,         // mainnet
    //   gas: 5500000,
    //   gasPrice: 2000000000,  // check https://ethgasstation.info/
    //   confirmations: 2,      // # of confs to wait between deployments. (default: 0)
    //   timeoutBlocks: 200,    // # of blocks before a deployment times out (minimum/default: 50)
    //   skipDryRun: false,     // Skip dry run before migrations? (default: false for public nets)
    // },
    // Useful for private networks
    // private: {
    //   provider: () => new HDWalletProvider(mnemonic, `https://network.io`),
    //   network_id: 2111,  // This network is yours, in the cloud.
    //   production: true   // Treats this network as if it was a public net. (default: false)
    // }
  },
  // Set default mocha options here, use special reporters etc.
  mocha: {
    // timeout: 100000
  },
  // Configure your compilers
  compilers: {
    solc: {
      version: "0.8.7",  // Fetch exact version from solc-bin (default: truffle's version)
      // docker: true,   // Use "0.5.1" you've installed locally with docker (default: false)
      // settings: {     // See the solidity docs for advice about optimization and evmVersion
      //   optimizer: {
      //     enabled: false,
      //     runs: 200
      //   },
      //   evmVersion: "byzantium"
      // }
    },
  },
  plugins: ["truffle-plugin-verify"],
  api_keys: {
    etherscan: "",
  },
  // Truffle DB is currently disabled by default; to enable it, change enabled: false to enabled: true
  //
  // Note: if you migrated your contracts prior to enabling this field in your Truffle project and want
  // those previously migrated contracts available in the .db directory, you will need to run the following:
  // $ truffle migrate --reset --compile-all
  db: {
    enabled: false,
  },
};
Much appreciation in advance.

Related

How to call functions of a contract deployed on hardhat forking mainnet

I am trying to call my own contract's functions after deploying it to a Hardhat mainnet fork. To do that, I followed these steps:
I added the forking configuration to hardhat.config.js:
networks: {
  hardhat: {
    forking: {
      url: "https://eth-mainnet.alchemyapi.io/v2/<key>",
    }
  }
}
I ran npx hardhat node in a terminal to start the RPC server.
I deployed the contract using the fork as the network, passing --network hardhat, and it gave me the deployed contract address. I assume this address is on the mainnet fork.
I wrote a JS file to interact with this contract and call its functions. (For testing, I added a simple pure function to the contract that returns a uint256.)
const { ethers } = require('ethers');

const provider2 = new ethers.providers.getDefaultProvider('http://127.0.0.1:8545/')

const hrdhatAccountPrivate = "0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80" // owner
const hrdhatAccountPublic = "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266" // owner

const ERC_20_ABI = require("./erc20_abi.json")
const FORK_MAIN_ABI = [
  "function myTestFunction() pure returns (uint256)",
]

const walletFork = new ethers.Wallet(hrdhatAccountPrivate, provider2)

async function main() {
  const fundxPoolAddr = '0xBbc18b580256A82dC0F9A86152b8B22E7C1C8005' // forking
  const fundxPoolContract = new ethers.Contract(fundxPoolAddr, FORK_MAIN_ABI, provider2)
  const fundxPoolContractWithWallet = fundxPoolContract.connect(hrdhatAccountPublic) // walletFork
  const tx = await fundxPoolContractWithWallet.myTestFunction() // {gasLimit:300000}
  console.log(tx);
}

main()
  .catch((error) => {
    console.error(error);
    process.exitCode = 1;
  });
But when I try to run this script (to call the contract function), it throws an error:
Error: call revert exception [ See: https://links.ethers.org/v5-errors-CALL_EXCEPTION ] (method="myTestFunction()", data="0x", errorArgs=null, errorName=null, errorSignature=null, reason=null, code=CALL_EXCEPTION, version=abi/5.6.3)
    at Logger.makeError (C:\Users\***\node_modules\@ethersproject\logger\lib\index.js:233:21)
    at Logger.throwError (C:\Users\***\node_modules\@ethersproject\logger\lib\index.js:242:20)
    at Interface.decodeFunctionResult (C:\Users\***\node_modules\@ethersproject\abi\lib\interface.js:388:23)
    at Contract.<anonymous> (C:\Users\***\node_modules\@ethersproject\contracts\lib\index.js:395:56)
    at step (C:\Users\***\node_modules\@ethersproject\contracts\lib\index.js:48:23)
    at Object.next (C:\Users\***\node_modules\@ethersproject\contracts\lib\index.js:29:53)
    at fulfilled (C:\Users\***\node_modules\@ethersproject\contracts\lib\index.js:20:58)
    at processTicksAndRejections (node:internal/process/task_queues:96:5) {
  reason: null,
  code: 'CALL_EXCEPTION',
  method: 'myTestFunction()',
  data: '0x',
  errorArgs: null,
  errorName: null,
  errorSignature: null,
  address: '0xB9d9e972100a1dD01cd441774b45b5821e136043',
  args: [],
  transaction: {
    data: '0xd100387c',
    to: '0xB9d9e972100a1dD01cd441774b45b5821e136043',
    from: '0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266'
  }
}
When I deploy to --network goerli, it works well, but with --network hardhat it throws this error. How can I solve this?
I deployed my contract to the Goerli testnet, called its function, and it worked. So for the Hardhat mainnet fork, either the contract isn't deployed properly or I am doing something wrong when calling it there; otherwise it wouldn't have worked on Goerli either.
When I comment out/remove the networks parameters in hardhat.config.js and instead start the Hardhat fork from the command line, like this: npx hardhat node --fork https://eth-mainnet.alchemyapi.io/v2/<key>, it works well. The problem was the network: I had deployed the contract to the fork started by Hardhat itself, while my script was trying to interact with it on the local node at 127.0.0.1:8545.
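For reference, a sketch of one way to line this up (the deploy script path and the config shape are assumptions, not the poster's exact files): start the standalone fork from the CLI and point both the deployment and the script at it through Hardhat's built-in localhost network.
// hardhat.config.js (sketch): no custom "hardhat" forking entry; the fork is started with
//   npx hardhat node --fork https://eth-mainnet.alchemyapi.io/v2/<key>
// Deploying with `npx hardhat run scripts/deploy.js --network localhost` then puts the
// contract on the same chain the ethers provider at http://127.0.0.1:8545/ is talking to.
module.exports = {
  solidity: "0.8.7", // placeholder compiler version
  networks: {
    localhost: {
      url: "http://127.0.0.1:8545",
    },
  },
};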
Many thanks to nick.tran of the KyberNetwork team for his help.

Why are the bytecodes for the same smart contract different in the testnet explorer and in Remix/solcjs?

I am using Remix to compile and deploy my smart contract to Rinkeby and RSK test networks.
I don't understand why the bytecode of my contract on Rinkeby explorer is different from the metadata.data.deployedBytecode.object in Remix artifacts and also different from evm.deployedBytecode.object coming from solcjs compiler. I also tried this on RSK and I got the same issue.
Here's what I do in Remix:
I create a MegaHonk.sol file in the contracts folder:
// SPDX-License-Identifier: GPL-3.0
pragma solidity 0.8.7;

contract MegaHonk {
  uint256 public count;

  event LoudSound(address indexed source);

  function honk() external {
    require(tx.origin != msg.sender, 'EOA not allowed');
    count += 1;
    emit LoudSound(tx.origin);
  }
}
I select the appropriate compiler version 0.8.7, select Environment: Injected provider - Metamask, compile and deploy to Rinkeby. The contract successfully deploys. Then I go to Rinkeby explorer, find my contract bytecode and compare it with the one in remix-file-explorer/contracts/artifacts/MegaHonk.js in the metadata.data.deployedBytecode.object property.
The bytecode from Rinkeby is 1348 characters long, and the bytecode from Remix is 1350 characters long.
The exact same thing happens when I compile the same smart contract with the solcjs compiler. I use the right version, 0.8.7, and these input parameters:
const input = {
  language: 'Solidity',
  settings: {
    outputSelection: {
      '*': {
        '*': ['evm.deployedBytecode', 'evm.bytecode'],
      },
    },
    optimizer: {
      enabled: false,
    },
  },
  sources: {
    'MegaHonk.sol': {
      content: MegaHonk,
    },
  },
};
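For context, a minimal sketch of how an input object like this is typically passed to solc-js (assuming solc is the solcjs module and MegaHonk holds the contract source as a string):
const solc = require('solc'); // solcjs, pinned to 0.8.7 in this scenario

// Standard-JSON compile; the deployed (runtime) bytecode is what explorers display.
const output = JSON.parse(solc.compile(JSON.stringify(input)));
const deployed = output.contracts['MegaHonk.sol']['MegaHonk'].evm.deployedBytecode.object;
console.log(deployed.length, 'hex characters');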
Why does this happen? What compiler parameters should I use to make the bytecodes identical in the Remix artifacts, the solcjs compiler output, and the blockchain explorer (Ethereum or RSK testnets)?

AWS CDK ECS task scheduling: specify an existing security group

When defining an ECS task schedule, I can't seem to find a way to specify an existing security group. Any pointers on where this can be configured using the AWS CDK?
In the code snippet below, you'll see that I am able to create a cron expression, specify the Docker image to schedule, and create the schedule itself by referencing the existing cluster and VPC. However, there is no option to specify an existing security group. Is it possible to specify one?
schedule_cron = scaling.Schedule.cron(
    minute=manifest['schedule']['minute'],
    hour=manifest['schedule']['hour'],
    day=manifest['schedule']['day'],
    month=manifest['schedule']['month'],
    year=manifest['schedule']['year'])

image_option = ecs_patterns.ScheduledFargateTaskImageOptions(
    image=img,
    cpu=manifest["resources"]["cpu"],
    memory_limit_mib=manifest["resources"]["memory"],
    log_driver=ecs.AwsLogDriver(log_group=log_group, stream_prefix=manifest["app_name"]),
    secrets=secrets,
    environment=env)

schedule_pattern = ecs_patterns.ScheduledFargateTask(
    self, f"scheduledtask{app_name}",
    schedule=schedule_cron,
    scheduled_fargate_task_image_options=image_option,
    cluster=cluster,
    desired_task_count=manifest["replica_count"],
    vpc=vpc)
The ECS patterns module does not support this yet; the underlying constructs, however, do. Therefore you must specify the TaskDefinition, the Event Rule, and the Event Target yourself: the Rule carries the schedule, and the Event Target is where the SecurityGroup is set.
Here is an example implementation in TypeScript. Please adjust it to Python using the aws_cdk.aws_events and aws_cdk.aws_events_targets modules.
import aas = require('@aws-cdk/aws-applicationautoscaling');
import cdk = require('@aws-cdk/core');
import events = require("@aws-cdk/aws-events");
import event_targets = require("@aws-cdk/aws-events-targets");
import ec2 = require('@aws-cdk/aws-ec2');
import ecs = require('@aws-cdk/aws-ecs'); // needed for FargateTaskDefinition below

const securityGroup = new ec2.SecurityGroup(this, "SecurityGroup", {
  vpc: vpc,
});

const task = new ecs.FargateTaskDefinition(this, "TaskDefinition", {
  family: "ScheduledTask",
  cpu: ..,
  memoryLimitMiB: ..,
});
task.addContainer("app_name", ...);

const rule = new events.Rule(this, "Rule", {
  description: "ScheduledTask app_name Trigger",
  enabled: true,
  schedule: aas.Schedule.rate(cdk.Duration.hours(1)),
  targets: [
    new event_targets.EcsTask({
      cluster: cluster,
      taskDefinition: task,
      securityGroup: securityGroup,
    }),
  ],
});
Please note that the EcsTask event target only allows one security group. This issue was raised a while ago on GitHub: https://github.com/aws/aws-cdk/issues/3312
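Since the question is specifically about reusing an existing security group, a lookup by ID could stand in for the new ec2.SecurityGroup(...) call above (a sketch; the construct ID and the sg-... value are placeholders):
// Import an already-existing security group by its ID and pass it to the EcsTask target
// via `securityGroup: existingSecurityGroup` instead of the freshly created one.
const existingSecurityGroup = ec2.SecurityGroup.fromSecurityGroupId(
  this,
  "ExistingSecurityGroup",
  "sg-0123456789abcdef0" // placeholder security group ID
);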

SSH to Google Compute instance using NodeJS, without gcloud

I'm trying to create an SSH tunnel into a Compute Engine instance from an environment that doesn't have gcloud installed (the App Engine Standard Node.js environment).
What are the steps needed to do that? How does the gcloud compute ssh command do it? Is there a Node.js library that already does this?
I created the package gcloud-ssh-tunnel that does the necessary steps:
Create a private/public key using sshpk
Import the public key using the OS Login API
SSH using ssh2 (and specifically create a tunnel, because this was the use case I needed - see the Why? section in the package)
Delete the public key using the OS Login API (so as not to clutter the account or leave lingering access)
You can use ssh2 to do that in nodejs.
"gcloud compute ssh" generates persistent SSH keys for the user. The public key is stored in project or instance SSH keys metadata, and the Guest Environment creates the necessary local user and places ~/.ssh/authorized_keys in its home directory.
You can manually add your public key to the instance, and then connect to it via SSH using a Node SSH library.
Or you can set a startup script for the instance when you are creating it.
As Cloud Ace pointed out, you can use the ssh2 module for Node.js compatibility.
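A minimal ssh2 connection sketch along those lines (the host, username, and key path are placeholders):
const { Client } = require('ssh2');

const conn = new Client();
conn
  .on('ready', () => {
    console.log('SSH connection established');
    conn.end();
  })
  .connect({
    host: '203.0.113.10',                                   // instance external IP (placeholder)
    username: 'my-posix-user',                              // POSIX username (placeholder)
    privateKey: require('fs').readFileSync('/path/to/key'), // key whose public half is on the instance
  });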
In order to SSH into a GCP instance you have to:
Enable OS Login
Create a service account and assign it the "Compute OS Admin Login" role.
Create an SSH key and import it into the service account.
Use that SSH key and the POSIX username.
The first two steps are covered by the GCP documentation.
Create SSH key:
import {
  generatePrivateKey,
} from 'sshpk';

const keyPair = generatePrivateKey('ecdsa');

const privateKey = keyPair.toString();
const publicKey = keyPair.toPublic().toString();
Import key:
import { OsLoginServiceClient } from '@google-cloud/os-login';

const osLoginServiceClient = new OsLoginServiceClient({
  credentials: googleCredentials,
});

const [result] = await osLoginServiceClient.importSshPublicKey({
  parent: osLoginServiceClient.userPath(googleCredentials.client_email),
  sshPublicKey: {
    expirationTimeUsec: ((Date.now() + 10 * 60 * 1_000) * 1_000).toString(),
    key: publicKey,
  },
});
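The loginProfile used in the next snippets isn't defined in these excerpts; assuming the standard OS Login import response shape, it can be read from the result above:
// The import response carries the updated login profile, which provides the POSIX
// username used in ssh.connect() below and the key list used during cleanup.
const loginProfile = result.loginProfile;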
SSH using the key:
import { NodeSSH } from 'node-ssh';

const ssh = new NodeSSH();

await ssh.connect({
  host,
  privateKey,
  username: loginProfile.posixAccounts[0].username,
});
In this example, I am using node-ssh but you can use anything.
The only other catch is that you need to figure out the public host. The implementation for that looks like this:
import { InstancesClient } from '@google-cloud/compute';

const findFirstPublicIp = async (
  googleCredentials: GoogleCredentials,
  googleZone: string,
  googleProjectId: string,
  instanceName: string,
) => {
  const instancesClient = new InstancesClient({
    credentials: googleCredentials,
  });

  const instances = await instancesClient.get({
    instance: instanceName,
    project: googleProjectId,
    zone: googleZone,
  });

  for (const instance of instances) {
    if (!instance || !('networkInterfaces' in instance) || !instance.networkInterfaces) {
      throw new Error('Unexpected result.');
    }

    for (const networkInterface of instance.networkInterfaces) {
      if (!networkInterface || !('accessConfigs' in networkInterface) || !networkInterface.accessConfigs) {
        throw new Error('Unexpected result.');
      }

      for (const accessConfig of networkInterface.accessConfigs) {
        if (accessConfig.natIP) {
          return accessConfig.natIP;
        }
      }
    }
  }

  throw new Error('Could not locate public instance IP address.');
};
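Usage is then just a matter of resolving the host before the ssh.connect() call shown earlier (same parameter names as above; assumes an async context):
const host = await findFirstPublicIp(googleCredentials, googleZone, googleProjectId, instanceName);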
Finally, to clean up, you have to call deleteSshPublicKey with the name of the key that you've imported:
import crypto from 'crypto';

const fingerprint = crypto
  .createHash('sha256')
  .update(publicKey)
  .digest('hex');

const sshPublicKey = loginProfile.sshPublicKeys?.[fingerprint];

if (!sshPublicKey) {
  throw new Error('Could not locate SSH public key with a matching fingerprint.');
}
await osLoginServiceClient.deleteSshPublicKey({
  name: sshPublicKey.name,
});
In general, you'd need to reserve and assign a static external IP address to begin with (unless you're trying to SSH from within the same network). A firewall rule also needs to be defined for port tcp/22, applied to the instance (via a network tag) whose interface has that external IP assigned.
The other way around works with gcloud app instances ssh:
SSH into the VM of an App Engine Flexible instance
which might be less effort and cost to set up, because a GCP VM usually has gcloud installed.

Out of gas while migrating a contract

I have looked at the other "out of gas" SO posts and they haven't solved my problem.
I am using ganache-cli started with
ganache-cli --account="0xce2ddf7d4509856c2b7256d002c004db6e34eeb19b37cee04f7b493d2b89306d, 2000000000000000000000000000000"
I then execute
truffle migrate --reset
It returns with an error
Error encountered, bailing. Network state unknown. Review successful transactions manually.
Error: VM Exception while processing transaction: out of gas
(Full error is at the end)
These are the files involved:
truffle.js
module.exports = {
  networks: {
    development: {
      host: "localhost",
      port: 8545,
      network_id: "*",
      gas: 470000
    }
  }
};
1_initial_migration.js
var Migrations = artifacts.require("./Migrations.sol");

module.exports = function(deployer) {
  deployer.deploy(Migrations, {gas: 4500000});
};
2_deploy_contracts.js
var Voting = artifacts.require("./Voting.sol");

module.exports = function(deployer) {
  deployer.deploy(Voting, ['Rama', 'Nick', 'Jose'], {gas: 290000});
}
Voting.sol
pragma solidity ^0.4.18;

contract Voting {
  mapping (bytes32 => uint8) public votesReceived;
  bytes32[] public candidateList;

  function Voting(bytes32[] candidateNames) public {
    candidateList = candidateNames;
  }

  function totalVotesFor(bytes32 candidate) view public returns (uint8) {
    require(validCandidate(candidate));
    return votesReceived[candidate];
  }

  function voteForCandidate(bytes32 candidate) public {
    require(validCandidate(candidate));
    votesReceived[candidate] += 1;
  }

  function validCandidate(bytes32 candidate) view public returns (bool) {
    for(uint i = 0; i < candidateList.length; i++) {
      if (candidateList[i] == candidate) {
        return true;
      }
    }
    return false;
  }
}
Full error
Replacing Migrations...
... 0xaf3b7d40ac17f297a4970b75e1cc55659e86dea3ba7bcf13dd9f82e2b6cf0086
Migrations: 0x1ea6ea9d7528a8ac4b378ae799d2c38fe006b9b6
Saving successful migration to network...
... 0xa8400e873da3cb15719c2c31804ec558e73aa9bfa91c4dc48e922c0ed0db736f
Saving artifacts...
Running migration: 2_deploy_contracts.js
Deploying Voting...
... 0x72947eda435cf854abeeeb5483c9625efad45b664f3bcc7c2085f8aabdbb1076
Error encountered, bailing. Network state unknown. Review successful transactions manually.
Error: VM Exception while processing transaction: out of gas
at Object.InvalidResponse (C:\Users\Paul\AppData\Roaming\npm\node_modules\truffle\build\webpack:\~\web3\lib\web3\errors.js:38:1)
at C:\Users\Paul\AppData\Roaming\npm\node_modules\truffle\build\webpack:\~\web3\lib\web3\requestmanager.js:86:1
at C:\Users\Paul\AppData\Roaming\npm\node_modules\truffle\build\webpack:\~\truffle-migrate\index.js:225:1
at C:\Users\Paul\AppData\Roaming\npm\node_modules\truffle\build\webpack:\~\truffle-provider\wrapper.js:134:1
at XMLHttpRequest.request.onreadystatechange (C:\Users\Paul\AppData\Roaming\npm\node_modules\truffle\build\webpack:\~\web3\lib\web3\httpprovider.js:128:1)
at XMLHttpRequestEventTarget.dispatchEvent (C:\Users\Paul\AppData\Roaming\npm\node_modules\truffle\build\webpack:\~\xhr2\lib\xhr2.js:64:1)
at XMLHttpRequest._setReadyState (C:\Users\Paul\AppData\Roaming\npm\node_modules\truffle\build\webpack:\~\xhr2\lib\xhr2.js:354:1)
at XMLHttpRequest._onHttpResponseEnd (C:\Users\Paul\AppData\Roaming\npm\node_modules\truffle\build\webpack:\~\xhr2\lib\xhr2.js:509:1)
at IncomingMessage.<anonymous> (C:\Users\Paul\AppData\Roaming\npm\node_modules\truffle\build\webpack:\~\xhr2\lib\xhr2.js:469:1)
at emitNone (events.js:91:20)
The error message is correct. You're not sending enough gas for contract creation.
When you deploy a contract, gas is consumed for 3 different phases of the deployment:
Intrinsic gas: This is the baseline amount used in any transaction. For all transactions, there is an initial cost of 21,000 gas. For contract creation, there is an additional 32,000. Therefore, before anything is actually done for deployment, you're already in for 53,000 gas.
Constructor execution: This is the gas used for the OPCODES executed by your constructor. I deployed this contract on Rinkeby and you can see all of the OPCODES for constructor execution, and their costs, here. This portion consumed 81,040 in gas.
Contract code storage: Finally, you have the cost of storing your contract code. If you look at gas estimation tools, this is referred to as the "code deposit". This costs 200 gas for every byte of runtime contract code stored. To get the size of your contract code, run solc --optimize Voting.sol --bin-runtime -o . and look at the size of the resulting file. Your contract is 1116 bytes (I'm using solc version 0.4.19, so your size on .18 may be slightly different), which results in 223,200 gas consumed.
In total, this comes out to 357,240 gas, so your 290,000 limit is too low (The actual contract run on Rinkeby consumed 351,640 gas. Again, I believe the small discrepancy is due to slight differences in compiler version output. I'm not 100% sure of this, but the difference is small enough - effectively 28 bytes of contract code - that I didn't dig deeper to find the underlying reason).
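Putting those three figures together as straight arithmetic (numbers taken from this answer):
// Rough deployment-gas estimate for Voting.sol, using the figures quoted above.
const intrinsicGas = 21000 + 32000;  // base tx cost + contract-creation surcharge
const constructorGas = 81040;        // constructor OPCODE execution observed on Rinkeby
const codeDepositGas = 200 * 1116;   // 200 gas per byte of runtime code (1116 bytes)
console.log(intrinsicGas + constructorGas + codeDepositGas); // 357240, above the 290000 limit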
There's a great write up on HackerNoon that goes into detail of each calculation with an example.
If anyone else is getting the error:
Error encountered, bailing. Network state unknown. Review successful transactions manually.
Error: VM Exception while processing transaction: out of gas
Delete the ./build directory and enable the solc optimizer in truffle.js:
module.exports = {
  networks: {
    development: {
      host: "localhost",
      port: 8545, // Using ganache as development network
      network_id: "*",
      gas: 4698712,
      gasPrice: 25000000000
    }
  },
  solc: {
    optimizer: {
      enabled: true,
      runs: 200
    }
  }
};
I had the same problem and now it's fixed.
The runtime "VM out of gas" error can occur in two situations.
1. While using truffle migrate --reset
Make sure to enable the optimizer of the Solidity compiler. Add:
solc: {
  optimizer: {
    enabled: true,
    runs: 200
  }
}
to your truffle.js or truffle-config.js file (on Windows, use truffle-config.js).
Also, under networks -> development, set a sufficiently high gasPrice and gas limit, and make sure they match what ganache-cli is using (if applicable).
2. While sending a transaction or connecting to the contract
This is the most common case: the contract runs fine on Remix but runs out of gas on Truffle. When using web3, the default gas is 90000 and some calls fail with it, so whenever you send a transaction, remember to supply sufficient gas.
Sample
await this.state.instance.methods.sendingTransactionFunction().send({from : this.state.account, gas : 1000000})
For the latest Truffle version, truffle-config.js is slightly different in the compilers section: you need to add a settings attribute for the optimizer. See the following truffle-config.js:
module.exports = {
  // ...the networks settings...
  compilers: {
    solc: {
      version: "0.8.4",
      settings: {
        optimizer: {
          enabled: true, // reduce the size of the contract
          runs: 200,
        },
        evmVersion: "berlin",
      },
    },
  },
}