I'm modifying the fabcar chaincode from Hyperledger Fabric and have written some functions. When I invoked one, I got the error below (the command shown is from a shell script):
$ peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile $ORDERER_CA -C $CHANNEL_NAME -n cloud $PEER_CONN_PARMS --isInit -c '{"function":"uploadData","Args":["DATA1","ID12345","/home/samplefile___pdf","3"]}'
Error: endorsement failure during invoke. response: status:500 message:"error in simulation: transaction returned with failure: Function uploadData not found in contract SmartContract"
Below is the chaincode (abridged):
type SmartContract struct {
    contractapi.Contract
}

type Data struct {
    Owner           string `json:"owner"`
    File            string `json:"file"`
    FileChunkNumber string `json:"filechunknumber"`
    SHA256          string `json:"sha256"`
}

// Uploads new data to the world state with given details
func (s *SmartContract) uploadData(ctx contractapi.TransactionContextInterface, args []string) error {
    /*...*/
}
I don't understand where I need to make the changes.
I assume that you updated the chaincode version number or chaincode name during installation and instantiation (Fabric 1.4.6).
Have you tried the pre-existing functions of the chaincode? Do they work with your invoke command?
If not, please try this invoke command:
peer chaincode invoke -o orderer.example.com:7050 -C $CHANNEL_NAME -n cloud $PEER_CONN_PARMS -c '{"Args":["uploadData","DATA1","ID12345","/home/samplefile___pdf","3"]}'
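You can also confirm what is actually installed and instantiated before invoking. A quick check (standard Fabric 1.4 commands; run them from wherever your peer CLI environment variables are set):

peer chaincode list --installed
peer chaincode list --instantiated -C $CHANNEL_NAME

If the name or version shown there doesn't match what you expect, that mismatch is the first thing to fix.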
I faced a similar problem before; there are two possible causes:
1. Fabric might be using the old chaincode Docker image, so try deleting that image and re-creating it with the updated chaincode (see the cleanup sketch below).
2. There might be some problem in the body of your uploadData function (a syntactic or logical error) which you'll have to debug.
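For the first cause, a minimal cleanup sketch (the dev-* names assume Fabric's default chaincode image naming; adjust for your peer and chaincode names):

# remove any running chaincode containers, then the cached images
docker rm -f $(docker ps -aq --filter "name=dev-peer")
docker rmi -f $(docker images -q "dev-*")

After this, reinstall and re-instantiate the chaincode so the peer rebuilds the image from the updated source.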
Hope that helps!
I built the input file (decoded a base64 string into a .p12 file) as CERTIFICATE_PATH; P12_PASSWORD is a password stored in a secret, and KEYCHAIN_PATH is defined. When I run the command on the CLI, I get a "1 item imported" success message, but when I run it from a *.yml file in a GitHub Action, I get the error "security: SecKeychainItemImport: One or more parameters passed to a function were not valid." Any suggestions?
security import $CERTIFICATE_PATH -P $P12_PASSWORD -A -t cert -f pkcs12 -k $KEYCHAIN_PATH
CERTIFICATE_PATH is a file that contains the cert.p12 data;
KEYCHAIN_PATH is TEMP/app-signing.keychain-db
Another possible cause in GitHub Actions is that you are using the wrong environment.
Take a look at this: Difference between Github's "Environment" and "Repository" secrets?
Set the right environment:
environment: production
Found the issue: I was passing the wrong cert file. Once I added the correct file to the security import step, it worked.
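If you hit the same error, one way to catch a bad cert early is to validate the decoded .p12 inside the workflow before calling security import (a sketch; CERT_BASE64 is a hypothetical secret name, the other variables match the question):

# decode the secret into a .p12 file
echo -n "$CERT_BASE64" | base64 --decode > "$CERTIFICATE_PATH"

# sanity check: exits non-zero if the file is not valid PKCS#12
# or the password is wrong
openssl pkcs12 -in "$CERTIFICATE_PATH" -noout -passin "pass:$P12_PASSWORD"

security import "$CERTIFICATE_PATH" -P "$P12_PASSWORD" -A -t cert -f pkcs12 -k "$KEYCHAIN_PATH"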
In https://packer.io/guides/hcl/from-json-v1/, it says
Note: Starting from version 1.5.0 Packer can read HCL2 files.
And my Packer is packer_1.5.5_linux_amd64.zip, which is supposed to be able to read HCL2 files. However, when I tried it, I got:
$ packer build -only=docker hcl-example
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
==> Builds finished but no artifacts were created.
$ packer build -h
Usage: packer build [options] TEMPLATE
Will execute multiple builds in parallel as defined in the template.
The various artifacts created by the template will be outputted.
Options:
-color=false Disable color output. (Default: color)
-debug Debug mode enabled for builds.
-except=foo,bar,baz Run all builds and post-procesors other than these.
-only=foo,bar,baz Build only the specified builds.
-force Force a build to continue if artifacts exist, deletes existing artifacts.
-machine-readable Produce machine-readable output.
-on-error=[cleanup|abort|ask] If the build fails do: clean up (default), abort, or ask.
-parallel=false Disable parallelization. (Default: true)
-parallel-builds=1 Number of builds to run in parallel. 0 means no limit (Default: 0)
-timestamp-ui Enable prefixing of each ui output with an RFC3339 timestamp.
-var 'key=value' Variable for templates, can be used multiple times.
-var-file=path JSON file containing user variables. [ Note that even in HCL mode this expects file to contain JSON, a fix is comming soon ]
and I don't see any switch above to turn on HCL2 mode.
What am I missing here?
$ packer version
Packer v1.5.5
$ cat hcl-example
# the source block is what was defined in the builders section and represents a
# reusable way to start a machine. You build your images from that source.
source "amazon-ebs" "example" {
  ami_name      = "packer-test"
  region        = "us-east-1"
  instance_type = "t2.micro"
}
[UPDATE:]
To address Matt's comment/concern, I've changed the content of hcl-example to the full example in https://packer.io/guides/hcl/from-json-v1/, and
mv hcl-example hcl-example.hcl
$ packer validate hcl-example.hcl
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
Naming it with the .pkr.hcl extension solved the problem.
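In other words, Packer chooses between the JSON and HCL2 parsers based on the file extension, so the template has to end in .pkr.hcl (a sketch reusing the file names from the question):

mv hcl-example.hcl hcl-example.pkr.hcl
packer validate hcl-example.pkr.hcl
packer build hcl-example.pkr.hcl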
I am getting an error when I try to invoke a Lambda function from the AWS CLI. I am using version 2 of the CLI.
I understand that I should pass the --payload argument as a string containing a JSON object.
aws lambda invoke --function-name testsms --invocation-type Event --payload '{"key": "test"}' response.json
I get the following error:
Invalid base64: "{"key": "test"}"
I have tried all sorts of variants of JSON escaping, etc. I have also tried using the file://test.json option, but I receive the same error.
As @MCI said, AWS CLI v2 defaults to base64 input. For your case to work, simply add a --cli-binary-format raw-in-base64-out parameter to your command, so it'd be:
aws lambda invoke --function-name testsms \
--invocation-type Event \
--cli-binary-format raw-in-base64-out \
--payload '{"key": "test"}' response.json
Looks like awscli v2 requires some parameters to be base64-encoded.
By default, the AWS CLI version 2 now passes all binary input and binary output parameters as base64-encoded strings. A parameter that requires binary input has its type specified as blob (binary large object) in the documentation.
The payload parameter to lambda invoke is one of these blob types that must be base64-encoded.
--payload (blob)
The JSON that you want to provide to your Lambda function as input.
One solution is to use openssl base64 to encode your payload.
echo '{"key": "test"}' > clear_payload
openssl base64 -out encoded_payload -in clear_payload
aws lambda invoke --function-name testsms --invocation-type Event --payload file://~/encoded_payload response.json
Firstly, a string is valid JSON.
In my case I had this problem
$ aws --profile diegosasw lambda invoke --function-name lambda-dotnet-function --payload "Just Checking If Everything is OK" out
An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Could not parse payload into json: Unrecognized token 'Just': was expecting ('true', 'false' or 'null')
at [Source: (byte[])"Just Checking If Everything is OK"; line: 1, column: 6]
and it turns out the problem was that the AWS CLI tried to parse the payload as JSON. Escaping the double quotes (making the payload a JSON string) did the trick:
$ aws --profile diegosasw lambda invoke --function-name lambda-dotnet-function --payload "\"Just Checking If Everything is OK\"" out
{
"StatusCode": 200,
"ExecutedVersion": "$LATEST"
}
On Windows, I tried the following, which worked for me:
aws lambda invoke --function-name testsms --invocation-type Event --cli-binary-format raw-in-base64-out --payload {\"key\": \"test\"} response.json
Note that I added --cli-binary-format raw-in-base64-out to the command and escaped " as \" in the payload.
This solution worked for me and I find it simpler than having to remember/check the man page for the correct flags each time.
aws lambda invoke --function-name my_func --payload $(echo "{\"foo\":\"bar\"}" | base64) out
On my Windows PowerShell running LocalStack, I had to use:
--payload '{\"key\": \"test\"}' response.json
After running my protractor tests I may be left with chromedriver.exe running. The simple question is: how do I kill it? There are several things to note here:
I cannot just kill based on process name since several other chromedrivers may be running and may be needed by other tests.
I already stop the selenium server using "curl http://localhost:4444/selenium-server/driver/?cmd=shutDownSeleniumServer"
I noticed that the chromedriver is listening on port 33107 (is it possible to specify this port somehow?), but I do not know how I should call it to quit.
Probably I should be using driver.quit() in my tests, but on some occasions it might not get called (e.g. when the build is cancelled).
Any ideas on how to kill the proper chromedriver process from the command line (e.g. using curl)?
The proper way to do it is, as you mentioned, by using driver.quit() in your tests.
Actually, to be exact, in your test cleanup method, since you want a fresh browser instance every time.
Now, the problem with some unit test frameworks (like MSTest, for example) is that if your test initialize method fails, the test cleanup one will not be called.
As a workaround, you can wrap your test initialize in a try-catch statement, with the catch block calling your test cleanup:
public void TestInitialize()
{
    try
    {
        // your test initialize statements
    }
    catch
    {
        TestCleanup();
        // throw exception or log the error message or whatever else you need
    }
}

public void TestCleanup()
{
    driver.Quit();
}
EDIT:
For the case when the build is cancelled, you can create a method that kills all open instances of the Chrome browser and ChromeDriver, and run it before you start a new suite of tests.
E.g. if your Unit Testing Framework used has something similar to Class Initialize or Assembly Initialize you can do it there.
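For example, a minimal kill-everything sketch (assuming Windows, since the stray process is chromedriver.exe; note this kills every ChromeDriver and Chrome instance, which the question wanted to avoid mid-run, so reserve it for suite start-up):

taskkill /F /T /IM chromedriver.exe
taskkill /F /T /IM chrome.exe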
However, on a different post I found this approach:
PORT_NUMBER=1234
lsof -i tcp:${PORT_NUMBER} | awk 'NR!=1 {print $2}' | xargs kill
Breakdown of command
(lsof -i tcp:${PORT_NUMBER}) -- list all processes that is listening on that tcp port
(awk 'NR!=1 {print $2}') -- ignore first line, print second column of each line
(xargs kill) -- pass on the results as an argument to kill. There may be several.
Here, to be more exact: How to find processes based on port and kill them all?
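Regarding the question of pinning the port: if you launch ChromeDriver yourself, you can fix the port with its --port flag, which makes the kill-by-port approach deterministic (a sketch; 9515 is just a conventional choice, any free port works):

# start chromedriver on a known port
chromedriver --port=9515 &

# later, kill exactly the instance bound to that port
lsof -i tcp:9515 | awk 'NR!=1 {print $2}' | xargs kill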
Is there a way to capture the JSON objects from the Azure NodeJS CLI from within a NodeJS script? I could do something like exec( 'azure vm list' ) and write a promise to process the deferred stdout result, or I could hijack the process.stream.write method, but looking at the CLI code, which is quite extensive, I thought there might be a way to pass a callback to the cli function or some other option that might directly return the JSON result. I see you are using the winston logger module -- I might be familiar with this, but perhaps there is a hook there that could be used.
azure vm list does have a --json option:
C:\>azure vm list -h
help: List Azure VMs
help:
help: Usage: vm list [options]
help:
help: Options:
help: -h, --help output usage information
help: -s, --subscription <id> use the subscription id
help: -d, --dns-name <name> only show VMs for this DNS name
help: -v, --verbose use verbose output
help: --json use json output
You can get the JSON result in the callback of an exec(...) call. Would this work for you?
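At the shell level, the same idea looks like this (a sketch; the output file name is arbitrary):

# dump the VM list as JSON so a script can pick it up and parse it
azure vm list --json > vms.json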
Yes you can; check this gist: https://gist.github.com/4415326 and you'll see how to do it without exec. You basically override the logger hanging off the CLI.
As a side note, I am about to publish a new module, azure-cli-buddy, that will make it easy to call the CLI using this technique and to receive results in JSON.