Having problems using google cloud functions - google-cloud-functions

message: "Build failed: *** Error compiling './main.py'...
File "./main.py", line 85
"prompt": prompt,
^
SyntaxError: invalid syntax; Error ID: 49c34848"
if user_request == "longer responses":
    model_config = {
        "engine": "gpt3",
        "model": ["text-davinci-002", "text-davinci-003", "text-curie-001", "text-babbage-001", "text-ada-001",]
        "prompt": prompt,
        "temperature": 0.9, # set the temperature to a high value
        "max_tokens": 300, # set the max_tokens filter to 160 tokens
It doesn't work. I tried adding semicolons; that doesn't work either.

I think your code is incorrect: it is missing the closing } character needed to complete the Python dict, and a comma after the "model" list. I also removed a useless trailing comma inside the array:
if user_request == "longer responses":
    model_config = {
        "engine": "gpt3",
        "model": ["text-davinci-002", "text-davinci-003", "text-curie-001", "text-babbage-001", "text-ada-001"],
        "prompt": prompt,
        "temperature": 0.9, # set the temperature to a high value
        "max_tokens": 300 # allow responses of up to 300 tokens
    }
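As a sanity check, a self-contained version of the corrected snippet runs cleanly once the comma after the "model" list and the closing } are in place (the user_request and prompt values below are placeholders; the original question does not show how they are set):

```python
# Placeholder inputs; the original question does not show how these are set.
user_request = "longer responses"
prompt = "Write a short story."

if user_request == "longer responses":
    model_config = {
        "engine": "gpt3",
        "model": ["text-davinci-002", "text-davinci-003", "text-curie-001",
                  "text-babbage-001", "text-ada-001"],  # comma after the list is required
        "prompt": prompt,
        "temperature": 0.9,  # high temperature for more varied output
        "max_tokens": 300,  # allow responses of up to 300 tokens
    }
    print(model_config["engine"])  # gpt3
```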

Filtering JSON data with equality operator

Given the query
#.records[*].issues[].[type, message]
on the JSON
{
  "records": [
    {
      "id": "db7bb828-60e2-5fa8-048c-06542abd98d2",
      "parentId": "3dc8fd7e-4368-5a92-293e-d53cefc8c4b3",
      "type": "Task",
      "name": "PublishBuildArtifacts",
      "startTime": "2022-09-28T14:06:41.3266667Z",
      "finishTime": "2022-09-28T14:06:41.3266667Z",
      "currentOperation": null,
      "percentComplete": null,
      "state": "completed",
      "result": "skipped",
      "resultCode": "Evaluating: SucceededNode()\r\nResult: False\r\n",
      "changeId": 29,
      "lastModified": "0001-01-01T00:00:00",
      "workerName": "AgentSalam7WithPat",
      "order": 19,
      "details": null,
      "errorCount": 0,
      "warningCount": 0,
      "url": null,
      "log": null,
      "task": {
        "id": "2ff763a7-ce83-4e1f-bc89-0ae63477cebe",
        "name": "PublishBuildArtifacts",
        "version": "1.158.3"
      },
      "attempt": 1,
      "identifier": null
    },
    {
      "id": "d56f7c92-f706-53be-685b-17b89c98baa6",
      "parentId": "3dc8fd7e-4368-5a92-293e-d53cefc8c4b3",
      "type": "Task",
      "name": "SonarQubePublish",
      "startTime": "2022-09-28T14:06:31.7066667Z",
      "finishTime": "2022-09-28T14:06:41.31Z",
      "currentOperation": null,
      "percentComplete": null,
      "state": "completed",
      "result": "failed",
      "resultCode": null,
      "changeId": 31,
      "lastModified": "0001-01-01T00:00:00",
      "workerName": "AgentSalam7WithPat",
      "order": 11,
      "details": null,
      "errorCount": 1,
      "warningCount": 0,
      "url": null,
      "log": {
        "id": 14,
        "type": "Container",
        "url": "https://azuredevops2k19.salam.net/Sierac-Utilities/6f9f1b22-cd2b-4ed4-a2c9-37822128b7c6/_apis/build/builds/201/logs/14"
      },
      "task": {
        "id": "291ed61f-1ee4-45d3-b1b0-bf822d9095ef",
        "name": "SonarQubePublish",
        "version": "5.0.1"
      },
      "attempt": 1,
      "identifier": null,
      "issues": [
        {
          "type": "error",
          "category": "General",
          "message": "[SQ] Task failed with status FAILED, Error message: Fail to extract report AYOEa2gdtfNdJFd6edM9 from database",
          "data": {
            "type": "error",
            "logFileLineNumber": "9"
          }
        },
        {
          "type": "warning",
          "category": "General",
          "message": "Unable to get default branch, defaulting to 'master': Error: enable to verify the first certificate",
          "data": {
            "type": "warning",
            "logFileLineNumber": "10"
          }
        }
      ]
    }
  ]
}
I get the resulting JSON:
[
  [
    "error",
    "[SQ] Task failed with status FAILED, Error message: Fail to extract report AYOEa2gdtfNdJFd6edM9 from database"
  ],
  [
    "warning",
    "Unable to get default branch, defaulting to 'master': Error: enable to verify the first certificate"
  ]
]
Now I need to add a filter like [type = error], so I only get the messages of type error.
How can this be achieved? In the documentation, this is not very clear to me.
Filter expressions and multiselect lists do need a question mark inside the bracket notation – [?this > `that`] – and the equality test is a double equal sign – ==.
So your query should be:
#.records[*].issues[?type == `error`].[type, message]
Which gives the resulting JSON:
[
  [
    [
      "error",
      "[SQ] Task failed with status FAILED, Error message: Fail to extract report AYOEa2gdtfNdJFd6edM9 from database"
    ]
  ]
]
Should you need to flatten the multiple arrays of arrays, you can use the flatten operator, and with the query:
#.records[*].issues[?type == `error`].[type, message][][]
You will, then, end up with this resulting JSON:
[
  "error",
  "[SQ] Task failed with status FAILED, Error message: Fail to extract report AYOEa2gdtfNdJFd6edM9 from database"
]
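As a cross-check without a JMESPath-capable tool, the effect of the [?type == `error`] filter plus flattening can be sketched in plain Python (using a trimmed-down copy of the timeline JSON above; the stdlib json module stands in for the real query engine):

```python
import json

# A trimmed-down version of the timeline JSON above.
data = json.loads("""
{
  "records": [
    {"name": "PublishBuildArtifacts"},
    {"name": "SonarQubePublish",
     "issues": [
       {"type": "error", "message": "[SQ] Task failed with status FAILED"},
       {"type": "warning", "message": "Unable to get default branch"}
     ]}
  ]
}
""")

# Equivalent of records[*].issues[?type == `error`].[type, message], flattened
# one level: keep only issues whose type is "error".
result = [
    [issue["type"], issue["message"]]
    for record in data["records"]
    for issue in record.get("issues", [])  # records without issues yield nothing
    if issue["type"] == "error"
]
print(result)  # [['error', '[SQ] Task failed with status FAILED']]
```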

AWS CLI: Error parsing parameter '--config-rule': Invalid JSON:

cat <<EOF > S3ProhibitPublicReadAccess.json
{
  "ConfigRuleName": "S3PublicReadProhibited",
  "Description": "Checks that your S3 buckets do not allow public read access. If an S3
bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
  "Scope": {
    "ComplianceResourceTypes": [
      "AWS::S3::Bucket"
    ]
  },
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
  }
}
EOF
aws configservice put-config-rule --config-rule file://S3ProhibitPublicReadAccess.json
When I go to upload my config rule after configuring it, it gives me the error below: Error parsing parameter '--config-rule': Invalid JSON: Invalid control character at: line 3 column 87 (char 132). I first tried this in Windows PowerShell, then tried on Linux to see if I would get a different result, but I am still getting the same error on both machines.
Error:
Error parsing parameter '--config-rule': Invalid JSON: Invalid control character at: line 3 column 87 (char 132)
JSON received: {
  "ConfigRuleName": "S3PublicReadProhibited",
  "Description": "Checks that your S3 buckets do not allow public read access. If an S3
bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
  "Scope": {
    "ComplianceResourceTypes": [
      "AWS::S3::Bucket"
    ]
  },
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
  }
}
The answer is right there; this is how I read the error message:
Invalid JSON: Invalid control character at: line 3 column 87 (char 132)
"Invalid control character" - i.e. characters like newlines and line feeds - invisible "control" characters.
"line 3 column 87" - tells you where the parser thinks the error is (this is not always totally accurate, but it is normally close). In this case line 3 column 87 is the end of the line below:
"Description": "Checks that your S3 buckets do not allow public read access. If an S3
"(char 132)" - the same position expressed as a zero-based character offset from the start of the whole document (it is not an ASCII character code).
So, what does all this mean? Basically, the parser was reading a string value, expected it to continue or be closed with a ", and instead found a line-ending control character.
The fix is to make the description key and value into a single line, so:
"Description": "Checks that your S3 buckets do not allow public read access. If an S3
bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
becomes:
"Description": "Checks that your S3 buckets do not allow public read access. If an S3 bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
I used https://jsonlint.com/ to quickly validate the JSON, and I was able to tweak it and re-validate it until it was correct.
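The failure is easy to reproduce with any strict JSON parser. As an illustration (using Python's json module, not the AWS CLI itself), a raw newline inside a string value is rejected, while the same content on a single line parses fine:

```python
import json

# A string value broken across two lines: the raw newline is a control character.
bad = '{"Description": "If an S3\nbucket policy allows public read access."}'
try:
    json.loads(bad)
    parsed = True
except json.JSONDecodeError as err:
    parsed = False
    print(err)  # the reported position points at the raw newline inside the string

# The same content on a single line parses fine.
good = '{"Description": "If an S3 bucket policy allows public read access."}'
print(json.loads(good)["Description"])
```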

Converting Packer 1.6 vsphere-iso configuration code from JSON to HCL2

With the release of Packer 1.6 came several deprecated fields in the vsphere-iso builder. From the looks of it, this seems to be a format/type change, because the fields still exist, just nested as properties of new repeatable blocks. An example of the changes is the following:
Working in Packer 1.5.6:
JSON
"disk_size": 123456,
"disk_thin_provisioned": true,
"network": "VM Network",
"network_card": "vmxnet3"
Working in Packer 1.6.0:
JSON
"storage": [
  {
    "disk_size": 123456,
    "disk_thin_provisioned": true
  }
],
"network_adapters": [
  {
    "network": "VM Network",
    "network_card": "vmxnet3"
  }
]
The issue I have at the moment is I'm using Packer 1.6.0 and am trying to convert the above working JSON code to HCL2. I can't figure out the HCL2 syntax that supports the changes that were made in Packer 1.6.0.
I've tried the following:
network_adapters = {
  network_card = "vmxnet3"
  network = "VM Network"
}
Output:
An argument named "network_adapter" is not expected here.
network_adapters = (
  network_card = "vmxnet3"
  network = "VM Network"
)
Output:
Error: Unbalanced parentheses
on .\Packer\ConfigFileName.pkr.hcl line 19, in source "vsphere-iso" "Test":
  18: storage = (
  19:   disk_thin_provisioned = true
Expected a closing parenthesis to terminate the expression.
network_adapters = [
  network_card = "vmxnet3",
  network = "VM Network"
]
Output:
Error: Missing item separator
on .\Packer\ConfigFileName.pkr.hcl line 19, in source "vsphere-iso" "Test":
  18: storage = [
  19:   disk_thin_provisioned = true,
Expected a comma to mark the beginning of the next item.
I've also tried several other permutations of different collection syntax together with no luck so far. Any suggestions or tips would greatly be appreciated
The correct syntax is the following:
network_adapters {
  network_card = "vmxnet3"
  network = "VM Network"
}
Note that there is no assignment operator = between network_adapters and {, and that arguments inside an HCL2 block body are separated by newlines, not commas.
Credit goes to SwampDragons over on the Packer forums for pointing this out.
If you're interested in knowing why: There was a change to how maps are treated in HCL2 back in May 2020 with the release of Packer 1.5.6
core/hcl2: Maps are now treated as settable arguments as opposed to blocks. For example tags = {} instead of tags {} [GH-9035]
Reference: https://github.com/hashicorp/packer/blob/master/CHANGELOG.md#156-may-1-2020
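Putting the two repeatable blocks together, the 1.6.0 JSON above could be expressed in HCL2 roughly as follows (a sketch: the source label "example" is a placeholder, and required fields such as the ISO, guest OS, and connection settings are omitted):

```hcl
source "vsphere-iso" "example" {
  # Repeatable block syntax: no "=" after the block name and no commas
  # between the arguments inside it.
  storage {
    disk_size             = 123456
    disk_thin_provisioned = true
  }

  network_adapters {
    network      = "VM Network"
    network_card = "vmxnet3"
  }
}
```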

AWS Batch Job container_properties is invalid: Error decoding JSON: invalid character 'v' looking for beginning of value

I'm using terraform to create aws batch job definition:
resource "aws_batch_job_definition" "test" {
  name = "jobtest"
  type = "container"
  container_properties = <<CONTAINER_PROPERTIES
{
  "image": var.image,
  "memory": 512,
  "vcpus": 1,
  "jobRoleArn": "${aws_iam_role.job_role.arn}"
}
CONTAINER_PROPERTIES
}
When I run terraform I get this error:
AWS Batch Job container_properties is invalid: Error decoding JSON: invalid character 'v' looking for beginning of value
on admin/prd/batch.tf line 1, in resource "aws_batch_job_definition" "test":
1: resource "aws_batch_job_definition" "test" {
I don't know what's wrong here. I couldn't find any answers in the other StackOverflow questions.
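No answer was recorded for this question, but the error message itself hints at the likely cause: inside the heredoc only ${...} sequences are interpolated, so the bare var.image is passed through literally, and the rendered JSON really does start a value with the character v. A sketch of a possible fix, assuming the uninterpolated reference is the only problem, is to interpolate and quote it the same way as the role ARN:

```hcl
resource "aws_batch_job_definition" "test" {
  name = "jobtest"
  type = "container"
  container_properties = <<CONTAINER_PROPERTIES
{
  "image": "${var.image}",
  "memory": 512,
  "vcpus": 1,
  "jobRoleArn": "${aws_iam_role.job_role.arn}"
}
CONTAINER_PROPERTIES
}
```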

Importing JSON file into Firebase error

I keep getting an error when uploading/importing my JSON file into Firebase. I initially had an Excel spreadsheet that I saved as a CSV file, then I used a CSV to JSON converter.
I validated the JSON file (which has the .json extension) with a couple of online tools.
Though, I'm still getting an error.
Here is an example of my JSON:
{
  "Rk": 1,
  "Tm": "SEA",
  "H/A": "H",
  "DOW": "Sun",
  "Opp": "CLE",
  "QB": "Russell Wilson",
  "Grade": "BLUE",
  "Def mu pts": 4,
  "Inj status": 0,
  "Notes": "Got to wonder if not having a proven power RB under center will negatively impact Wilson's production.",
  "TFS $50K": "$8,300",
  "Init sal": "$8,300",
  "Var": "$0",
  "WC": 0
}
The issue is your keys.
Firebase keys must be UTF-8 encoded, and cannot contain . $ # [ ] / or ASCII control characters 0-31 or 127.
Your "TFS $50K" key (it contains $) and your "H/A" key (it contains /) are the problems.
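One possible workaround (a sketch, not an official Firebase utility) is to sanitize the keys before importing, replacing the forbidden characters with something harmless such as an underscore:

```python
import re

# Characters Firebase forbids in keys: . $ # [ ] / and ASCII control characters.
FORBIDDEN = re.compile(r'[.$#\[\]/\x00-\x1f\x7f]')

def sanitize_key(key: str) -> str:
    """Replace characters Firebase does not allow in keys with underscores."""
    return FORBIDDEN.sub("_", key)

# Values are untouched; only keys are restricted by Firebase.
record = {"TFS $50K": "$8,300", "H/A": "H", "Rk": 1}
clean = {sanitize_key(k): v for k, v in record.items()}
print(clean)  # {'TFS _50K': '$8,300', 'H_A': 'H', 'Rk': 1}
```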