AWS CLI: Error parsing parameter '--config-rule': Invalid JSON: - json

cat <<EOF > S3ProhibitPublicReadAccess.json
{
"ConfigRuleName": "S3PublicReadProhibited",
"Description": "Checks that your S3 buckets do not allow public read access. If an S3
bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
"Scope": {
"ComplianceResourceTypes": [
"AWS::S3::Bucket"
]
},
"Source": {
"Owner": "AWS",
"SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
}
}
EOF
aws configservice put-config-rule --config-rule file://S3ProhibitPublicReadAccess.json
When I go to upload my config rule after configuring it, I get the error below: Error parsing parameter '--config-rule': Invalid JSON: Invalid control character at: line 3 column 87 (char 132). I first tried this in Windows PowerShell, then tried it on Linux to see if I would get a different result, but I am still getting the same error on both machines.
Error:
Error parsing parameter '--config-rule': Invalid JSON: Invalid control character at: line 3 column 87 (char 132)
JSON received: {
"ConfigRuleName": "S3PublicReadProhibited",
"Description": "Checks that your S3 buckets do not allow public read access. If an S3
bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
"Scope": {
"ComplianceResourceTypes": [
"AWS::S3::Bucket"
]
},
"Source": {
"Owner": "AWS",
"SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
}
}

The answer is right there; this is how I read the error message:
Invalid JSON: Invalid control character at: line 3 column 87 (char 132)
"Invalid control character" - ie characters like new-lines and line-feeds - ie invisible "control" characters.
"line 3 column 87" - tells you where it thinks the error is (this is not always totally accurate, but its normally close to the error). In this case line 3 column 87 is the end of the below line:
"Description": "Checks that your S3 buckets do not allow public read access. If an S3
"char 132" - this is the ASCII code for the character (its the " character btw) which is what it was expecting to find at the end of the line.
So, what does all the mean, basically it was expecting a " and it found a line ending control character instead.
The fix is to put the Description key and value on a single line, so:
"Description": "Checks that your S3 buckets do not allow public read access. If an S3
bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
becomes:
"Description": "Checks that your S3 buckets do not allow public read access. If an S3 bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
I used https://jsonlint.com/ to quickly validate the JSON, and I was able to tweak it and re-validate it until it was correct.
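If you would rather validate locally before calling the AWS CLI, Python's built-in json.tool module (or jq, if you have it installed) does the same job; this is just a convenience check, not part of the original fix:
# Prints the parsed JSON on success, or a parse error with line/column on failure
python3 -m json.tool S3ProhibitPublicReadAccess.json
# or, if jq is available:
jq . S3ProhibitPublicReadAccess.json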

Related

Parse JSON with missing fields using cjson Lua module in Openresty

I am trying to parse a JSON payload sent via a POST request to an NGINX/OpenResty location. To do so, I combined OpenResty's content_by_lua_block with its cjson module like this:
# other locations above
location /test {
content_by_lua_block {
ngx.req.read_body()
local data_string = ngx.req.get_body_data()
local cjson = require "cjson.safe"
local json = cjson.decode(data_string)
local endpoint_name = json['endpoint']['name']
local payload = json['payload']
local source_address = json['source_address']
local submit_date = json['submit_date']
ngx.say('Parsed')
}
}
Parsing sample data containing all required fields works as expected. A correct JSON object could look like this:
{
"payload": "the payload here",
"submit_date": "2018-08-17 16:31:51",
"endpoint": {
"name": "name of the endpoint here"
},
"source_address": "source address here"
}
However, a user might POST a differently formatted JSON object to the location. Assume a simple JSON document like
{
"username": "JohnDoe",
"password": "password123"
}
not containing the desired fields/keys.
According to the cjson module docs, using cjson (without its safe mode) will raise an error if invalid data is encountered. To prevent any errors being raised, I decided to use its safe mode by importing cjson.safe. This should return nil for invalid data and provide the error message instead of raising the error:
The cjson module will throw an error during JSON conversion if any invalid data is encountered. [...]
The cjson.safe module behaves identically to the cjson module, except when errors are encountered during JSON conversion. On error, the cjson_safe.encode and cjson_safe.decode functions will return nil followed by the error message.
However, I do not encounter any different error handling behavior in my case and the following traceback is shown in Openresty's error.log file:
2021/04/30 20:33:16 [error] 6176#6176: *176 lua entry thread aborted: runtime error: content_by_lua(samplesite:50):16: attempt to index field 'endpoint' (a nil value)
Which in turn results in an Internal Server Error:
<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
<hr><center>openresty</center>
</body>
</html>
I think a workaround might be writing a dedicated function for parsing the JSON data and calling it with pcall() to catch any errors. However, this would make the safe mode kind of useless. What am I missing here?
Your “simple JSON document” is a valid JSON document. The error you are facing is not related to cjson; it is a standard Lua error:
resty -e 'local t = {foo = 1}; print(t["foo"]); print(t["foo"]["bar"])'
1
ERROR: (command line -e):1: attempt to index field 'foo' (a number value)
stack traceback:
...
The “safeness” of cjson.safe is about the parsing of malformed documents:
The cjson module raises an error:
resty -e 'print(require("cjson").decode("[1, 2, 3"))'
ERROR: (command line -e):1: Expected comma or array end but found T_END at character 9
stack traceback:
...
cjson.safe returns nil and an error message:
resty -e 'print(require("cjson.safe").decode("[1, 2, 3"))'
nil    Expected comma or array end but found T_END at character 9
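Applied to the original handler, the practical consequence is that you still need to check for nil before indexing nested fields; decode only reports an error for malformed input, not for missing keys. A minimal sketch, assuming the same field names as in the question:
local cjson = require "cjson.safe"
local json, err = cjson.decode(data_string)          -- nil, err only for malformed JSON
if not json or type(json.endpoint) ~= "table" then   -- valid JSON may still lack the keys
    ngx.status = ngx.HTTP_BAD_REQUEST
    ngx.say("missing required fields")
    return ngx.exit(ngx.HTTP_BAD_REQUEST)
end
local endpoint_name = json.endpoint.name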

AWS Batch Job container_properties is invalid: Error decoding JSON: invalid character 'v' looking for beginning of value

I'm using Terraform to create an AWS Batch job definition:
resource "aws_batch_job_definition" "test" {
name = "jobtest"
type = "container"
container_properties =<<CONTAINER_PROPERTIES
{
"image": var.image,
"memory": 512,
"vcpus": 1,
"jobRoleArn": "${aws_iam_role.job_role.arn}"
}
CONTAINER_PROPERTIES
}
When I run terraform I get this error:
AWS Batch Job container_properties is invalid: Error decoding JSON: invalid character 'v' looking for beginning of value
on admin/prd/batch.tf line 1, in resource "aws_batch_job_definition" "test":
1: resource "aws_batch_job_definition" "test" {
I don't know what's wrong here. I couldn't find any answers in the other Stack Overflow questions.
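No accepted fix appears in the thread, but the error message is consistent with the literal text var.image ending up inside the heredoc: a bare var.image is not interpolated there, so the JSON decoder sees a value starting with the character 'v'. A hedged sketch of one way around this (assuming Terraform 0.12 or later) is to build the string with jsonencode() instead of a heredoc:
resource "aws_batch_job_definition" "test" {
  name = "jobtest"
  type = "container"
  # jsonencode() emits valid JSON and resolves the references for us
  container_properties = jsonencode({
    image      = var.image
    memory     = 512
    vcpus      = 1
    jobRoleArn = aws_iam_role.job_role.arn
  })
}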

How can I use control-A character as csvDelimiter in AWS-DMS Target S3 Endpoint?

I am extracting data from Aurora to S3 using AWS DMS and would like to use a csvDelimiter of my choice, namely ^A (i.e. control-A, octal representation \001), while loading data to S3. How do I do that? By default, when S3 is used as a target for DMS, it uses "," as the delimiter:
compressionType=NONE;csvDelimiter=,;csvRowDelimiter=\n;
But I want to use something like the below:
compressionType=NONE;csvDelimiter='\001';csvRowDelimiter=\n;
But it prints the delimiter as literal text in the output:
I'\001'12345'\001'Abc'
I am using the AWS DMS console to set the target endpoint.
I tried the delimiters below, but they did not work:
\\001
\u0001
'\u0001'
\u01
\001
Actual Result:
I'\001'12345'\001'Abc'
Expected Result:
I^A12345^AAbc
Here is what I did to resolve this:
I used the AWS CLI to set this delimiter on my target S3 endpoint.
https://docs.aws.amazon.com/translate/latest/dg/setup-awscli.html
AWS CLI command:
aws dms modify-endpoint --endpoint-arn arn:aws:dms:us-west-2:000001111222:endpoint:OXXXXXXXXXXXXXXXXXXXX4 --endpoint-identifier dms-ep-tgt-s3-abc --endpoint-type target --engine-name s3 --extra-connection-attributes "bucketFolder=data/folderx;bucketname=bkt-xyz;CsvRowDelimiter=^D;CompressionType=NONE;CsvDelimiter=^A;" --service-access-role-arn arn:aws:iam::000001111222:role/XYZ-Datalake-DMS-Role --s3-settings ServiceAccessRoleArn=arn:aws:iam::000001111222:role/XYZ-Datalake-DMS-Role,BucketName=bkt-xyz,CompressionType=NONE
Output:
{
"Endpoint": {
"Status": "active",
"S3Settings": {
"CompressionType": "NONE",
"EnableStatistics": true,
"BucketFolder": "data/folderx",
"CsvRowDelimiter": "\u0004",
"CsvDelimiter": "\u0001",
"ServiceAccessRoleArn": "arn:aws:iam::000001111222:role/XYZ-Datalake-DMS-Role",
"BucketName": "bkt-xyz"
},
"EndpointType": "TARGET",
"ServiceAccessRoleArn": "arn:aws:iam::000001111222:role/XYZ-Datalake-DMS-Role",
"SslMode": "none",
"EndpointArn": "arn:aws:dms:us-west-2:000001111222:endpoint:OXXXXXXXXXXXXXXXXXXXX4",
"ExtraConnectionAttributes": "bucketFolder=data/folderx;bucketname=bkt-xyz;CompressionType=NONE;CsvDelimiter=\u0001;CsvRowDelimiter=\u0004;",
"EngineDisplayName": "Amazon S3",
"EngineName": "s3",
"EndpointIdentifier": "dms-ep-tgt-s3-abc"
}
}
Note: after you run the AWS CLI command, the DMS console will not show the delimiter in the endpoint (it is not visible because it is a special character), but once you run the task it appears in the data in your S3 files.
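The ^A and ^D in the command above are literal control characters, which are awkward to type. A hedged alternative, assuming a POSIX shell, is to generate them with printf (in an interactive shell you can also enter them directly with Ctrl-V Ctrl-A and Ctrl-V Ctrl-D):
# \001 is Ctrl-A (column delimiter), \004 is Ctrl-D (row delimiter)
EXTRA_ATTRS=$(printf 'bucketFolder=data/folderx;bucketname=bkt-xyz;CsvRowDelimiter=\004;CompressionType=NONE;CsvDelimiter=\001;')
# then pass "$EXTRA_ATTRS" to --extra-connection-attributes in the modify-endpoint call above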

Importing JSON file into Firebase error

I keep getting an error when uploading/importing my JSON file into Firebase. I initially had an Excel spreadsheet that I saved as a CSV file, then I used a CSV-to-JSON converter.
I validated the JSON file (which has the .json extension) with a couple of online tools.
Still, I'm getting an error.
Here is an example of my JSON:
{
"Rk": 1,
"Tm": "SEA",
"H/A": "H",
"DOW": "Sun",
"Opp": "CLE",
"QB": "Russell Wilson",
"Grade": "BLUE",
"Def mu pts": 4,
"Inj status": 0,
"Notes": "Got to wonder if not having a proven power RB under center will negatively impact Wilson's production.",
"TFS $50K": "$8,300",
"Init sal": "$8,300",
"Var": "$0",
"WC": 0
}
The issue is your keys.
Firebase keys must be:
UTF-8 encoded, and cannot contain . $ # [ ] / or ASCII control characters 0-31 or 127
Your "TFS $50K" key and the "H/A" key are the issues.
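One way forward is simply to rename the offending keys before importing; the replacement names below are just hypothetical examples:
{
"H_A": "H",
"TFS_50K": "$8,300"
}
The dollar sign and slash only matter in the key name; they are fine inside the value.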

I'm trying to use 'ffprobe' with Java or Groovy

As per my understanding, "ffprobe" provides file-related data in JSON format. I have installed ffprobe on my Ubuntu machine, but I don't know how to access the ffprobe JSON response using Java/Grails.
Expected response format:
{
"format": {
"filename": "/Users/karthick/Documents/videos/TestVideos/sample.ts",
"nb_streams": 2,
"nb_programs": 1,
"format_name": "mpegts",
"format_long_name": "MPEG-TS (MPEG-2 Transport Stream)",
"start_time": "1.430800",
"duration": "170.097489",
"size": "80425836",
"bit_rate": "3782576",
"probe_score": 100
}
}
This is my Groovy code:
def process = "ffprobe -v quiet -print_format json -show_format -show_streams HelloWorld.mpeg ".execute()
println "Found ${process.text}"
render process as JSON
I am able to get the process object, but I am not able to get the JSON response.
Do I need to convert the process object to a JSON object?
OUTPUT:
Found java.lang.UNIXProcess#75566697
org.codehaus.groovy.grails.web.converters.exceptions.ConverterException: Error converting Bean with class java.lang.UNIXProcess
Grails has nothing to do with this. Groovy can execute arbitrary shell commands in a very simplistic way:
"mkdir foo".execute()
Or for more advanced features, you might look into using ProcessBuilder. At the end of the day, you need to execute ffprobe and then capture the output stream of JSON to use in your app.
Groovy provides a simple way to execute command line processes. Simply
write the command line as a string and call the execute() method.
The execute() method returns a java.lang.Process instance.
println "ffprobe <options>".execute().text
[Source]
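Putting that together for the original code, a minimal sketch: capture the process output as text, parse it with Groovy's built-in groovy.json.JsonSlurper, and render the text (the render call assumes this runs inside a Grails controller):
def proc = "ffprobe -v quiet -print_format json -show_format -show_streams HelloWorld.mpeg".execute()
def jsonText = proc.text                                  // blocks until ffprobe exits; stdout as a String
def parsed = new groovy.json.JsonSlurper().parseText(jsonText)
println parsed.format?.duration                           // parsed is a plain Groovy map/list structure
render(contentType: "application/json", text: jsonText)   // return the JSON to the client as-is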