I have the following dataset that I'd like to export to CSV:
Dataset:
{
"data": {
"activeFindings": {
"findings": [
{
"findingId": "risk#80703",
"accountId": "00000000-000000-0000000-000000",
"products": [
"GWSERVER01"
],
"findingDisplayName": "risk#80703",
"severity": "CRITICAL",
"findingDescription": "PSOD with re-formatting a valid dedup metadata block.",
"findingImpact": "Potential ESXi host crash",
"recommendations": [
"This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523)",
"This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804)"
],
"kbLinkURLs": [
"https://kb.vmware.com/s/article/80703"
],
"recommendationsVCF": [
"This issue is resolved with VMware Cloud Foundation 4.1"
],
"kbLinkURLsVCF": [
"https://kb.vmware.com/s/article/80703"
],
"categoryName": "Storage",
"findingTypes": [
"UPGRADE"
],
"firstObserved": 1629806351877,
"totalAffectedObjectsCount": 12,
"affectedObjects": [
{
"sourceName": "GWSERVER01.corp.contoso.org",
"objectName": "server01.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17499825",
"solutionTags": [],
"firstObserved": 1629806351877
},
{
"sourceName": "GWSERVER01.corp.contoso.org",
"objectName": "server02.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17499825",
"solutionTags": [],
"firstObserved": 1629806351877
},
{
"sourceName": "GWSERVER01.corp.contoso.org",
"objectName": "server03.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17499825",
"solutionTags": [],
"firstObserved": 1629806351877
},
{
"sourceName": "GWSERVER01.corp.contoso.org",
"objectName": "server04.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17499825",
"solutionTags": [],
"firstObserved": 1629806351877
},
{
"sourceName": "GWSERVER01.corp.contoso.org",
"objectName": "server05.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17499825",
"solutionTags": [],
"firstObserved": 1629806351877
},
{
"sourceName": "GWSERVER01.corp.contoso.org",
"objectName": "server06.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17499825",
"solutionTags": [],
"firstObserved": 1629806351877
},
{
"sourceName": "GWSERVER01.corp.contoso.org",
"objectName": "server07.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17499825",
"solutionTags": [],
"firstObserved": 1629806351877
}
]
}
],
"totalRecords": 1,
"timeTaken": 56
}
}
}
{
"data": {
"activeFindings": {
"findings": [
{
"findingId": "risk#80703",
"accountId": "00000000-000000-0000000-000000",
"products": [
"GWSERVER02.corp.contoso.org"
],
"findingDisplayName": "risk#80703",
"severity": "CRITICAL",
"findingDescription": "PSOD with re-formatting a valid dedup metadata block.",
"findingImpact": "Potential ESXi host crash",
"recommendations": [
"This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523)",
"This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804)"
],
"kbLinkURLs": [
"https://kb.vmware.com/s/article/80703"
],
"recommendationsVCF": [
"This issue is resolved with VMware Cloud Foundation 4.1"
],
"kbLinkURLsVCF": [
"https://kb.vmware.com/s/article/80703"
],
"categoryName": "Storage",
"findingTypes": [
"UPGRADE"
],
"firstObserved": 1635968448112,
"totalAffectedObjectsCount": 2,
"affectedObjects": [
{
"sourceName": "GWSERVER02.corp.contoso.org",
"objectName": "server10.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17167734",
"solutionTags": [],
"firstObserved": 1635968448112
},
{
"sourceName": "GWSERVER02.corp.contoso.org",
"objectName": "server11.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17167734",
"solutionTags": [],
"firstObserved": 1635968448112
}
]
}
],
"totalRecords": 1,
"timeTaken": 51
}
}
}
And the header would be as follows:
"Finding Id","Issue Description","Risk if no Action Taken","Severity","Recommendations","Source Name","Object Name","Object Type","Host Version","Build","First Observed","Reference"
The header-to-key mapping is as follows:
Finding Id = findingId
Issue Description = findingDescription
Risk if no Action Taken = findingImpact
Severity = severity
Recommendations = recommendations
Source Name = sourceName
Object Name = objectName
Object Type = objectType
Host Version = version
Build = buildNumber
First Observed = firstObserved
Reference = kbLinkURLs
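For reference, the flattening described by this mapping can be sketched in plain Python (standard library only). This is a rough sketch, not a definitive implementation: it assumes each API response has been saved as its own JSON file, and emits one CSV row per affected object:

```python
import csv
import json
import sys

HEADER = ["Finding Id", "Issue Description", "Risk if no Action Taken",
          "Severity", "Recommendations", "Source Name", "Object Name",
          "Object Type", "Host Version", "Build", "First Observed", "Reference"]

def rows(doc):
    # one row per affected object, repeating the finding-level fields
    for f in doc["data"]["activeFindings"]["findings"]:
        for obj in f["affectedObjects"]:
            yield [f["findingId"], f["findingDescription"], f["findingImpact"],
                   f["severity"], ";".join(f["recommendations"]),
                   obj["sourceName"], obj["objectName"], obj["objectType"],
                   obj["version"], obj["buildNumber"], obj["firstObserved"],
                   ";".join(f["kbLinkURLs"])]

def export(docs, out):
    w = csv.writer(out, quoting=csv.QUOTE_ALL)
    w.writerow(HEADER)
    for doc in docs:
        w.writerows(rows(doc))

if __name__ == "__main__":
    # hypothetical usage: python export.py response1.json response2.json > findings.csv
    export([json.load(open(p)) for p in sys.argv[1:]], sys.stdout)
```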
Unfortunately, we have to perform one API call per finding and product (i.e. we're not able to pull all the findings for all products at once; the API does not allow such a query, so we have to make several calls to get all the findings with their associated objects).
With that said, what would be the preferred approach to export the data to CSV? Would using jq's @csv filter work, even though we would have to loop through several nodes?
Any help/guidance would be appreciated.
Thanks!
Note 1:
A stripped version of the dataset, as requested by chepner:
{
"data": {
"activeFindings": {
"findings": [
{
"findingId": "risk#80703",
"accountId": "00000000-000000-0000000-000000",
"products": [
"GWSERVER01"
],
"findingDisplayName": "risk#80703",
"severity": "CRITICAL",
"findingDescription": "PSOD with re-formatting a valid dedup metadata block.",
"findingImpact": "Potential ESXi host crash",
"recommendations": [
"This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523)",
"This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804)"
],
"kbLinkURLs": [
"https://kb.vmware.com/s/article/80703"
],
"recommendationsVCF": [
"This issue is resolved with VMware Cloud Foundation 4.1"
],
"kbLinkURLsVCF": [
"https://kb.vmware.com/s/article/80703"
],
"categoryName": "Storage",
"findingTypes": [
"UPGRADE"
],
"firstObserved": 1629806351877,
"totalAffectedObjectsCount": 12,
"affectedObjects": [
{
"sourceName": "GWSERVER01.corp.contoso.org",
"objectName": "server01.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17499825",
"solutionTags": [],
"firstObserved": 1629806351877
},
{
"sourceName": "GWSERVER01.corp.contoso.org",
"objectName": "server02.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17499825",
"solutionTags": [],
"firstObserved": 1629806351877
}
]
}
],
"totalRecords": 1,
"timeTaken": 56
}
}
}
{
"data": {
"activeFindings": {
"findings": [
{
"findingId": "risk#80703",
"accountId": "00000000-000000-0000000-000000",
"products": [
"GWSERVER02.corp.contoso.org"
],
"findingDisplayName": "risk#80703",
"severity": "CRITICAL",
"findingDescription": "PSOD with re-formatting a valid dedup metadata block.",
"findingImpact": "Potential ESXi host crash",
"recommendations": [
"This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523)",
"This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804)"
],
"kbLinkURLs": [
"https://kb.vmware.com/s/article/80703"
],
"recommendationsVCF": [
"This issue is resolved with VMware Cloud Foundation 4.1"
],
"kbLinkURLsVCF": [
"https://kb.vmware.com/s/article/80703"
],
"categoryName": "Storage",
"findingTypes": [
"UPGRADE"
],
"firstObserved": 1635968448112,
"totalAffectedObjectsCount": 2,
"affectedObjects": [
{
"sourceName": "GWSERVER02.corp.contoso.org",
"objectName": "server10.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17167734",
"solutionTags": [],
"firstObserved": 1635968448112
},
{
"sourceName": "GWSERVER02.corp.contoso.org",
"objectName": "server11.corp.contoso.org",
"objectType": "ESX",
"version": "6.7.0",
"buildNumber": "17167734",
"solutionTags": [],
"firstObserved": 1635968448112
}
]
}
],
"totalRecords": 1,
"timeTaken": 51
}
}
}
And the resulting CSV file:
"Finding Id","Issue Description","Risk if no Action Taken","Severity","Recommendations","Source Name","Object Name","Object Type","Host Version","Build","First Observed","Reference"
"risk#80703","PSOD with re-formatting a valid dedup metadata block.","Potential ESXi host crash","CRITICAL","This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804);This issue is resolved with VMware Cloud Foundation 4.1","GWSERVER01.corp.contoso.org","server01.corp.contoso.org","HostSystem","6.7.0","17499825","1629806351877","https://kb.vmware.com/s/article/80703"
"risk#80703","PSOD with re-formatting a valid dedup metadata block.","Potential ESXi host crash","CRITICAL","This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804);This issue is resolved with VMware Cloud Foundation 4.1","GWSERVER01.corp.contoso.org","server02.corp.contoso.org","HostSystem","6.7.0","17499825","1629806351877","https://kb.vmware.com/s/article/80703"
"risk#80703","PSOD with re-formatting a valid dedup metadata block.","Potential ESXi host crash","CRITICAL","This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804);This issue is resolved with VMware Cloud Foundation 4.1","GWSERVER02.corp.contoso.org","server10.corp.contoso.org","HostSystem","6.7.0","17167734","1635968448112","https://kb.vmware.com/s/article/80703"
"risk#80703","PSOD with re-formatting a valid dedup metadata block.","Potential ESXi host crash","CRITICAL","This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804);This issue is resolved with VMware Cloud Foundation 4.1","GWSERVER02.corp.contoso.org","server11.corp.contoso.org","HostSystem","6.7.0","17167734","1635968448112","https://kb.vmware.com/s/article/80703"
I would combine jq with spyql, here's how:
jq -c '.data.activeFindings.findings[]' full_sample.json | spyql "
SELECT
json->findingId AS 'Finding Id',
json->findingDescription AS 'Issue Description',
json->findingImpact AS 'Risk if no Action Taken',
json->severity AS Severity,
';'.join(json->recommendations) AS Recommendations,
json->affectedObjects->sourceName AS 'Source Name',
json->affectedObjects->objectName AS 'Object Name',
json->affectedObjects->objectType AS 'Object Type',
json->affectedObjects->version AS 'Host Version',
json->affectedObjects->buildNumber AS Build,
json->affectedObjects->firstObserved AS 'First Observed',
';'.join(json->kbLinkURLs) AS Reference
FROM json
EXPLODE json->affectedObjects
TO csv"
Finding Id,Issue Description,Risk if no Action Taken,Severity,Recommendations,Source Name,Object Name,Object Type,Host Version,Build,First Observed,Reference
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server01.corp.contoso.org,ESX,6.7.0,17499825,1629806351877,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server02.corp.contoso.org,ESX,6.7.0,17499825,1629806351877,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server03.corp.contoso.org,ESX,6.7.0,17499825,1629806351877,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server04.corp.contoso.org,ESX,6.7.0,17499825,1629806351877,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server05.corp.contoso.org,ESX,6.7.0,17499825,1629806351877,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server06.corp.contoso.org,ESX,6.7.0,17499825,1629806351877,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server07.corp.contoso.org,ESX,6.7.0,17499825,1629806351877,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER02.corp.contoso.org,server10.corp.contoso.org,ESX,6.7.0,17167734,1635968448112,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER02.corp.contoso.org,server11.corp.contoso.org,ESX,6.7.0,17167734,1635968448112,https://kb.vmware.com/s/article/80703
I am using jq to extract the part of the JSON we need, while compressing the output to JSON Lines (required by spyql). Then spyql takes care of the rest: joining arrays (expressions are Python with some optional syntactic sugar), renaming columns, and generating the CSV.
If you want to convert the firstObserved timestamp to a datetime, you can do it like this (assuming a UTC timestamp):
$ jq -c '.data.activeFindings.findings[]' full_sample.json | spyql "
SELECT
json->findingId AS 'Finding Id',
json->findingDescription AS 'Issue Description',
json->findingImpact AS 'Risk if no Action Taken',
json->severity AS Severity,
';'.join(json->recommendations) AS Recommendations,
json->affectedObjects->sourceName AS 'Source Name',
json->affectedObjects->objectName AS 'Object Name',
json->affectedObjects->objectType AS 'Object Type',
json->affectedObjects->version AS 'Host Version',
json->affectedObjects->buildNumber AS Build,
datetime.utcfromtimestamp(json->affectedObjects->firstObserved/1000) AS 'First Observed',
';'.join(json->kbLinkURLs) AS Reference
FROM json
EXPLODE json->affectedObjects
TO csv"
Finding Id,Issue Description,Risk if no Action Taken,Severity,Recommendations,Source Name,Object Name,Object Type,Host Version,Build,First Observed,Reference
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server01.corp.contoso.org,ESX,6.7.0,17499825,2021-08-24 11:59:11.877000,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server02.corp.contoso.org,ESX,6.7.0,17499825,2021-08-24 11:59:11.877000,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server03.corp.contoso.org,ESX,6.7.0,17499825,2021-08-24 11:59:11.877000,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server04.corp.contoso.org,ESX,6.7.0,17499825,2021-08-24 11:59:11.877000,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server05.corp.contoso.org,ESX,6.7.0,17499825,2021-08-24 11:59:11.877000,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server06.corp.contoso.org,ESX,6.7.0,17499825,2021-08-24 11:59:11.877000,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER01.corp.contoso.org,server07.corp.contoso.org,ESX,6.7.0,17499825,2021-08-24 11:59:11.877000,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER02.corp.contoso.org,server10.corp.contoso.org,ESX,6.7.0,17167734,2021-11-03 19:40:48.112000,https://kb.vmware.com/s/article/80703
risk#80703,PSOD with re-formatting a valid dedup metadata block.,Potential ESXi host crash,CRITICAL,This issue is resolved in VMware ESXi 6.7 upgrade to Patch 05 (17700523);This issue is resolved in VMware ESXi 7.0 upgrade to Update 1 (16850804),GWSERVER02.corp.contoso.org,server11.corp.contoso.org,ESX,6.7.0,17167734,2021-11-03 19:40:48.112000,https://kb.vmware.com/s/article/80703
If you don't need millisecond precision in your datetime you can use integer division (i.e. datetime.utcfromtimestamp(json->affectedObjects->firstObserved//1000)).
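As a sanity check on that conversion, the same millisecond-epoch arithmetic can be reproduced in plain Python (a sketch; the timestamp comes from the first sample finding):

```python
from datetime import datetime, timezone

ms = 1629806351877  # firstObserved from the first sample finding
dt = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S.%f"))  # 2021-08-24 11:59:11.877000
```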
Disclaimer: I am the author of spyql
You can apply @csv at the last step. Iterate over affectedObjects so that each affected object produces its own row, instead of collecting all the objects' fields into a single array:
jq -r '.data.activeFindings.findings[]
| . as $f
| $f.affectedObjects[]
| [ $f.findingId, $f.findingDescription, $f.findingImpact, $f.severity,
    ($f.recommendations | join(";")),
    .sourceName, .objectName, .objectType, .version, .buildNumber,
    .firstObserved, ($f.kbLinkURLs | join(";")) ]
| @csv'
Related
I have a container running (a WordPress container, to be more specific) which tries to connect to a MySQL RDS instance.
Parameters for the Fargate ECS service container:
{
"executionRoleArn": "ignore-this",
"containerDefinitions": [
{
"name": "MyCoolContainer",
"image": "wordpress:latest",
"essential": true,
"environment": [
{"name": "WORDPRESS_DB_HOST", "value": "host:3306"},
{"name": "WORDPRESS_DB_USER", "value": "user"},
{"name": "WORDPRESS_DB_PASSWORD", "value": "password"},
{"name": "WORDPRESS_DB_NAME", "value": "name"}
],
"portMappings": [
{
"hostPort": 80,
"protocol": "tcp",
"containerPort": 80
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/aws/ecs/fargate/prefix",
"awslogs-region": "eu-west-1",
"awslogs-stream-prefix": "prefix"
}
}
}
],
"requiresCompatibilities": [
"FARGATE"
],
"networkMode": "awsvpc",
"cpu": "256",
"memory": "512",
"family": "wordpress"
}
Also, for security groups, I have opened ports 22, 80, 443, and 3306 to any IP address.
But the container in ECS still fails to start, with the reason:
[17-Sep-2019 08:42:24 UTC] PHP Warning: mysqli::__construct():
(HY000/2002): Connection timed out in Standard input code on line 22
MySQL Connection Error: (2002) Connection timed out
MySQL Connection Error: (2002) Connection timed out
However, I can confirm that the RDS instance is accessible when connecting from a local machine with the command:
mysql -uuser -ppassword -hhost -P3306
I can also confirm that a (WordPress) container runs successfully on my local machine and connects to the remote RDS database with no timeouts.
EDIT
This is how my environment looks in the ECS UI panel:
(I have tried to copy paste these values into my local mysql command and it connected successfully.)
I suspect there is something wrong with my AWS service configuration. Any ideas?
Thanks to Adiii and some other articles found on the internet, I have a complete solution to this problem.
You simply need to attach a NAT Gateway to the subnet in which you are launching your ECS Fargate instance.
Launching in a public subnet with an Internet Gateway does not, for some weird reason, solve the problem (even though logically it should).
TL;DR:
NAT Gateway is needed. AWS is f****d up.
I have
"Update": "true" in Dockerrun.aws.json,
which should automatically update the image and container in the EC2 instance when I update the image in ECR.
But when I ssh into the instance after pushing a new image, I still see that the container and image are not updated.
[root@ip-10-20-60-125 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8e3bab8da13 258e7bc272bd "./graphhopper.sh we…" 8 days ago Up 8 days 8989/tcp tender_mayer
[root@ip-10-20-60-125 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
aws_beanstalk/current-app latest 258e7bc272bd 8 days ago 813MB
openjdk 8-jdk b8d3f94869bb 6 weeks ago 625MB
Dockerrun.aws.json has this
{
"AWSEBDockerrunVersion": "1",
"Authentication": {
"Bucket": "xxxxx",
"Key": "xxxxx"
},
"Image": {
"Name": "213074117100.dkr.ecr.us-east-1.amazonaws.com/xxxxx:latest",
"Update": "true"
},
"Ports": [
{
"ContainerPort": "8989"
}
],
"Volumes": [
{
"HostDirectory": "/data",
"ContainerDirectory": "/data"
}
],
"Logging": "/var/log/eb",
"Command": "xxxxx"
}
Is there a setting somewhere other than "Update": "true"?
If I do an eb deploy, it will pull and update, but "Update": "true" should pull and update automatically when I update the image, which is not happening.
From this AWS documentation and the thread AWS Beanstalk docker image automatic update doesn't work, it seems that "Update": "true" just does a docker pull before docker run; it will not update the container when a new image is pushed.
From my research so far, it seems there is no way to automate this process at the moment.
My environment runs mesos-slave, mesos-master, Marathon, and mesos-dns in standalone mode.
I deployed a MySQL app to Marathon to run as a Docker container.
The MySQL app configuration is as follows:
{
"id": "mysql",
"cpus": 0.5,
"mem": 512,
"instances": 1,
"container": {
"type": "DOCKER",
"docker": {
"image": "mysql:5.6.27",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 3306,
"hostPort": 32000,
"protocol": "tcp"
}
]
}
},
"constraints": [
[
"hostname",
"UNIQUE"
]],
"env": {
"MYSQL_ROOT_PASSWORD": "password"
},
"minimumHealthCapacity" :0,
"maximumOverCapacity" : 0.0
}
Then I deployed an app called mysqlclient, which needs to connect to the mysql app.
The mysqlclient app config is as follows:
{
"id": "mysqlclient",
"cpus": 0.3,
"mem": 512.0,
"cmd": "/scripts/create_mysql_dbs.sh",
"instances": 1,
"container": {
"type": "DOCKER",
"docker": {
"image": "mysqlclient:latest",
"network": "BRIDGE",
"portMappings": [{
"containerPort": 3306,
"hostPort": 0,
"protocol": "tcp"
}]
}
},
"env": {
"MYSQL_ENV_MYSQL_ROOT_PASSWORD": "password",
"MYSQL_PORT_3306_TCP_ADDR": "mysql.marathon.slave.mesos.",
"MYSQL_PORT_3306_TCP_PORT": "32000"
},
"minimumHealthCapacity" :0,
"maximumOverCapacity" : 0.0
}
My mesos-dns config.json is as follows:
{
"zk": "zk://127.0.0.1:2181/mesos",
"masters": ["127.0.0.1:5050"],
"refreshSeconds": 60,
"ttl": 60,
"domain": "mesos",
"port": 53,
"resolvers": ["127.0.0.1"],
"timeout": 5,
"httpon": true,
"dnson": true,
"httpport": 8123,
"externalon": true,
"listener": "127.0.0.1",
"SOAMname": "ns1.mesos",
"SOARname": "root.ns1.mesos",
"SOARefresh": 60,
"SOARetry": 600,
"SOAExpire": 86400,
"SOAMinttl": 60,
"IPSources": ["mesos", "host"]
}
I can ping the service name mysql.marathon.slave.mesos. from the host machine, but when I try to ping it from the mysql Docker container I get "host unreachable". Why can't the Docker container resolve the host name?
I tried setting the dns parameter on the apps, but it didn't work.
EDIT:
I can ping mysql.marathon.slave.mesos. from the master/slave hosts, but I cannot ping it from the mysqlclient Docker container; it says unreachable. How can I fix this?
I'm not sure what your actual question is; my guess is that you want to know how to resolve a Mesos DNS service name to an actual endpoint from the MySQL client.
If so, you can use my mesosdns-resolver bash script to get the endpoint from Mesos DNS:
mesosdns-resolver.sh -sn mysql.marathon.mesos -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>
You can use this in your create_mysql_dbs.sh script (whatever it does) to get the actual IP address and port where your mysql app is running.
You can pass in an environment variable like
"MYSQL_ENV_SERVICE_NAME": "mysql.marathon.mesos"
and then use it like this in the image/script
mesosdns-resolver.sh -sn $MYSQL_ENV_SERVICE_NAME -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>
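If you'd rather not shell out to a script, mesos-dns also exposes an HTTP lookup API (enabled by the httpon/httpport settings in the config above). A minimal Python sketch, assuming the standard /v1/services endpoint and that mesos-dns listens on 127.0.0.1:8123:

```python
import json
from urllib.request import urlopen

def lookup_url(dns_host, http_port, service, protocol="tcp", domain="marathon.mesos"):
    # mesos-dns REST endpoint: /v1/services/_<service>._<protocol>.<domain>
    return "http://{}:{}/v1/services/_{}._{}.{}".format(
        dns_host, http_port, service, protocol, domain)

url = lookup_url("127.0.0.1", 8123, "mysql")
# each returned record carries the resolved host, ip and port, e.g.:
# records = json.load(urlopen(url))
```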
Also, please note that Marathon is not necessarily the right tool for running one-off operations (I assume you initialize your DBs with the second app). Chronos would be a better choice for this.
I am trying to convert a simple JSON file to Avro using avro-tools (1.7.7).
The command I've been running is:
java -jar ~/Downloads/avro-tools-1.7.7.jar fromjson
--schema-file src/main/avro/twitter.avsc tweet.json > tweet.avro
on this schema
{
"type": "record",
"name": "tweet",
"namespace": "co.feeb.avro",
"fields": [
{
"name": "username",
"type": "string",
"doc": "screen name of the user on twitter.com"
},
{
"name": "text",
"type": "string",
"doc": "the content of the user's message"
},
{
"name": "timestamp",
"type": "long",
"doc": "unix epoch time in seconds"
}
],
"doc": "Schema for twitter messages"
}
I see this exception after running this command:
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:189)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:409)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:395)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1436)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1337)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:122)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.avro.tool.Util.openFromFS(Util.java:88)
at org.apache.avro.tool.DataFileWriteTool.run(DataFileWriteTool.java:82)
at org.apache.avro.tool.Main.run(Main.java:84)
at org.apache.avro.tool.Main.main(Main.java:73)
Caused by: java.lang.NumberFormatException: For input string: "810d:340:1770::1"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at com.sun.jndi.dns.DnsClient.<init>(DnsClient.java:127)
at com.sun.jndi.dns.Resolver.<init>(Resolver.java:61)
at com.sun.jndi.dns.DnsContext.getResolver(DnsContext.java:573)
at com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:434)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:235)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:141)
at com.sun.jndi.toolkit.url.GenericURLDirContext.getAttributes(GenericURLDirContext.java:103)
at sun.security.krb5.KrbServiceLocator.getKerberosService(KrbServiceLocator.java:85)
at sun.security.krb5.Config.checkRealm(Config.java:1120)
at sun.security.krb5.Config.getRealmFromDNS(Config.java:1093)
at sun.security.krb5.Config.getDefaultRealm(Config.java:987)
at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:81)
Trying to prefer IPv4 over IPv6 with -Djava.net.preferIPv4Stack=true didn't help. (I am running Mac OS X 10.10.3 and Java 1.8.0_25-b17.)
Oh snap ... I solved it myself right after posting this. My local router had added an IPv6 nameserver to my local machine.
Manually changing the assigned nameserver to Google's 8.8.8.8 fixed the issue.
I'm trying to set up the following environment on Google Cloud and have 3 major problems with it:
Database Cluster
3 nodes
one port open to the world, a few ports open to the compute cluster
Compute Cluster
5 nodes
communicates with the database cluster
two ports open to the world
runs Docker containers
a) The database cluster runs fine. I have the configuration port open to the world, but I don't know how to limit the other ports to only the compute cluster.
I managed to get the first pod and replication controller running on the compute cluster, and created a service to expose the container to the world:
controller:
{
"id": "api-controller",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 2,
"replicaSelector": {
"name": "api"
},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "apiController",
"containers": [{
"name": "api",
"image": "gcr.io/my/api",
"ports": [{
"name": "api",
"containerPort": 3000
}]
}]
}
},
"labels": {
"name": "api"
}
}
}
}
service:
{
"id": "api-service",
"kind": "Service",
"apiVersion": "v1beta1",
"selector": {
"name": "api"
},
"containerPort": "api",
"protocol": "TCP",
"port": 80,
"createExternalLoadBalancer": true
}
b) The container exposes port 3000, the service port 80. Where's the connection between the two?
The firewall works with labels. I want 4-5 different pods running in my compute cluster, with 2 of them having ports open to the world. There can be 2 or more containers running on the same instance. The labels, however, are specific to the nodes, not the containers.
c) Do I expose all nodes with the same firewall configuration? I can't assign labels to containers, so I'm not sure how to expose the api service, for example.
I'll do my best to answer all of your questions.
First off, you will want to upgrade to v1 of the Kubernetes API, because v1beta1 and v1beta3 will no longer be available after Aug. 5th:
https://cloud.google.com/container-engine/docs/v1-upgrade
Also, use YAML. It's so much less verbose ;)
--
Now on to the questions you asked:
a) I'm not sure I completely understand what you are asking here, but it sounds like running the services in the same cluster (with resource limits) would be much easier than trying to deal with cross-cluster networking.
b) You need to specify a targetPort so that the service knows which port to use on the container. This should match port 3000 from your replication controller. See the docs for more info.
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "api-service"
},
"spec": {
"selector": {
"name": "api"
},
"ports": [{
"port": 80,
"targetPort": 3000
}],
"type": "LoadBalancer"
}
}
c) Yes. In Kubernetes the kube-proxy accepts traffic on any node and routes it to the appropriate node or local pod. You don't need to worry about mapping the load balancer to, or writing firewall rules for those specific nodes that happen to be running your pods (it could actually change if you do a rolling update!). kube-proxy will route traffic to the right place even if your service is not running on that node.