One way to get the resource quota values in Kubernetes is to use the following command:
kubectl describe resourcequotas
Name: default-quota
Namespace: my-namespace
Resource Used Hard
-------- ---- ----
configmaps 19 100
limits.cpu 13810m 18
limits.memory 25890Mi 36Gi
The issue is that this displays all the values in plain-text format. Does anyone know how I can get them in JSON format?
Of course, I could parse the output, extract each entry, and construct the JSON myself:
kubectl describe quota | grep limits.cpu | awk '{print $2}'
13810m
But I am looking for something built-in, or some quicker way of doing it. Thanks for your help.
Thanks for your messages. Let me answer my own question: I have found a solution.
jq solved my problem.
To get the Max limit of resources in json format
kubectl get quota -ojson | jq -r .items[].status.hard
To get the Current usage of resources in json format
kubectl get quota -ojson | jq -r .items[].status.used
kubectl itself provides a mechanism for extracting fields via the -o jsonpath option. One of the main issues I initially faced was with keys containing a dot (.), e.g. limits.cpu.
This can be solved by escaping the dot: "limits\.cpu"
kubectl get resourcequota -o jsonpath='{.items[*].spec.hard.limits\.cpu}'
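The same dotted-key problem comes up on the jq side: `limits.cpu` cannot be addressed with plain `.limits.cpu`, but bracket notation works. A minimal sketch, assuming jq is installed; the inline sample below is hypothetical and just mimics the shape of `kubectl get quota -o json` output:

```shell
# Hypothetical sample shaped like `kubectl get quota -o json` output
quota='{"items":[{"status":{"hard":{"limits.cpu":"18","limits.memory":"36Gi"}}}]}'

# Keys containing dots need bracket notation in jq
echo "$quota" | jq -r '.items[].status.hard["limits.cpu"]'
# → 18
```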
I'm sure this has been asked, but I've searched and can't find anything to answer my question...
I'm looking for a simple way to output a JSON list as a plain (non-JSON) list that I can process in a while loop, which means I need to strip the quotes, commas, and brackets. I know I could do it with cut, but I'm sure I'm missing an easier way.
Say I'm using an aws command to get a list of resources, and then I want to process that list in a while loop:
aws eks list-clusters | jq .clusters | while read cluster; do something; done
Something along these lines.. am I being stupid here?
Use [] to turn the list into a series of individual strings, and --raw-output / -r to output them without quotes:
With this option, if the filter's result is a string then it will be written directly to standard output rather than being formatted as a JSON string with quotes. This can be useful for making jq filters talk to non-JSON-based systems.
aws eks list-clusters | jq -r '.clusters[]' | while read cluster; do something; done
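A self-contained sketch of the same pattern, assuming jq is installed; the inline JSON is a hypothetical stand-in for `aws eks list-clusters` output:

```shell
# Hypothetical stand-in for `aws eks list-clusters` output
json='{"clusters":["dev-cluster","prod-cluster"]}'

# .clusters[] emits one name per line; -r strips the JSON quotes
echo "$json" | jq -r '.clusters[]' | while read -r cluster; do
  echo "processing $cluster"
done
```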
I am using jq to extract multiple values from a JSON file using the command below.
.components| to_entries[]| "\(.key)- \(.value.status)"
which gives me the output below:
Server2- UP
server1 - UP
Splunk- UP
Datameer - UP
Platfora - UP
diskSpace- Good
But I want to select only a few of them. I tried passing keys inside the brackets of to_entries[], but it didn't work.
Expected output:
Server1 - UP
Splunk - UP
Platfora - UP
Is there any way to pick only a few values?
Appreciate your help. Thank you.
With the -r command-line option, the following transforms the given input to the desired output, and is perhaps close to what you're looking for:
.components
| to_entries[]
| select(.key == ("server1", "Splunk", "Platfora"))
| "\(.key)- \(.value.status)"
If the list of components is available as a JSON list, then you could modify the selection criterion accordingly, e.g. using IN (uppercase) or index.
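A runnable sketch of the select-based filter, assuming jq is installed; the health-check JSON below is hypothetical and just matches the shape described in the question:

```shell
# Hypothetical health-check JSON matching the shape above
health='{"components":{"Server2":{"status":"UP"},"server1":{"status":"UP"},"Splunk":{"status":"UP"},"Platfora":{"status":"UP"}}}'

# Comparing .key against a stream of alternatives keeps an entry
# whenever it equals any of them
echo "$health" | jq -r '.components | to_entries[]
  | select(.key == ("server1","Splunk","Platfora"))
  | "\(.key) - \(.value.status)"'
```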
I'm using jq to parse some of my logs, but some of the log lines can't be parsed for various reasons. Is there a way to have jq ignore those lines? I can't seem to find a solution. I tried to use the --seq argument that was recommended by some people, but --seq ignores all the lines in my file.
Assuming that each log entry is exactly one line, you can use the -R or --raw-input option to tell jq to leave the lines unparsed, after which you can prepend fromjson? | to your filter to make jq try to parse each line as JSON and throw away the ones that error.
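A minimal sketch of that approach, assuming jq is installed; the sample lines below are hypothetical log entries:

```shell
# fromjson? parses each raw line as JSON and silently drops lines that fail
printf '%s\n' '{"msg":"ok"}' 'not json at all' '{"msg":"later"}' \
  | jq -R -r 'fromjson? | .msg'
```

Only the two valid JSON lines survive; the unparseable middle line is discarded without an error.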
I have a log stream where some messages are in JSON format.
I want to pipe the JSON messages through jq, and just echo the rest.
The JSON messages are on a single line.
Solution: use grep and tee to split the lines into two streams: pipe the lines starting with { through jq, and just echo the rest to the terminal.
kubectl logs -f web-svjkn | tee >(grep -v "^{") | grep "^{" | jq .
or
cat logs | tee >(grep -v "^{") | grep "^{" | jq .
Explanation:
tee duplicates the stream: grep -v prints the non-JSON lines, and the second grep pipes only lines that start with a JSON opening brace to jq.
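A self-contained sketch of the split, assuming jq is installed and using an inline sample instead of real kubectl logs; note that the two branches run concurrently, so their output may interleave:

```shell
# Mixed log sample standing in for the real stream; the non-JSON branch
# goes to stderr so stdout carries only jq's pretty-printed output
printf '%s\n' 'starting up' '{"event":"started","pid":123}' 'done' \
  | tee >(grep -v '^{' >&2) | grep '^{' | jq .
```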
This is an old thread, but here's another solution fully in jq. It lets you both process proper JSON lines and print out non-JSON lines.
jq -R '. as $line | try (fromjson | <further processing for proper JSON lines>) catch $line'
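A runnable sketch of the try/catch pattern, assuming jq 1.5+ is installed; the placeholder processing step here simply extracts a hypothetical field .a:

```shell
# Valid JSON lines are processed; anything that fails to parse is
# echoed back verbatim via the saved $line
printf '%s\n' '{"a":1}' 'plain text line' \
  | jq -R -r '. as $line | try (fromjson | .a) catch $line'
```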
There are several Q&As on the FAQ page dealing with the topic of "invalid JSON", but see in particular the Q:
Is there a way to have jq keep going after it hits an error in the input file?
In particular, this shows how to use --seq.
However, from the sparse details you've given (SO recommends a minimal example be given), it would seem it might be better simply to use inputs. The idea is to process one JSON entity at a time, using try/catch, e.g.:
def handle: inputs | [., "length is \(length)"] ;
def process: try handle catch ("Failed", process) ;
process
Don't forget to use the -n option when invoking jq.
See also Processing not-quite-valid JSON.
If JSON in curly braces {}:
grep -Pzo '\{(?>[^\{\}]|(?R))*\}' | jq 'objects'
If JSON in square brackets []:
grep -Pzo '\[(?>[^\[\]]|(?R))*\]' | jq 'arrays'
This works if there are no []{} in non-JSON lines.
How to parse the json to retrieve a field from output of
kubectl get pods -o json
From the command line I need to obtain the system-generated container name from a Google Cloud cluster.
The topmost JSON key is an array, items[], followed by metadata.labels.name, where the search-criterion value of that compound key is "web". On a match, I then need to retrieve the field
.items[].metadata.name
which so happens to have value :
web-controller-5e6ij // I need to retrieve this value
Here are docs on jsonpath
I want to avoid text parsing output of
kubectl get pods
which is
NAME READY STATUS RESTARTS AGE
mongo-controller-h714w 1/1 Running 0 12m
web-controller-5e6ij 1/1 Running 0 9m
The following will correctly parse this get pods output, yet I feel it's too fragile:
kubectl get pods | tail -1 | cut -d' ' -f1
After much battling this one liner does retrieve the container name :
kubectl get pods -o=jsonpath='{.items[?(@.metadata.labels.name=="web")].metadata.name}'
when this is the known search criteria :
items[].metadata.labels.name == "web"
and this is the desired field to retrieve
items[].metadata.name : "web-controller-5e6ij"
If you want to filter by labels, you can just use kubectl's -l flag. The following will do the same:
kubectl get pods -l name=web -o=jsonpath='{.items..metadata.name}'
In addition to Scott Stensland's answer, here is a way to format your results:
kubectl get pods -o=jsonpath='{range .items[?(@.metadata.labels.name=="web")]}{.metadata.name}{"\n"}{end}'
This adds newlines. You can also use {", "} to output a comma followed by a space.
Another solution:
Use JQ to get a nicely formatted json result:
kubectl get pods -o json | jq -r '.items[] | [filter] | [formatted result]' | jq -s '.'
Example of [filter]:
select(.metadata.labels.name=="web")
Example of [formatted result] (you can add more fields if you want):
{name: .metadata.name}
jq -s '.' puts the result objects into an array.
To wrap it up:
kubectl get pods -o json | jq -r '.items[] | select(.metadata.labels.name=="web") | {name: .metadata.name}' | jq -s '.'
Then afterwards you can use this json data to get the desired output result.
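The pipeline above can be tried without a cluster; assuming jq is installed, the inline pod list below is a hypothetical sample shaped like `kubectl get pods -o json` output:

```shell
# Hypothetical pod list matching the shapes shown above
pods='{"items":[
  {"metadata":{"name":"mongo-controller-h714w","labels":{"name":"mongo"}}},
  {"metadata":{"name":"web-controller-5e6ij","labels":{"name":"web"}}}]}'

# Filter on the label, project the name, then -s wraps results in an array
echo "$pods" | jq '.items[] | select(.metadata.labels.name=="web") | {name: .metadata.name}' | jq -s '.'
```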
There is a super easy way to do this with jq
Just pipe your output through jq with the path you have already given.
As the value you searched for is in the second position, put its index in brackets. If no index is given, all matching items are listed.
<output> | jq .items[1].metadata.name
Using the example you have given (I have added -r for raw output without quotation marks):
curl -s https://gist.githubusercontent.com/scottstensland/278ce94dc6873aa54e44/raw/b2fc423bc4063a7cd16825f612e19d9a7faf5699/output%2520of%2520kubectl%2520get%2520pods%2520%2520-o%2520json| jq .items[1].metadata.name -r
web-controller-5e6ij
I can get the latest commit from the GitHub api using :
$ curl 'https://api.github.com/repos/dwkns/test/commits?per_page=1'
However the resulting JSON doesn't contain any reference to the tag I created when I did that commit.
I can get a list of tags using :
$ curl 'https://api.github.com/repos/dwkns/test/tags'
However the resulting JSON, while it contains the names of tags I want, is not in the order in which they were created - there is no way of telling which tag is the latest one.
EDIT : The latest tag created was LatestLatestLatest
My question then is what API call(s) do I need to do to get the name of the latest tag in my repository?
Semantic Versioning Example
NOTE: If you're in a hurry and don't need all the fine details explained, just jump down to "The Solution" and execute the command.
This solution uses curl and grep to match the LATEST semantically versioned release number. An example will be demonstrated using my own Github repo "pi-ap" (a pile of bash scripts which automates config of a Raspberry Pi into a wireless AP).
You can test the example I give you on the CLI and after you're satisfied it works as intended, you can tweak it to your own use-case.
Versioning Format Construction:
Since we're using grep to match the version number, I need to explain its construction: 3 pairs of integers separated by 2 dots and prefaced by a "v":
vXX.XX.XX
^ ^ ^
| | |
| | Patch
| Minor
Major
NOTE: If a field only has a single digit, I'll pad it with a zero to ensure the resulting format is predictable: always 3 pairs of integers separated by 2 dots.
The Solution:
Github Username: F1Linux
Github Repo Name: pi-ap (NOTE: exclude the ".git" suffix)
curl -s 'https://github.com/f1linux/pi-ap/tags/' | grep -Eo 'v[0-9]{1,2}\.[0-9]{1,2}\.[0-9]{1,2}' | sort -r | head -n1
Validate the Result Correct:
In your browser, go to:
https://github.com/f1linux/pi-ap/tags
And validate that the latest tag was returned from the command.
The above is fairly extensible for most use-cases. You just need to change the user & repo names, and remove/replace the "v" if you don't use this convention when tagging your repos.
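As an aside, the zero-padding convention exists only so that plain lexical `sort -r` orders versions correctly. If GNU sort is available, its -V (version sort) flag compares the numeric fields properly and removes the need for padding; a small sketch with hypothetical unpadded tags:

```shell
# Version sort places v10.2.3 above v9.11.2, unlike plain lexical sort
printf '%s\n' 'v9.11.2' 'v10.2.3' 'v9.2.10' | sort -rV | head -n1
# → v10.2.3
```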
Using jq in combination with curl you can have a pretty straightforward command:
curl -s \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/repos/dwkns/test/tags \
| jq -r '.[0].name'
Output (as of today):
v56
Explanation on jq command:
-r is for "raw": it avoids JSON quotes on jq's output
.[0] selects the first (latest) tag object in the JSON array we got from GitHub
.name selects the name property of this latest tag object
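The jq part can be tried without hitting the API; assuming jq is installed, the inline array below is a hypothetical response shaped like the GitHub tags endpoint's output:

```shell
# Hypothetical response shaped like the GitHub tags API output
tags='[{"name":"v56","commit":{"sha":"abc123"}},{"name":"v55","commit":{"sha":"def456"}}]'

# The API returns tags newest-first, so .[0] is the latest
echo "$tags" | jq -r '.[0].name'
# → v56
```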
#!/bin/sh
curl -s https://github.com/dwkns/test/tags |
awk '/tag-name/{print $3;exit}' FS='[<>]'
Or
#!/bin/awk -f
BEGIN {
FS = "[<>]"
while ("curl -s https://github.com/dwkns/test/tags" | getline) {
if(/tag-name/){print $3;exit}
}
}