I can't find a solution for reading the number of likes of a Facebook page. I am not able to pass the variable $name correctly. I would highly appreciate any help!
name=$(grep "NAME_OF_FACEBOOK_PAGE" output3.txt)
echo "$name"
curl -s "http://graph.facebook.com/?ids=https://www.facebook.com/$name/likes" -o output3.txt
cat output3.txt | grep "\"likes\"" -A1 -B0
If you just want to read the number of likes of a certain page, why don't you use
https://graph.facebook.com/{page_name}?fields=id,name,likes,talking_about_count
Your call to the Graph API is incorrect.
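For example, a minimal sketch along those lines, using jq to pull out the count (the page name "cocacola" is just a placeholder, and note that current versions of the Graph API also require an access token appended to the request):
name="cocacola"
curl -s "https://graph.facebook.com/$name?fields=id,name,likes,talking_about_count" | jq -r '.likes'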
I am running curl commands on ~50 URLs, and each returns JSON that looks like this (with a different value for "country" in each response, while the values in "names" may repeat across responses or be unique):
e.g. one curl command can give JSON that looks like this:
{"names":["Mary","Tom","Sue","Rob"],"country":"USA"}
while the next curl command will give this:
{"names":["Sue"],"country":"Russia"}
and the next curl command will give this:
{"names":["Tom","Jenny"],"country":"Nigeria"}
and so on and so forth.
I have a separate list of names (e.g. Tom, Sarah, Jenny, Trinh, Nancy) and I want to find out whether they're associated with a country in any of the JSON documents the curl commands return. If a name exists in "names", I want to put that name and the country into a new text file (or JSON file, it doesn't matter; I just want it formatted properly), so that at the end I have an output file associating each person with the country they belong to. If a country has multiple people, there shouldn't be a duplicate entry for the country in the output file; the names of the people should be listed under that one country.
I've tried multiple ways to solve this, but I'm not able to figure it out as it's my first time trying to write a script.
Last command that I tried:
curl "https://..." | jq -r 'select(.names[] as $a | ["Tom","Sarah","Jenny","Trinh","Nancy"] | index($a) | while read output; do tee -a listOfCountries; done; done
This gave duplicates, and I wasn't sure how to format the output so that there was separation between entries and each country had only the relevant people's names under it.
The output file (given above example) should be something like:
USA: Tom
Nigeria: Tom, Jenny
Please let me know if you have any suggestions, it'll greatly be appreciated. Thank you!
Side question: If the list of names to search is extremely long (100+ names), what is the best way to script this?
With all your JSON objects in a file, say output.jsons:
jq -c -n --argjson list '[ "Tom", "Sarah", "Jenny", "Trinh", "Nancy"]' '
(reduce inputs as $in ({}; reduce $in.names[] as $name (.; .[$name] += [$in.country]))) as $dict
| reduce $list[] as $name ({};
if $dict[$name]
then reduce $dict[$name][] as $country (.; .[$country] += [$name])
else . end)
' output.jsons
produces:
{"USA":["Tom"],"Nigeria":["Tom","Jenny"]}
You can easily transform this into the desired output.
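For example, piping the object above through a second jq filter is one hedged way to do that final transformation:
jq -r 'to_entries[] | "\(.key): \(.value | join(", "))"'
which prints:
USA: Tom
Nigeria: Tom, Jenny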
One way to ensure uniqueness of the elements of each array would be to append the following to the filter: map_values(unique).
Re the side question: instead of --argjson you could use --argfile or --slurpfile.
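For instance, a sketch of the same filter with --slurpfile (which wraps the file's contents in an array, hence the $ns[0]); names.json here is a hypothetical file holding the JSON array of names:
jq -c -n --slurpfile ns names.json '
  $ns[0] as $list
  | (reduce inputs as $in ({}; reduce $in.names[] as $name (.; .[$name] += [$in.country]))) as $dict
  | reduce $list[] as $name ({};
      if $dict[$name]
      then reduce $dict[$name][] as $country (.; .[$country] += [$name])
      else . end)
' output.jsons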
One way to get the resource quota values in Kubernetes is to use the following command:
$ kubectl describe resourcequotas
Name:           default-quota
Namespace:      my-namespace
Resource        Used     Hard
--------        ----     ----
configmaps      19       100
limits.cpu      13810m   18
limits.memory   25890Mi  36Gi
But the issue is that this displays all the values in plain-text format. Does anyone know how I can get them in JSON format?
Of course, I can parse the output, pick out an individual entry, and construct the JSON myself.
kubectl describe quota | grep limits.cpu | awk '{print $2}'
13810m
But I am looking for something inbuilt or some quick way of doing it. Thanks for your help.
Thanks for your messages. Let me answer my own question; I have found a solution.
jq has solved my problem.
To get the maximum (hard) limits of the resources in JSON format:
kubectl get quota -ojson | jq -r .items[].status.hard
To get the current usage of the resources in JSON format:
kubectl get quota -ojson | jq -r .items[].status.used
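If you want both in a single JSON document, here is a hedged jq sketch that merges them per quota:
kubectl get quota -o json | jq '.items[] | {name: .metadata.name, hard: .status.hard, used: .status.used}'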
kubectl itself supports JSONPath output via the -o jsonpath option. One of the main issues I initially faced was with keys that contain a dot (.), e.g. limits.cpu.
This can be solved by escaping the dot: limits\.cpu
kubectl get resourcequota -o jsonpath='{.items[*].spec.hard.limits\.cpu}'
I need to automatically move new cases (TheHive-Project) to LimeSurvey every 5 minutes. I have figured out the basis of the API script to add responses to LimeSurvey. However, I can't figure out how to add only new cases, and how to parse the Hive case data for the information I want to add.
So far I've been using curl to get a list of cases from TheHive. The following is the command and its output.
curl -su user:pass http://myhiveIPaddress:9000/api/case
[{"createdBy":"charlie","owner":"charlie","createdAt":1498749369897,"startDate":1498749300000,"title":"test","caseId":1,"user":"charlie","status":"Open","description":"testtest","tlp":2,"tags":[],"flag":false,"severity":1,"metrics":{"Time for Alert to Handler Pickup":2,"Time from open to close":4,"Time from compromise to discovery":6},"updatedBy":"charlie","updatedAt":1498751817577,"id":"AVz0bH7yqaVU6WeZlx3w","_type":"case"},{"createdBy":"charlie","owner":"charlie","title":"testtest","caseId":3,"description":"ddd","user":"charlie","status":"Open","createdAt":1499446483328,"startDate":1499446440000,"severity":2,"tlp":2,"tags":[],"flag":false,"id":"AV0d-Z0DqHSVxnJ8z_HI","_type":"case"},{"createdBy":"charlie","owner":"charlie","createdAt":1499268177619,"title":"test test","user":"charlie","status":"Open","caseId":2,"startDate":1499268120000,"tlp":2,"tags":[],"flag":false,"description":"s","severity":1,"metrics":{"Time from open to close":2,"Time for Alert to Handler Pickup":3,"Time from compromise to discovery":null},"updatedBy":"charlie","updatedAt":1499268203235,"id":"AV0TWOIinKQtYP_yBYgG","_type":"case"}]
Each field is separated by the delimiter },{.
Regarding parsing specific information out of each case, I previously tried just using the cut command. This mostly worked until I reached "metrics"; it doesn't always work for the metrics because they are not always listed in the same order.
I have asked my boss for help, and he told me this command might get me going in the right direction to adding only new hive cases to the survey, but I'm still very lost and want to avoid asking too much again.
curl -su user:pass http://myhiveIPaddress:9000/api/case | sed 's/},{/\n/g' | sed 's/\[{//g' | sed 's/}]//g' | awk -F '"caseId":' '{print $2}' | cut -f 1 -d , | sort -n | while read line; do echo '"caseId":'$line; done
Basically, I'm in way over my head and feel like I have no idea what I'm doing. If I need to clarify anything, or if it would help for me to post what I have so far in my API script, please let me know.
Update
Here is the potential logic for the script I'd like to write.
get list of hive cases (curl ...)
read each field, delimited by },{
while read each field, check /tmp/addedHiveCases to see if caseId of field already exists
--> if it does not exist in file, add case to limesurvey and add caseId to /tmp/addedHiveCases
--> if it does exist, skip to next field
Why do you think the cases are separated by a "},{" delimiter?
The response of the /api/case API is valid JSON that lists the cases.
Can you use a Python script to play with the API? If yes, I can help you write the script you need.
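For reference, the logic from the update can also be sketched in bash with jq (the credentials and paths are placeholders from the question, and the call that actually adds a case to LimeSurvey is left as a stub):
seen=/tmp/addedHiveCases
touch "$seen"
curl -su user:pass http://myhiveIPaddress:9000/api/case |
jq -c '.[]' |                            # emit one compact case object per line
while read -r case; do
    id=$(jq -r '.caseId' <<< "$case")
    if ! grep -qx "$id" "$seen"; then
        # placeholder: push the fields from "$case" to the LimeSurvey API here
        echo "new case $id: $(jq -r '.title' <<< "$case")"
        echo "$id" >> "$seen"
    fi
done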
I am trying to take the JSON result from a curl call and assign each value of a particular JSON field to a separate variable.
Using the following line in my script to retrieve results:
PROFILE=$(curl --user admin:admin -k -X GET https://192.168.1.1:8000/rest/call/profiles.json | jq '[.profiles[].id]')
With the above line, my results might look something like this (but anywhere from one to many entries could be returned):
[
"myprofile",
"myprofile1",
"myprofile2",
"myprofile3"
]
Next, I'm trying to determine the best way to set each returned id to a unique variable for use later in the script. .id could return 1 to 30 results, so I'm assuming a while loop and some way of splitting the results is needed here?
Any help is much appreciated, thank you in advance!
I'm not entirely sure what you're asking, but maybe this helps:
echo '[ "myprofile", "myprofile1", "myprofile2", "myprofile3" ]' |
grep -o '"[^"]\+"' | tr -d '"' | while read x; do
echo "$x"
# do your thing
done
output:
myprofile
myprofile1
myprofile2
myprofile3
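Since the script already uses jq, a hedged alternative is to load the ids straight into a bash array (requires bash 4+ for mapfile):
mapfile -t profiles < <(curl -s --user admin:admin -k https://192.168.1.1:8000/rest/call/profiles.json | jq -r '.profiles[].id')
echo "found ${#profiles[@]} profiles"
for p in "${profiles[@]}"; do
    echo "$p"   # use each id here
done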
I am trying to parse a Firefox bookmarks file (the JSON export) with these attempts:
cat boo.json | grep '\"uri\"\:\"^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}\"'
cat boo.json | grep '"uri"\:"^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}'
cat boo.json | grep '"uri"\:"^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}"'
And a few others, but they all fail. The JSON bookmarks file looks like this:
.........."uri":"http://www.google.com/?"......"uri":"http://stackoverflow.com/"
So, the output should be like this:
"uri":"http://www.google.com/?"
"uri":"http://stackoverflow.com/"
What is the missing part in my regular expression?
UPDATE:
URLs in the bookmarks file end with one of these special characters:
/, ex: "uri":"http://stackoverflow.com/"
", ex: "uri":"http://stackoverflow.com/questions/13148794/parsing-firefox-bookmarks-using-regular-expression"
}, ex: "uri":"https://fr.add-ons.mozilla.com/fr/firefox/bookmarks/"}
With this modified regular expression:
$ egrep -o "(http|https)://([^ ]*).(*\/)" boo.json
Result:
http://fr.fxfeeds.mozilla.com/fr/firefox/headlines.xml"},{"name":"livemark/siteURI","flags":0,"expires":4,"mimeType":null,"type":3,"value":"http://www.lemonde.fr/"}],"type":"text/x-moz-place-container","children":[]}]},{"index":2,"title":"Tags","id":4,"parent":1,"dateAdded":1344432674984000,"lastModified":1344432674984000,"type":"text/
http://stackoverflow.com/questions/13148794/parsing-firefox-bookmarks-using-regular-expression","charset":"UTF-8"},{"index":29,"title":"adrusi/
http://stackoverflow.com/
...
But this still doesn't get me only the URLs.
Have you tried JSON.sh? It works great!
https://github.com/dominictarr/JSON.sh
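In the same spirit of treating the file as structured JSON rather than plain text, a hedged one-liner with jq (assuming jq 1.5+ for the recursive-descent operator) pulls out every uri value:
jq -r '.. | .uri? // empty' boo.json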
I use this regex to extract URLs; it works great:
cat *.html | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | sort | uniq
Jeff Atwood posted an article about the problem with URLs. With his proposed regular expression, I managed to extract all the URLs from the Firefox bookmarks file:
egrep -o "\(?\bhttp://[-A-Za-z0-9+&@#/%?=~_()|!:,.;]*[-A-Za-z0-9+&@#/%=~_()|]" my-bookmark.json