I would like to get a list of artifacts that were created 30 days ago or earlier.
I have a script that works with a fixed time window, but I have to change the time in milliseconds every time I run it. That is tedious, so I want to get the list of artifacts created 30 or more days back without modifying the script each time.
This is what I am using now:
RESULTS=`curl -s -X GET -u <username>:<password> \
"https://<domain>.artifactoryonline.com/<domain>/api/search/creation?from=$START_TIME&to=$END_TIME&repos=$REPO" \
| grep uri \
| awk '{print $3}' \
| sed 's/.$//' \
| sed 's/.$//' \
| sed -r 's/^.{1}//'`
Your best option here is probably to use JFrog's AQL and query for artifacts whose "created" date is older than X days. For example, you can use an AQL query like:
items.find({"created" : {"$before" : "30d"}})
You can read more about AQL in general, and about "Relative Time Operators" specifically, here.
So, an example curl with a limit of 10 artifacts would look like:
curl -X POST -u <user>:<password> -H "content-type: text/plain" -d 'items.find({"created":{"$before":"30d"}}).sort({"$desc" : ["created"]}).limit(10)' https://<your Artifactory server>:<port>/artifactory/api/search/aql
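If you also want to drop the grep/awk/sed post-processing from your current script, piping the AQL response through jq is one option. A minimal sketch, assuming jq is installed, that $REPO holds the repository name, and that the response has the usual AQL shape of {"results": [{"repo": ..., "path": ..., "name": ...}, ...]}:
# Hypothetical adaptation of the script above: list repo/path/name of every
# artifact in $REPO created 30 or more days ago.
RESULTS=$(curl -s -X POST -u <username>:<password> -H "content-type: text/plain" \
  -d 'items.find({"repo":"'"$REPO"'","created":{"$before":"30d"}})' \
  "https://<domain>.artifactoryonline.com/<domain>/api/search/aql" \
  | jq -r '.results[] | "\(.repo)/\(.path)/\(.name)"')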
I need to query an API endpoint for specific parameters, but there's a parameter limit of 20 per request.
The params are gathered into an array and stored in a JSON file, then referenced in a variable tacked onto the end of my curl command, which generates the full curl API request.
curl -s -g -XGET '/api/endpoint?parameters='$myparams
e.g.
curl -s -g -XGET '/api/endpoint?parameters=["1","2","3","etc"]'
This works fine when the params JSON is small and below the parameter limit per request. The only problem is that the params list fluctuates and is often many times larger than the request limit.
My first thought would be to iterate through the param lines one at a time, but that would create many requests and probably get me blocked.
What would be a good approach to parse the parameter-array JSON and generate curl API requests that respect the parameter limit, with the minimum number of requests? Say it's 115 params now: that would be 5 API requests with 20 params tacked on and 1 with 15.
You can chunk the array with the undocumented _nwise function and then build a URL from each chunk, e.g.:
<<JSON jq -r '_nwise(3) | "/api/endpoint?parameters=\(.)"'
["1","2","3","4","5","6","7","8"]
JSON
Output:
/api/endpoint?parameters=["1","2","3"]
/api/endpoint?parameters=["4","5","6"]
/api/endpoint?parameters=["7","8"]
This will generate the URLs for your curl calls, which you can then save in a file or consume directly:
<input.json jq -r ... | while read -r url; do curl -s -g -XGET "$url"; done
Or generate the query string only and use it in your curl call (pay attention to proper escaping/quoting):
<input.json jq -c '_nwise(3)' | while read -r qs; do curl -s -g -XGET "/api/endpoint?parameters=$qs"; done
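If the endpoint expects the bracketed value to be URL-encoded rather than sent literally, jq's @uri format can be applied to the same interpolation; a sketch against the same hypothetical endpoint (with the brackets percent-encoded, curl's -g is no longer needed):
# @uri only encodes the interpolated chunk, not the literal parts of the URL.
<input.json jq -r '_nwise(3) | @uri "/api/endpoint?parameters=\(.)"' \
  | while read -r url; do curl -s -XGET "$url"; done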
Depending on your input format and requirements regarding robustness, you might not need jq at all; sed and paste can do the trick:
<<IN sed 's/\\/&&/g;s/"/\\"/g' | sed 's/^/"/;s/$/"/' | paste -sd ',,\n' | while read -r items; do curl -s -g -XGET "/api/endpoint?parameters=[$items]"; done
1
2
3
4
5
6
7
8
IN
Output:
curl -s -g -XGET /api/endpoint?parameters=["1","2","3"]
curl -s -g -XGET /api/endpoint?parameters=["4","5","6"]
curl -s -g -XGET /api/endpoint?parameters=["7","8"]
Explanation:
sed 's/\\/&&/g;s/"/\\"/g': replace \ with \\ and " with \".
sed 's/^/"/;s/$/"/': wrap each line/item in quotes
paste -sd ',,\n': take 3 lines at a time and join them with commas (repeat the comma character as many times as the chunk size minus 1)
while read -r items; do curl -s -g -XGET "/api/endpoint?parameters=[$items]"; done: read each generated chunk, wrap it in brackets, and run curl
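If the values are already in a shell variable rather than a file, a plain bash loop over array slices is another option. A rough sketch, assuming the values need no JSON escaping; with a chunk size of 20, the 115 params from the question come out as 5 full requests plus one with 15:
params=("1" "2" "3")                      # fill with your real values
chunk=20
for ((i = 0; i < ${#params[@]}; i += chunk)); do
  slice=("${params[@]:i:chunk}")          # next batch of up to $chunk values
  json=$(printf '"%s",' "${slice[@]}")    # "1","2",... with a trailing comma
  curl -s -g -XGET "/api/endpoint?parameters=[${json%,}]"
done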
As the title says: I am trying to work out why this mirror on GitHub doesn't show any output in the terminal when I use this command:
wget --no-check-cert -q -O - "http://api.github.com/repos/bminor/glibc/branches" | grep release
but on GitHub, branches containing "release" clearly do exist.
The response is paginated. If you look at response headers:
Link: <https://api.github.com/repositories/13868694/branches?page=2>; rel="next", <https://api.github.com/repositories/13868694/branches?page=10>; rel="last"
you'll see that you're looking at page 1 of 10. You can increase the number of records per page up to 100 using the per_page query string parameter:
curl -Lv 'http://api.github.com/repos/bminor/glibc/branches?per_page=100'
but you still have to get the other pages in your case; the desired page is selected using the page query string parameter, e.g.:
$ curl -sL 'http://api.github.com/repos/bminor/glibc/branches?page=2&per_page=100' \
| jq -r '.[] | select(.name | contains("release")).name'
hjl/release/2.20/master
hjl/x32/release/2.12
hjl/x32/release/2.15
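To collect the matches from every page instead of picking page numbers by hand, a small loop that stops at the first empty page is one way to do it; a sketch, assuming jq is available and rate limits are not a concern:
page=1
while :; do
  names=$(curl -sL "https://api.github.com/repos/bminor/glibc/branches?per_page=100&page=$page" \
    | jq -r '.[].name')
  [ -z "$names" ] && break                # empty page: past the last one
  printf '%s\n' "$names" | grep release
  page=$((page + 1))
done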
Alternatively, using the GitHub CLI:
gh api --method GET repos/bminor/glibc/branches \
--raw-field page=2 --raw-field per_page=100 \
--jq '.[] | select(.name | contains("release")).name'
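gh api also has a --paginate flag that follows the Link headers for you, which avoids hard-coding page numbers (worth verifying against your gh version):
gh api --paginate repos/bminor/glibc/branches \
  --jq '.[] | select(.name | contains("release")).name'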
I am trying to fire 40000 requests towards an API using 40000 different JSON files.
Normally I could do something like this:
for file in /dir/*.json
do
#ab -p $file -T application/json -c1 -n1 <url>
curl -X POST -d @"$file" <url> -H "Content-Type: application/json"
done;
My problem is that I want to run simultaneous requests, e.g. 100 at a time, and I want the total time it took to send all the requests recorded. I can't use -c 100 -n 40000 in ab since it's the same URL but a different file for every request.
The files/requests all look something like
{"source":"000000000000","type":"A"}
{"source":"000000000001","type":"A"}
{"source":"000000000003","type":"A"}
I was not able to find any tool that supports this out of the box (e.g. Apache Benchmark - ab).
I came across this example here on SO (modified for this question).
I'm not sure I understand why that example would cat /tmp at the end, though, since mkfifo tmp creates a file named tmp in the current directory, not the /tmp directory. Might it work anyway?
mkfifo tmp
counter=0
for file in /dir/*.json
do
if [ $counter -lt 100 ]; then
curl -X POST -H "Content-Type: application/json" -d @"$file" <url> &
let $[counter++];
else
read x < tmp
curl -X POST -H "Content-Type: application/json" -d @"$file" <url> &
fi
done;
cat /tmp > /dev/null
rm tmp
How should I go about achieving this in Perl, ksh, bash or similar, or does anyone know of any tools that support this out of the box?
Thanks!
If your requirement is just to time the total time taken to send these 40000 curl requests, each with a different JSON body, you can make good use of GNU parallel. The tool has great ways to achieve job concurrency by making use of multiple cores on your machine.
The installation procedure is quite simple. Follow How to install GNU parallel (noarc.rpm) on CentOS 7 for a quick and easy list of steps. The tool has many more flags for other use cases; for your requirement though, just go to the folder containing these JSON files and run
parallel --dry-run -j10 curl -X POST -H "Content-Type: application/json" -d @{} <url> ::: *.json
The above command does a dry run of your command, showing how parallel sets up the flags, processes its arguments, and would start running your command. Here {} represents one of your JSON files. We've specified 10 jobs at a time; increase the number depending on how fast it runs on your machine and how many cores you have. There are also flags to limit the overall CPU that parallel is allowed to use, so that it doesn't totally choke your system.
Remove --dry-run to run your actual command. To clock the time taken for the whole run to complete, use the time command: just prefix it before the actual command, as in time parallel ...
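Putting it together for the 100-way concurrency mentioned in the question, the final invocation might look something like this sketch (<url> is still your endpoint; -o /dev/null just discards the response bodies):
cd /dir
time parallel -j100 curl -s -o /dev/null -X POST \
  -H "Content-Type: application/json" -d @{} <url> ::: *.json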
I got this link to get the Build Time Trend, along with other data, in Jenkins:
https://jenkins:8080/view/<view-name>/job/<job-name>/<buildnumber>/api/json
This works well in a web browser, but it does not seem to work with curl; it does not give any result when I run it with the curl command.
This is what I tried
curl -u user:api_token -s -k "https://jenkins:8080/view/<view-name>/job/<job-name>/<buildnumber>/api/json"
This syntax worked with other APIs.
Not sure what is wrong here.
curl -u userid:api_token -s -k "https://jenkins:8080/view/<view-name>/job/<job-name>/<buildnumber>/api/json" | jq.'causes[]|{result}'
jq.causes[]|{result}: command not found
You need a space between jq and its arguments (and probably not a period).
... | jq 'causes[]|{result}'
^
space here
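As a follow-up sketch, once the space is in place you will probably also want to start the filter from the top-level build fields; the field names below assume the standard Jenkins build JSON (result, duration, timestamp):
curl -u user:api_token -s -k \
  "https://jenkins:8080/view/<view-name>/job/<job-name>/<buildnumber>/api/json" \
  | jq '{result, duration, timestamp}'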
I have a MySQL database with a table:
url | words
And data like this, for example:
www.firstwebsite.com  | hello, hi
www.secondwebsite.com | someword, someotherword
I want to loop through that table and check whether each word is present in the content of the website at the corresponding URL.
I have something like this:
#!/bin/bash
mysql --user=USERNAME --password=PASSWORD DATABASE --skip-column-names -e "SELECT url, keyword FROM things" | while read url keyword; do
content=$(curl -sL $url)
echo $content | egrep -q $keyword
status=$?
if test $status -eq 0 ; then
# Found...
else
# Not found...
fi
done
Two problems:
It's very slow: how can I configure curl to optimize the load time of each website, skip loading images, things like that?
Also, is it a good idea to put something like this in a shell script, or is it better to create a PHP script and call it with curl?
Thanks!
As it stands, your script will not work as you might expect when you have multiple keywords per row, as in your example. The reason is that when you pass hello, hi to egrep, it will look for the exact string "hello, hi" in its input, not for either "hello" or "hi". You can fix this without changing what's in your database by turning each list of keywords into an egrep-compatible regular expression with sed. You'll also need to remove the | from mysql's output, e.g., with awk.
curl doesn't retrieve images when downloading a webpage's HTML. If the order in which the URLs are queried does not matter to you then you can speed things up by making the whole thing asynchronous with &.
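For example, the keyword rewrite used in the script below turns a row's word list into an egrep alternation:
echo 'hello, hi' | sed -e 's/, /|/g;s/^/(/;s/$/)/;'
# prints: (hello|hi)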
#!/bin/bash
handle_url() {
if curl -sL "$1" | egrep -q "$2"; then
echo 1 # Found...
else
echo 0 # Not found...
fi
}
mysql --user=USERNAME --password=PASSWORD DATABASE --skip-column-names -e "SELECT url, keyword FROM things" | awk -F \| '{ print $1, $2 }' | while read url keywords; do
keywords=$(echo $keywords | sed -e 's/, /|/g;s/^/(/;s/$/)/;')
handle_url "$url" "$keywords" &
done
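One caveat with backgrounding the curls: the while loop runs in a subshell created by the pipeline, so the script can exit while some handle_url calls are still in flight. If you need to block until they all finish, keep a wait inside that same subshell; a sketch of the changed tail end:
mysql --user=USERNAME --password=PASSWORD DATABASE --skip-column-names -e "SELECT url, keyword FROM things" | awk -F \| '{ print $1, $2 }' | {
  while read -r url keywords; do
    keywords=$(echo $keywords | sed -e 's/, /|/g;s/^/(/;s/$/)/;')
    handle_url "$url" "$keywords" &
  done
  wait   # returns once every background handle_url has finished
}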