Ubuntu JSON API script to search IPs

Firstly, I'm after formatting a curl JSON API call in Ubuntu. As you can see in my picture, on the website the JSON is formatted correctly, but in Ubuntu it's just a bunch of word-wrapped text. I tried adding | jq at the end, like so, but that didn't work:
curl https://www.abuseipdb.com/check/51.38.41.14/json?key=my_key_here&days=7&verbose | jq
(my real API key not included; 51.38.41.14 is a spammer IP)
Once this is figured out, I would then want to script it as an alias called IPDB that asks me for the IP and runs the curl API request.
Any guidance would be appreciated.

And again I figured it out for myself; no idea why I joined today, seeing as I answered all my own questions :)
It might be helpful to someone in the future.
Make a text file, call it abuse.sh, and copy the text below into it. Then run the script and it will ask for an IP:
echo "Please enter IP to search"
read -p 'IP address: ' IP
curl -s "https://www.abuseipdb.com/check/$IP/json?key=your_key_here&days=7&verbose" | jq
I have taken my API key out, but you can get a free one from their website.
This script checks whether an IP is an abuser (spammer, hacker, etc.). I work for an ISP and wanted to automate checking IPs, besides using good ole mxtoolbox.
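To get the IPDB alias the question asked about, a function in ~/.bashrc works as well. This is only a sketch following the post above: the key value is a placeholder, and ipdb_url is a helper name made up here.

```shell
# Hypothetical helper: builds the AbuseIPDB check URL for an IP and key.
ipdb_url() {
    printf 'https://www.abuseipdb.com/check/%s/json?key=%s&days=7&verbose\n' "$1" "$2"
}

# The "IPDB" alias from the question, written as a function:
# it prompts for an IP, fetches the JSON, and pretty-prints it with jq.
ipdb() {
    read -p 'IP address: ' ip
    curl -s "$(ipdb_url "$ip" your_key_here)" | jq
}
```

After sourcing ~/.bashrc you would just type ipdb and enter the address at the prompt.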


Jenkins - Storing curl output into file for future use

I'm currently trying to make a Jenkins job that runs when the version number differs between two runs.
What I'm thinking of doing is running curl against my webapp endpoint to get the webapp version, and storing that version information in the Jenkinsfile or some other file.
The next time the Jenkins job runs, it will curl my webapp endpoint again and compare the current curl output with the version information saved from the last run.
However, as I'm still kind of new to Jenkins, I have no idea where to start with creating the file to store the information I want. Does anyone have a recommendation or advice on how to solve this problem?
Thanks
You can write the version number to any file (JSON, txt, etc.) and read the file into your workspace using the pipeline-utility-steps plugin, specifying the path in your Jenkinsfile.
You can write a file representing the state to a shared location and check it on the next run (no plugins needed):
# this will create the initial file in the Jenkins home folder
curl http://version-check > ~/version
When you run the job you can use a simple sh step:
stage('Check version') {
    // triple single quotes so Groovy does not interpolate $LAST_VER etc.
    sh '''
        LAST_VER=$(cat ~/version)
        CURRENT_VER=$(curl -s http://version-check)
        if [ "$LAST_VER" = "$CURRENT_VER" ]; then
            echo "versions are equal"
        else
            echo "versions are not equal"
            echo "$CURRENT_VER" > ~/version  # update stored state
        fi
    '''
}
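The compare-and-update logic in that sh step can be exercised locally by stubbing out the network call. In this sketch, get_version and the state file path are placeholders standing in for the real curl call and ~/version:

```shell
STATE=./version.state
get_version() { echo "1.2.3"; }   # stand-in for: curl -s http://version-check

CURRENT_VER=$(get_version)
LAST_VER=$(cat "$STATE" 2>/dev/null)   # empty on the very first run

if [ "$LAST_VER" = "$CURRENT_VER" ]; then
    echo "versions are equal"
else
    echo "versions are not equal"
    echo "$CURRENT_VER" > "$STATE"   # update stored state for the next run
fi
```

On the first run the state file doesn't exist, so the versions differ and the file is created; on the second run they compare equal.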

wget not displaying all website data

Input: wget -qO- http://runescape.com/community | grep -i playerCount
Output: <li class="header-top__right-option"><strong id="playerCount">0</strong> Online</li>
Using Cygwin, I am trying to use wget to pull a number out of a webpage. As shown in the example above, the playerCount is 0. If you actually load the webpage and look at the same element, it is a completely different number. How can I get the real number? I was told it may be something to do with cookies or a user agent. This stopped working a few weeks ago.
That value appears to be filled in via javascript (though I can't find the request at a quick glance). If that's the case then you cannot get it with something like wget or curl in this way. You would need to find the specific request and send that.
Given the URL indicated by aadarshs (which I saw, but mis-tested when I looked at it the first time), something like this should work:
curl -s 'http://www.runescape.com/player_count.js?varname=iPlayerCount&callback=jQuery000000000000000000000_0000000000000' | awk -F '[()]' '{print $2}'
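The awk field-split on parentheses can be checked without hitting the site at all; the JSONP response below is made-up sample data in the shape player_count.js returns:

```shell
# A JSONP response of the shape the endpoint returns (sample data).
response='jQuery111004241600367240608_1434074587842(123456);'

# Split on ( and ) so the number inside the call is the second field.
count=$(printf '%s\n' "$response" | awk -F '[()]' '{print $2}')
echo "$count"
```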
This worked for me
curl http://runescape.com/community | grep -i playercount
EDIT: Adding the player count link
curl http://www.runescape.com/player_count.js\?varname\=iPlayerCount\&callback\=jQuery111004241600367240608_1434074587842\&_\=1434074587843

running a perl cgi script from a webpage (html)

I have been searching for a tutorial on how to run a Perl program from an HTML webpage. I cannot find a tutorial, or even a good starting point, that explains clearly how to do this...
What I'm trying to do is use WWW::Mechanize in Perl to fill in some information on the back end of a WordPress site. Before I can do that, I'd like to see the retrieved HTML displayed in the browser just as the actual website would be displayed. Here is my Perl:
print "Content-type: text/html\n\n";
use CGI;
use WWW::Mechanize;
my $m = WWW::Mechanize->new();
$url = 'http://www.storagecolumbusohio.com/wp-admin';
$m->post($url);
$m->form_id('loginform');
$m->set_fields('log' => 'username', 'pwd' => 'password');
$page = $m->submit();
$m->add_handler("request_send", sub { shift->dump; return });
$m->add_handler("response_done", sub { shift->dump; return });
print $page->decoded_content;
This code works from the command prompt (actually I'm on mac, so terminal). However I'd like it to work from a website when the user clicks on a link.
I have learned a few things, but it's confusing for me since I'm a Perl noob. It seems there are two ways to go about doing this (I could be wrong, but this is what I've gathered from what I've read). One way people keep talking about is using some kind of "template method" such as Embperl or mod_perl. The other is to run the Perl program as a CGI script. From what I've read on various sites, it seems like CGI is the simplest and most common solution. In order to do that, I'm told I need to change a few lines in the httpd.conf file. Where can I find that file to alter it? I know I'm on an Apache server, but my site is hosted by DreamHost. Can I still access this file, and if so, how?
Any help would be greatly appreciated as you can probably tell I don't have a clue and am very confused.
To use a cgi script on dreamhost, it is sufficient to
give the script a .cgi extension
put the script somewhere visible to the webserver
give the script the right permissions (at least 0755)
You may want to see if you can get a toy script, say,
#!/usr/bin/perl
print "Content-type: text/plain\n\nHello world\n";
working before you tackle debugging your larger script.
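Those three steps might look like the following on a shell. The paths here are placeholders, not DreamHost's actual layout:

```shell
mkdir -p ./webroot                      # stand-in for your web-visible directory

# The toy script: a .cgi extension and a plain-text Hello world.
printf '#!/usr/bin/perl\nprint "Content-type: text/plain\\n\\nHello world\\n";\n' > ./webroot/hello.cgi

chmod 0755 ./webroot/hello.cgi          # at least 0755 so the webserver can execute it
ls -l ./webroot/hello.cgi
```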
That said, something I don't see in your script is the header. I think you'll want to say something like
print "Content-type: text/html\n\n";
before your other print call.
I would suggest that you test your code first on your local server.
I assume you are using Windows or something similar, so use XAMPP (http://www.apachefriends.org/en/xampp.html) or WAMP (http://www.wampserver.com/en/), or get a real OS like Debian (http://www.debian.org); you can run it in a VM as well.
You should not print the content type like that; use "print header" instead, see this page:
http://perldoc.perl.org/CGI.html#CREATING-A-STANDARD-HTTP-HEADER%3a
Make sure you have your apache server configured properly for perl, see also these commons problems:
http://oreilly.com/openbook/cgi/ch12_01.html
Also see How can I send POST and GET data to a Perl CGI script via the command line? for testing on the command line.

curl: downloading from dynamic url

I'm trying to download an html file with curl in bash. Like this site:
http://www.registrar.ucla.edu/schedule/detselect.aspx?termsel=10S&subareasel=PHYSICS&idxcrs=0001B+++
When I download it manually, it works fine. However, when I try to run my script through crontab, the output HTML file is very small and just says "Object moved to here." with a broken link. Does this have something to do with the sparse environment that crontab commands run in? I found this question:
php ssl curl : object moved error
but I'm using bash, not PHP. What are the equivalent command-line options or variables to set to fix this problem in bash?
(I want to do this with curl, not wget)
Edit: well, sometimes downloading the file manually (via an interactive shell) works, but sometimes it doesn't (I still get the "Object moved to here" message). So it may not specifically be a problem with cron's environment, but with curl itself.
the cron entry:
* * * * * ~/.class/test.sh >> ~/.class/test_out 2>&1
test.sh:
#! /bin/bash
PATH=/usr/local/bin:/usr/bin:/bin:/sbin
cd ~/.class
course="physics 1b"
url="http://www.registrar.ucla.edu/schedule/detselect.aspx?termsel=10S<URL>subareasel=PHYSICS<URL>idxcrs=0001B+++"
curl "$url" -sLo "$course".html --max-redirs 5
Edit: Problem solved. The issue was the stray <URL> tags in the url. I was generating the scripts by doing sed s,"<URL>",\""$url"\", template.txt > test.sh, and since an unescaped & in a sed replacement expands to the matched pattern, sed replaced every & in the url with <URL>. After fixing the url, curl works fine.
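That sed behavior is easy to reproduce. The URL below is a made-up stand-in, shortened from the one in the question:

```shell
url='http://host/page?termsel=10S&idxcrs=0001B'

# Broken: each & in $url expands to the matched text "<URL>".
printf 'curl "<URL>"\n' | sed "s,<URL>,$url,"

# Fixed: escape each & as \& before substituting.
safe_url=$(printf '%s\n' "$url" | sed 's/&/\\\&/g')
printf 'curl "<URL>"\n' | sed "s,<URL>,$safe_url,"
```

The first command prints the URL with <URL> in place of the ampersand; the second prints it intact.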
You want the -L or --location option, which follows 300-series redirects. --max-redirs [n] will limit curl to n redirects.
It's curious that this works from an interactive shell. Are you fetching the same url? You could always try sourcing your environment scripts in your cron entry:
* * * * * . /home/you/.bashrc ; curl -L --max-redirs 5 ...
EDIT: the example url is somewhat different from the one in the script. $url in the script has an additional pair of <URL> tags. Replacing them with &, the conventional argument separator for GET requests, works for me.
Without seeing your script it's hard to guess what exactly is going on, but it's likely that it's an environment problem as you surmise.
One thing that often helps is to specify the full path to executables and files in your script.
If you show your script and crontab entry, we can be of more help.

how to automate the testing of a text based menu

I have a text based menu running on a remote Linux host. I am using expect to ssh into this host and would like to figure out how to interact with the menus. Interaction involves arrowing up, down and using the enter and back arrow keys. For example,
Disconnect
Data Collection >
Utilities >
Save Changes
When you enter the system, Disconnect is highlighted, so by simply pressing enter twice you can disconnect from the system; the second enter confirms the disconnect.
The following code will ssh into my system and bring up the menu. If I remove the expect eof and try to send "\r", thinking that this would select the Disconnect menu option, I get the following error: "write() failed to write anything - will sleep(1) and retry..."
#!/usr/bin/expect
set env(TERM) vt100
set password abc123
set ipaddr 162.116.11.100
set timeout -1
match_max -d 100000
spawn ssh root@$ipaddr
exp_internal 1
expect "*password:*"
send -- "$password\r"
expect "Last login: *\r"
expect eof
I have looked at the virterm and term_expect examples but cannot figure out how to tweak them to work for me. If someone can point me in the right direction, I would greatly appreciate it. What I need to know is: can I interact with a text-based menu system, and what is the correct method for doing this? Examples, if any exist, would be great.
thanks,
-reagan
Try using the autoexpect tool to record an interactive session, and see what the codes look like.
To simplify your life, you might want to set up public key authentication so that you can ssh to the remote host without using a password. Then you can concentrate on testing your software instead of ssh. Google "ssh login without password" for more information. The instructions you will find are straightforward, so don't be afraid.
Have you tried \n\n\r?