How to get external IPs of specific instance group on GCE - Google Compute Engine?

$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list
This command currently gets ALL the active IPs, but suppose I have multiple instance groups, say one called Office and another called Home.
How do I get just the IPs of the instances in instance group "Office"?

Unfortunately there is no easy way to do it. Ideally it would be part of the gcloud instance-groups list-instances API, but that command does not return IP addresses, just instance names.
So far, I've managed to get the desired response by executing 2 different commands.
To get names of all instances
instances=$(gcloud beta compute instance-groups list-instances <Enter Your Instance Group Name Here> | awk -v ORS=, '{if(NR>1)print $1}')
To get External IPs
gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list --filter="name=( $instances )"
A breakdown / explanation of the 1st command:
gcloud beta compute instance-groups list-instances <Enter Your Instance Group Name Here> returns all instances in that instance group
awk -v ORS=, joins the lines with , and returns a single comma-separated string
if(NR>1) excludes the first line of the response, which is the NAME header
print $1 keeps only the 1st column, which holds the instance names
instances=$(<entire gcloud command with awk>) captures the response in a variable
The 2nd command should be self-explanatory.
It would be great if someone could combine these 2 commands into a single command; one attempt is sketched below.
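A minimal sketch of such a combined command, simply nesting the 1st command inside the 2nd's --filter (untested; uses the group name Office from the question):
gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list --filter="name=( $(gcloud beta compute instance-groups list-instances Office | awk -v ORS=, 'NR>1{print $1}') )"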

Related

Zabbix API value is different from Graph value

I have Zabbix 5. I've been trying to write a shell script to get an item's trend for a range of time. The shell script runs correctly, but the value it returns doesn't match what is shown on the graph.
for example:
I have an item with itemid "10234" which returns "percentage of used CPU".
I want to get the Zabbix trend for this item from "2021/09/20 09:00:00" till "2021/09/21 09:00:00".
Unix time for this range is: 1632112200 , 1632198600
I run this command to get the values:
curl -L -k -i -X POST -H 'Content-Type:application/json' -d '{"jsonrpc":"2.0","method":"trend.get","id":1,"auth":"1a543455bd48e6ddc222219acccb52e9","params":{"output":["clock","value_avg","value_min","value_max","num","itemid"],"itemids":["10234"],"time_from":"1632112200","time_till":"1632198600","limit":"1"}}' https://172.30.134.03:423//api_jsonrpc.php
output:
{"clock":"1632114000","value_avg":"14.968717529411 764","value_min":"12.683622999999997","value_max": "17.635707999999994"}
but the graph shows a different value.
Why does this happen and how do I fix it?
In most cases, the graphs apply approximations. If you zoom in, you should see the same data you get from the API. The most zoom you can apply is 1 minute, while the API will get you the exact point-in-time value.
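Note also that trend.get returns hourly aggregates (the clock 1632114000 in your output is an hour boundary), so a single averaged number will rarely match one point on the graph. If you need values that line up with the zoomed-in graph, a hedged alternative is to query the raw history instead; a sketch, assuming the item stores floating-point values (history type 0) and reusing the token and endpoint from the question:
curl -L -k -i -X POST -H 'Content-Type:application/json' -d '{"jsonrpc":"2.0","method":"history.get","id":1,"auth":"1a543455bd48e6ddc222219acccb52e9","params":{"output":"extend","history":0,"itemids":["10234"],"time_from":"1632112200","time_till":"1632198600","limit":"10"}}' https://172.30.134.03:423//api_jsonrpc.php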

aws cli query to find the last snapshot taken, the date it was taken, the tag used if any, the Name tag of the instance and the instance ID

I need to get 5 columns reported from awscli. These are: the last snapshot taken for an instance, the date it was taken, the tag used if any, the Name tag of the instance, and the instance ID.
The below will list ALL snapshots and the time taken, and a 'null' name gets reported...
aws ec2 describe-snapshots --query 'Snapshots[*].{ID:SnapshotId,Time:StartTime,Name:Tags[?Key==`Name`]|[0].Value}'
This will give me the description of the snapshot, the snap id and the date:
aws ec2 describe-snapshots --owner-ids self --output json | jq '.Snapshots[] | select(.StartTime < "'$(date --date='-1 month' '+%Y-%m-%d')'") | [.Description, .StartTime, .SnapshotId]'
So basically I have something that gives me the snapshot data, filters on date, and tells me what time it was taken, but I'm missing the full requirement all in one.
I guess the main stumbling block for me is how to report on only the last snapshot that was taken for an instance. Can anyone please help?
You can use sort_by to get the latest snapshot.
aws ec2 describe-snapshots --query "sort_by(Snapshots, &StartTime)[-1].{SnapshotId:SnapshotId,StartTime:StartTime}"
output
{
"SnapshotId": "snap-123456",
"StartTime": "2020-07-07T13:57:05.982Z"
}
Or, if you are just looking for snapshots owned by you (the variable has to be set on its own line first, otherwise $MY_ACCOUNT_ID is expanded before the assignment takes effect):
MY_ACCOUNT_ID=1234567
aws ec2 describe-snapshots --filter "Name=owner-id,Values=$MY_ACCOUNT_ID" --query "sort_by(Snapshots, &StartTime)[-1].{SnapshotId:SnapshotId,StartTime:StartTime}"
Update:
As the above query does not contain instance information, you can get it with a reverse query: find the snapshot first, then find the instance ID using the attached volume ID.
VOLUME_ID=$(aws ec2 describe-snapshots --filter "Name=owner-id,Values=$MY_ACCOUNT_ID" --query "sort_by(Snapshots, &StartTime)[-1].VolumeId" --output text)
aws ec2 describe-volumes --filter "Name=volume-id,Values=$VOLUME_ID" --query 'Volumes[?Attachments != `null`].Attachments[].InstanceId'
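Tying it all together, a hedged sketch that reports the full set of requested columns in one go (untested; assumes the volume is still attached, and a Name tag containing spaces would need more careful parsing than read provides):
SNAP=$(aws ec2 describe-snapshots --owner-ids self --query "sort_by(Snapshots, &StartTime)[-1].[SnapshotId,StartTime,VolumeId,Tags[?Key=='Name']|[0].Value]" --output text)
read -r SNAP_ID START_TIME VOLUME_ID SNAP_NAME <<< "$SNAP"
INSTANCE_ID=$(aws ec2 describe-volumes --volume-ids "$VOLUME_ID" --query 'Volumes[0].Attachments[0].InstanceId' --output text)
INSTANCE_NAME=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" --query "Reservations[0].Instances[0].Tags[?Key=='Name']|[0].Value" --output text)
echo "$SNAP_ID $START_TIME $SNAP_NAME $INSTANCE_ID $INSTANCE_NAME"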

The forge script 'test-list-resources' only lists 10 items

The forge script 'test-list-resources' only lists 10 items. How do we list all the resources? Besides the command-line script, is it possible to view all resources somewhere online?
I also found that it's not listing the latest 10 items; it lists the first 10 items after sorting by the URN (which is very long and human-unreadable). This is not very intuitive usability-wise, because usually users upload a model, may forget the URN, and would want to check the URN by executing this script.
Can you please clarify where the test-list-resources script came from?
Also, from my perspective this script under the hood uses one of the following methods:
1. Get Buckets
2. Get Bucket by Key
Both of them can be used for getting bucket(s) with content, and for both of them you can specify limit as a query string parameter. You currently get 10 because that is the default value these GET methods use. To get more than 10, you just need to set a higher value, up to 100 (the max).
Updated
After checking the script source, I found that it uses the second of the GET methods, Get Bucket by Key. The quickest solution I can propose is to jump into the script code and edit 1 line: add the limit param to the query (the GET buckets/:bucketKey/objects curl request). You can do this in a few ways:
Hardcode 'limit' equal to 100
response=$(curl -H "Authorization: ${bearer}" -X GET ${ForgeHost}/oss/v2/buckets/${bucket}/objects?limit=100 -k -s)
Pass the value to the script from a shell environment variable
first
export BUCKET_LIMIT=<<YOUR LIMIT VALUE>>
then
response=$(curl -H "Authorization: ${bearer}" -X GET ${ForgeHost}/oss/v2/buckets/${bucket}/objects?limit=$BUCKET_LIMIT -k -s)
If you run the script with the 'sh' command, you can pass the limit as an inline parameter
first
response=$(curl -H "Authorization: ${bearer}" -X GET ${ForgeHost}/oss/v2/buckets/${bucket}/objects?limit=$1 -k -s)
then
sh test-list-resources 100
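If a bucket holds more than 100 objects, even limit=100 will not list everything. A hedged sketch of a paging loop (assuming the OSS response carries a next URL while more results exist, and that jq is installed; this is not part of the original script):
url="${ForgeHost}/oss/v2/buckets/${bucket}/objects?limit=100"
while [ -n "$url" ]; do
    page=$(curl -H "Authorization: ${bearer}" -X GET "$url" -k -s)
    echo "$page" | jq -r '.items[].objectKey'    # print each object key
    url=$(echo "$page" | jq -r '.next // empty') # empty on the last page
done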
Also, thank you for noticing this case. I will get in touch with the script's author and propose adding new functionality for limits and other params.

Lines with common words in outputs of 2 scripts

I need to run two Linux shell scripts and get the lines from the second script's output that contain the same words as lines in the output from the first (not the whole line is the same). For example:
Script #1 output:
Router 1: Ip address 10.0.0.1
Router 2: Ip address 10.0.1.1
Router 3: Ip address 10.0.2.1
Script #2 output:
Router 1: Model: Cisco 2960
Router 2: Model: Juniper MX960
Router 5: Model: Huwei S3300
So, finally I need a list of routers that are present in both outputs, but only the lines from the second script, i.e. the lines with the model.
Assuming the above two script outputs are stored/redirected to tmp1 and tmp2 respectively,
the script below will print, for each router present in both files, its model line from the second file.
#!/bin/bash
tmp1="$1"
tmp2="$2"
while read -r line
do
    # Take the "Router X" part before the first colon
    routerName=$(echo "$line" | cut -d ":" -f 1)
    # Anchored so "Router 1" does not also match "Router 10";
    # grep prints the matching model line(s) from the second file
    grep "^$routerName:" "$tmp2"
done < "$tmp1"
Save the above script as filename.sh and pass the arguments:
./filename.sh tmpScript1output_file tmpScript2output_file
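A hedged one-liner alternative, building the "Router N:" prefixes from the first file and using them as fixed-string patterns against the second (same tmp1/tmp2 assumption as above):
grep -F -f <(cut -d ':' -f 1 tmp1 | sed 's/$/:/') tmp2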

How can I invoke a shell or Perl script from iptables?

We're using CentOS and would like to ban several Asian countries from accessing the entire server. Almost every IP we check which has tried to hack into our server is allocated to an Asian country (Russia, China, Pakistan, etc.)
We have an IP to country MySQL database we can efficiently query and would like to try something like:
-A INPUT -p tcp -m tcp --dport 80 -j /path/to/perlscript.pl
The script would need the IP passed in as an argument, then it would return either an ACCEPT or DROP target?
Thanks for the answers, here's my follow up.
Do you know if it is possible, though? Having a rule point to a script which returns a target (ACCEPT/DROP)?
Not entirely sure how ipset works, will have to experiment I guess, but it looks like it creates a single rule. How would it handle Russia, for example, which has over 6,000 ranges assigned to it? And we want to add probably 20-40 countries in total, so we could end up needing to add in excess of 100,000 ranges. Wouldn't the overhead of a single MySQL query be less taxing?
SELECT country FROM ip_countries WHERE $VAR{ip} >= range1 && $VAR{ip} <= range2
The database we use is freely available here : http://software77.net/geo-ip/
It represents IPs in the database by converting the IP to a number using this formula:
$VAR{numberedIP} = $octs[3] + ($octs[2] * 256) + ($octs[1] * 256 * 256) + ($octs[0] * 256 * 256 * 256);
It will store the start of the range in the "range1" column, and the end of the range in the "range2" column.
So you can see how we'd look up an IP using the above query. Literally takes less than a hundredth of a second to get a result and it's quite accurate. We have one website on a dedicated server, quite low traffic. But as with all servers I have ever checked, this one is hit daily by hackers' robots, checking email accounts, FTP accounts etc. And just about every web server I've ever worked on is compromised sooner or later. In our case, 99.99% of traffic from Asian countries has criminal intent attached to it.
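For a quick sanity check of that formula, the same conversion in shell (hypothetical address, not from our database):
ip=203.0.113.7
IFS=. read -r o1 o2 o3 o4 <<< "$ip"
echo $(( (o1 * 16777216) + (o2 * 65536) + (o3 * 256) + o4 ))   # prints 3405803783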
We'd like this to run via iptables so that all ports are covered, not just HTTP for example by using directives in say .htaccess.
Do you think ipset would still be faster and more efficient?
It would be far too slow to launch perl for every matching packet. The right tool for this sort of thing is ipset, and there is much more information and documentation available on the ipset man page.
In CentOS you can install it with yum. Naturally, all of these commands and the script need to run as root:
# yum install ipset
Next install the kernel modules (you'll want this to happen at boot as well):
# modprobe -v ip_set
# modprobe -v ip_set_hash_netport
And then use a script like the following to populate an ipset and block IP's from its ranges using iptables:
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('... your DSN ...',...);
# I have no knowledge of your schema, but if you can pull the
# address range in the form: AA.BB.CC.DD/NN
my $ranges = $dbh->selectcol_arrayref(
    q{SELECT cidr FROM your_table WHERE country_code IN ('CN',...)});

`ipset create geoblock hash:net,port`;
for (@$ranges) {
    # to match on port 80:
    `ipset add geoblock $_,80`;
}
`iptables -I INPUT -m set --match-set geoblock src,dst -j DROP`;
If you would like to block all ports rather than just 80, use the ip_set_hash_net module instead of ip_set_hash_netport, change hash:net,port to hash:net, remove ,80 from the ipset add command, and match with --match-set geoblock src alone.
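On the 100,000-ranges concern above: hash set lookups stay cheap regardless of entry count, and bulk loading is best done in one shot with ipset restore rather than one ipset add per range. A hedged sketch, assuming the ranges were exported one CIDR per line to country_ranges.txt (a hypothetical file name):
{
    echo "create geoblock hash:net maxelem 131072"
    sed 's/^/add geoblock /' country_ranges.txt
} | ipset restore
iptables -I INPUT -m set --match-set geoblock src -j DROP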