How to ping from Zabbix agent? - ping

Is it possible to ping from Zabbix agent and pass that data into Zabbix server? I would like to be able to get response time from the agent.
I read that it is possible by using fping; it would be great if someone could guide me down the correct path.
Thank you,
Rijath Mohammed

While that is not currently available out of the box, you can implement such functionality using a feature called "user parameters". This forum thread has a simple example:
UserParameter=myping[*],/etc/zabbix/fping -q $1;echo $?
Although for you the path to fping is likely to be /usr/sbin/fping or /usr/bin/fping.
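Since the goal is the response time rather than just reachability, a user parameter along these lines may work better (an untested sketch; the key name myping.time is made up, and the fping path may need adjusting for your system):
UserParameter=myping.time[*],/usr/sbin/fping -C1 -q $1 2>&1 | sed 's/.*: //'
With -C1 -q, fping prints a single "host : time-in-ms" line to stderr, so the sed expression leaves just the round-trip time in milliseconds (or "-" if the host did not answer).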
You can read more about user parameters in the official manual: https://www.zabbix.com/documentation/3.0/manual/config/items/userparameters .
While I haven't ever configured that, it would be similar on Windows - see this forum thread for some inspiration.
And if you would like to see this feature implemented out of the box, make sure to vote on this feature request.

Got it working using the PowerShell script below. :)
$Test = Test-Connection google.com -Count 1
$Test.ResponseTime
This just returns the response time for google.com, and that value is passed to Zabbix using the user parameter below:
UnsafeUserParameters=1
UserParameter=ping.google,C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe C:\zabbix\pinggoogle.ps1
I am calling this parameter from Zabbix using the key "ping.google"

Related

Wazuh active response with VirusTotal is not working

I wanted to integrate with VirusTotal and Yara, but it seems like active response doesn't work as expected when following the steps in the link below:
https://documentation.wazuh.com/current/user-manual/capabilities/active-response/ar-use-cases/removing-malware.html
After adding/downloading eicar.com in the /root directory and reading ossec.log, I get the following output:
About VirusTotal
I just followed the documentation and it worked well for me in Wazuh Manager 4.3.4 and a Wazuh Agent of the same version.
I got those same messages in /var/ossec/logs/ossec.log of the Wazuh Agent. They appear when the files do not exist or the proper permissions are not assigned; those files were already replaced in 4.2 but still show up in the log. Since you are trying to use the script from the documentation, do not worry about those messages.
If you check under /var/ossec/logs/active-responses.log do you get any error?
What version of Wazuh Manager and Wazuh Agent are you using?
About Yara
It shouldn't be related to VirusTotal and probably deserves a different post. There is an issue open here, but it seems to be working; this comment will probably help you troubleshoot that one.
The Active Response module is managed from the Wazuh Manager in /var/ossec/etc/ossec.conf. From there you can enable the response you need to execute using an <active-response> configuration block that will use a <command> as a response. For example, if you are going to enable "remove-threat" as an Active Response on any agent that triggers the VirusTotal rule, you should have a <command> block and also an <active-response> block for that particular case; the same goes for any other AR case you may want to use.
<command>
  <name>remove-threat</name>
  <executable>remove-threat.sh</executable>
  <timeout_allowed>no</timeout_allowed>
</command>

<active-response>
  <disabled>no</disabled>
  <command>remove-threat</command>
  <location>local</location>
  <rules_id>87105</rules_id>
</active-response>
The response (script) needs to be present on each agent under /var/ossec/active-response/bin/. If you are only using the "remove-threat" Active Response, you should have only a single <active-response> block in the Manager's configuration file. Each <active-response> block in the Manager's ossec.conf must have a matching <command> block, which is basically the response (script) the module is going to use. Perhaps you can share this configuration file with us so we can take a look.
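For example, you could quickly confirm on an agent that the script is in place and executable with something like:
ls -l /var/ossec/active-response/bin/remove-threat.sh
(The path is the one from the documentation; adjust the file name if you used a different script.)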
Also, the following output from the Manager will be useful to see if the integration with VirusTotal is being activated:
cat /var/ossec/logs/ossec.log | grep wazuh-integratord
I hope this helps,
Let us know

Python code in Google Cloud function not showing desired output

I have the following lines of Python code:
import os
def hello_world():
    r = os.system("curl ipinfo.io/ip")
    print(r)
hello_world()
It shows the desired output when executed from the command line in Google Cloud Shell, but there seems to be a 0 at the end of the IP address output:
$ python3 main2.py
34.X.X.2490
When I deployed the same code in a Google Cloud Function, it showed OK as the output.
I had to replace the first line of code as follows to make it deploy in GCF:
def hello_world(self):
Any suggestions so that GCF displays the desired output, which is the output of the curl command?
Your function won't work for 2 reasons:
Firstly, you don't respect the HTTP Cloud Function Python function signature:
def hello_world(request):
....
Secondly, you can't rely on system calls. Strictly speaking you can perform them, but because you don't know which packages/binaries are installed, you can't depend on them. It's serverless: you don't manage the underlying infrastructure or runtime environment.
Here you made the assumption that curl is installed on the runtime image. Maybe it is, maybe it isn't; maybe it was and it will be removed in the future! You can't rely on that! (Incidentally, the stray 0 at the end of your local output is the exit status of curl: os.system returns it and print(r) prints it right after curl's own output.)
If you want to manage your runtime environment, you can use Cloud Run. There you control the runtime environment, you can install whatever you want in it, and you can be sure of what you can do.
Last remarks:
Note: instead of running curl, you can perform an HTTP GET request to the same URL to get the IP (see the sketch below).
Why do you want to know the outgoing IP? It's serverless, so you also don't manage the network. You will reach the internet through Google's IPs, which can change at any time, and other Cloud Functions (or Cloud Run services), from your projects or from other people's projects (like mine), can use the same IPs. They are Google's IPs, not yours! If this is a real requirement for you, let me know; there are solutions for that.
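As a minimal sketch of both fixes (the correct signature plus an HTTP GET instead of shelling out to curl), something along these lines should work as an HTTP Cloud Function; it only uses the standard library, and the entry point name hello_world is just an example:

import urllib.request

def hello_world(request):
    # HTTP Cloud Function entry point: 'request' is the incoming Flask request object.
    # Fetch the outgoing IP with an HTTP GET instead of shelling out to curl.
    with urllib.request.urlopen("https://ipinfo.io/ip") as resp:
        ip = resp.read().decode().strip()
    # Returning a string makes it the HTTP response body of the function.
    return ip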

CAS 6.2.x MFA Principal Attribute Trigger 'memberOf' Active Directory Not Working

I have CAS 6.2.x running in Kubernetes, building the image from this repo. I am passing in the cas.properties file via a ConfigMap. I have it wired up against Active Directory and am able to log in with the username/password. I am now working to enable MFA with the Google Authenticator plugin. I have this working as well if I force the flow globally with the following:
cas.authn.mfa.global-provider-id=mfa-gauth
When I try to use the values described here for Multifactor Authentication: Principal Attribute Trigger, it doesn't send me to the MFA flow. These are the settings I have set:
cas.authn.ldap[0].principalAttributeList=userPrincipalName,cn,givenName,sAMAccountName,memberOf
cas.authn.mfa.global-principal-attribute-name-triggers=memberOf
cas.authn.mfa.global-principal-attribute-value-regex=ForceMfa
When I log in, these are the values returned for the memberOf attribute on the Principal:
memberOf: [CN=Group2,OU=MyOu,DC=subdomain,DC=domain,DC=local, CN=Group1,OU=MyOu,DC=subdomain,DC=domain,DC=local, CN=ForceMfa,OU=MyOu,DC=subdomain,DC=domain,DC=local]
I used Misagh's blog post as a guide.
If I change the trigger and regex to sAMAccountName and my username, it works as expected. I'm not sure if I need to change the regex format to find the group name or if I have something else wrong; it just seems like the regex is not finding a match, since the settings work for me with sAMAccountName, just not with memberOf.
Thank you
Consider switching this to:
cas.authn.mfa.global-principal-attribute-value-regex=.*ForceMfa.+
Then attach/review your logs for org.apereo.cas at DEBUG or TRACE level so you can see what's happening.
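For example, something along these lines in your log4j2.xml (inside the <Loggers> section) should raise that package to DEBUG; this is only a sketch, and the appender name must match one already defined in your logging configuration - casConsole here is just a placeholder:
<Logger name="org.apereo.cas" level="debug" additivity="false">
  <AppenderRef ref="casConsole"/>
</Logger>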

AWS SSM Parameter Store: How can I edit multi-line "SecureString" values using the console?

Currently, I use a single SSM parameter to store a set of properties separated by newlines, like this:
property1=value1
property2=value2
property3=value3
(I am aware of the 4K size limit, it's fine.)
This works well for normal String type parameters that store non-sensitive information like environment configuration, but I'd also like to do something similar for secrets using the SecureString parameter type.
The problem is that I can't edit the parameter value in the console because it uses an HTML input field of type="password" that doesn't handle newlines.
The multi-line value works fine with the actual parameter store backend - I can set a value with multiple lines with the SSM API no problem and they can be read with the EC2 CLI properly too.
But I can't edit them using the console. This is a problem because the whole point of using a SecureString parameter is that I intend the only place to edit/view these secrets to be via the console (so that permissions are controlled and access is audited).
There are a few infrastructure workarounds I could implement (one parameter for each secret, storing the secrets in S3 or another secret-storing service, etc.), but they all have drawbacks. I'm just trying to find out if there's a way around this using the console.
Is there any way I can work around this and use the console to edit multi-line SecureString parameters?
Any kind of browser workaround or hack that I might be able to use to tell the browser to use a textarea instead of a "password" type field?
I'm using Chrome, but I'd be happy to work around this by using another browser or something (editing the secrets is pretty rare, and viewing multi-line values in the console works fine).
EDIT
After posting this question, AWS notified me there was a whole new "AWS Systems Manager" UI, but it still has the same problem - I tried the below browser hacks on this new UI, but no luck.
Failed browser hack attempt 1: I tried opening the browser console, running document.getElementById("Value").value = "value1\nvalue2" and then clicking the save button, which set the value I injected, but the newline was filtered out.
Failed browser hack attempt 2: I tried using the browser inspector to change the element to a textarea and then typed in two lines of input and clicked save, but that didn't set the value at all.
From https://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-file, I learned you can pass a file as the --value argument. So if your file is called secrets.properties, you can do this:
aws ssm put-parameter --type SecureString --name secrets --value file://secrets.properties
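A quick way to verify the stored value (newlines included) is to read it back with:
aws ssm get-parameter --name secrets --with-decryption --query Parameter.Value --output text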
I found a way to do it, but it's too much effort and too weird - if anyone can find a simpler way, I will mark that as the answer.
The hacky workaround is to install the "Tamper Chrome" extension + app, then capture the XHR request as the browser sends it and edit the new lines into the JSON.
Blech. Plus "Tamper Chrome" is pretty awful, I don't want to run it on my machine.
It might be better to use the new Secrets Manager that was launched recently. Its interface is very close to Parameter Store, but it has better support for multiple parameters in one place.
I wonder if the change in the console was due to the expected release of that service, since they have a pricing model around secrets whereas Parameter Store is free.
In the end, I decided the answer to this question is "don't do that". Not that I would've wanted to hear that when I was trying to make it work.
You should use a separate SSM param per secret for these reasons:
ability to grant permissions at a fine-grained level; e.g. you have an API password for calling your service and a DB password for the service to talk to a DB - if you store them in the same secret, you can't grant access to only the API password
ability to track key access separately - the SSM access logs can only tell you that the target machine/user accessed the SSM param at that time; they won't be able to tell you which secret was accessed
ability to use separate KMS keys to encrypt
Just watch out for the fact that you can only request a max of 10 SSM params at a time.
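For example, with one parameter per secret you would fetch them along these lines (the parameter names here are made up):
aws ssm get-parameters --names /myapp/db-password /myapp/api-key --with-decryption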
If you want, you can try my app: https://github.com/ledongthuc/awssecretsmanagerui
I created it to make updating multi-line and binary values easier. Hope it's helpful for your case.

Couchbase - Can the N1QL DP4 release handle the stale option being set on a query?

I have been running some tests using the DP4 release of N1QL.
It seems that if I write to the database (save a document) I can access it by key straight away, but if I run a query to find it by the document type and another matching value it doesn't come back in the results for between 1 and 10 seconds.
After this time has passed, the query returns the expected result.
I have seen the issue raised here: https://issues.couchbase.com/browse/MB-10944
The issue says it is resolved in DP4 but there is no confirmation of this or documentation on how to use the new feature.
Has anybody figured out how to do this or could one of the Couchbase developers lend a hand?
Yes, but that feature is currently not available via the N1QL shell; you will need to use the HTTP REST API directly to pass those parameters.
e.g.
curl -v http://localhost:8093/query/service -d 'statement=select * from default&scan_consistency=REQUEST_PLUS'
By setting the scan_consistency parameter to REQUEST_PLUS, N1QL will set stale=false internally for the view scan.