Zabbix: get the result of running a command on the ESXi server

On the ESXi server (which I connect to via SSH), I run this command to view a disk's SMART data:
esxcli storage core device smart get -d=t10.ATA_INTEL_SSDSC2BB080G4__PHSL4
How can I pass the resulting data to Zabbix?
I tried creating an item of type "Script". I create this item in a template; the template already connects to ESXi using macros and Zabbix receives data. But you cannot simply paste this command into the item's script field, you have to write an actual script there.

In Zabbix, a Script is a command you can execute from the host context menu, or a command executed as an action when a problem is triggered.
An item that runs a command over SSH should use the "SSH agent" item type.
You cannot run an esxcli command through the vSphere HTTPS SDK with Zabbix.
See
https://www.zabbix.com/documentation/6.0/en/manual/web_interface/frontend_sections/administration/scripts
https://www.zabbix.com/documentation/6.0/en/manual/config/items/itemtypes/ssh_checks
Example:
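A minimal sketch of such an SSH agent item, assuming the template defines user macros {$ESXI.SSH.USER} and {$ESXI.SSH.PASSWORD} and that the host interface points at the ESXi host (the key description esxi.smart.ssd1 is an arbitrary placeholder):

Type:            SSH agent
Key:             ssh.run[esxi.smart.ssd1]
Authentication:  Password (or Public key)
User name:       {$ESXI.SSH.USER}
Password:        {$ESXI.SSH.PASSWORD}
Executed script: esxcli storage core device smart get -d=t10.ATA_INTEL_SSDSC2BB080G4__PHSL4

The item returns the raw text output of the command; individual SMART attributes can then be extracted with dependent items and preprocessing if needed.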

Related

Internal 500 error on Google Compute Engine, installing littlest jupyter

"Internal 500 server error" after VM runs for a day or two.
This is the second time it has happened, I start the instance, install littlest Jupyterhub
(see details below). I can login to the external ip, for a day, but then it stops
with internal 500 error. I cannot ssh or get into the instance, only alternate is to
create a new instance and re-do. What is the problem?
I have installed littlest jupyterhub using on this instance, using
#!/bin/bash
curl https://raw.githubusercontent.com/jupyterhub/the-littlest-jupyterhub/master/bootstrap/bootstrap.py | sudo python3 - --admin master
I would recommend you enable access to the serial console on your instance [1].
You will also need to set up a password for your user, following this documentation [2].
With these two steps done, you should be able to reconnect to your instance through the serial console once you are locked out, as described here [3].
You can then investigate what is going on inside the instance: verify whether your application is still running, whether the SSH server is still running, etc.
Frederic
[1] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#enable_instance_access
[2] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#setting_up_a_local_password
[3] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#connectserialconsole
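As a rough sketch of those steps with the gcloud CLI (INSTANCE_NAME and ZONE are placeholders for your own values):

# Enable interactive serial console access on the instance [1]
gcloud compute instances add-metadata INSTANCE_NAME --zone ZONE --metadata serial-port-enable=TRUE

# While you still have access, set a local password for your user from inside the instance [2]
sudo passwd $USER

# Once locked out, connect through the serial console instead of SSH [3]
gcloud compute connect-to-serial-port INSTANCE_NAME --zone ZONE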

ECS EC2 Launch Type: Service database connection string

I am trying out a small POC (learning experiment) with Docker. I have three Docker images, one each for a storefront, a search engine and a database engine, called storefront, solr and docmysql respectively. I have tried running them in a Docker swarm (on a single node) on EC2 and it works fine.
For the next step of the POC, I needed to move this to AWS ECS using the EC2 launch type on a single AMI that is not the Amazon ECS-optimized AMI; I have installed and started an ecs-agent on it. I have created three services, each with one task, with each of the three images configured as the container within its task. The question is about connecting to the database from the storefront.
The storefront has a property file where the database connection is typically defined as
"jdbc:mysql://docmysql/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false".
This worked when I ran it as a Docker swarm. Once I moved it to ECS (EC2 launch type), I had to expose port 3306 from my task/container for the docmysql service. This gave me a service endpoint of docmysql.local, with 'local' being a private namespace. I tried changing the connection string to
"jdbc:mysql://docmysql.local/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false"
in the property file, and it always fails with "Name or service not known". What should my connection string be? When the service is created I see two entries in Route 53, one SRV record and one A record. The A record's name is <task-id>.docmysql.local; if I use that in the database connection string it works, but that is obviously not the right thing to do with the hardcoded task ID. I have read about AWS Cloud Map (service discovery) but it is still not very clear to me how to go about it. I will not be putting any load balancer in front of my DB task in the service; there will always be only one task for the DB.
So what is the best way to generate a connection string that works? And why did I not have these issues when I ran it as a Docker swarm?
I know I could use RDS instead of hosting my own database, and I will try that, but for now I need this working as I have started down this path. Thanks for any help.
Well, before getting to my own solution, a few points worth raising about the problem:
Do you need your instance to scale using ECS? If not, migrate it to RDS.
Do you need to deploy it on the EC2 launch type? If not, use Fargate; it is simpler to handle.
Now, I have faced this issue on Fargate and discovered that, depending on your container/task definitions, the database can be placed inside the same task for testing purposes, in which case 127.0.0.1 is the answer.
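For instance, if storefront and docmysql were defined as containers in the same task (an assumption purely for illustration), the storefront's connection string could simply point at the loopback address:
jdbc:mysql://127.0.0.1:3306/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false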
For containers in different tasks you need to work with the awsvpc network mode, which means:
"Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it." (from the AWS documentation)
My suggestion is to create a Lambda function to discover your network interface dynamically.
Read these for a deeper understanding:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/
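As a rough sketch of the lookup such a Lambda function (or a manual check) would perform, assuming a cluster named my-cluster and a service named docmysql (both placeholders), the task's private IP can be resolved with the AWS CLI:

# List the running task(s) of the service
aws ecs list-tasks --cluster my-cluster --service-name docmysql

# Find the elastic network interface attached to the task
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> --query 'tasks[0].attachments[0].details'

# Resolve the private IP of that ENI, which the storefront can then use in its connection string
aws ec2 describe-network-interfaces --network-interface-ids <eni-id> --query 'NetworkInterfaces[0].PrivateIpAddress'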

How to set up SQL Developer for a PCF MySQL database to manage it

I am trying to understand PCF concepts. Once I am done creating a MySQL service in PCF, how can I manage that database, creating and maintaining tables, just like we do in a traditional environment using SQL Developer? I came across one service, PivotalMySQLWeb, and tried it, but did not like it much. So if I can somehow get the connection details of the MySQL service, I can use them to connect with SQL Developer.
The links #khalid mentioned are definitely good.
http://docs.pivotal.io/p-mysql/2-0/use.html
https://github.com/andreasf/cf-mysql-plugin#usage
More generally, you can use an SSH tunnel to access any service, not just MySQL. This also allows you to use whatever tool you would like to access the service.
This is documented here, but in case that link goes away, here are the steps.
Create your target service instance, if you don't have one already.
Push an app, any app. It really doesn't matter, it can be a hello world app. The app doesn't even need to use the service. We just need something to connect to.
Either bind the service from #1 to the app in #2, or create a service key using the service from #1. If you bind to the app, run cf env <app>; if you use a service key, run cf service-key MY-DB EXTERNAL-ACCESS-KEY. Either one will give you your service credentials.
Run cf ssh -L 63306:us-cdbr-iron-east-01.p-mysql.net:3306 YOUR-HOST-APP, where 63306 is the local port you'll connect to on your machine and us-cdbr-iron-east-01.p-mysql.net:3306 are the host and port from the credentials in step #3.
The tunnel is now up, use whatever client you'd like to connect to your service. For example: mysql -u b5136e448be920 -h localhost -p -D ad_b2fca6t49704585d -P 63306, where b5136e448be920 and ad_b2fca6t49704585d are the username and database name from step #3 and 63306 is the local port you picked from step #4.
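Putting those steps together, a rough end-to-end sketch using the example names and credentials from above (replace them with your own):

# Step 3: create a service key and read out the credentials (host, port, username, password, database name)
cf create-service-key MY-DB EXTERNAL-ACCESS-KEY
cf service-key MY-DB EXTERNAL-ACCESS-KEY

# Step 4: open the tunnel, local port 63306 -> the service's host:port, via any pushed app
cf ssh -L 63306:us-cdbr-iron-east-01.p-mysql.net:3306 YOUR-HOST-APP

# Step 5: in another terminal, connect through the tunnel (127.0.0.1 forces a TCP connection,
# whereas "localhost" may make the client try the local socket instead)
mysql -u b5136e448be920 -p -h 127.0.0.1 -P 63306 -D ad_b2fca6t49704585d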
Additionally, if you want to connect to an aws-rds-mysql service (instantiated from Pivotal Cloud Foundry) from IntelliJ, you can use the DB Navigator plugin (https://plugins.jetbrains.com/plugin/1800-database-navigator), through which database manipulation can be performed.
After creating the SSH tunnel with $ cf ssh -L 63306:<DB_HOSTNAME>:3306 YOUR-HOST-APP (as also described in https://docs.pivotal.io/pivotalcf/2-4/devguide/deploy-apps/ssh-services.html),
go to the DB Navigator plugin and click Custom under New Connection.
Enter the URL as: jdbc:mysql://<username>:<password>@localhost:63306/<database_name>
The following thread might be helpful for you as well: How do I connect to my MySQL service on Pivotal Cloud Foundry (PCF) via MySQL Workbench or CLI or MySQLWeb Database Management App?

Pull new item value on demand

I am looking for a way to get a fresh value (not from the database) for $itemid in Zabbix via a command-line interface,
i.e. to force the Zabbix server or proxy to poll a new value for an item, something like:
php command $itemid
response: value
Forcing the Zabbix server or proxy to poll a value for a passive-like item is not supported at this time. There is a feature request to add such a capability: https://support.zabbix.com/browse/ZBXNEXT-473
You can script around zabbix_sender on the destination host to send data to the Zabbix server, but this requires the item to be of the "Zabbix trapper" type, and the periodic send has to be scheduled on the destination host itself.
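A rough sketch of such a zabbix_sender call, run periodically from cron on the destination host (the server address, host name, item key and the get_value.sh helper are all placeholders):

# my.trapper.key must exist on "My host" as a "Zabbix trapper" item
zabbix_sender -z zabbix.example.com -s "My host" -k my.trapper.key -o "$(/usr/local/bin/get_value.sh)"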

How to execute a script on the Zabbix agent?

I have written a magic script on the Zabbix agent host and I want to execute it via a Zabbix item. I will be thankful to you in advance for this; please help if you can.
You would create a user parameter that would look like any other item from the server side. On the agent side, you would have to edit the agent configuration file and restart the agent.
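For example, a minimal sketch of that approach, assuming your script lives at /usr/local/bin/magic_script.sh and using magic.script as the key name (both are placeholders):

# In zabbix_agentd.conf on the agent host, then restart the agent:
UserParameter=magic.script,/usr/local/bin/magic_script.sh

# On the server side, create an item of type "Zabbix agent" with this key:
magic.script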
In addition to user parameters, Zabbix supports remote commands. These are appropriate as (and indeed invoked from) actions triggered by some condition. So, in the answer from Richv, you would use a user parameter for polling data, e.g. to find out whether a process is running. Continuing that example, if the process is not running and you want to start it, you can ask the agent to execute a command that starts the process. To do that you must enable remote commands in the agent config file and configure the command as an action (linked in turn to whatever condition you are concerned about).
First, in zabbix_agentd.conf on the remote host, add EnableRemoteCommands=1.
Then see the video at https://youtu.be/G6jfahBZwlk
In the video, a .bat file is created on the remote host, and then an item is created for the remote host through the Zabbix server frontend, selecting the system.run option among the key options.
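A hedged sketch of that setup (the .bat path is a placeholder):

# zabbix_agentd.conf on the remote host, followed by an agent restart
EnableRemoteCommands=1

# Item key on the Zabbix server (item type "Zabbix agent")
system.run[C:\scripts\magic.bat]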