Zabbix web scenario variables: random number or other function

I need to post a variable with a random number as its value. How can I generate a random variable in a web scenario? Can I run some script or macro to generate a random value for a scenario or step?

There is no native way to do it; as you guessed, you can make it work with a macro and a custom script.
You can define a {$RANDOM} host macro and use it in the web scenario step as a post field value.
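For example, a post field in the step could reference the macro directly (the field name here is illustrative):

random_token={$RANDOM}

Zabbix resolves the user macro each time the scenario runs.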
Then you have to change it periodically with a cron-driven script; here is a Python sample:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Set a random macro to a value.
Provide credentials on the command line or via environment variables:
# export ZABBIX_SERVER='https://your_zabbix_host/zabbix/'
# export ZABBIX_USERNAME='admin'
# export ZABBIX_PASSWORD='secretPassword'
$ ./setRandomMacro.py -u admin -p zabbix -Z http://yourzabbix -H yourHost -M '{$RANDOM}'
Connecting to http://yourzabbix
Host yourHost (Id: ----)
{$RANDOM}: current value "17" -> new value "356"
$ ./setRandomMacro.py -u admin -p zabbix -Z http://yourzabbix -H yourHost -M '{$RANDOM}'
Connecting to http://yourzabbix
Host yourHost (Id: ----)
{$RANDOM}: current value "356" -> new value "72"
"""
from zabbix.api import ZabbixAPI
import json
import argparse
import sys
import os
import random


# argparse action that falls back to an environment variable for its default
class EnvDefault(argparse.Action):
    # From https://stackoverflow.com/questions/10551117/
    def __init__(self, envvar, required=True, default=None, **kwargs):
        if not default and envvar:
            if envvar in os.environ:
                default = os.environ[envvar]
        if required and default:
            required = False
        super(EnvDefault, self).__init__(default=default, required=required,
                                         **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        setattr(namespace, self.dest, values)


def jsonPrint(jsonUgly):
    print(json.dumps(jsonUgly, indent=4, separators=(',', ': ')))


def ArgumentParser():
    parser = argparse.ArgumentParser()
    parser.add_argument('-Z',
                        required=True,
                        action=EnvDefault,
                        envvar='ZABBIX_SERVER',
                        help="Specify the zabbix server URL ie: http://yourserver/zabbix/ (ZABBIX_SERVER environment variable)",
                        metavar='zabbix-server-url')
    parser.add_argument('-u',
                        required=True,
                        action=EnvDefault,
                        envvar='ZABBIX_USERNAME',
                        help="Specify the zabbix username (ZABBIX_USERNAME environment variable)",
                        metavar='Username')
    parser.add_argument('-p',
                        required=True,
                        action=EnvDefault,
                        envvar='ZABBIX_PASSWORD',
                        help="Specify the zabbix password (ZABBIX_PASSWORD environment variable)",
                        metavar='Password')
    parser.add_argument('-H',
                        required=True,
                        help="Hostname",
                        metavar='hostname')
    parser.add_argument('-M',
                        required=True,
                        help="Macro to set",
                        metavar='macro')
    return parser.parse_args()


def main(argv):
    # Parse arguments and build work variables
    args = ArgumentParser()
    zabbixURL = args.Z
    zabbixUsername = args.u
    zabbixPassword = args.p
    hostName = args.H
    macroName = args.M

    # API connect
    print('Connecting to {}'.format(zabbixURL))
    zapi = ZabbixAPI(url=zabbixURL, user=zabbixUsername,
                     password=zabbixPassword)
    hostObj = zapi.host.get(search={'host': hostName}, output='hostids')
    print('Host {} (Id: {})'.format(hostName, hostObj[0]['hostid']))
    currentMacro = zapi.usermacro.get(
        hostids=hostObj[0]['hostid'], filter={'macro': macroName})
    if currentMacro:
        newMacroValue = random.randint(1, 1001)
        print('{}: current value "{}" -> new value "{}"'.format(
            macroName, currentMacro[0]['value'], newMacroValue))
        # The Zabbix API stores macro values as strings
        zapi.usermacro.update(
            hostmacroid=currentMacro[0]['hostmacroid'], value=str(newMacroValue))
    else:
        print('No {} macro found on host {}'.format(macroName, hostName))


if __name__ == "__main__":
    main(sys.argv[1:])
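To actually rotate the value, schedule the script with cron; an illustrative crontab entry (the path and credentials are assumptions) that refreshes the macro every five minutes:

*/5 * * * * /usr/local/bin/setRandomMacro.py -Z http://yourzabbix -u admin -p zabbix -H yourHost -M '{$RANDOM}'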

Related

AWS API re-deployment using Ansible

I have an existing API in my AWS account. Now I am trying to use Ansible to redeploy the API after introducing any resource policy changes.
According to AWS, I need to use the CLI command below to redeploy the API:
- name: deploy API
  command: >
    aws apigateway update-stage --region us-east-1 \
      --rest-api-id <rest-api-id> \
      --stage-name 'stage' \
      --patch-operations op='replace',path='/deploymentId',value='<deployment-id>'
Above, the 'deploymentId' from the previous deployment will be different after every deployment, which is why I am trying to capture it as a variable so the redeployment steps can be automated.
I can get the previous deployment information using the CLI below:
- name: Get deployment information
  command: >
    aws apigateway get-deployments \
      --rest-api-id 123454ne \
      --region us-east-1
  register: deployment_info
And the output looks like this:
deployment_info.stdout_lines:
- '{'
- ' "items": ['
- ' {'
- ' "id": "abcd",'
- ' "createdDate": 1228509116'
- ' }'
- ' ]'
- '}'
I was using deployment_info.items.id as the deploymentId but couldn't make it work. Now I am stuck on what the Ansible expression should be to get the id from the output and use it as the deploymentId in the deployment command.
How can I use this id for the deploymentId in the deployment commands?
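For the id extraction itself, one approach (an untested sketch; the task names are illustrative) is to parse the registered stdout with the from_json filter, using bracket notation because items collides with a dict method in Jinja2:

- name: Extract the deployment id from the registered output
  set_fact:
    deployment_id: "{{ (deployment_info.stdout | from_json)['items'][0]['id'] }}"

- name: deploy API
  command: >
    aws apigateway update-stage --region us-east-1
    --rest-api-id <rest-api-id>
    --stage-name 'stage'
    --patch-operations op='replace',path='/deploymentId',value='{{ deployment_id }}'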
I created a small Ansible module which you might find useful:
#!/usr/bin/python
# Creates a new deployment for an API GW stage
# See https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-deployments.html
# Based on https://github.com/ansible-collections/community.aws/blob/main/plugins/modules/aws_api_gateway.py

# TODO needed?
# from __future__ import absolute_import, division, print_function
# __metaclass__ = type

import json
import traceback

try:
    import botocore
except ImportError:
    pass  # Handled by AnsibleAWSModule

from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry


def main():
    argument_spec = dict(
        api_id=dict(type='str', required=True),
        stage=dict(type='str', required=True),
        deploy_desc=dict(type='str', required=False, default='')
    )

    module = AnsibleAWSModule(
        argument_spec=argument_spec,
        supports_check_mode=True
    )
    api_id = module.params.get('api_id')
    stage = module.params.get('stage')
    client = module.client('apigateway')

    # Create the deployment if not in check_mode
    deploy_response = None
    changed = False
    if not module.check_mode:
        try:
            deploy_response = create_deployment(client, api_id, **module.params)
            changed = True
        except (botocore.exceptions.ClientError, botocore.exceptions.EndpointConnectionError) as e:
            msg = "Updating api {0}, stage {1}".format(api_id, stage)
            module.fail_json_aws(e, msg)

    exit_args = {"changed": changed, "api_deployment_response": deploy_response}
    module.exit_json(**exit_args)


retry_params = {"retries": 10, "delay": 10, "catch_extra_error_codes": ['TooManyRequestsException']}


@AWSRetry.jittered_backoff(**retry_params)
def create_deployment(client, rest_api_id, **params):
    result = client.create_deployment(
        restApiId=rest_api_id,
        stageName=params.get('stage'),
        description=params.get('deploy_desc')
    )
    return result


if __name__ == '__main__':
    main()
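A playbook task using it might look like this (a sketch: it assumes the module file is saved as api_gw_deploy.py in the playbook's library/ directory, and the ids are illustrative):

- name: Create a new deployment for the stage
  api_gw_deploy:
    api_id: "123454ne"
    stage: "stage"
    deploy_desc: "Redeploy after resource policy change"
  register: deploy_result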

JINA#4428[C]:Can not fetch the URL of Hubble from `api.jina.ai`

I was trying out the Semantic Wikipedia Search from jina-ai.
This is the error I am getting after running python app.py -t index (app.py is used to index the data):
JINA#4489[C]:Can not fetch the URL of Hubble from api.jina.ai
HubIO#4489[E]:Error while pulling jinahub+docker://TransformerTorchEncoder:
JSONDecodeError('Expecting value: line 1 column 1 (char 0)')
This is app.py:
__copyright__ = "Copyright (c) 2021 Jina AI Limited. All rights reserved."
__license__ = "Apache-2.0"

import os
import sys
import click
import random
from jina import Flow, Document, DocumentArray
from jina.logging.predefined import default_logger as logger

MAX_DOCS = int(os.environ.get('JINA_MAX_DOCS', 10000))


def config(dataset: str):
    if dataset == 'toy':
        os.environ['JINA_DATA_FILE'] = os.environ.get('JINA_DATA_FILE', 'data/toy-input.txt')
    elif dataset == 'full':
        os.environ['JINA_DATA_FILE'] = os.environ.get('JINA_DATA_FILE', 'data/input.txt')
    os.environ['JINA_PORT'] = os.environ.get('JINA_PORT', str(45678))
    cur_dir = os.path.dirname(os.path.abspath(__file__))
    os.environ.setdefault('JINA_WORKSPACE', os.path.join(cur_dir, 'workspace'))
    os.environ.setdefault('JINA_WORKSPACE_MOUNT',
                          f'{os.environ.get("JINA_WORKSPACE")}:/workspace/workspace')


def print_topk(resp, sentence):
    for doc in resp.data.docs:
        print(f"\n\n\nTa-Dah🔮, here's what we found for: {sentence}")
        for idx, match in enumerate(doc.matches):
            score = match.scores['cosine'].value
            print(f'> {idx:>2d}({score:.2f}). {match.text}')


def input_generator(num_docs: int, file_path: str):
    with open(file_path) as file:
        lines = file.readlines()
    num_lines = len(lines)
    random.shuffle(lines)
    for i in range(min(num_docs, num_lines)):
        yield Document(text=lines[i])


def index(num_docs):
    flow = Flow().load_config('flows/flow.yml')
    data_path = os.path.join(os.path.dirname(__file__), os.environ.get('JINA_DATA_FILE', None))
    with flow:
        flow.post(on='/index', inputs=input_generator(num_docs, data_path),
                  show_progress=True)


def query(top_k):
    flow = Flow().load_config('flows/flow.yml')
    with flow:
        text = input('Please type a sentence: ')
        doc = Document(content=text)
        result = flow.post(on='/search', inputs=DocumentArray([doc]),
                           parameters={'top_k': top_k},
                           line_format='text',
                           return_results=True,
                           )
        print_topk(result[0], text)


@click.command()
@click.option(
    '--task',
    '-t',
    type=click.Choice(['index', 'query'], case_sensitive=False),
)
@click.option('--num_docs', '-n', default=MAX_DOCS)
@click.option('--top_k', '-k', default=5)
@click.option('--dataset', '-d', type=click.Choice(['toy', 'full']), default='toy')
def main(task, num_docs, top_k, dataset):
    config(dataset)
    if task == 'index':
        if os.path.exists(os.environ.get("JINA_WORKSPACE")):
            logger.error(
                f'\n +---------------------------------------------------------------------------------+ \
                \n |                                      🤖🤖🤖                                       | \
                \n | The directory {os.environ.get("JINA_WORKSPACE")} already exists. Please remove it before indexing again. | \
                \n |                                      🤖🤖🤖                                       | \
                \n +---------------------------------------------------------------------------------+'
            )
            sys.exit(1)
        index(num_docs)
    elif task == 'query':
        query(top_k)


if __name__ == '__main__':
    main()
This is flow.yml:
version: '1'  # This is the yml file version
with:  # Additional arguments for the flow
  workspace: $JINA_WORKSPACE  # Workspace folder path
  port_expose: $JINA_PORT  # Network port for the flow
executors:  # Now, define the executors that are run on this flow
  - name: transformer  # This executor computes an embedding based on the input text documents
    uses: 'jinahub+docker://TransformerTorchEncoder'  # We use a Transformer Torch Encoder from the hub as a docker container
  - name: indexer  # Now, index the text documents with the embeddings
    uses: 'jinahub://SimpleIndexer'  # We use the SimpleIndexer for this purpose
And when I try to execute app.py -t index, this is the error:
JINA#3803[C]:Can not fetch the URL of Hubble from `api.jina.ai` HubIO#3803[E]:Error while pulling jinahub+docker://TransformerTorchEncoder: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')
I think this just happened because the API was down. It should work now.

Modifying Python 3 code to deal with RSA key fingerprint

The following code fails to connect to a Cisco switch because of this prompt:
RSA key fingerprint is 3e:b7:7b:55:6b:a3:xx:xx:xx:xx
Are you sure you want to continue connecting (yes/no)? yes
#!/usr/bin/env python
from __future__ import print_function
from netmiko import ConnectHandler
import sys
import time
import select
import paramiko
import re

fd = open(r'output_twinax.log', 'w')  # Where you want the file to save to.
old_stdout = sys.stdout
sys.stdout = fd

platform = 'cisco_ios'
username = 'username'  # edit to reflect
password = 'password'  # edit to reflect

# a simple list of IP addresses you want to connect to, each one on a new line
ip_add_file = open(r'IP-list', 'r')

for host in ip_add_file:
    host = host.strip()
    device = ConnectHandler(device_type=platform, ip=host, username=username, password=password)
    find_hostname = device.find_prompt()
    hostname = find_hostname.replace(">", "")
    print(hostname)
    output = device.send_command('terminal length 0')
    output = device.send_command('enable')  # Editable to be whatever is needed
    output = device.send_command('sh int status | i SFP')
    print(output)

fd.close()
Please help me modify it to account for the RSA key. Thank you very much.
Did you try the use_keys keyword argument?
#!/usr/bin/env python
from __future__ import print_function
from netmiko import ConnectHandler
import sys
import time
import select
import paramiko
import re

fd = open(r'output_twinax.log', 'w')  # Where you want the file to save to.
old_stdout = sys.stdout
sys.stdout = fd

platform = 'cisco_ios'
username = 'username'  # edit to reflect
password = 'password'  # edit to reflect

# List of IP addresses, one per line
ip_add_file = open(r'IP-list', 'r')
key_file = "./rsa_key.txt"

for host in ip_add_file:
    host = host.strip()
    device = ConnectHandler(device_type=platform,
                            ip=host,
                            username=username,
                            key_file=key_file,
                            use_keys=True)
    find_hostname = device.find_prompt()
    hostname = find_hostname.replace(">", "")
    print(hostname)
    output = device.send_command('terminal length 0')
    output = device.send_command('enable')
    output = device.send_command('sh int status | i SFP')
    print(output)

fd.close()
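As an aside, the fingerprint prompt in the question usually comes from an interactive ssh session; Netmiko (via Paramiko) accepts unknown host keys unless told otherwise via its ssh_strict option. A minimal password-based sketch making that behaviour explicit (ssh_strict=False is, to my understanding, the default):

device = ConnectHandler(device_type=platform,
                        ip=host,
                        username=username,
                        password=password,
                        ssh_strict=False)  # auto-accept unknown host keys instead of prompting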

Backup DB Django MysqlDump

Good afternoon. I have an application in Django 1.10 where I need to create a backup of the database; the copy should be made when the user clicks a button placed in a template, and the copy should be downloaded to the user's computer.
In my views.py I have the following:
def backup(request):
    subprocess.Popen("mysqldump -u root -p12345 victimas > /home/proyecto/backup.sql")
    subprocess.Popen("gzip -c /home/proyecto/backup.sql > /home/proyecto/backup.gz")
    dataf = open('/home/proyecto/backups/backup.gz', 'r')
    return HttpResponse(dataf.read(), mimetype='application/x-gzip')
But I get this error:
[Errno 2] No such file or directory: django mysqldump
Doing this directly from the console creates the file for me, and I have checked the permissions of the folder.
I appreciate your collaboration.
As per the Popen documentation, Popen takes a list of arguments. If you pass it a string, the entire string is treated as the command name, not as a command with arguments.
Split the string argument using shlex:
import shlex
import subprocess

command_line = "mysqldump -u root -p12345 victimas > /home/proyecto/backup.sql"
args = shlex.split(command_line)
subprocess.Popen(args)
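One caveat with that snippet: the > redirection is a shell feature, so after shlex.split it is passed to mysqldump as a literal argument rather than redirecting output. A sketch that keeps the list form and lets Python handle the redirection (same illustrative paths and credentials):

import shlex
import subprocess

command_line = "mysqldump -u root -p12345 victimas"
args = shlex.split(command_line)
# Open the dump file ourselves and hand it to Popen as stdout instead of using '>'
with open('/home/proyecto/backup.sql', 'w') as outfile:
    proc = subprocess.Popen(args, stdout=outfile)
    proc.wait()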
I resolved this:
In the Django settings file add:
RUTA = '/path/to_tmp/file/'
In views.py:
import subprocess, gzip
from subprocess import Popen
from django.core.files.storage import FileSystemStorage
from django.http import HttpResponse
from victimas.settings import DATABASES, RUTA


def backup(request):
    name = DATABASES['default']['NAME']
    passwd = DATABASES['default']['PASSWORD']
    user = DATABASES['default']['USER']
    ruta = RUTA
    proc = subprocess.Popen("mysqldump -u " + user + " -p" + passwd + " " + name + " > " + ruta + "backup.sql", shell=True)
    proc.wait()
    procs = subprocess.Popen("tar -czvf " + ruta + "backup.tar.tgz " + ruta + "backup.sql", shell=True)
    procs.wait()
    fs = FileSystemStorage(ruta)
    with fs.open('backup.tar.tgz') as tar:
        response = HttpResponse(tar, content_type='application/x-gzip')
        # 'attachment' tells the browser to download the file
        response['Content-Disposition'] = 'attachment; filename="backup.tar.tgz"'
        return response

Scrapy / Pipeline not inserting data to MySQL database

I'm making a pipeline in Scrapy to store scraped data in a MySQL database. When the spider is run in the terminal it works perfectly; even the pipeline is opened. However, the data is not being sent to the database. Any help appreciated! :)
Here's the pipeline code:
import sys
import MySQLdb
import hashlib
from scrapy.exceptions import DropItem
from scrapy.http import Request
from tutorial.items import TutorialItem


class MySQLTest(object):
    def __init__(self):
        db = MySQLdb.connect(user='root', passwd='', host='localhost', db='python')
        cursor = db.cursor()

    def process_item(self, spider, item):
        try:
            cursor.execute("INSERT INTO info (venue, datez) VALUES (%s, %s)", (item['artist'], item['date']))
            self.conn.commit()
        except MySQLdb.Error, e:
            print "Error %d: %s" % (e.args[0], e.args[1])
        return item
And here's the spider code:
import scrapy  # Import required libraries.
from scrapy.selector import HtmlXPathSelector  # Allows for path detection in a website's code.
from scrapy.spider import BaseSpider  # Used to create a simple spider to extract data.
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor  # Needed for the extraction of href links in HTML to crawl further pages.
from scrapy.contrib.spiders import CrawlSpider  # Needed to make the crawl spider.
from scrapy.contrib.spiders import Rule  # Allows specified rules to affect what the link extractor follows.
import spotipy
import soundcloud
import mysql.connector
from tutorial.items import TutorialItem


class AllGigsSpider(CrawlSpider):
    name = "allGigs"  # Name of the spider. In the command prompt, when in the correct folder, enter "scrapy crawl allGigs".
    allowed_domains = ["www.allgigs.co.uk"]  # Allowed domains is a string, NOT a URL.
    start_urls = [
        "http://www.allgigs.co.uk/whats_on/London/clubbing-1.html",
        "http://www.allgigs.co.uk/whats_on/London/festivals-1.html",
        "http://www.allgigs.co.uk/whats_on/London/comedy-1.html",
        "http://www.allgigs.co.uk/whats_on/London/theatre_and_opera-1.html",
        "http://www.allgigs.co.uk/whats_on/London/dance_and_ballet-1.html"
    ]  # Specify the starting points for the web crawler.
    rules = [
        Rule(SgmlLinkExtractor(restrict_xpaths='//div[@class="more"]'),  # Search the start URLs for links to follow.
             callback="parse_me",
             follow=True),
    ]

    def parse_me(self, response):
        for info in response.xpath('//div[@class="entry vevent"]|//div[@class="resultbox"]'):
            item = TutorialItem()  # Extract items from the items folder.
            item['artist'] = info.xpath('.//span[@class="summary"]//text()').extract()  # Extract artist information.
            item['date'] = info.xpath('.//span[@class="dates"]//text()').extract()  # Extract date information.
            # item['endDate'] = info.xpath('.//abbr[@class="dtend"]//text()').extract()  # Extract end date information.
            # item['startDate'] = info.xpath('.//abbr[@class="dtstart"]//text()').extract()  # Extract start date information.
            item['genre'] = info.xpath('.//div[@class="header"]//text()').extract()
            yield item  # Retrieve items in item.
            client = soundcloud.Client(client_id='401c04a7271e93baee8633483510e263')
            tracks = client.get('/tracks', limit=1, license='cc-by-sa', q=item['artist'])
            for track in tracks:
                print(tracks)
I believe the problem was in my settings.py file, where I had missed a comma... yawn.
ITEM_PIPELINES = {
    'tutorial.pipelines.MySQLTest': 300,
}
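Beyond the missing comma, note that the pipeline as posted would still fail at runtime: the connection and cursor created in __init__ are locals, process_item references an undefined self.conn, and Scrapy calls process_item with the arguments in the order (item, spider). A corrected sketch (same table, columns and credentials assumed):

import MySQLdb


class MySQLTest(object):
    def __init__(self):
        # Keep the connection and cursor on self so process_item can reach them
        self.db = MySQLdb.connect(user='root', passwd='', host='localhost', db='python')
        self.cursor = self.db.cursor()

    def process_item(self, item, spider):  # Scrapy passes (item, spider), not (spider, item)
        try:
            self.cursor.execute("INSERT INTO info (venue, datez) VALUES (%s, %s)",
                                (item['artist'], item['date']))
            self.db.commit()
        except MySQLdb.Error as e:
            print("Error %d: %s" % (e.args[0], e.args[1]))
        return item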