I have a function which fetches a SQL file from Cloud Storage. This function accepts project_id, bucket_id & sql_file:
from google.cloud import storage

def read_sql(request):
    request_json = request.get_json(silent=True)
    project_id = request_json['project_id']
    bucket_id = request_json['bucket_id']
    sql_file = request_json['sql_file']
    gcs_client = storage.Client(project_id)
    bucket = gcs_client.get_bucket(bucket_id)
    blob = bucket.get_blob(sql_file)
    contents = blob.download_as_string()
    return contents.decode('utf-8')
It works fine when I test it with these parameters
{"project_id":"my_project","bucket_id":"my_bucket","sql_file":"daily_load.sql"}
I am trying to call this function from Google Workflows, but I don't know how to set up the arguments in the http.get call.
main:
  params: []
  steps:
    - init:
        assign:
          - project: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
          - location: "us-central1"
          - name: "workflow_test_return_sql1"
          - service_account: "sa@appspot.gserviceaccount.com" # Use the App Engine default SA.
    - get_function:
        call: googleapis.cloudfunctions.v1.projects.locations.functions.get
        args:
          name: ${"projects/" + project + "/locations/" + location + "/functions/" + name}
        result: function
    - grant_permission_to_all:
        call: googleapis.cloudfunctions.v1.projects.locations.functions.setIamPolicy
        args:
          resource: ${"projects/" + project + "/locations/" + location + "/functions/" + name}
          body:
            policy:
              bindings:
                - members: ["allUsers"]
                  role: "roles/cloudfunctions.invoker"
    - call_function:
        call: http.get
        args:
          url: ${function.httpsTrigger.url}
        result: resp
The workflow code is written in YAML.
Any idea how to build the function call URL with arguments?
I have tried the two approaches below, but they didn't work:
- call_function:
    call: http.post
    args:
      url: ${function.httpsTrigger.url}
      body:
        input: {"project_id":"my_project","bucket_id":"my_bucket","sql_file":"daily_load.sql"}
    result: resp
and
- call_function:
    call: http.get
    args:
      url: ${function.httpsTrigger.url}?project_id=my_project?bucket_id=my_bucket?sql_file=daily_load.sql
You are close!! Try this:
- call_function:
    call: http.get
    args:
      url: ${function.httpsTrigger.url + "?project_id=" + "my_project" + "&bucket_id=" + "my_bucket" + "&sql_file=" + "daily_load.sql"}
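Note that with http.get the values arrive as query parameters, so on the function side they have to be read from request.args rather than from the JSON body. A minimal sketch of that change (following the question's function):

def read_sql(request):
    # With http.get, the values arrive as query parameters:
    project_id = request.args.get('project_id')
    bucket_id = request.args.get('bucket_id')
    sql_file = request.args.get('sql_file')
    # ... the GCS code stays the same as in the question.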
EDIT 1
Here is the POST option with a JSON body (note that YAML and JSON are similar; here is how to write your JSON in YAML):
- call_function:
    call: http.post
    args:
      url: ${function.httpsTrigger.url}
      body:
        project_id: "my_project"
        bucket_id: "my_bucket"
        sql_file: "daily_load.sql"
    result: resp
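Side note: rather than making the function public with the grant_permission_to_all step, you can have the workflow authenticate as its own service account; Workflows HTTP calls accept an auth block. A sketch (assuming the workflow's service account has roles/cloudfunctions.invoker on the function, so the setIamPolicy step can be dropped):

- call_function:
    call: http.post
    args:
      url: ${function.httpsTrigger.url}
      auth:
        type: OIDC
      body:
        project_id: "my_project"
        bucket_id: "my_bucket"
        sql_file: "daily_load.sql"
    result: resp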
Related
I have the following task:
this_is_a_task = SimpleHttpOperator(
    task_id='task_id',
    method='POST',
    http_conn_id='conn_id',
    endpoint='/?test=foo',
    # data={"test": "foo"},
    headers={"Content-Type": "application/json"}
)
On the Cloud Functions side, I'm trying to catch the parameters in the two following ways:

# catching the data
# test_data = request.get_json().get('test')
# print('test: {}'.format(test_data))

# catching the endpoint parameter
test_endpoint = request.args.get('test')
print('test: {}'.format(test_endpoint))
The second option works (request.args.get('test')); however, when trying the first option, request.get_json().get('test'), I get a 400 request error.
So if I'm not using the endpoint variable from my SimpleHttpOperator, how can I catch the JSON object passed into the data variable?
I've tried to replicate your issue; based on this documentation, you need to use json.dumps when calling a POST with JSON data, and you need to provide authentication credentials as a Google-generated ID token stored in an Authorization header.
See the sample code below:
import datetime
import json

from airflow import models
from airflow.operators import bash
from airflow.providers.http.operators.http import SimpleHttpOperator

YESTERDAY = datetime.datetime.now() - datetime.timedelta(days=1)

default_args = {
    'owner': 'Composer Example',
    'depends_on_past': False,
    'email': [''],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': datetime.timedelta(minutes=5),
    'start_date': YESTERDAY,
}

with models.DAG(
        'composer_quickstart',
        catchup=False,
        default_args=default_args,
        schedule_interval=datetime.timedelta(days=1)) as dag:

    # Generate an identity token and pull it from XCom for the Authorization header
    gen_auth = bash.BashOperator(
        task_id='gen_auth', bash_command='gcloud auth print-identity-token '
    )
    auth_token = "{{ task_instance.xcom_pull(task_ids='gen_auth') }}"

    this_is_a_task = SimpleHttpOperator(
        task_id='task_id',
        method='POST',
        http_conn_id='cf_conn1',
        data=json.dumps({"test": "foo"}),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + auth_token}
    )

    gen_auth >> this_is_a_task
On the Cloud Functions side, I used the sample code below:
test_data = request.get_json().get('test')
print(test_data)
return test_data
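If the function should accept both call styles without throwing a 400, note that request.get_json(silent=True) returns None instead of raising when the body is missing or the Content-Type is not application/json, so you can fall back to the query string. A sketch (handler is a hypothetical name):

def handler(request):
    # silent=True: return None instead of raising on a missing or invalid JSON body
    data = request.get_json(silent=True)
    if data is not None:
        test_data = data.get('test')
    else:
        # fall back to the query-string parameter
        test_data = request.args.get('test')
    return test_data or ('no test value', 400)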
You can also test your function using this curl command:
curl -i -X POST -H "Content-Type: application/json" -H "Authorization: bearer $(gcloud auth print-identity-token)" -d '{"test": "foo"}' https://function-5-k6ssrsqwma-uc.a.run.app
I have a YAML file as follows:
api: v1
hostname: abc
metadata:
  name: test
  annotations: {
    "ip" : "1.1.1.1",
    "login" : "fad-login",
    "vip" : "1.1.1.1",
    "interface" : "port1",
    "port" : "443"
  }
I am trying to read this data from a file, replace only the values of ip and vip, and write it back to the file.
What I tried is:

with open("test.yaml", "w") as f:
    yaml.dump(object, f)  # this does not help me since it converts the entire file to YAML

json.dump() does not work either, as it converts the entire file to JSON. The file needs to keep the same format, only with the values updated. How can I do so?
What you have is not YAML with embedded JSON; it is YAML with the value for annotations written in YAML flow style (which is a superset of JSON and thus closely resembles it).
This would be YAML with embedded JSON:
api: v1
hostname: abc
metadata:
  name: test
  annotations: |
    {
      "ip" : "1.1.1.1",
      "login" : "fad-login",
      "vip" : "1.1.1.1",
      "interface" : "port1",
      "port" : "443"
    }
Here the value for annotations is a string that you can hand to a JSON parser.
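For instance, with that block-scalar variant the JSON fragment can round-trip through the json module (a sketch, using the same ruamel.yaml loader as below; embedded.yaml is a hypothetical file holding that variant):

import json
import ruamel.yaml

yaml = ruamel.yaml.YAML()
with open('embedded.yaml') as f:
    data = yaml.load(f)

annotations = json.loads(data['metadata']['annotations'])  # string -> dict
annotations['ip'] = '4.3.2.1'
data['metadata']['annotations'] = json.dumps(annotations, indent=2) + '\n'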
For your actual flow-style file, you can just load it, modify it, and dump it. This will change the layout
of the flow-style part, but that will not influence any following parsers:
import sys
from pathlib import Path

import ruamel.yaml

file_in = Path('input.yaml')

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.width = 1024
data = yaml.load(file_in)
annotations = data['metadata']['annotations']
annotations['ip'] = type(annotations['ip'])('4.3.2.1')
annotations['vip'] = type(annotations['vip'])('1.2.3.4')
yaml.dump(data, sys.stdout)
which gives:
api: v1
hostname: abc
metadata:
  name: test
  annotations: {"ip": "4.3.2.1", "login": "fad-login", "vip": "1.2.3.4", "interface": "port1", "port": "443"}
The type(annotations['vip'])() call establishes that the replacement string in the output gets the same
quotes as the original.
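An equivalent, more explicit way is to use ruamel.yaml's scalar string helpers, which force double quotes regardless of the original style (a sketch):

from ruamel.yaml.scalarstring import DoubleQuotedScalarString

annotations['ip'] = DoubleQuotedScalarString('4.3.2.1')
annotations['vip'] = DoubleQuotedScalarString('1.2.3.4')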
ruamel.yaml currently doesn't preserve newlines in a flow style mapping/sequence.
If this has to go back into some repository with minimal changes, you can do:
import sys
from pathlib import Path

import ruamel.yaml

file_in = Path('input.yaml')

def rewrite_closing_curly_brace(s):
    res = []
    for line in s.splitlines():
        if line and line[-1] == '}':
            res.append(line[:-1])
            idx = 0
            while line[idx] == ' ':
                idx += 1
            res.append(' ' * (idx - 2) + '}')
            continue
        res.append(line)
    return '\n'.join(res) + '\n'

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.width = 15
data = yaml.load(file_in)
annotations = data['metadata']['annotations']
annotations['ip'] = type(annotations['ip'])('4.3.2.1')
annotations['vip'] = type(annotations['vip'])('1.2.3.4')
yaml.dump(data, sys.stdout, transform=rewrite_closing_curly_brace)
which gives:
api: v1
hostname: abc
metadata:
  name: test
  annotations: {
    "ip": "4.3.2.1",
    "login": "fad-login",
    "vip": "1.2.3.4",
    "interface": "port1",
    "port": "443"
  }
Here the 15 for width is of course highly dependent on your file and might influence other lines if they
were longer. In that case you could leave it out and instead make the wrapping
that rewrite_closing_curly_brace() does split and indent the whole flow-style part.
Please note that your original and the transformed output are invalid YAML;
this is accepted by ruamel.yaml for backward compatibility. According to the YAML
specification, the closing curly brace should be indented more than the start of annotations.
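And since the goal is to write the result back to the file rather than to stdout: YAML().dump() also accepts a pathlib.Path as its target, so a sketch of the in-place rewrite is:

yaml.dump(data, file_in, transform=rewrite_closing_curly_brace)  # overwrites input.yaml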
How to reproduce 415 Unsupported media type?
I am trying this with an upload to Google Drive.
My first idea: set up a resumable upload so that I can add the media after the initial POST. During the first POST the media type is not set completely, so Google Drive will use some default option like application/octet-stream; then the data file upload via PUT will set the right media type. If the PUT does not succeed, the response should be 415.
A second method could be adding Content-Encoding during the POST: if the encoding is not supported, the response should be 415.
Here is Python code I adapted from the web. How do I change this to get a 415?
Are there other methods to get this error? I am not familiar with JavaScript, and web resources are cryptic to me right now.
import json
import os
import requests

access_token = "ya29.XXX.....XXXqI"
filename = './test-atom.xml'
filesize = os.path.getsize(filename)

# 1. Retrieve session for resumable upload.
headers = {"Authorization": "Bearer " + access_token, "Content-Type": "application/json"}
params = {
    "name": "drive-small2.xml",
    # "mimeType": "application/atom+xml"  ## works without this line
}
r = requests.post(
    "https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable",
    headers=headers,
    data=json.dumps(params)
)
location = r.headers['Location']

# 2. Upload the file.
# headers = {"Content-Type": "application/atom+xml", "Content-Range": "bytes 0-" + str(filesize - 1) + "/" + str(filesize)}  ## works for this also
headers = {"Content-Range": "bytes 0-" + str(filesize - 1) + "/" + str(filesize)}  ## works for this also
r = requests.put(
    location,
    headers=headers,
    data=open(filename, 'rb')
)
print(r.text)
***********************
C:\Users\Administrator\Desktop\driveUpload>py resume-upload10.py
{
 "kind": "drive#file",
 "id": "1nQW6_-1F4fBEcKoln7vSM0qOTpWhSpZ2",
 "name": "drive-small2.xml",
 "mimeType": "text/xml"
}
Suppose the following scenario using Zabbix 4.2. We have a core switch, two distributed switches, and 20 access switches, where the distributed switches are connected to the core and 10 access switches are connected to each distributed switch. I am monitoring all of them using SNMP v2c with the official Cisco switch template.

Now the problem arises that I cannot easily define device dependency in Zabbix. By easily, I mean that if a distributed switch goes down, I want to get the alarm for that device only, and not for all the access switches connected to it. I could define this by changing the triggers for each device and making them dependent on the corresponding trigger of the distributed switch, but this is too time-consuming.

What should I do? Any help is appreciated.
You are right, there isn't an easy way to set this kind of dependency.
I had to manage the same situation a while ago, and I wrote a Python dependency setter which uses a "dependent hostgroup <--> master host" logic.
You can modify it to fit your needs (see masterTargetTriggerDescription and slaveTargetTriggerDescription for the dependency targets). It works but contains little error checking: use at your own risk!
import csv
import re
import json
from zabbix.api import ZabbixAPI

# Zabbix server endpoint
zabbixServer = 'https://yourzabbix/zabbix/'
zabbixUser = 'admin'
zabbixPass = 'zabbix'

zapi = ZabbixAPI(url=zabbixServer, user=zabbixUser, password=zabbixPass)

# Hostgroup variables - to reference IDs while building API parameters
hostGroupNames = []  # list = array
hostGroupId = {}     # dict = associative array

# CSV file for dep settings - see the format:
"""
Hostgroup;Master
ACCESS_1;DistSwitch1
ACCESS_2;DistSwitch1
ACCESS_5;DistSwitch2
ACCESS_6;DistSwitch2
DIST;CoreSwitch1
"""
fileName = 'dependancy.csv'

masterTargetTriggerDescription = '{HOST.NAME} is unavailable by ICMP'
slaveTargetTriggerDescription = '{HOST.NAME} is unavailable by ICMP|Zabbix agent on {HOST.NAME} is unreachable'

# Read CSV file
hostFile = open(fileName)
hostReader = csv.reader(hostFile, delimiter=';', quotechar='|')
hostData = list(hostReader)

# CSV parsing
for line in hostData:
    hostgroupName = line[0]
    masterName = line[1]
    slaveIds = []

    masterId = zapi.get_id('host', item=masterName, with_id=False, hostid=None)
    hostGroupId = zapi.get_id('hostgroup', item=hostgroupName, with_id=False, hostid=None)
    masterTriggerObj = zapi.trigger.get(hostids=masterId, filter={'description': masterTargetTriggerDescription})

    print "Group: " + hostgroupName + " - ID: " + str(hostGroupId)
    print "Master host: " + masterName + " - ID: " + str(masterId)
    print "Master trigger: " + masterTriggerObj[0]['description'] + " - ID: " + str(masterTriggerObj[0]['triggerid'])

    # cycle through slave hosts
    hostGroupObj = zapi.hostgroup.get(groupids=hostGroupId, selectHosts='extend')
    for host in hostGroupObj[0]['hosts']:
        # exclude master
        if host['hostid'] != str(masterId):
            print " - Host Name: " + host['name'] + " - ID: " + host['hostid'] + " - MASTER: " + str(masterId)
            # cycle through all of the slave's triggers
            slaveTargetTriggerObj = zapi.trigger.get(hostids=host['hostid'])
            #print json.dumps(slaveTargetTriggerObj)
            for slaveTargetTrigger in slaveTargetTriggerObj:
                # search for dependency targets
                if re.search(slaveTargetTriggerDescription, slaveTargetTrigger['description'], re.IGNORECASE):
                    print " - Trigger: " + slaveTargetTrigger['description'] + " - ID: " + slaveTargetTrigger['triggerid']
                    # Clear existing dependencies from the trigger, then create the new one
                    clear = zapi.trigger.deletedependencies(triggerid=slaveTargetTrigger['triggerid'].encode())
                    result = zapi.trigger.adddependencies(triggerid=slaveTargetTrigger['triggerid'].encode(), dependsOnTriggerid=masterTriggerObj[0]['triggerid'])
    print "----------------------------------------"
    print ""
I took the code contributed by Simone Zabberoni and rewrote it to work with Python 3, py-zabbix, and YAML.
#!/usr/bin/python3
import re

import yaml
# https://pypi.org/project/py-zabbix/
from pyzabbix import ZabbixAPI

# Zabbix server endpoint
zabbix_server = 'https://zabbix.example.com/zabbix/'
zabbix_user = 'zbxuser'
zabbix_pass = 'zbxpassword'

# Create ZabbixAPI class instance
zapi = ZabbixAPI(zabbix_server)

# Enable HTTP auth
zapi.session.auth = (zabbix_user, zabbix_pass)

# Login (in case of HTTP auth, only the username is needed; the password, if passed, will be ignored)
zapi.login(zabbix_user, zabbix_pass)

# Hostgroup variables - to reference IDs while building API parameters
hostGroupNames = []  # list = array
hostGroupId = {}     # dict = associative array

# YAML file for dep settings - see the format:
"""
pvebar16 CTs:
  master: pvebar16.example.com
  masterTargetTriggerDescription: 'is unavailable by ICMP'
  slaveTargetTriggerDescription: 'is unavailable by ICMP|Zabbix agent is unreachable for 5 minutes'
"""
fileName = 'dependancy.yml'
with open(fileName) as f:
    hostData = yaml.safe_load(f)

for groupyml in hostData.keys():
    masterTargetTriggerDescription = hostData[groupyml]['masterTargetTriggerDescription']
    slaveTargetTriggerDescription = hostData[groupyml]['slaveTargetTriggerDescription']
    masterName = hostData[groupyml]['master']
    hostgroupName = groupyml
    slaveIds = []

    masterId = zapi.host.get(filter={'host': masterName}, output=['hostid'])[0]['hostid']
    hostGroupId = zapi.hostgroup.get(filter={'name': hostgroupName}, output=['groupid'])[0]['groupid']
    masterTriggerObj = zapi.trigger.get(host=masterName, filter={'description': masterTargetTriggerDescription}, output=['triggerid', 'description'])

    print("Group: " + hostgroupName + " - ID: " + str(hostGroupId))
    print("Master host: " + masterName + " - ID: " + str(masterId))
    print("Master trigger: " + masterTriggerObj[0]['description'] + " - ID: " + str(masterTriggerObj[0]['triggerid']))

    # cycle through slave hosts
    hostGroupObj = zapi.hostgroup.get(groupids=hostGroupId, selectHosts='extend')
    for host in hostGroupObj[0]['hosts']:
        # exclude master
        if host['hostid'] != str(masterId):
            print(" - Host Name: " + host['name'] + " - ID: " + host['hostid'] + " - MASTER: " + str(masterId))
            # cycle through all of the slave's triggers
            slaveTargetTriggerObj = zapi.trigger.get(hostids=host['hostid'])
            for slaveTargetTrigger in slaveTargetTriggerObj:
                # search for dependency targets
                if re.search(slaveTargetTriggerDescription, slaveTargetTrigger['description'], re.IGNORECASE):
                    print(" - Trigger: " + slaveTargetTrigger['description'] + " - ID: " + slaveTargetTrigger['triggerid'])
                    # Clear existing dependencies from the trigger, then create the new one
                    clear = zapi.trigger.deletedependencies(triggerid=slaveTargetTrigger['triggerid'])
                    result = zapi.trigger.adddependencies(triggerid=slaveTargetTrigger['triggerid'], dependsOnTriggerid=masterTriggerObj[0]['triggerid'])
    print("----------------------------------------")
    print("")
I am trying to link my Django web app to an Azure ML API. I have a Django form with all the required inputs for my Azure API:
def post(self, request):
    form = CommentForm(request.POST)
    url = 'https://ussouthcentral.services.azureml.net/workspaces/7061a4b24ea64942a19f74ed36e4b438/services/ae2c257d6e164dca8d433ad1a1f9feb4/execute?api-version=2.0&format=swagger'
    api_key = ''  # Replace this with the API key for the web service
    headers = {'Content-Type': 'application/json', 'Authorization': ('Bearer ' + api_key)}
    if form.is_valid():
        age = form.cleaned_data['age']
        bmi = form.cleaned_data['bmi']
        args = {"age": age, "bmi": bmi}
        json_data = str.encode(json.dumps(args))
        print(type(json_data))
        r = urllib.request.Request(url, json_data, headers)
        try:
            response = urllib.request.urlopen(r)
            result = response.read()
            print(result)
        except urllib.request.HTTPError as error:
            print("The request failed with status code: " + str(error.code))
            print(json_data)
            # Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
            print(error.info())
            print(json.loads(error.read()))
    return render(request, self.template_name)
When I try to submit the form, I get a type error:
TypeError('POST data should be bytes, an iterable of bytes, or a file object. It cannot be of type str.',)
I am also getting status code 400 and the error below:
{'error': {'code': 'BadArgument', 'message': 'Invalid argument provided.', 'details': [{'code': 'RequestBodyInvalid', 'message': 'No request body provided or error in deserializing the request body.'}]}}
The arguments, printed with print(json_data), are:
b'{"age": 0, "bmi": 22.0}'
Can someone help me with this?
Try using JsonResponse:
https://docs.djangoproject.com/en/2.1/ref/request-response/#jsonresponse-objects
Also, I don't think you need a template for an API response.
return JsonResponse({'foo': 'bar'})
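A minimal sketch of wiring that into the view above (call_azure_ml is a hypothetical helper; url, api_key, and json_data are the same values as in the question's code):

import json
import urllib.request

from django.http import JsonResponse

def call_azure_ml(url, api_key, json_data):
    # POST the already-encoded JSON body and return the parsed response.
    headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + api_key}
    req = urllib.request.Request(url, json_data, headers)
    with urllib.request.urlopen(req) as response:
        return json.loads(response.read())

# In post(), instead of render():
# return JsonResponse(call_azure_ml(url, api_key, json_data), safe=False)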
Thanks all. I found the error: the data was not in the proper order.