I have a requirement to use Python to deploy some things to an OpenShift cluster (I've been precluded from using any other solution), so I'm attempting to exercise the kubernetes module:
from kubernetes import client, config

configuration = client.Configuration()
configuration.username = 'admin'
configuration.password = 'redacted'
configuration.host = 'https://api.cluster.example.com:6443'
configuration.verify_ssl = False

v1 = client.CoreV1Api(client.ApiClient(configuration))

# Build a minimal Namespace manifest as a plain dict
ns = {}
ns['kind'] = 'Namespace'
ns['apiVersion'] = 'v1'
ns['metadata'] = {}
ns['metadata']['name'] = 'mynamespace'

v1.create_namespace(ns)
Sadly, the v1 object doesn't authenticate to the cluster with the username / password I've given it:
kubernetes.client.rest.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'd59bb32d-a114-4b42-90ca-86c6315809d0', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': '58a1456a-ff43-47a6-9a08-4a682ad5a509', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'efbe11a3-861b-46ec-8e4a-1eadb766e284', 'Date': 'Thu, 21 Jan 2021 19:17:31 GMT', 'Content-Length': '273'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces is forbidden: User \"system:anonymous\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"kind":"namespaces"},"code":403}
I'm looking for a way to throw configuration YAML at the OCP cluster and have it stick...
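For reference, a minimal sketch of what a token-based configuration might look like. This assumes the cluster accepts bearer tokens; the token value below is a placeholder (on OpenShift one could come from oc whoami -t), and kubernetes.utils.create_from_yaml would cover the "throw YAML at it" part. As far as I can tell, the Python client only sends api_key-style credentials for these endpoints, not username/password.

from kubernetes import client, utils

configuration = client.Configuration()
configuration.host = 'https://api.cluster.example.com:6443'
configuration.verify_ssl = False
# Bearer-token auth; the token value here is a placeholder
configuration.api_key = {'authorization': 'Bearer ' + 'sha256~PLACEHOLDER'}

api_client = client.ApiClient(configuration)
client.CoreV1Api(api_client).create_namespace(
    {'kind': 'Namespace', 'apiVersion': 'v1',
     'metadata': {'name': 'mynamespace'}})
# Apply an arbitrary manifest file ("throw YAML at the cluster")
utils.create_from_yaml(api_client, 'deployment.yaml')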
I have used the code below, available in the Google Cloud docs:
from google.auth.transport.requests import Request
from google.oauth2 import id_token
import requests
IAM_SCOPE = 'https://www.googleapis.com/auth/iam'
OAUTH_TOKEN_URI = 'https://www.googleapis.com/oauth2/v4/token'
def trigger_dag(data, context=None):
    """Makes a POST request to the Composer DAG Trigger API

    When called via Google Cloud Functions (GCF),
    data and context are Background function parameters.
    For more info, refer to
    https://cloud.google.com/functions/docs/writing/background#functions_background_parameters-python

    To call this function from a Python script, omit the ``context`` argument
    and pass in a non-null value for the ``data`` argument.
    """
    # Fill in with your Composer info here
    # Navigate to your webserver's login page and get this from the URL
    # Or use the script found at
    # https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/composer/rest/get_client_id.py
    client_id = '87431184677-jitlhi9o0u9sin3uvdebqrvqokl538aj.apps.googleusercontent.com'
    # This should be part of your webserver's URL:
    # {tenant-project-id}.appspot.com
    webserver_id = 'b368a47a354ddf2f6p-tp'
    # The name of the DAG you wish to trigger
    dag_name = 'composer_sample_trigger_response_dag'
    webserver_url = (
        #'https://'
        webserver_id
        + '.appspot.com/admin/airflow/tree?dag_id='
        + dag_name
        #+ '/dag_runs'
    )
    # Make a POST request to IAP which then Triggers the DAG
    make_iap_request(
        webserver_url, client_id, method='POST',
        json={"conf": data, "replace_microseconds": 'false'})
# This code is copied from
# https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/iap/make_iap_request.py
# START COPIED IAP CODE
def make_iap_request(url, client_id, method='GET', **kwargs):
    """Makes a request to an application protected by Identity-Aware Proxy.

    Args:
        url: The Identity-Aware Proxy-protected URL to fetch.
        client_id: The client ID used by Identity-Aware Proxy.
        method: The request method to use
                ('GET', 'OPTIONS', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE')
        **kwargs: Any of the parameters defined for the request function:
                  https://github.com/requests/requests/blob/master/requests/api.py
                  If no timeout is provided, it is set to 90 by default.

    Returns:
        The page body, or raises an exception if the page couldn't be retrieved.
    """
    # Set the default timeout, if missing
    if 'timeout' not in kwargs:
        kwargs['timeout'] = 90
    # Obtain an OpenID Connect (OIDC) token from metadata server or using
    # service account.
    google_open_id_connect_token = id_token.fetch_id_token(Request(), client_id)
    # Fetch the Identity-Aware Proxy-protected URL, including an
    # Authorization header containing "Bearer " followed by a
    # Google-issued OpenID Connect token for the service account.
    resp = requests.request(
        method, url,
        headers={'Authorization': 'Bearer {}'.format(
            google_open_id_connect_token)}, **kwargs)
    if resp.status_code == 403:
        raise Exception('Service account does not have permission to '
                        'access the IAP-protected application.')
    elif resp.status_code != 200:
        raise Exception(
            'Bad response from application: {!r} / {!r} / {!r}'.format(
                resp.status_code, resp.headers, resp.text))
    else:
        return resp.text
# END COPIED IAP CODE
You encounter the error "405 method not found" because you are sending the request to your_webserver_id.appspot.com/admin/airflow/tree?dag_id=composer_sample_trigger_response_dag, which is the URL of the "Tree view" page in the Airflow webserver, not an API endpoint.
To properly send requests to the Airflow API you need to construct the webserver_url just like in the documentation on Trigger DAGs in Cloud Functions, where it is built to hit the "Trigger a new DAG" endpoint. So if you'd like to trigger the DAG, you can stick with the code below.
Airflow run DAG endpoint:
POST https://airflow.apache.org/api/v1/dags/{dag_id}/dagRuns
webserver_url = (
    'https://'
    + webserver_id
    + '.appspot.com/api/experimental/dags/'
    + dag_name
    + '/dag_runs'
)
Moving forward, if you would like to perform different operations using the Airflow API, you can check the Airflow REST API reference.
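With the corrected webserver_url in place, and per the docstring in the question, a hypothetical local smoke test would call the function directly from a Python script (the payload here is arbitrary):

# Hypothetical local smoke test: omit ``context`` and pass a non-null ``data``,
# as the trigger_dag docstring describes.
if __name__ == '__main__':
    trigger_dag(data={'message': 'hello from a local script'})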
I'm using the following code to make an API that connects to Amazon AWS. This is the AWS Lambda code that I use:
import boto3
import json
import requests
from requests_aws4auth import AWS4Auth

region = 'us-east-1'
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region,
                   service, session_token=credentials.token)

host = 'XXX.com'
index = 'items'
url = 'https://' + host + '/' + index + '/_search'

# Lambda execution starts here
def handler(event, context):
    # Put the user query into the query DSL for more accurate search results.
    # Note that certain fields are boosted (^).
    query = {
        "query": {
            "multi_match": {
                "query": event['queryStringParameters']['q'],
            }
        }
    }
    # ES 6.x requires an explicit Content-Type header
    headers = {"Content-Type": "application/json"}
    # Make the signed HTTP request
    r = requests.get(url, auth=awsauth, headers=headers,
                     data=json.dumps(query))
    # Create the response and add some extra content to support CORS
    response = {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": '*'
        },
        "isBase64Encoded": False
    }
    # Add the search results to the response
    response['body'] = r.text
    return response
This should connect to an AWS ES cluster with the endpoint XXX.com.
I'm getting this output when trying to test:
START RequestId: f640016e-e4d6-469f-b74d-838b9402968b Version: $LATEST
Unable to import module 'index': Error
    at Function.Module._resolveFilename (module.js:547:15)
    at Function.Module._load (module.js:474:25)
    at Module.require (module.js:596:17)
    at require (internal/module.js:11:18)
END RequestId: f640016e-e4d6-469f-b74d-838b9402968b
REPORT RequestId: f640016e-e4d6-469f-b74d-838b9402968b Duration: 44.49 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 58 MB
While creating a Lambda function, we need to specify a handler: a function in your code that the AWS Lambda service can invoke when the Lambda function is executed.
By default, a Python Lambda function is created with the handler lambda_function.lambda_handler, which signifies that the service must invoke the lambda_handler function contained inside the lambda_function module.
From the error you're receiving, it seems that the handler is configured as something like index.<something>, and since there's no module called index in your deployment package, Lambda is unable to import it in order to start execution.
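If that's the case, one fix is to point the handler setting back at a module.function pair that actually exists in the deployment package; the same change can be made in the console under the function's runtime settings. A hedged sketch using boto3 (the function and handler names here are hypothetical, adjust to your file):

import boto3

# Hypothetical: re-point the Lambda handler at an existing module.function,
# e.g. a file lambda_function.py that defines lambda_handler(event, context).
lambda_client = boto3.client('lambda')
lambda_client.update_function_configuration(
    FunctionName='my-search-function',          # hypothetical function name
    Handler='lambda_function.lambda_handler',   # module.function in the package
)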
If I am understanding things correctly, to connect to an AWS ES cluster you need something of this sort:
import gitlab
import logging
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import boto3
#from aws_requests_auth.aws_auth import AWSRequestsAuth

LOGGER = logging.getLogger()
ES_HOST = {'host': 'search-testelasticsearch-xxxxxxxxxx.eu-west-2.es.amazonaws.com', 'port': 443}

def lambda_handler(event, context):
    LOGGER.info('started')
    dump2 = {
        'number': 9
    }
    service = 'es'
    credentials = boto3.Session().get_credentials()
    print('-------------------------------------------')
    print(credentials.access_key)
    print(credentials.secret_key)
    print('--------------------------------------------------------')
    awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, "eu-west-2",
                       service, session_token=credentials.token)
    es = Elasticsearch(hosts=[ES_HOST], http_auth=awsauth, use_ssl=True,
                       verify_certs=True, connection_class=RequestsHttpConnection)
    DAVID_INDEX = 'test_index'
    response = es.index(index=DAVID_INDEX, doc_type='is_this_important?', body=dump2, id='4')
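One caveat: the handler above never returns anything, so a caller such as API Gateway would see an empty result. A small, hedged tail for lambda_handler, assuming ES 6.x semantics where es.index returns a dict with a 'result' key:

    # Hypothetical tail for lambda_handler: surface the indexing outcome
    LOGGER.info('index result: %s', response.get('result'))  # e.g. 'created'
    return {'statusCode': 200, 'body': response.get('result', 'unknown')}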
I'm trying to get the response of get_roster in ejabberd through an XML-RPC client, but I am using ejabberd version 18.9 and it is showing me this error:
org.apache.xmlrpc.XmlRpcException: Error -118 A problem
'{error,access_rules_unauthorized}' occurred executing the command
get_roster with arguments
[{user,<<"admin">>},{server,<<"localhost">>}]
Can somebody suggest how I can solve this?
Here is my Java client code:
XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
config.setServerURL(new URL("http://localhost:4560"));
XmlRpcClient client = new XmlRpcClient();
client.setConfig(config);
Hashtable<String, Object> params = new Hashtable<String, Object>();
params.put("user", new String("admin"));
params.put("server", new String("localhost"));
List<Object> roster_params = new ArrayList<Object>();
roster_params.add(params);
Object result = client.execute("get_roster", roster_params);
System.out.println("Result: " + result);
You probably have ejabberd configured in a way that requires you to provide the auth details of an account with admin rights. In this example written in Python, see the LOGIN structure. Sorry, I don't know how this is done in Java.
import xmlrpclib

server_url = 'http://127.0.0.1:4560'
server = xmlrpclib.ServerProxy(server_url)

LOGIN = {'user': 'admin', 'server': 'localhost', 'password': 'mypass11', 'admin': True}

def calling(command, data):
    fn = getattr(server, command)
    return fn(LOGIN, data)

print ""
print "Calling with auth details:"
result = calling('get_roster', {'user': 'user1', 'server': 'localhost'})
print result
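In case it helps, the same call in Python 3, where xmlrpclib became xmlrpc.client (a sketch reusing the same, obviously placeholder, credentials):

import xmlrpc.client

server = xmlrpc.client.ServerProxy('http://127.0.0.1:4560')
LOGIN = {'user': 'admin', 'server': 'localhost',
         'password': 'mypass11', 'admin': True}
# Same pattern as above: the auth struct goes in as the first argument
result = server.get_roster(LOGIN, {'user': 'user1', 'server': 'localhost'})
print(result)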
The issue is now resolved: there was a problem with the ejabberd.yml file.
I enabled OAuth configuration after removing this from the config file, and now the code works:
port: 5280
ip: "::"
module: ejabberd_http
request_handlers:
  "/ws": ejabberd_http_ws
  "/bosh": mod_bosh
  "/api": mod_http_api
Python 3.6 - Scrapy 1.5
I'm scraping the John Deere warranty webpage to watch all new PMPs and their expiration dates. Looking at the network communication between the browser and the webpage, I found a REST API that feeds the data shown in the page.
Now, I'm trying to get JSON data from the API rather than scraping the JavaScript page's content. However, I'm getting an Internal Server Error and I don't know why.
I'm using Scrapy to log in and fetch the data.
import scrapy

class PmpSpider(scrapy.Spider):
    name = 'pmp'
    start_urls = ['https://jdwarrantysystem.deere.com/portal/']

    def parse(self, response):
        self.log('***Form Request***')
        login = {
            'USERNAME': *******,
            'PASSWORD': *******
        }
        yield scrapy.FormRequest.from_response(
            response,
            url='https://registration.deere.com/servlet/com.deere.u90950.registrationlogin.view.servlets.SignInServlet',
            method='POST', formdata=login, callback=self.parse_pmp
        )
        self.log('***PARSE LOGIN***')

    def parse_pmp(self, response):
        self.log('***PARSE PMP***')
        cookies = response.headers.getlist('Set-Cookie')
        for cookie in cookies:
            cookie = cookie.decode('utf-8')
            self.log(cookie)
            cook = cookie.split(';')[0].split('=')[1]
            path = cookie.split(';')[1].split('=')[1]
            domain = cookie.split(';')[2].split('=')[1]
            yield scrapy.Request(
                url='https://jdwarrantysystem.deere.com/api/pip-products/collection',
                method='POST',
                cookies={
                    'SESSION': cook,
                    'path': path,
                    'domain': domain
                },
                headers={
                    "Accept": "application/json",
                    "accounts": ["201445","201264","201167","201342","201341","201221"],
                    "excludedPin": "",
                    "export": "",
                    "language": "",
                    "metric": "Y",
                    "pipFilter": "OPEN",
                    "pipType": ["MALF","SAFT"]
                },
                meta={'dont_redirect': True},
                callback=self.parse_pmp_list
            )

    def parse_pmp_list(self, response):
        self.log('***LISTA PMP***')
        self.log(response.body)
Why am I getting an error? How can I get data from this API?
2018-07-05 17:26:19 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <POST https://jdwarrantysystem.deere.com/api/pip-products/collection> (failed 1 times): 500 Internal Server Error
2018-07-05 17:26:20 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <POST https://jdwarrantysystem.deere.com/api/pip-products/collection> (failed 2 times): 500 Internal Server Error
2018-07-05 17:26:21 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <POST https://jdwarrantysystem.deere.com/api/pip-products/collection> (failed 3 times): 500 Internal Server Error
2018-07-05 17:26:21 [scrapy.core.engine] DEBUG: Crawled (500) <POST https://jdwarrantysystem.deere.com/api/pip-products/collection> (referer: https://jdwarrantysystem.deere.com/portal/)
2018-07-05 17:26:21 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <500 https://jdwarrantysystem.deere.com/api/pip-products/collection>: HTTP status code is not handled or not allowed
I found the problem: this is a POST request that must carry its data in the request body in JSON format because, unlike a GET request, the parameters don't go in the URI. The request also needs a "content-type": "application/json" header. See: How parameters are sent in POST request and Rest POST in Python. So, editing the function parse_pmp:
import json  # note: json must be imported at the top of the spider module

def parse_pmp(self, response):
    self.log('***PARSE PMP***')
    cookies = response.headers.getlist('Set-Cookie')
    for cookie in cookies:
        cookie = cookie.decode('utf-8')
        self.log(cookie)
        cook = cookie.split(';')[0].split('=')[1]
        path = cookie.split(';')[1].split('=')[1]
        domain = cookie.split(';')[2].split('=')[1]
        data = json.dumps({"accounts": ["201445","201264","201167","201342","201341","201221"],
                           "excludedPin": "", "export": "", "language": "",
                           "metric": "Y", "pipFilter": "OPEN",
                           "pipType": ["MALF","SAFT"]})  # <----
        yield scrapy.Request(
            url='https://jdwarrantysystem.deere.com/api/pip-products/collection',
            method='POST',
            cookies={
                'SESSION': cook,
                'path': path,
                'domain': domain
            },
            headers={
                "Accept": "application/json",
                "content-type": "application/json"  # <----
            },
            body=data,  # <----
            meta={'dont_redirect': True},
            callback=self.parse_pmp_list
        )
Everything works fine!
I was trying to incorporate the latest IDM (Docker) and pep-proxy (the git example, running with node server).
When I started pep-proxy, everything was working as intended.
I've got the following messages:
INFO: Server - Starting PEP proxy in port 80. IdM authentication...
Server - Success authenticating PEP proxy. Proxy Auth-token: d9badf48-16fa-423d-884c-a3e155578791
Now a problem happens: when I enter a wrong token, I get this error.
ERROR: IDM-Client - Error validating token.
Proxy not authorized in keystone. Keystone authentication ...
ERROR: Server - Caught exception:
SyntaxError: Unexpected token u in JSON at position 0
As far as I understand, I should be getting some return like "invalid token"; instead I get this error in pep-proxy, and my curl command shows (52) Empty reply from server.
My pep-proxy config.js:
var config = {};

// Used only if https is disabled
config.pep_port = 80;

// Set this var to undefined if you don't want the server to listen on HTTPS
config.https = {
    enabled: false,
    cert_file: 'cert/cert.crt',
    key_file: 'cert/key.key',
    port: 443
};

config.idm = {
    host: 'localhost',
    port: 3000,
    ssl: false
}

config.app = {
    host: 'www.google.es',
    port: '80',
    ssl: false // Use true if the app server listens in https
}

// Credentials obtained when registering PEP Proxy in app_id in Account Portal
config.pep = {
    app_id: 'xxxxxx',
    username: 'xxxxxx',
    password: 'xxxxxx',
    trusted_apps: []
}

// in seconds
config.cache_time = 300;

// if enabled, PEP checks permissions with AuthZForce GE.
// only compatible with oauth2 tokens engine
//
// you can use custom policy checks by including programmatic scripts
// in the policies folder. A script template is included there
config.azf = {
    enabled: true,
    protocol: 'http',
    host: 'localhost',
    port: 8080,
    custom_policy: undefined // use undefined for default policy checks (HTTP verb + path).
};

// list of paths that will not check authentication/authorization
// example: ['/public/*', '/static/css/']
config.public_paths = [];

config.magic_key = 'undefined';

module.exports = config;
IDM logs:
fiware-idm_1 | GET /user?access_token=7cb25729577c2e01dc337314dcd912ec981dc49b 401 4.445 ms - 116
fiware-idm_1 | Executing (default): SELECT email, 'user' as Source FROM user WHERE email='pep_proxy_edf60435-7de7-4875-85a9-cf68b8838b8c'
fiware-idm_1 | UNION ALL
fiware-idm_1 | SELECT id, 'pep_proxy' as Source FROM pep_proxy WHERE id='pep_proxy_edf60435-7de7-4875-85a9-cf68b8838b8c';
fiware-idm_1 | Executing (default): SELECT `id`, `password`, `oauth_client_id` FROM `pep_proxy` AS `PepProxy` WHERE `PepProxy`.`id` = 'pep_proxy_edf60435-7de7-4875-85a9-cf68b8838b8c';
fiware-idm_1 | Executing (default): INSERT INTO `auth_token` (`access_token`,`expires`,`valid`,`pep_proxy_id`) VALUES ('a0d54a6f-8461-4000-bb80-5fb60193bcb4','2018-05-04 11:45:21',true,'pep_proxy_edf60435-7de7-4875-85a9-cf68b8838b8c');
fiware-idm_1 | POST /v3/auth/tokens 201 13.733 ms - 74
The error "SyntaxError: Unexpected token u in JSON at position 0", as stated here, is probably due to some place at the code where JSON.parse is called with an undefined parameter. You are getting this message because the error was not properly treated and the exception is being thrown (exception not treated).
In the Wilma PEP Proxy github, we can see the latest changes at the code and we can guess/infer where this error comes from.
I think you can open an issue at github.