Well I am having problems with the self-hosted gateway in an APIM.
I have followed these tutorials:
To create an Azure API Management service: https://learn.microsoft.com/en-us/azure/api-management/get-started-create-service-instance
To provision a self-hosted gateway: https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-provision-self-hosted-gateway
To deploy it: https://learn.microsoft.com/en-us/azure/api-management/how-to-deploy-self-hosted-gateway-kubernetes
Everything goes well and the self-hosted gateway is running; the LoadBalancer service is fine and has an external IP. I even have this green check:
However, when I visit the Gateway URL https://apim-example.azure-api.net I get { "statusCode": 404, "message": "Resource not found" } in the browser.
I don't know why, because I have a Hello world! API example deployed and assigned to the gateway, and if, instead of making a GET on https://apim-example.azure-api.net/example/kenny, I make the GET using the public Load Balancer IP of the self-hosted gateway (https://XX.XX.XX.XX/example/kenny), the API responds with a 200 OK.
Does anyone have an idea how to solve this? Thanks a lot!
Following all the Microsoft documentation linked above, I have created an APIM instance and a Self-hosted Gateway in it, and added the Kubernetes deployment for the Gateway:
Then I created a basic HTTP Trigger in the Azure Function App and imported it into the Azure APIM instance, adding the Self-Hosted Gateway in the Settings of that API:
Note: For testing purposes, I have unchecked the "Subscription required" option.
However, when I visit the Gateway URL https://apim-example.azure-api.net, I get { "statusCode": 404, "message": "Resource not found" } in the browser.
As stated in this MS Doc, if we access the base URL without specifying an API, a 404 response status code is returned.
If the API name is passed to the Self-hosted Gateway API of the APIM instance, then the result is as expected:
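To make the comparison concrete, here is a minimal sketch using Python's requests library (the hostname and the /example/kenny operation come from the question; substitute your own gateway endpoint, API URL suffix, and operation template):

import requests

# Gateway endpoint and operation path taken from the question; adjust to your setup.
GATEWAY = "https://apim-example.azure-api.net"

# The bare base URL is not mapped to any API, so APIM returns 404 by design.
r = requests.get(GATEWAY)
print(r.status_code, r.text)   # 404 {"statusCode": 404, "message": "Resource not found"}

# A URL that includes the API suffix and an operation assigned to the gateway.
r = requests.get(f"{GATEWAY}/example/kenny")
print(r.status_code, r.text)   # 200 and the Hello world! API response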
I've changed my approach and gone back to what worked earlier: I configured an API gateway to call the Google Cloud Functions, and it called them with the appropriate permissions when I passed in an API key. I think it's erroring when trying to call the workflow because I didn't specify a resource, but I'm not sure. It looks like the API key is working and the OAuth is failing. My OAuth is configured with a fresh connection created after I made the workflow; it's authenticated on my end (I clicked my account in Google and everything), so I'm 99.99% sure the OAuth is configured correctly. When I called the GCP function with the API Gateway, I didn't have to use OAuth.
Is OAuth a requirement for the Workflows API? Are there any workarounds?
How do I specify the params for the Workflow in the API Gateway config?
Named Credential:
Label: GoogleCloudFunction
Name: GoogleCloudFunction
URL: https://workflowexecutions.googleapis.com
Authentication
Certificate
Identity Type: Named Principal
Authentication Protocol: OAuth 2.0
Authentication Provider: GoogleCloudAuth
Scope: https://www.googleapis.com/auth/cloud-platform
Authentication Status: Authenticated
Log from API Gateway:
httpRequest: {
  latency: "0.039s"
  protocol: "http"
  requestMethod: "POST"
  requestSize: "1269"
  requestUrl: "/create-site-tracker-site?key=HIDDEN"
  responseSize: "743"
  status: 401
}
insertId: "48330ec2-7114-4270-b465-68ae6308bdc34850908905639612439#a1"
jsonPayload: {
  api_key: "HIDDEN"
  api_key_state: "VERIFIED"
  api_version: "1.0.0"
  http_status_code: 401
  location: "us-central1"
  log_message: "1.create_site_tracker_site_0s5865srg8pbr_apigateway_quick_hangout_329722_cloud_goog.CreateSiteFunction is called"
  response_code_detail: "via_upstream"
}
API Config
# openapi2-functions.yaml
swagger: '2.0'
info:
  title: create-site-tracker-site with auth
  description: Create Site in Site Tracker using JSForce
  version: 1.0.0
schemes:
  - https
produces:
  - application/json
paths:
  /create-site-tracker-site:
    post:
      summary: Create Site
      operationId: createSiteFunction
      x-google-backend:
        address: https://workflowexecutions.googleapis.com/v1/projects/us-central1-quick-hangout-329722/locations/us-central1/workflows/create-site-and-project/executions
      security:
        - api_key: []
      responses:
        '200':
          description: A successful response
          schema:
            type: string
securityDefinitions:
  # This section configures basic authentication with an API key.
  api_key:
    type: "apiKey"
    name: "key"
    in: "query"
Your HTTP request appears to include no Authorization header. Without this, it is unlikely that your call will succeed unless your Cloud Functions permit unauthenticated calls.
It's difficult to understand what you're doing because e.g. "works when I test it manually" is imprecise and provides little information about what you did. I assume (!?) you're using gcloud functions call, which authenticates for you.
Please add more detail to your question, including the commands that you tried, which succeeded and which failed, and any error messages.
The majority of Google's services are exposed as REST APIs, so you can invoke almost everything using simple HTTP commands.
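For illustration, here is a rough sketch (not your code) of adding an Authorization header when calling an HTTP-triggered Cloud Function from Python, using google-auth to mint an identity token; the function URL is a placeholder:

import requests
import google.auth.transport.requests
from google.oauth2 import id_token

# Placeholder URL; the function's own trigger URL is used as the token audience.
FUNCTION_URL = "https://REGION-PROJECT.cloudfunctions.net/your-function"

# Requires application default credentials that can mint ID tokens
# (e.g. a service account key or the metadata server on GCP).
auth_request = google.auth.transport.requests.Request()
token = id_token.fetch_id_token(auth_request, FUNCTION_URL)

resp = requests.post(
    FUNCTION_URL,
    headers={"Authorization": f"Bearer {token}"},
    json={"example": "payload"},
)
print(resp.status_code, resp.text)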
The current workaround is calling the workflow from a Google Cloud Function, and then calling the function via API Gateway and passing a key. Gross, but it works.
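For anyone taking the same route, here is a minimal sketch of the function side of that workaround in Python, using google-auth and the Workflows Executions REST API (the executions URL is copied from the x-google-backend address in the config above; the entry-point name and argument handling are illustrative):

import google.auth
from google.auth.transport.requests import AuthorizedSession

EXECUTIONS_URL = (
    "https://workflowexecutions.googleapis.com/v1/"
    "projects/us-central1-quick-hangout-329722/locations/us-central1/"
    "workflows/create-site-and-project/executions"
)

def create_site_tracker_site(request):
    # Use the function's own service-account credentials for the OAuth 2.0 token.
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    session = AuthorizedSession(credentials)  # attaches the access token

    # The Workflows Executions API expects the runtime argument as a JSON string.
    body = {"argument": request.get_data(as_text=True) or "{}"}
    resp = session.post(EXECUTIONS_URL, json=body)
    return (resp.text, resp.status_code)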
I have, in the same project, one HTTP Cloud Function and a Cloud Scheduler job that sends a POST request to this function.
I want to allow only requests from within the project to call the function. However, when I set Ingress Settings to "Allow internal traffic only", the Cloud Scheduler job gets PERMISSION_DENIED.
Here is the error log (edited):
{
  httpRequest: {
    status: 403
  }
  insertId: "insert_id"
  jsonPayload: {
    @type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
    jobName: "projects/project_name/locations/location/jobs/cloud_scheduler_job"
    status: "PERMISSION_DENIED"
    targetType: "HTTP"
    url: "https://location-project_name.cloudfunctions.net/cloud_function_name"
  }
  logName: "projects/project_name/logs/cloudscheduler.googleapis.com%2Fexecutions"
  receiveTimestamp: "2020-02-20T13:15:43.134508712Z"
  resource: {
    labels: {
      job_id: "cloud_scheduler_name"
      location: "location"
      project_id: "project_id"
    }
    type: "cloud_scheduler_job"
  }
  severity: "ERROR"
  timestamp: "2020-02-20T13:15:43.134508712Z"
}
Link to UI options for ingressSettings
According to the official documentation:
To use Cloud Scheduler your Cloud project must contain an App Engine app that is located in one of the supported regions. If your project does not have an App Engine app, you must create one.
Cloud Scheduler overview
Therefore, find the location of your App Engine application by running:
gcloud app describe
# check for the locationId, e.g. europe-west2
Then make sure that you deploy your Cloud Function with Ingress Settings set to "Allow internal traffic only" in the same location as your App Engine application.
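For example (the function name and runtime are placeholders; the region should match the locationId shown above):
gcloud functions deploy my_function --region=europe-west2 --runtime=python38 --trigger-http --ingress-settings=internal-only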
I deployed a cloud function in the same region as my app engine application and everything worked as expected.
When you use the option "Allow internal traffic only", you need to use some kind of authentication within your Cloud Functions (to avoid this you can use the option "Allow all traffic").
Please check the third comment provided in this link: https://serverfault.com/questions/1000987/trigger-google-cloud-functions-from-google-cloud-scheduler-with-private-network
I'm using Dialogflow with HTTP requests in a project that runs on Twilio. With the recent need to migrate to the v2 API of Dialogflow, the client access token will no longer work. Reading about the new authentication, I generated the JSON key following the instructions in the Google Cloud docs, but I can't make it work. Because I need to do all the interaction through POST requests to the Dialogflow agent, does anyone know how I can generate the authentication token properly?
{
  "error": {
    "code": 401,
    "message": "Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.",
    "status": "UNAUTHENTICATED"
  }
}
Thanks
This is the function code that currently works to make the HTTP request. The problem is that all the services are in Twilio and I don't have access to the server, so I can't define the environment variable.
Twilio Function code
Twilio Functions uses Node.js and allows me to install many npm modules, with the following limitation: "Native Packages Not Supported - Functions does not provide a C/C++ compiler required to compile native addon modules. This means modules that depend on node-gyp can not be installed to Functions."
I don't know if this limitation affects using a service account in my case.
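For reference, a minimal sketch of the service-account token flow in question, shown here in Python with google-auth for illustration (the key file path, project ID, and session ID are placeholders; a similar flow exists for Node.js):

import requests
from google.oauth2 import service_account
from google.auth.transport.requests import Request

# Placeholder path to the service-account JSON key generated in the Google Cloud console.
credentials = service_account.Credentials.from_service_account_file(
    "dialogflow-key.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
credentials.refresh(Request())   # mints a short-lived OAuth 2.0 access token
token = credentials.token

# Placeholder project and session IDs in a Dialogflow v2 detectIntent call.
resp = requests.post(
    "https://dialogflow.googleapis.com/v2/projects/YOUR_PROJECT/agent/sessions/12345:detectIntent",
    headers={"Authorization": f"Bearer {token}"},
    json={"queryInput": {"text": {"text": "hello", "languageCode": "en"}}},
)
print(resp.status_code, resp.json())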
Some time ago we set up a PEP proxy to secure the API our widgets are using. Everything had been working correctly until today, when we started receiving a 502 Bad Gateway error code for every call going through the proxy.
We have checked that the requests are reaching our server and that it is responding to them correctly. The parameters added by the proxy (x-nick-name, x-display-name...) are defined correctly too.
We have also checked the requests outside WireCloud and everything goes well: we get the token properly and use it in the subsequent calls without problems.
We do not know where this error comes from. Any ideas?
EDIT 06/11/2015
After Alvaro's new setting we are receiving the following error in the response body:
{
  "description": "Connection Error",
  "details": "('Connection aborted.', error(104, 'Connection reset by peer'))"
}
EDIT 09/11/15
Today, the code received in the request's response is different: 504 GATEWAY TIMEOUT
{
  "description": "Connection Error",
  "details": "('Connection aborted.', error(104, 'Connection reset by peer'))"
}
EDIT 16/11/15
Answering Mr. Alonso's questions:
1.- If we send the request directly to the server, the response is correctly displayed in the application.
2.- Here you can see the logs from the PEP Proxy with the new line added. As you can see, the request is redirected correctly, but the info is not displayed in the app.
It seems that the problem is on the PEP proxy side.
I've checked using other tools like curl (I obtained the connection details from the server log). Making the same request using curl gives the same result as using WireCloud: connection reset by peer. Also, if I make the request without the X-Auth-Token header, your service responds with a 401 error code. This is important, because it means that there is no communication problem between the Mashup portal and your server. I don't know why, but the PEP proxy seems to be crashing when making the authenticated request from the Mashup portal (the same command works when executed from my machine).
I suggest you restart the PEP proxy. If the problem persists, please attach any available information about the crash from the PEP proxy logs.
You can check three things to give us more information:
Try to remove the PEP proxy and send the request directly to your service.
Introduce a new log line in the PEP proxy to print the headers of the response: at line 41 of lib/HTTPClient.js, add log.debug("Headers: ", headers);
Try to send a request to the root path (directly to the Tomcat or Apache).
If that does not help, perhaps we can talk in private to check more information.
I have the Gmail API activated and I am on an instance with "full API access to all Google Cloud services". When I run the following from the instance:
from oauth2client.client import GoogleCredentials
from googleapiclient.discovery import build

credentials = GoogleCredentials.get_application_default()
service = build('gmail', 'v1', credentials=credentials)
service.users().messages().list(userId='me').execute()
I get:
HttpError: <HttpError 403 when requesting https://www.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "Insufficient Permission">
I have tried several other Gmail API calls and this is always the response.
You will need to use the OAuth 2.0 for Server to Server Applications method (i.e. a service account).
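A minimal sketch of that flow, assuming a service account with domain-wide delegation and a Google Workspace mailbox to impersonate (the key file name, user address, and scope below are placeholders):

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]  # placeholder scope

# Placeholder key file; domain-wide delegation must be granted to this service account.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("user@yourdomain.com")   # the mailbox to act on behalf of

service = build("gmail", "v1", credentials=credentials)
print(service.users().messages().list(userId="me").execute())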