How can I have multiple API endpoints for one Google Cloud Function? - google-cloud-functions

I have a Google Cloud Function which contains multiple modules to be invoked on different paths.
I am using the serverless framework to deploy my functions, but it has the limitation of only one path per function.
I want to use multiple paths in one function just like we can in the AWS serverless framework.
Suppose a user cloud function has two paths, /user/add and /user/remove; both paths should invoke the same function.
Something like this:
serverless.yml
functions:
user:
handler: handle
events:
- http: user/add
- http: user/remove
How can I have multiple API endpoints for one GCF?

Indeed, there is no actual REST routing layer backing Google Cloud Functions; it uses plain HTTP triggers out of the box.
To work around this, I'm using my request payload to determine which action to perform. In the body, I'm adding a key named "path".
For example, consider the Function USER.
To add a user:
{
"path":"add",
"body":{
"first":"Jhon",
"last":"Doe"
}
}
To remove a user:
{
"path":"remove",
"body":{
"first":"Jhon",
"last":"Doe"
}
}
If your operations are purely CRUD, you can use request.method, which gives you the HTTP verb (GET, POST, PUT, DELETE), to determine the operation.
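A minimal sketch of the payload-based dispatch described above (the handler bodies are illustrative stubs):
exports.user = (req, res) => {
  // assumes a JSON payload like { "path": "add", "body": { ... } }
  const handlers = {
    add: (body) => res.json({ message: "user added", user: body }),
    remove: (body) => res.json({ message: "user removed" }),
  };
  const handler = handlers[req.body.path];
  if (!handler) {
    return res.status(400).json({ error: `unknown path '${req.body.path}'` });
  }
  return handler(req.body.body);
};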

You could use Firebase Hosting to rewrite URLs.
In your firebase.json file:
"hosting": {
"rewrites": [
{
"source": "/api/v1/your/path/here",
"function": "your_path_here"
}
]
}
Keep in mind this is a workaround, and it has a major drawback: you pay for a double hit (one for Hosting, one for the function). Consider this if your app has to scale.

Cloud Functions can be written in several runtimes. The Node.js runtime uses the Express framework, so you can use its router to serve multiple routes within a single function.
Add the dependency
npm install express@4.17.1
The following example uses TypeScript. Follow these guidelines to initialize a TypeScript project.
// index.ts
import { HttpFunction } from '@google-cloud/functions-framework';
import * as express from 'express';
import foo from './foo';
const app = express();
const router = express.Router();
app.use('/foo', foo)
const index: HttpFunction = (req, res) => {
res.send('Hello from the index route...');
};
router.get('/', index)
app.use('/*', router)
export const api = app
// foo.ts
import { HttpFunction } from '@google-cloud/functions-framework';
import * as express from 'express';
const router = express.Router();
const foo: HttpFunction = (req, res) => {
res.send('Hello from the foo route...');
};
router.get('/', foo)
export default router;
To deploy, run:
gcloud functions deploy YOUR_NAME \
--runtime nodejs16 \
--trigger-http \
--entry-point api \
--allow-unauthenticated
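You can also run the same entry point locally with the Functions Framework before deploying (a quick sketch; the port is arbitrary):
npx @google-cloud/functions-framework --target=api --port=8080
curl http://localhost:8080/foo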

Currently, Google Cloud Functions allows only one event definition per function; see the documentation for more.

Express can be installed with npm i express, then imported and used more or less as normal to handle routing:
const express = require("express");
const app = express();
// enable CORS if desired
app.use((req, res, next) => {
res.set("Access-Control-Allow-Origin", "*");
next();
});
app.get("/", (req, res) => {
res.send("hello world");
});
exports.example = app; // `example` is whatever your GCF entrypoint is
If Express isn't an option for some reason or the use case is very simple, a custom router may suffice.
If parameters or wildcards are involved, consider using route-parser. A deleted answer suggested this app as an example.
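For instance, a small sketch of route-parser usage (the route spec here is illustrative):
const Route = require("route-parser");
const route = new Route("/user/:action");
route.match("/user/add");  // { action: "add" }
route.match("/users/add"); // false (no match)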
The Express request object has a few useful parameters you can take advantage of:
req.method which gives the HTTP verb
req.path which gives the path without the query string
req.query object of the parsed key-value query string
req.body the parsed JSON body
Here's a simple proof-of-concept to illustrate:
const routes = {
GET: {
"/": (req, res) => {
const name = (req.query.name || "world");
res.send(`<!DOCTYPE html>
<html lang="en"><body><h1>
hello ${name.replace(/[\W\s]/g, "")}
</h1></body></html>
`);
},
},
POST: {
"/user/add": (req, res) => { // TODO stub
res.json({
message: "user added",
user: req.body.user
});
},
"/user/remove": (req, res) => { // TODO stub
res.json({message: "user removed"});
},
},
};
exports.example = (req, res) => {
if (routes[req.method] && routes[req.method][req.path]) {
return routes[req.method][req.path](req, res);
}
res.status(404).send({
error: `${req.method}: '${req.path}' not found`
});
};
Usage:
$ curl https://us-east1-foo-1234.cloudfunctions.net/example?name=bob
<!DOCTYPE html>
<html lang="en"><body><h1>
hello bob
</h1></body></html>
$ curl -X POST -H "Content-Type: application/json" --data '{"user": "bob"}' \
> https://us-east1-foo-1234.cloudfunctions.net/example/user/add
{"message":"user added","user":"bob"}
If you run into trouble with CORS and/or preflight issues, see Google Cloud Functions enable CORS?
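As a quick sketch, preflight OPTIONS requests can be answered before the routing logic runs (the header values here are permissive examples, not a recommendation):
exports.example = (req, res) => {
  res.set("Access-Control-Allow-Origin", "*");
  if (req.method === "OPTIONS") {
    res.set("Access-Control-Allow-Methods", "GET, POST");
    res.set("Access-Control-Allow-Headers", "Content-Type");
    return res.status(204).send("");
  }
  // ...dispatch to the routes table as above
};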

Related

How can I properly invoke a Google Cloud Function inside another function [duplicate]

I am using a Cloud Function to call another Cloud Function on the free spark tier.
Is there a special way to call another Cloud Function? Or do you just use a standard http request?
I have tried calling the other function directly like so:
exports.purchaseTicket = functions.https.onRequest((req, res) => {
fetch('https://us-central1-functions-****.cloudfunctions.net/validate')
.then(response => response.json())
.then(json => res.status(201).json(json))
})
But I get the error
FetchError: request to
https://us-central1-functions-****.cloudfunctions.net/validate
failed, reason: getaddrinfo ENOTFOUND
us-central1-functions-*****.cloudfunctions.net
us-central1-functions-*****.cloudfunctions.net:443
which sounds like Firebase is blocking the connection, even though the target is Google-owned and therefore shouldn't be blocked:
the Spark plan only allows outbound network requests to Google owned
services.
How can I make use a Cloud Function to call another Cloud Function?
You don't need to go through the trouble of invoking shared functionality via a whole new HTTPS call. You can simply abstract the common bits of code into a regular JavaScript function that gets called by either one. For example, you could modify the template helloWorld function like this:
var functions = require('firebase-functions');
exports.helloWorld = functions.https.onRequest((request, response) => {
common(response)
})
exports.helloWorld2 = functions.https.onRequest((request, response) => {
common(response)
})
function common(response) {
response.send("Hello from a regular old function!");
}
These two functions will do exactly the same thing, but with different endpoints.
To answer the question, you can make an HTTPS request to call another cloud function:
export const callCloudFunction = async (functionName: string, data: {} = {}) => {
let url = `https://us-central1-${config.firebase.projectId}.cloudfunctions.net/${functionName}`
await fetch(url, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ data }),
})
}
(Note we are using the npm package 'node-fetch' as our fetch implementation.)
And then simply call it:
callCloudFunction('search', { query: 'yo' })
There are legitimate reasons to do this. We used this to ping our search cloud function every minute and keep it running. This greatly lowers response latency for a few dollars a year.
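If you're on firebase-functions, one way to run that periodic ping is a scheduled function; a sketch, assuming Cloud Scheduler is available (it requires the Blaze plan) and reusing the callCloudFunction helper above:
const functions = require('firebase-functions');

// ping the search function every minute to keep an instance warm
exports.keepSearchWarm = functions.pubsub.schedule('every 1 minutes').onRun(async () => {
  await callCloudFunction('search', { query: 'warmup' });
});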
It's possible to invoke another Google Cloud Function over HTTP by including an authorization token. It requires a preliminary HTTP request to the metadata server to obtain the token, which you then use when you call the actual Google Cloud Function that you want to run.
https://cloud.google.com/functions/docs/securing/authenticating#function-to-function
const {get} = require('axios');
// TODO(developer): set these values
const REGION = 'us-central1';
const PROJECT_ID = 'my-project-id';
const RECEIVING_FUNCTION = 'myFunction';
// Constants for setting up metadata server request
// See https://cloud.google.com/compute/docs/instances/verifying-instance-identity#request_signature
const functionURL = `https://${REGION}-${PROJECT_ID}.cloudfunctions.net/${RECEIVING_FUNCTION}`;
const metadataServerURL =
'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=';
const tokenUrl = metadataServerURL + functionURL;
exports.callingFunction = async (req, res) => {
// Fetch the token
const tokenResponse = await get(tokenUrl, {
headers: {
'Metadata-Flavor': 'Google',
},
});
const token = tokenResponse.data;
// Provide the token in the request to the receiving function
try {
const functionResponse = await get(functionURL, {
headers: {Authorization: `bearer ${token}`},
});
res.status(200).send(functionResponse.data);
} catch (err) {
console.error(err);
res.status(500).send('An error occurred! See logs for more details.');
}
};
October 2021 Update: You should not need to do this from a local development environment; thank you Aman James for clarifying this.
Although the question is tagged JavaScript and the other answers use it, I want to share a Python example, since it reflects the title and also the authentication aspect mentioned in the question.
Google Cloud Functions provides a REST API interface that includes a call method, which can be used from another Cloud Function.
Although the documentation mentions using Google-provided client libraries, there is still no dedicated one for Cloud Functions on Python.
Instead you need to use the general Google API Client Libraries. [This is the Python one.]
Probably the main difficulty with this approach is understanding the authentication process.
Generally, you need to provide two things to build a client service: credentials and scopes.
The simplest way to get credentials is to rely on the Application Default Credentials (ADC) library. The right documentation about that is:
https://cloud.google.com/docs/authentication/production
https://github.com/googleapis/google-api-python-client/blob/master/docs/auth.md
The place to get scopes is each REST API function's documentation page.
For example, the OAuth scope: https://www.googleapis.com/auth/cloud-platform
The complete code example of calling a 'hello-world' cloud function is below.
Before running:
Create a default Cloud Function on GCP in your project.
Keep and note the default service account to use.
Keep the default body.
Note the project_id, function name, and location where you deploy the function.
If you will call the function outside the Cloud Functions environment (locally, for instance), set the environment variable GOOGLE_APPLICATION_CREDENTIALS according to the docs mentioned above.
If you call it from another Cloud Function, you don't need to configure credentials at all.
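For the local case, setting the variable looks like this (the key path is illustrative):
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"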
from googleapiclient.discovery import build
from googleapiclient.discovery_cache.base import Cache
import google.auth
import pprint as pp
def get_cloud_function_api_service():
class MemoryCache(Cache):
_CACHE = {}
def get(self, url):
return MemoryCache._CACHE.get(url)
def set(self, url, content):
MemoryCache._CACHE[url] = content
scopes = ['https://www.googleapis.com/auth/cloud-platform']
# If the environment variable GOOGLE_APPLICATION_CREDENTIALS is set,
# ADC uses the service account file that the variable points to.
#
# If the environment variable GOOGLE_APPLICATION_CREDENTIALS isn't set,
# ADC uses the default service account that Compute Engine, Google Kubernetes Engine, App Engine, Cloud Run,
# and Cloud Functions provide
#
# see more on https://cloud.google.com/docs/authentication/production
credentials, project_id = google.auth.default(scopes)
service = build('cloudfunctions', 'v1', credentials=credentials, cache=MemoryCache())
return service
google_api_service = get_cloud_function_api_service()
name = f'projects/{project_id}/locations/us-central1/functions/function-1'  # f-string, so project_id is interpolated
body = {
'data': '{ "message": "It is awesome, you are develop on Stack Overflow language!"}' # json passed as a string
}
result_call = google_api_service.projects().locations().functions().call(name=name, body=body).execute()
pp.pprint(result_call)
# expected output is:
# {'executionId': '3h4c8cb1kwe2', 'result': 'It is awesome, you are develop on Stack Overflow language!'}
These suggestions don't seem to work anymore.
To get this to work, I made calls from the client side using httpsCallable and imported the requests into Postman. Some other links to https://firebase.google.com/docs/functions/callable-reference were helpful, but figuring out where the information was available took some work.
I wrote everything down here, as it takes a bit of explaining and some examples:
https://www.tiftonpartners.com/post/call-google-cloud-function-from-another-cloud-function
Here's an inline version, as the URL might expire.
This 'should' work; it's not tested, but it's based on what I wrote and tested for my own application.
const axios = require('axios');

module.exports = function(name, context) {
  const {protocol, headers} = context.rawRequest;
  const host = headers['x-forwarded-host'] || headers.host;
  // there will be two different paths for
  // production and development
  const url = `${protocol}://${host}/${name}`;
  const method = 'post';
  const auth = headers.authorization;
  return async (...rest) => {
    const data = JSON.stringify({data: rest});
    const config = {
      method, url, data,
      headers: {
        'Content-Type': 'application/json',
        'Authorization': auth,
        'Connection': 'keep-alive',
        'Pragma': 'no-cache',
        'Cache-control': 'no-cache',
      }
    };
    const {data: {result}} = await axios(config);
    return result;
  };
};
This is how you would call this function.
const crud = httpsCallable('crud',context);
return await crud('read',...data);
context you get from the Google Cloud entry point; it is the most important piece, as it contains the JWT token needed to make the subsequent call to your cloud function (in my example, crud).
To define the other callable endpoint, you would write an export statement as follows:
exports.crud = functions.https.onCall(async (data, context) => {})
It should work just like magic.
Hopefully this helps.
I found that a combination of two of the methods works best:
const anprURL = `https://${REGION}-${PROJECT_ID}.cloudfunctions.net/${RECEIVING_FUNCTION}`;
const metadataServerURL =
'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=';
const tokenUrl = metadataServerURL + anprURL;
// Fetch the token
const tokenResponse = await fetch(tokenUrl, {
method: "GET"
headers: {
'Metadata-Flavor': 'Google',
},
});
const token = await tokenResponse.text();
const functionResponse = await fetch(anprURL, {
method: 'POST',
headers: {
"Authorization": `bearer ${token}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({"imageUrl": url}),
});
// Convert the response to text
const responseText = await functionResponse.text();
// Convert from text to json
const responseJson = JSON.parse(responseText);
Extending Shea Hunter Belsky's answer: be aware that the call to Google's metadata server to fetch the authorization token does not work from a local machine.
Since fetch was not readily available in Node.js and my project was already using the axios library, I did it like this:
const url = `https://${REGION}-${PROJECT_ID}.cloudfunctions.net/${FUNCTION_NAME}`;
const headers = {
'Content-Type': 'application/json',
};
const response = await axios.post(url, { data: YOUR_DATA }, { headers });
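If the call has to work both locally and in production, a hedged alternative is google-auth-library, which obtains the ID token through Application Default Credentials instead of the metadata server (assuming the library is installed and credentials are configured as described in the Python answer above):
const { GoogleAuth } = require('google-auth-library');

const auth = new GoogleAuth();
const url = `https://${REGION}-${PROJECT_ID}.cloudfunctions.net/${FUNCTION_NAME}`;

async function callFunction(data) {
  // getIdTokenClient returns a client that attaches an ID token
  // for the target audience to every request it makes
  const client = await auth.getIdTokenClient(url);
  const response = await client.request({ url, method: 'POST', data });
  return response.data;
}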

Get raw request body in feathers

I am trying to implement WooCommerce webhook functionality for Feathers. To authenticate the request, I need to verify the signature of the raw request body, like so:
const isSignatureValid = (
  secret: string,
  body: any,
  signature?: string
): boolean => {
  const signatureComputed = crypto
    .createHmac("SHA256", secret)
    .update(Buffer.from(JSON.stringify(body), "utf8"))
    .digest("base64");
  return signatureComputed === signature;
};
Currently my signature never verifies. My guess is that this is because req.body is not the actual raw body of the request but an already parsed version with some FeathersJS goodness added.
Question is: how do I obtain the raw request body in a standard Feathers app (created with the Feathers CLI)?
Not sure if this is the most idiomatic way to do this, but I came up with the following:
Inside the verify option of express.json() in the main app.ts file, it is possible to add the raw, unparsed body to the req object. This is handy, as for some webhooks (WooCommerce, Stripe) you only need the raw body to verify the signature, but otherwise work with the parsed JSON.
import * as http from "http";

export interface IRequestRawBody extends http.IncomingMessage {
  rawBody: Buffer;
}
app.use(
express.json({
verify: (req: IRequestRawBody, res, buf) => {
const rawEndpoints: string[] = ["/wc-webhook"];
if (req.url && rawEndpoints.includes(req.url)) {
req.rawBody = buf;
}
}
})
);
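With the raw body captured, the signature check from the question can run against req.rawBody instead of re-serializing the parsed body; a sketch (the header name follows WooCommerce's convention):
const crypto = require("crypto");

const verifyWooSignature = (req, secret) => {
  const signatureComputed = crypto
    .createHmac("sha256", secret)
    .update(req.rawBody)
    .digest("base64");
  return signatureComputed === req.headers["x-wc-webhook-signature"];
};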
// Initialize our service with any options it requires
app.use('/country', function (req, res, next) {
console.log(req);
next();
}, new Country(options, app)
);
As shown above, we can inspect the request body in an Express middleware registered before the service class.
This is an extension of the original answer from @florian-norbert-bepunkt, since for me it didn't work out of the box. I also needed access to the raw body buffer in order to calculate the security hash.
app.use(express.json({
verify: (req, res, buf) => {
const rawEndpoints = ["/someAPI"];
if (req.url && rawEndpoints.includes(req.url)) {
req.feathers = {dataRawBuffer: buf};
}
}
}));
After that, you can access the original buffer from the context as:
context.params.dataRawBuffer
I can confirm that this works on FeathersJS v4; maybe this will help someone.

Node Elasticsearch - Bulk indexing not working - Content-Type header [application/x-ldjson] is not supported

Being new to Elasticsearch, I am exploring it by integrating it with Node and trying to execute the following online Git example on Windows.
https://github.com/sitepoint-editors/node-elasticsearch-tutorial
While trying to import the data of 1000 items from data.json, the execution of 'node index.js' fails with the following error.
With trace enabled, I now see the following as the root cause, from the bulk function:
"error": "Content-Type header [application/x-ldjson] is not supported",
** "status": 406**
I see a changelog entry at https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/changelog.html which says the following:
13.0.0 (Apr 24 2017) bulk and other APIs that send line-delimited JSON bodies now use the Content-Type: application/x-ndjson header #507
Any idea how to resolve this content type issue in index.js?
index.js
(function () {
'use strict';
const fs = require('fs');
const elasticsearch = require('elasticsearch');
const esClient = new elasticsearch.Client({
host: 'localhost:9200',
log: 'error'
});
const bulkIndex = function bulkIndex(index, type, data) {
let bulkBody = [];
data.forEach(item => {
bulkBody.push({
index: {
_index: index,
_type: type,
_id: item.id
}
});
bulkBody.push(item);
});
esClient.bulk({body: bulkBody})
.then(response => {
console.log(`Inside bulk3...`);
let errorCount = 0;
response.items.forEach(item => {
if (item.index && item.index.error) {
console.log(++errorCount, item.index.error);
}
});
console.log(`Successfully indexed ${data.length - errorCount} out of ${data.length} items`);
})
.catch(console.error);
};
// only for testing purposes
// all calls should be initiated through the module
const test = function test() {
const articlesRaw = fs.readFileSync('data.json');
const articles = JSON.parse(articlesRaw);
console.log(`${articles.length} items parsed from data file`);
bulkIndex('library', 'article', articles);
};
test();
module.exports = {
bulkIndex
};
} ());
my local windows environment:
java version 1.8.0_121
elasticsearch version 6.1.1
node version v8.9.4
npm version 5.6.0
The bulk function doesn't return a promise. It accepts a callback function as a parameter.
esClient.bulk(
{body: bulkBody},
function(err, response) {
if (err) { console.error(err); return; }
console.log(`Inside bulk3...`);
let errorCount = 0;
response.items.forEach(item => {
if (item.index && item.index.error) {
console.log(++errorCount, item.index.error);
}
});
console.log(`Successfully indexed ${data.length - errorCount} out of ${data.length} items`);
}
)
or use util.promisify to convert a function accepting an (err, value) => ... style callback into a function that returns a promise:
const util = require('util')
// bind so that bulk keeps the client as its `this` value
const esClientBulk = util.promisify(esClient.bulk.bind(esClient))
esClientBulk({body: bulkBody})
.then(...)
.catch(...)
EDIT: Just found out that elasticsearch-js supports both callbacks and promises. So this should not be an issue.
Looking at the package.json of the project you've linked, it uses elasticsearch-js version ^11.0.1, an old version that sends bulk-upload requests with the application/x-ldjson header, which newer Elasticsearch versions do not support. So upgrading elasticsearch-js to a newer version (the current latest is 14.0.0) should fix it.
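For example, pinning the version mentioned above:
npm install elasticsearch@14.0.0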

Proxy webpack-dev-server based on request payload to return json file

I've searched what feels like the entire internet, but found no solution for the issue I'm encountering.
I have multiple HTTP requests that I want to mock. All requests have the same URL but differ in their request payload, which contains a GraphQL query. Based on this query, I want to return a specific JSON file. All proxy settings I have found can handle parameters but cannot produce responses based on the request payload.
Have you taken a look at this functionality?
https://webpack.js.org/configuration/dev-server/#devserver-before
Since webpack-dev-server is an instance of an Express app, you can set it up in the before/after hooks. Hooks get the app (server) instance as their first argument.
So for your case, your webpack development config would look like:
module.exports = {
//...
devServer: {
before: function(app) {
var bodyParser = require('body-parser');
app.use(bodyParser.json());
app.post('/some/path/graphql', function(req, res) {
var query = req.body;
// ...your custom logic of
// specific query handling goes here
if (condition(query)) {
res.json({ mockedResponse: 'foo' });
} else {
res.json({ mockedResponse: 'bar' });
}
});
}
}
};
UPD: keep in mind that if you're using the proxy config for devServer, you might want to use the after hook instead of before, so that requests can be proxied when needed.
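A sketch of that variant (same handler, only the hook changes):
module.exports = {
  //...
  devServer: {
    after: function(app) {
      var bodyParser = require('body-parser');
      app.use(bodyParser.json());
      app.post('/some/path/graphql', function(req, res) {
        // runs only for requests the proxy did not intercept
        res.json({ mockedResponse: 'foo' });
      });
    }
  }
};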

How can I access my couchdb from my node server using REST/JSON/GET/POST?

I'm trying to access my CouchDB from a Node.js server.
I've followed the Node.js tutorial and have set up this simple server:
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(80, "127.0.0.1");
console.log('Server running at http://127.0.0.1:80/');
I would like to make RESTful HTTP GET and POST requests to the Node.js server. The Node.js server should then be able to make GET/POST requests to CouchDB, which responds with JSON objects.
How might I do this?
First of all, I am the author of nano and will use it in this response.
Here are some simple instructions to get started with Node.js and CouchDB.
mkdir test && cd test
npm install nano
npm install express
If you have CouchDB installed, great. If you don't, you will need to either install it or set up an instance online at iriscouch.com.
Now create a new file called index.js and place the following code inside it:
var express = require('express')
  , nano = require('nano')('http://localhost:5984')
  , app = module.exports = express()
  , db_name = "my_couch"
  , db = nano.use(db_name);
app.get("/", function(request, response) {
  nano.db.create(db_name, function (error, body, headers) {
    if(error) { return response.status(error['status-code']).send(error.message); }
    db.insert({foo: true}, "foo", function (error2, body2, headers2) {
      if(error2) { return response.status(error2['status-code']).send(error2.message); }
      response.status(200).send("Insert ok!");
    });
  });
});
app.listen(3333);
console.log("server is running. check expressjs.org for more cool tricks");
If you set up a username and password for your CouchDB, you need to include them in the URL. In the following line I added admin:admin@ to the URL as an example:
, nano = require('nano')('http://admin:admin@localhost:5984')
The problem with this script is that it tries to create the database on every request, which will fail as soon as you create it for the first time. Ideally you want to remove the database creation from the script so it can keep running:
var express = require('express')
  , db = require('nano')('http://localhost:5984/my_couch')
  , app = module.exports = express()
  ;
app.get("/", function(request, response) {
  db.get("foo", function (error, body, headers) {
    if(error) { return response.status(error['status-code']).send(error.message); }
    response.status(200).send(body);
  });
});
app.listen(3333);
console.log("server is running. check expressjs.org for more cool tricks");
You can now create the database either manually or programmatically. If you are curious how to achieve this, you can read an article I wrote a while back: Nano - Minimalistic CouchDB for node.js.
For more info refer to expressjs and nano. Hope this helps!
I have a module (node-couchdb-api) I've written for this exact purpose. It has no ORM or other features like that, it's just a simple wrapper for the HTTP API that CouchDB offers. It even follows the conventions established by Node.JS for async callbacks, making your code that much more consistent all-around. :)
You can use a Node.js module such as Cradle to work with CouchDB.
Here is a list of available Node.js modules: https://github.com/joyent/node/wiki/modules
Just make HTTP requests. I would recommend the request module.
Here's an example from my code
request({
"uri": this._base_url + "/" + user._id,
"json": user,
"method": "PUT"
}, this._error(cb));
Here's another example from my code
// save document
"save": function _save(post, cb) {
// doc changed so empty it from cache
delete this._cache[post.id];
// PUT document in couch
request({
"uri": this._base_url + "/" + post._id,
"json": post,
"method": "PUT"
}, this._error(function _savePost(err, res, body) {
if (body) {
body.id = post.id;
body.title = post.title;
}
cb(err, res, body);
}));
}