How to run a GCP Cloud Function written in Golang that launches a Dataflow job to import a text file to Spanner? - google-cloud-functions

I have used the example in: https://github.com/apache/beam/blob/master/sdks/go/examples/wordcount/wordcount.go#L82 as well as the advice from Google Cloud Support to use the following to run a Dataflow import job:
flag.Parse()
flag.Set("runner", "dataflow")
flag.Set("project", "xxxx")
flag.Set("region", "us-central1")
flag.Set("staging_location", "gs://xxx/temp")
flag.Set("job_name", "import-delivery_schedule")
beam.Init()
p := beam.NewPipeline()
p.Root()
if err := beamx.Run(context.Background(), p); err != nil {
	log.Fatalf("Failed to execute job: %v", err)
}
The cloud function is in the same project as the database.
The Dataflow import job is in the same project as the database.
The import job runs successfully from the console.
However, I'm unable to get this to work.
I'm getting this error: "Function execution took 18 ms, finished with status: 'connection error'"

If the import Dataflow job is getting created and runs successfully, then there seems to be no problem with the GCP Cloud Function itself.
Ensure the Dataflow workers have sufficient permissions to access the database. https://cloud.google.com/spanner/docs/import#iam These permissions (or a subset of them, depending on the type of modifications your import performs) are required for a Dataflow job to access and write to Spanner.
Grant these roles to the service account that the Dataflow workers run as. https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#worker-service-account
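For reference, here is a hedged sketch of how that snippet could be wrapped in an HTTP-triggered Cloud Function so that a pipeline failure is reported over HTTP instead of crashing the function with log.Fatalf (a crash is what surfaces as the generic 'connection error' status). The package and function names, project ID, and bucket are illustrative assumptions, not values from the question:

package importer

import (
	"flag"
	"log"
	"net/http"

	"github.com/apache/beam/sdks/go/pkg/beam"
	"github.com/apache/beam/sdks/go/pkg/beam/x/beamx"
)

// RunImport launches the Dataflow job when the function is invoked over HTTP.
func RunImport(w http.ResponseWriter, r *http.Request) {
	// These flags are registered by the Beam Dataflow runner, pulled in via beamx.
	flag.Set("runner", "dataflow")
	flag.Set("project", "my-project")                   // assumption: your project ID
	flag.Set("region", "us-central1")
	flag.Set("staging_location", "gs://my-bucket/temp") // assumption: your staging bucket
	flag.Set("job_name", "import-delivery-schedule")

	beam.Init()
	p := beam.NewPipeline()
	s := p.Root()
	_ = s // build the Spanner import transforms on this scope

	// Returning the error over HTTP keeps the function instance alive;
	// log.Fatalf would terminate it and show up as a 'connection error'.
	if err := beamx.Run(r.Context(), p); err != nil {
		log.Printf("Failed to execute job: %v", err)
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Write([]byte("Dataflow job submitted"))
}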

Related

how to close database connection without terminating the server connection in golang

Is there any way to close the database connection without terminating the HTTP server?
my code:
func thisone(w http.ResponseWriter, r *http.Request) {
	/* connect the db */
	defer database.Close()
	/* query the database */
}

func main() {
	http.HandleFunc("/route", thisone)
	http.ListenAndServe(":8000", nil)
}
What this does is, after querying the database, it terminates the program and stops listening on the port, but I want to keep listening on the port even after the database connection is closed.
So is there any way to do that?
Thank You
Every time you query the database you call thisone(), and every time that function executes it closes the database connection. Try putting database.Close() inside the main function.
func main() {
	defer database.Close()
	http.HandleFunc("/route", thisone)
	http.ListenAndServe(":8000", nil)
}
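To make that concrete, here is a minimal sketch of the whole pattern with database/sql: open the handle once in main, share it with the handler, and only close it when the server itself shuts down. The driver import, DSN, and table name are placeholders, not taken from the question:

package main

import (
	"database/sql"
	"log"
	"net/http"

	_ "github.com/go-sql-driver/mysql" // placeholder driver; use whatever your project uses
)

var database *sql.DB

func thisone(w http.ResponseWriter, r *http.Request) {
	// Query the shared handle; do NOT close it here, so the connection
	// pool stays available for later requests.
	var count int
	if err := database.QueryRow("SELECT COUNT(*) FROM some_table").Scan(&count); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Write([]byte("ok"))
}

func main() {
	var err error
	database, err = sql.Open("mysql", "user:password@/dbname") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer database.Close() // runs only when the server stops

	http.HandleFunc("/route", thisone)
	if err := http.ListenAndServe(":8000", nil); err != nil {
		log.Println(err)
	}
}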
That's a little weird, that you get an error when putting database.Close() in the main function, because I recently made a REST API with Go that is a little similar. You can see the code here. I hope it is useful.
Github API Rest with Go

When do GCP cloud functions acknowledge pub/sub messages?

I have a cloud function that gets triggered from a pub/sub message. This function never explicitly acknowledges the message in the source code.
So when does this function acknowledge the pub/sub message if the acknowledgement never happens in the source code?
Update: when a function crashes, I understand that a message acknowledgement shouldn't occur, and yet a new function invocation for that message never appears in the logs.
Reproducible Example
Create a pubsub topic called test_topic
Create a cloud function called test_function with trigger test_topic. Give it all the default settings including NOT retrying on failure. In the code itself, set the language to python3.7 with entry point of hello_pubsub and the following code:
import base64

def hello_pubsub(event, context):
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    print(pubsub_message)
    raise RuntimeError('error in function')
The requirements.txt remains blank
Go into test_topic and publish a message with go as the text.
There will be an error in the test_function logs. However there will only be one function invocation with the error and this will remain the case even after a few days or so.
If the function finishes gracefully, the message is acknowledged. If the function exits with an error, the message is NACKed.
EDIT 1
I have tested with a Go background function. You need to deploy your cloud function with the parameter --retry to allow messages in error to be retried. Otherwise, the messages aren't retried.
In Go, here are the cases in which retries are performed:
Return an error (the equivalent of an exception in Java or Python): status "error" in the logs
Perform a log.Fatal() (exits the function (function crash) with a specific log): status "connection error" in the logs
Perform an explicit exit: status "connection error" in the logs
Here is the code (if interested):
package function

import (
	"context"
	"errors"
	"log"
	"os"
)

// PubSubMessage is the payload of a Pub/Sub event.
type PubSubMessage struct {
	Data []byte `json:"data"`
}

// PubsubError exercises the different failure modes depending on the message body.
func PubsubError(ctx context.Context, m PubSubMessage) error {
	switch string(m.Data) {
	case "ok":
		return nil
	case "error":
		return errors.New("it's an error")
	case "fatal":
		log.Fatal("crash")
	case "exit":
		os.Exit(1)
	}
	return nil
}
And here is how I deployed my Cloud Function:
gcloud beta functions deploy --runtime=go113 --trigger-topic=test-topic \
--source=function --entry-point=PubsubError --region=us-central1 \
--retry pubsuberror
Based on this description:
Google Cloud Pub/Sub Triggers
Cloud Functions acks the Pub/Sub message internally upon successful function invocation.
I understand that documentation quote to mean that the acknowledgement happens only after the execution of the code has finished without any (uncaught) errors.
At the same time, while the execution might still be 'in progress', the Pub/Sub service may make a decision to trigger another cloud function (instance) from the same Pub/Sub message.
Some additional details are in this Issue Tracker discussion:
Cloud Function explicit acknowledgement of a pubsub message
From my point of view, regardless of whether the invocation was 'successful' or 'not successful', the cloud function is to be developed in an idempotent way, taking into account the 'at least once delivery' paradigm of the Pub/Sub service. In other words, the cloud function is to be developed in such a way that multiple invocations from one message are handled correctly.
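As a minimal illustration of that idempotency idea, the Go sketch below skips work it has already done for a given message ID. The ID field and the in-memory map are assumptions for illustration only; in practice the event ID comes from the function's event metadata, and a real function would use durable storage (Firestore, Cloud SQL, ...) because instances are recycled:

package idempotent

import (
	"context"
	"sync"
)

// PubSubMessage mirrors the payload shape used above; the ID field here is an
// illustrative assumption rather than part of the documented payload.
type PubSubMessage struct {
	ID   string `json:"messageId"`
	Data []byte `json:"data"`
}

var (
	mu   sync.Mutex
	seen = map[string]bool{} // in-memory only; use durable storage in production
)

// HandleOnce processes each message ID at most once per function instance.
func HandleOnce(ctx context.Context, m PubSubMessage) error {
	mu.Lock()
	done := seen[m.ID]
	seen[m.ID] = true
	mu.Unlock()

	if done {
		// Duplicate delivery: return nil so the message is acknowledged
		// without repeating the side effects.
		return nil
	}

	// ... perform the actual work here ...
	return nil
}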

How to restore single database from instance backup on GCP?

I am a beginner GCP administrator. I have several applications running on one instance. Each application has its own database. I set up automatic instance backup via the GCP GUI.
I would like to prepare for a possible failure of one of the applications, i.e. one database. I would like to prepare a procedure for restoring such a database, but in the GCP GUI there is no option to restore one database, I need to restore the entire instance, which I cannot due to the operation of other applications on this instance.
I also read in the documentation that a backup cannot be exported.
Is there any way to restore only one database from the entire instance backup?
Will I have to write a MySQL script that will backup each database separately and save it to Cloud Storage?
Like Daniel mentioned, you can use gcloud sql export/import to do this. You'll also need a Google Cloud Storage bucket.
First export a database to a file
gcloud sql export sql [instance-name] [gs://path-to-export-file.gz] --database=[database-name]
Create an empty database
gcloud sql databases create [new-database-name] --instance=[instance-name]
Use the export file to populate your fresh, empty database.
gcloud sql import sql [instance-name] [gs://path-to-export-file.gz] --database=[database-name]
I'm also a beginner here, but as an alternative, I think you could do the following:
Create a new instance with the same configuration
Restore the original backup into the new instance (this is possible)
Create a dump of the one database that you are interested in
Finally, import that dump into the production instance
In this way, you avoid messing around with data exports, limit the dump operation to the unlikely case of a restore, and save money on database instances.
Curious what people think about this approach?
As of now there is no way to restore only one database from the entire instance backup. As you can check in the documentation, the rest of the applications will also experience downtime (since the target instance will be unavailable for connections and existing connections will be lost).
Since there is no built-in method to restore only one database from the entire instance backup, you are correct: you will need to write a MySQL script to back up each database separately and use import and export operations (here is the relevant documentation regarding import and export operations in the Cloud SQL MySQL context).
But from an implementation point of view, I would recommend using a separate Cloud SQL instance for each application; then you could restore the database of one particular failed application without causing downtime or issues for the rest of the applications.
I see that the topic has been raised again. Below is a description of how I solved the problem of backing up individual databases from one instance, without using the built-in instance backup mechanism in GCP, and uploading the dumps to Cloud Storage.
To solve the problem, I used Google Cloud Functions written in Node.js 8.
Here is step by step solution:
Create a Cloud Storage Bucket.
Create Cloud Function using Node.js 8.
Edit the code below to match your instance and database parameters:
const {google} = require("googleapis");
const {auth} = require("google-auth-library");
var sqladmin = google.sqladmin("v1beta4");
exports.exportDatabase = (_req, res) => {
  async function doBackup() {
    const authRes = await auth.getApplicationDefault();
    let authClient = authRes.credential;
    var request = {
      // Project ID
      project: "",
      // Cloud SQL instance ID
      instance: "",
      resource: {
        // Contains details about the export operation.
        exportContext: {
          // This is always sql#exportContext.
          kind: "sql#exportContext",
          // The file type for the specified uri (e.g. SQL or CSV)
          fileType: "SQL",
          /**
           * The path to the file in GCS where the export will be stored.
           * The URI is in the form gs://bucketName/fileName.
           * If the file already exists, the operation fails.
           * If fileType is SQL and the filename ends with .gz, the contents are compressed.
           */
          uri: ``,
          /**
           * Databases from which the export is made.
           * If fileType is SQL and no database is specified, all databases are exported.
           * If fileType is CSV, you can optionally specify at most one database to export.
           * If csvExportOptions.selectQuery also specifies the database, this field will be ignored.
           */
          databases: [""]
        }
      },
      // Auth client
      auth: authClient
    };
    // Kick off export with requested arguments.
    sqladmin.instances.export(request, function(err, result) {
      if (err) {
        console.log(err);
      } else {
        console.log(result);
      }
      res.status(200).send("Command completed", err, result);
    });
  }
  doBackup();
};
Save and deploy this Cloud Function
Copy the Trigger URL from the configuration page of the Cloud Function.
In order for the function to run automatically at a specified frequency, use Cloud Scheduler with these settings: Description: "", Frequency: use the unix-cron format, Time zone: choose yours, Target: HTTP, URL: paste the Trigger URL you copied before, HTTP method: POST.
That's all, it should work fine.
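If you prefer Go over Node.js for the same approach, here is a hedged sketch using the Cloud SQL Admin API Go client (google.golang.org/api/sqladmin/v1beta4). The project, instance, bucket, and database names are placeholders you would replace with your own:

package backup

import (
	"context"
	"log"

	sqladmin "google.golang.org/api/sqladmin/v1beta4"
)

// ExportDatabase kicks off an export of a single database to Cloud Storage.
// All identifiers below are placeholders, not values from the question.
func ExportDatabase(ctx context.Context) error {
	service, err := sqladmin.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		return err
	}

	req := &sqladmin.InstancesExportRequest{
		ExportContext: &sqladmin.ExportContext{
			Kind:      "sql#exportContext",
			FileType:  "SQL",
			Uri:       "gs://my-backup-bucket/my-database.sql.gz", // placeholder bucket/object
			Databases: []string{"my-database"},                    // placeholder database name
		},
	}

	op, err := service.Instances.Export("my-project", "my-instance", req).Context(ctx).Do()
	if err != nil {
		return err
	}
	log.Printf("export operation started: %s", op.Name)
	return nil
}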

Server fails to launch in Google App Engine; OK in Localhost

I have a Flex App written in Go and React that is deployed to Google App engine. I would like it to interact with a MySql Database (2nd generation) on Google Cloud over a Unix socket. I believe the issue lies with the Go server not launching/responding to requests (see below for justification). The App is located at https://haveibeenexploited.appspot.com/
The project is simple. I have two routes in my Server:
server.go
package main

import (
	"net/http"

	"searchcontract"
)

func main() {
	http.Handle("/", http.FileServer(http.Dir("./app/build")))
	http.HandleFunc("/search", searchcontract.SearchContract)
	http.ListenAndServe(":8080", nil)
}
The second route ("/search") is activated when a user hits the search button. Ideal behavior should return a row specifying the exploits available for the given "contract address" which React writes out to the screen.
searchcontract/searchcontract.go
// SearchContract is a handler that queries the DB for compromised contracts.
func SearchContract(w http.ResponseWriter, r *http.Request) {
	var contractName contractID // used for parsing in contractName
	queryResult := getRow(&contractName.Name)
	w.WriteHeader(200)
	json.NewEncoder(w).Encode(queryResult)
}

// getRow queries the DB for a contract with ID value of name.
func getRow(contractName *string) *ContractVulnerabilityInfo {
	var storage ContractVulnerabilityInfo // stores the row to encode
	// Login to database
	...
	scanErr := db.QueryRow("SELECT * FROM contracts WHERE ContractAddress=?;", &contractName).Scan(&storage.ContractAddress, &storage.IntegerOverflow, &storage.IntegerUnderflow, &storage.DOS, &storage.ExceptionState, &storage.ExternalCall, &storage.ExternalCallFixed, &storage.MultipleCalls, &storage.DelegateCall, &storage.PredictableEnv, &storage.TxOrigin, &storage.EtherWithdrawal, &storage.StateChange, &storage.UnprotectedSelfdestruct, &storage.UncheckedCall)
	...
	return &storage
}
My app.yaml file should allow me to deploy this flex app and does:
runtime: go1.12
env: flex

handlers:
- url: /.*
  script: _server # my server.go file handles all endpoints

automatic_scaling:
  max_num_instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10

env_variables:
  # user:password#unix(/cloudsql/INSTANCE_CONNECTION_NAME)/dbname
  MYSQL_CONNECTION: root:root#unix(/cloudsql/haveibeenexploited:us-west1:hibe)/mythril

# https://cloud.google.com/sql/docs/mysql/connect-app-engine
beta_settings:
  cloud_sql_instances: haveibeenexploited:us-west1:hibe
I am able to query the database successfully on localhost (localhost correctly shows the address).
However, whenever I try to implement this and push to App Engine, when I query something that should be in the database, it does not show up in the remote app (App Engine does not show the address in the database). Furthermore, I get a status code of '0' returned, which indicates to me that the server function isn't even being called at all ('200' is what I expect if successful, or some other error message).
Summary
I can't wrap my head around this bug. What works locally should work remotely. Also, I can't debug this app, probably because Stackdriver does not support flex apps and the dev server Google Cloud provides does not support Go apps.
I believe the primary issue is with Go not speaking to the React element correctly or the routing not being taken care of appropriately.
1) The problem does not lie with the MySQL connection/database access: I changed my route to be only one page, turned off React, and included a hardcoded query (compare the result on localhost with the result on App Engine).
2) There is an issue in either a) my routing or b) the interaction between React and Go.
3) Go seems to start correctly... at least when React is not started.
Any help is appreciated.
EDIT: I believe that the Go app is indeed still running, but the search function is failing for whatever reason. The reason I believe this is that when I add another route for haveibeenexploited.com/hello, it works.
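As an aside, here is a minimal sketch of how the MYSQL_CONNECTION value from app.yaml is typically consumed on the Go side with go-sql-driver/mysql. The package layout and the init-time setup are assumptions for illustration, not the asker's actual code:

package searchcontract

import (
	"database/sql"
	"log"
	"os"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

// db is opened once and shared by the handlers in this package.
var db *sql.DB

func init() {
	var err error
	// Expected DSN form: user:password@unix(/cloudsql/INSTANCE_CONNECTION_NAME)/dbname
	db, err = sql.Open("mysql", os.Getenv("MYSQL_CONNECTION"))
	if err != nil {
		log.Fatalf("failed to create DB handle: %v", err)
	}
	// sql.Open does not actually connect; Ping verifies the Unix socket is reachable.
	if err := db.Ping(); err != nil {
		log.Printf("database not reachable yet: %v", err)
	}
}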

SSIS Package won't do anything when Run through SQL Server Job

I have a simple SSIS package, which has
an Execute SQL Task control on the Control Flow, which fetches some value from the database.
In the Data Flow, I am using a Script Component which, based on the values given by the Execute SQL Task, does this:
public override void CreateNewOutputRows()
{
    try
    {
        string loginURL = "http://maps.googleapis.com/maps/api/geocode/xml?address=" + Variables.ProjectAddress + "&sensor=true";
        WebClient client = new WebClient();
        string downloadString = client.DownloadString(loginURL);
        XmlDocument xml = new XmlDocument();
        xml.LoadXml(downloadString);
        ///// setting output buffer variables
    }
    catch (Exception ex)
    {
        // Note: this empty catch swallows any error, so a failed web request is silently ignored.
    }
}
So basically I am requesting latitude and longitude from a web service inside the package.
The retrieved values are then updated in the database.
Everything works fine when I run the package from the Visual Studio SSIS project console.
But when I try to run the package through a SQL Server 2008 R2 job, nothing happens. The job executes successfully, but no rows are updated (or inserted).
I tried importing the package into the SQL Server msdb database, setting the protection level to each of the items in the dropdown one by one as given here,
and then running this imported package from the SQL job. Still... nothing happened.
Does anyone know what's wrong? How do I deal with the following possibilities:
1. It has to do with the permissions of the SQL user to make a web service request. How do I configure that?
2. It has to do with the configuration file of the imported SSIS package. What should I look for?
Please help me out.
I hope I have given all the required info to look into the problem.
Is the job on a SQL Server instance on your computer? I ask because there may be firewall or permission issues between the SQL Server and the computer hosting the web service.
Also, I advise removing that try/catch and enabling package configurations so you can see if it is throwing an error.
Regarding the protection level: if you are using EncryptSensitiveWithUserKey, the package won't load the sensitive database information (login and password) unless it runs on the computer where you developed it. The same applies to EncryptAllWithUserKey, but in that case it won't even open the package.