projects.locations.jobs.create in CloudScheduler fails with InvalidArgument and PermissionDenied - google-apis-explorer

I'm currently trying to create a Cloud Scheduler job from Go.
However, it fails with an error, and I would like to know how to fix it.
The current code looks like this; it is run from Cloud Run.
const projectID = "hogehoge-project"
const locationID = "asia-northeast1"
const jobBaseUrl = "https://example.com/notify"

func StartToCheckRunning(jobID string) error {
	ctx := context.Background()
	cloudschedulerService, err := cloudscheduler.NewCloudSchedulerClient(ctx)
	if err != nil {
		return fmt.Errorf("cloudscheduler.NewCloudSchedulerClient: %v", err)
	}
	defer cloudschedulerService.Close()
	queuePath := fmt.Sprintf("projects/%s/locations/%s", projectID, locationID)
	req := &schedulerpb.CreateJobRequest{
		Parent: queuePath,
		Job: &schedulerpb.Job{
			Name:        jobID,
			Description: "managed by the system",
			Target: &schedulerpb.Job_HttpTarget{
				HttpTarget: &schedulerpb.HttpTarget{
					Uri:        createJobUrl(jobBaseUrl, jobID),
					HttpMethod: schedulerpb.HttpMethod_POST,
				},
			},
			Schedule: "* * * * *",
			TimeZone: "jst",
		},
	}
	resp, err := cloudschedulerService.CreateJob(ctx, req)
	if err != nil {
		return fmt.Errorf("cloudschedulerService.CreateJob: %v", err)
	}
	// TODO: Use resp.
	_ = resp
	return nil
}
When I run it, I get the following error:
cloudschedulerService.CreateJob: rpc error: code = InvalidArgument desc = Job name must be formatted: "projects/<PROJECT_ID>/locations/<LOCATION_ID>/jobs/<JOB_ID>".
However, when I change queuePath as follows, I get a different error:
queuePath := fmt.Sprintf("projects/%s/locations/%s/jobs/%s", projectID, locationID, jobID)
cloudschedulerService.CreateJob: rpc error: code = PermissionDenied desc = The principal (user or service account) lacks IAM permission "cloudscheduler.jobs.create" for the resource "projects/hogehoge-project/locations/asia-northeast1/jobs/029321cb-467f-491e-852e-0c3df3d49db3" (or the resource may not exist).
Since I am using the Cloud Run default service account, there should be no missing permissions.
By the way, the reference says to write the Parent in the format projects/PROJECT_ID/locations/LOCATION_ID:
https://pkg.go.dev/google.golang.org/genproto/googleapis/cloud/scheduler/v1beta1#CreateJobRequest
How can I get the create request to succeed?
Thanks.

I solved it myself with the following code.
The error was in the Job's Name, not the Parent.
queuePath := fmt.Sprintf("projects/%s/locations/%s", projectID, locationID)
namePath := fmt.Sprintf("projects/%s/locations/%s/jobs/%s", projectID, locationID, jobID)
req := &schedulerpb.CreateJobRequest{
	Parent: queuePath,
	Job: &schedulerpb.Job{
		Name:        namePath,
		Description: "managed by system",
		Target: &schedulerpb.Job_HttpTarget{
			HttpTarget: &schedulerpb.HttpTarget{
				Uri:        createJobUrl(jobBaseUrl, jobID),
				HttpMethod: schedulerpb.HttpMethod_POST,
			},
		},
		Schedule: "* * * * *",
		TimeZone: "Asia/Tokyo",
	},
}
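To keep the two formats straight: Parent stops at the location, while the Job's Name must be the fully qualified resource path. This can be factored into small helpers; a minimal sketch (helper names are illustrative, not part of the client library):

```go
package main

import "fmt"

// parentPath builds the CreateJobRequest.Parent:
// it ends at the location, with no jobs segment.
func parentPath(project, location string) string {
	return fmt.Sprintf("projects/%s/locations/%s", project, location)
}

// jobPath builds the Job.Name, which must be fully qualified.
func jobPath(project, location, jobID string) string {
	return fmt.Sprintf("projects/%s/locations/%s/jobs/%s", project, location, jobID)
}

func main() {
	fmt.Println(parentPath("hogehoge-project", "asia-northeast1"))
	// → projects/hogehoge-project/locations/asia-northeast1
	fmt.Println(jobPath("hogehoge-project", "asia-northeast1", "job-123"))
	// → projects/hogehoge-project/locations/asia-northeast1/jobs/job-123
}
```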

Related

How to return a partial struct based on query parameters in Go?

I am trying to achieve attribute selection on a REST resource according to query parameters. The API client provides a query parameter called fields, and the server returns only the attributes of the resource mentioned in the query string; that is, the server should return a different partial representation of the resource depending on the query parameter. Here are some example requests:
GET /api/person/42/?fields=id,createdAt
GET /api/person/42/?fields=address,account
GET /api/person/42/?fields=id,priority,address.city
I tried the map[string]any route, but it did not go well. I am using MongoDB, and when I decode a Mongo document into map[string]any, the field names and types do not match. Therefore I am trying to create a new struct on the fly.
Here is my attempt:
func main() {
	query, _ := url.ParseQuery("fields=id,priority,address.city")
	fields := strings.Split(query.Get("fields"), ",") // TODO: extractFields
	person := getPerson()                             // Returns a Person struct
	personish := PartialStruct(person, fields)
	marshalled, _ := json.Marshal(personish) // TODO: err
	fmt.Println(string(marshalled))
}

func PartialStruct(original any, fields []string) any {
	// Is there any alternative to reflect?
	originalType := reflect.TypeOf(original)
	partialFields := make([]reflect.StructField, 0)
	for _, field := range reflect.VisibleFields(originalType) {
		queryName := field.Tag.Get("json") // TODO: extractQueryName
		if slices.Contains(fields, queryName) {
			partialFields = append(partialFields, field)
		}
	}
	partialType := reflect.StructOf(partialFields)
	// Is there any alternative to Marshal/Unmarshal?
	partial := reflect.New(partialType).Interface()
	marshalled, _ := json.Marshal(original) // TODO: err
	_ = json.Unmarshal(marshalled, partial) // TODO: err
	return partial
}
Here is a runnable example https://go.dev/play/p/Egomxe5NjEc
Resources are modelled as nested structs; nested fields are denoted by a dot (".") in the query string.
How can I improve PartialStruct to handle nested fields such as address.city?
I am willing to change direction if there is a better way.
Look at third-party libraries: GraphQL, for example.
Here is an example I wrote that may help you:
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/graphql-go/graphql"
)

func main() {
	// Schema
	fields := graphql.Fields{
		"id": &graphql.Field{
			Type: graphql.ID,
			Resolve: func(p graphql.ResolveParams) (interface{}, error) {
				return 111, nil
			},
		},
		"priority": &graphql.Field{
			Type: graphql.String,
			Resolve: func(p graphql.ResolveParams) (interface{}, error) {
				return "admin", nil
			},
		},
		"address": &graphql.Field{
			Type: graphql.NewObject(graphql.ObjectConfig{
				Name: "address",
				Fields: graphql.Fields{
					"city": &graphql.Field{
						Type: graphql.String,
					},
					"country": &graphql.Field{
						Type: graphql.String,
					},
				},
			}),
			Resolve: func(p graphql.ResolveParams) (interface{}, error) {
				return map[string]string{
					"city":    "New York",
					"country": "us",
				}, nil
			},
		},
	}
	rootQuery := graphql.ObjectConfig{Name: "RootQuery", Fields: fields}
	schemaConfig := graphql.SchemaConfig{Query: graphql.NewObject(rootQuery)}
	schema, err := graphql.NewSchema(schemaConfig)
	if err != nil {
		log.Fatalf("failed to create new schema, error: %v", err)
	}
	// Query
	query := `
	{
		id,
		address {
			city, country
		},
		priority
	}
	`
	params := graphql.Params{Schema: schema, RequestString: query}
	r := graphql.Do(params)
	if len(r.Errors) > 0 {
		log.Fatalf("failed to execute graphql operation, errors: %+v", r.Errors)
	}
	rJSON, _ := json.Marshal(r)
	fmt.Printf("%s \n", rJSON)
}
Here is a runnable example https://go.dev/play/p/pHH2iBzCLT-
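As an alternative to pulling in a GraphQL dependency, the nested-field selection from the question can also be done by marshalling the struct (or Mongo document) to JSON, decoding it into a map, and filtering the map recursively. This is only a sketch (filterMap is a hypothetical helper, and the approach trades away static typing, so numbers come back as float64):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// filterMap keeps only the requested fields; nested fields use
// dot notation, e.g. "address.city".
func filterMap(m map[string]any, fields []string) map[string]any {
	out := make(map[string]any)
	for _, f := range fields {
		head, rest, nested := strings.Cut(f, ".")
		v, ok := m[head]
		if !ok {
			continue
		}
		if !nested {
			// A plain field: copy it over as-is.
			out[head] = v
			continue
		}
		child, ok := v.(map[string]any)
		if !ok {
			continue // dotted path into a non-object: skip
		}
		// Recurse into the child object with the remaining path.
		sub := filterMap(child, []string{rest})
		if existing, ok := out[head].(map[string]any); ok {
			for k, val := range sub {
				existing[k] = val // merge with fields selected earlier
			}
		} else if len(sub) > 0 {
			out[head] = sub
		}
	}
	return out
}

func main() {
	doc := []byte(`{"id":42,"priority":"high","address":{"city":"Ankara","country":"tr"}}`)
	var m map[string]any
	if err := json.Unmarshal(doc, &m); err != nil {
		panic(err)
	}
	filtered := filterMap(m, []string{"id", "address.city"})
	b, _ := json.Marshal(filtered)
	fmt.Println(string(b)) // → {"address":{"city":"Ankara"},"id":42}
}
```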

Postman JSON Response empty

I am using Go with the Echo framework and created a POST endpoint backed by a Dockerized Postgres database. Everything works fine when I use Params in Postman to add a new user to the database, but when I send the data as a raw JSON body, the response is an empty JSON. Can anybody point me in the right direction?
Working params:
firstname=Dennis
lastname=Liga
JSON I am trying to send:
{
"firstname": "Dennis",
"lastname": "Liga"
}
main.go
e.POST("/addPerson", routes.AddPerson)
routes.go
func AddPerson(c echo.Context) error {
	ctx := context.Background()
	db := connectors.GetDbConnection()
	firstname := c.QueryParam("firstname")
	lastname := c.QueryParam("lastname")
	queries := postgres.New(db)
	insertedPerson, err := queries.CreatePersons(ctx, postgres.CreatePersonsParams{
		Firstname: firstname,
		Lastname:  lastname,
	})
	if err != nil {
		log.Errorf("Failed to insert a person %v", err)
		return err
	}
	fmt.Println(insertedPerson)
	return c.JSONPretty(http.StatusOK, insertedPerson, " ")
}

Is there any method similar to Golang's Scan() method (used for SQL) for Elasticsearch?

I am new to both Go and ES. I want to convert the following piece of code:
query = `SELECT roll_no, name, school FROM students where roll_no = $1`
err = db.QueryRowContext(ctx, query, roll_no).Scan(&Student.ID, &Student.Name, &Student.School)
into something like the following Elasticsearch query:
str = fmt.Sprintf(`{"query": { "match": { "roll_no" : %d } } }`, roll_no)
b := []byte(str)
// calls retrieve method, which is shown below
I am connecting to ES using HTTP calls, but the following code panics with http: panic serving [::1]:5574: EOF while parsing.
func retrieve(url string, b []byte) ([]byte, error) {
	request, _ := http.NewRequest("GET", url, bytes.NewBuffer(b))
	request.Header.Set("Content-Type", "application/json; charset=UTF-8")
	client := &http.Client{}
	response, err := client.Do(request)
	if err != nil {
		panic(err)
	}
	defer response.Body.Close()
	body, _ := ioutil.ReadAll(response.Body)
	s := Student{}
	// response.Body has already been fully consumed by ReadAll above,
	// so this Decode sees an empty stream and returns EOF.
	err = json.NewDecoder(response.Body).Decode(&s)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Student: %v", s)
	return body, err
}
Is there any way I can parse the response into an object?
How can I use something like Golang's Scan() method (normally used for SQL) with Elasticsearch?
No. Package database/sql needs a database driver, and there isn't one for Elasticsearch.

BigQuery: Unrecognized timezone when importing CSV

How can you define the timezone of a timestamp when loading into BigQuery from a CSV file?
None of these seem to work:
2018-07-31 11:55:00 Europe/Rome
2018-07-31 11:55:00 CET
I get the following error:
{Location: "query"; Message: "Unrecognized timezone: Europe/Rome;
Could not parse '2018-07-31 11:55:00 Europe/Rome' as datetime for
field ts (position 0) starting at location 0"; Reason: "invalidQuery"}
I am running an import from Google Cloud Storage, using this Go code:
gcsRef := bigquery.NewGCSReference(gcsFilename)
gcsRef.SourceFormat = bigquery.CSV
gcsRef.FieldDelimiter = "|"
gcsRef.Schema = bigquery.Schema{
	{Name: "ts", Type: bigquery.TimestampFieldType},
	{Name: "field2", Type: bigquery.StringFieldType},
	{Name: "field3", Type: bigquery.StringFieldType},
}
loader := bigqueryClient.Dataset("events").Table("mytable").LoaderFrom(gcsRef)
loader.WriteDisposition = bigquery.WriteAppend
job, err := loader.Run(ctx)
if err != nil {
	log.Fatalln("loader.Run", err.Error())
}
status, err := job.Wait(ctx)
if err != nil {
	log.Fatalln("job.Wait", err.Error())
}
if status.Err() != nil {
	log.Fatalf("Job completed with error: %v %v", status.Err(), status.Errors)
}
To make it work, declare the ts field as a string; you will then be able to resolve it into a timestamp in whatever query you run afterwards, using the approach already mentioned in the comments, e.g. SELECT TIMESTAMP(ts).

Gin + Golang + DB Connection Pooling

I would like to understand how Gin ensures that each HTTP request gets a unique DB (say, MySQL) connection. Here is one example.
Since db is a global object, the API handler router.GET("/person/:age", ...) has access to the DB.
Under load, I suppose Gin implements concurrency internally. If yes, how does it ensure that each request gets a different connection? If no, then it is a single-threaded implementation. Could anyone please correct my understanding?
package main

import (
	"database/sql"
	"fmt"
	"net/http"

	"github.com/gin-gonic/gin"
	_ "github.com/go-sql-driver/mysql"
)

func checkErr(err error) {
	if err != nil {
		panic(err)
	}
	fmt.Println("successful...")
}

func main() {
	db, err := sql.Open("mysql", "abfl:abfl#tcp(127.0.0.1:3306)/abfl?charset=utf8")
	checkErr(err)
	defer db.Close()
	// make sure the connection is available
	err = db.Ping()
	checkErr(err)

	type User struct {
		age  int
		name string
	}

	router := gin.Default()
	// GET a user's details
	router.GET("/person/:age", func(c *gin.Context) {
		var (
			user   User
			result gin.H
		)
		age := c.Param("age")
		fmt.Printf("input age: %q\n", age)
		row := db.QueryRow("select age, name from user where age = ?", age)
		err = row.Scan(&user.age, &user.name)
		fmt.Printf("user: %+v\n", user)
		if err != nil {
			// If there are no results, send null.
			result = gin.H{
				"user":  nil,
				"count": 0,
			}
		} else {
			result = gin.H{
				"age":   user.age,
				"name":  user.name,
				"count": 1,
			}
		}
		c.JSON(http.StatusOK, result)
	})
	router.Run(":3000")
}
Establishing a new SQL connection for each HTTP request would be far too heavy, and makes no sense.
In Go, database/sql manages the connection pool internally: each query or transaction borrows a connection from the pool and returns it when done, and you can tune the pool with SetMaxOpenConns and friends.
sql.DB is safe for concurrent use, so there is nothing to worry about there.
Gin has nothing to do with SQL connections at all; handling queries/transactions properly is entirely your responsibility.