GoLang Dynamic SQL Query in App Engine - mysql

I want to build a dynamic SQL query in Go and I can't seem to find the correct way to do it.
Basically, I just want to do:
query := "SELECT id, email, something FROM User"
var paramValues []string
filterString := ""
if userParams.Name != "" {
paramString += " WHERE id = ?"
paramValues = append(paramValues, userParams.Name)
}
if userParams.UserID != "" {
if len(paramString) > 0 {
paramString += " AND"
} else {
paramString += " WHERE"
}
paramString += " email = ?"
paramValues = append(paramValues, userParams.UserID)
}
stmtOut, err := db.Prepare(query + paramString)
err = stmtOut.QueryRow(paramValues).Scan(&id, &email, &something)
This is related to building a dynamic query in MySQL and Go.
I've been unable to find a solid way to do this that doesn't allow SQL injection. The issue with my solution above is that QueryRow() does not take a []string as a parameter.
I want to protect against SQL injection, so fmt.Sprintf doesn't really solve the problem.
This way I can allow searches on User using either the ID or the email, and I will also use this logic for different objects with more searchable fields.
I'm using go-sql-driver/mysql

Here's something I can run on my local machine (go1.8 linux/amd64 and Go MySQL driver 1.3).
A couple of ways are demonstrated.
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

// var db *sql.DB
// var err error

/*
Database Name/Schema : Test123
Table Name: test
Table Columns and types:
number INT (PRIMARY KEY)
cube INT
*/

func main() {
    // Username root, password root
    db, err := sql.Open("mysql", "root:root@tcp(127.0.0.1:3306)/Test123?charset=utf8")
    if err != nil {
        fmt.Println(err) // needs proper handling as per app requirement
        return
    }
    defer db.Close()

    err = db.Ping()
    if err != nil {
        fmt.Println(err) // needs proper handling as per app requirement
        return
    }

    // Prepared statement for inserting data
    stmtIns, err := db.Prepare("INSERT INTO test VALUES( ?, ? )") // ? = placeholders
    if err != nil {
        panic(err.Error()) // needs proper handling as per app requirement
    }
    defer stmtIns.Close()

    // Insert cubes of the numbers 1 through 9
    for i := 1; i < 10; i++ {
        _, err = stmtIns.Exec(i, i*i*i) // Insert tuples (i, i^3)
        if err != nil {
            panic(err.Error()) // proper error handling instead of panic in your app
        }
    }

    num := 3

    // Select statement
    dataEntity := "cube"
    condition := "WHERE number=? AND cube > ?"
    finalStatement := "SELECT " + dataEntity + " FROM test " + condition
    cubeLowerLimit := 10

    var myCube int
    err = db.QueryRow(finalStatement, num, cubeLowerLimit).Scan(&myCube)
    switch {
    case err == sql.ErrNoRows:
        log.Printf("No row with this number %d", num)
    case err != nil:
        log.Fatal(err)
    default:
        fmt.Printf("Cube for %d is %d\n", num, myCube)
    }

    var cubenum int

    // Prepared statement for reading data
    stmtRead, err := db.Prepare(finalStatement)
    if err != nil {
        panic(err.Error()) // needs proper err handling
    }
    defer stmtRead.Close()

    // Query for cube of 5
    num = 5
    err = stmtRead.QueryRow(num, cubeLowerLimit).Scan(&cubenum)
    switch {
    case err == sql.ErrNoRows:
        log.Printf("No row with this number %d", num)
    case err != nil:
        log.Fatal(err)
    default:
        fmt.Printf("Cube number for %d is %d\n", num, cubenum)
    }
}
If you run it subsequent times, you need to delete the rows in the database first so the inserts don't panic on duplicate primary keys (or change the insert code so it doesn't panic). I haven't tried it on Google App Engine. Hope this helps.
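To address the original question's []string problem directly: QueryRow is variadic over interface{}, so you can collect the values in an []interface{} and expand the slice with "...". A minimal sketch along the lines of the question's own code (userParams, the column names, and the id/email/something variables are taken from the question; it assumes "strings" is imported):
clauses := []string{}
params := []interface{}{} // QueryRow takes ...interface{}, not []string
if userParams.Name != "" {
    clauses = append(clauses, "id = ?")
    params = append(params, userParams.Name)
}
if userParams.UserID != "" {
    clauses = append(clauses, "email = ?")
    params = append(params, userParams.UserID)
}
query := "SELECT id, email, something FROM User"
if len(clauses) > 0 {
    query += " WHERE " + strings.Join(clauses, " AND ")
}
// Expanding params with "..." binds each value to one ? placeholder,
// so the values are never interpolated into the SQL string.
err := db.QueryRow(query, params...).Scan(&id, &email, &something)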

Related

XML Insert Performance into MYSQL

I have some code which inserts records into the database:
The code is supposed to insert 15M records; right now it takes 60 hours on an AWS t2.large instance. I'm looking for ways to make the inserts faster while also not duplicating records.
Do you have any suggestions?
I'm using Gorm and MySQL.
// InsertJob will insert job into database, by checking its hash.
func InsertJob(job XMLJob, oid int, ResourceID int) (Job, error) {
    db := globalDBConnection
    cleanJobDescription := job.Body
    hashString := GetMD5Hash(job.Title + job.Body + job.Location + job.Zip)
    JobDescriptionHash := GetMD5Hash(job.Body)
    empty := sql.NullString{String: "", Valid: true}
    j := Job{
        CurrencyID: 1, // USD
        // other fields here elided for brevity
        PrimaryIndustry: sql.NullString{String: job.PrimaryIndustry, Valid: true},
    }
    err := db.Where("hash = ?", hashString).Find(&j).Error
    if err != nil {
        if err.Error() != "record not found" {
            return j, err
        }
        err2 := db.Create(&j).Error
        if err2 != nil {
            log.Println("Unable to create job: " + err2.Error())
            return j, err2
        }
    }
    return j, nil
}
You can speed it up using the semaphore pattern:
https://play.golang.org/p/OxO8pNy3bc6
Inspired by this gist:
https://gist.github.com/montanaflynn/ea4b92ed640f790c4b9cee36046a5383
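A minimal sketch of the pattern applied here, assuming a jobs slice of []XMLJob and the InsertJob signature from the question (the worker count of 10 is arbitrary and should be tuned to what your database can absorb; "sync" and "log" must be imported):
sem := make(chan struct{}, 10) // buffered channel = semaphore with 10 tokens
var wg sync.WaitGroup
for _, job := range jobs {
    wg.Add(1)
    sem <- struct{}{} // acquire a token; blocks while 10 inserts are in flight
    go func(job XMLJob) {
        defer wg.Done()
        defer func() { <-sem }() // release the token when this insert finishes
        if _, err := InsertJob(job, oid, resourceID); err != nil {
            log.Println("insert failed:", err)
        }
    }(job)
}
wg.Wait() // wait for the remaining in-flight inserts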

how to parse multiple query params and pass them to a sql command

I am playing around with REST APIs in Go, and when I do a GET call with this:
http://localhost:8000/photos?albumId=1&id=1
I want to return only those values from the DB which correspond to albumId=1 and id=1 (or any other key in the query string, for that matter) without storing them as variables and then passing them to the query; and when I don't give any query params, I want it to return all the posts in the DB.
func getPhotos(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    var photos []Photo
    db, err = sql.Open("mysql", "root:$pwd@tcp(127.0.0.1:3306)/typicode")
    if err != nil {
        panic(err.Error())
    }
    defer db.Close()
    for k, v := range r.URL.Query() {
        fmt.Printf("%s: %s\n", k, v)
    }
    result, err := db.Query("SELECT id, albumId, Title, thumbnailUrl, url FROM photo")
    if err != nil {
        panic(err.Error())
    }
    defer result.Close()
    for result.Next() {
        var photo Photo
        // Scan order must match the SELECT column order.
        err := result.Scan(&photo.Id, &photo.AlbumId, &photo.Title, &photo.Thumbnailurl, &photo.Url)
        if err != nil {
            panic(err.Error())
        }
        photos = append(photos, photo)
    }
    json.NewEncoder(w).Encode(photos)
}
First you need a set of valid table columns that can be used in the query; this is required to avoid errors from misspelled column names and SQL injection from malicious input.
var photocolumns = map[string]struct{}{
    "id":      {},
    "albumId": {},
    // all the other columns you accept as input
}
Now, depending on the database, you may or may not need to parse the query values and convert them to the correct type for the corresponding column. You can turn the column map into one that associates each column with the correct converter type/func.
// a wrapper around strconv.Atoi that has the signature of the map-value type below
func atoi(s string) (interface{}, error) {
    return strconv.Atoi(s)
}

var photocolumns = map[string]func(string) (interface{}, error){
    "id":      atoi,
    "albumId": atoi,
    // all the other columns you accept as input
}
Then all you need is a single loop, and in it you do all the work: get the correct column name, convert the value to the correct type, aggregate the converted value into a slice that can be passed to the db, and build up the WHERE clause to be concatenated onto the SQL query string.
where := ""
params := []interface{}{}
for k, v := range r.URL.Query() {
if convert, ok := photocolumns[k]; ok {
param, err := convert(v[0])
if err != nil {
fmt.Println(err)
return
}
params = append(params, param)
where += k + " = ? AND "
}
}
if len(where) > 0 {
// prefix the string with WHERE and remove the last " AND "
where = " WHERE " + where[:len(where)-len(" AND ")]
}
rows, err := db.Query("SELECT id, albumId, Title,thumbnailUrl,url from photo" + where, params...)
// ...
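To close the loop, here is roughly what replaces the // ... above, reusing the Photo struct from the question (the Scan order matching the SELECT column list is an assumption carried over from the question's handler):
defer rows.Close()
var photos []Photo
for rows.Next() {
    var p Photo
    // Scan in the same order as the SELECT column list.
    if err := rows.Scan(&p.Id, &p.AlbumId, &p.Title, &p.Thumbnailurl, &p.Url); err != nil {
        fmt.Println(err)
        return
    }
    photos = append(photos, p)
}
if err := rows.Err(); err != nil { // catch errors that ended the iteration early
    fmt.Println(err)
    return
}
json.NewEncoder(w).Encode(photos)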

How can I ensure that all of my write transaction functions get resolved in order? Also, why is the else function not executing?

I'm trying to create a very simple Bolt database called "ledger.db" that includes one bucket, called "Users", which contains usernames as keys and balances as values, and that allows users to transfer their balance to one another. I am using bolter to view the database in the command line.
There are two problems, both contained in the transfer function.
The first: inside the transfer function is an if/else. If the condition is true, it executes as it should. If it's false, nothing happens. There are no syntax errors and the program runs as though nothing is wrong; it just doesn't execute the else statement.
The second: even when the condition is true, the function doesn't update BOTH of the respective balance values in the database. It updates the balance of the receiver, but not of the sender. The mathematical operations are completed and the values are marshaled into a JSON-compatible format; the problem is that the sender's balance is never updated in the database.
Everything from the second "Success!" fmt.Println() call onward is never executed.
I've tried changing db.Update() to db.Batch(). I've tried changing the order of the Put() calls. I've tried messing with goroutines and defer, but I have no clue how to use those, as I am rather new to Go.
func (from *User) transfer(to User, amount int) error {
    var fbalance int = 0
    var tbalance int = 0
    db, err := bolt.Open("ledger.db", 0600, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    return db.Update(func(tx *bolt.Tx) error {
        uBuck := tx.Bucket([]byte("Users"))
        json.Unmarshal(uBuck.Get([]byte(from.username)), &fbalance)
        json.Unmarshal(uBuck.Get([]byte(to.username)), &tbalance)
        if amount <= fbalance {
            fbalance = fbalance - amount
            encoded, err := json.Marshal(fbalance)
            if err != nil {
                return err
            }
            tbalance = tbalance + amount
            encoded2, err := json.Marshal(tbalance)
            if err != nil {
                return err
            }
            fmt.Println("Success!")
            c := uBuck
            err = c.Put([]byte(to.username), encoded2)
            return err
            fmt.Println("Success!")
            err = c.Put([]byte(from.username), encoded)
            return err
            fmt.Println("Success!")
        } else {
            return fmt.Errorf("Not enough in balance!", amount)
        }
        return nil
    })
    return nil
}
func main() {
    /*
        db, err := bolt.Open("ledger.db", 0600, nil)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
    */
    var b User = User{"Big", "jig", 50000, 0}
    var t User = User{"Trig", "pig", 40000, 0}
    // These two functions add each User to the database; they aren't
    // the problem
    b.createUser()
    t.createUser()
    /*
        db.View(func(tx *bolt.Tx) error {
            c := tx.Bucket([]byte("Users"))
            get := c.Get([]byte(b.username))
            fmt.Printf("The return value %v", get)
            return nil
        })
    */
    t.transfer(b, 40000)
}
I expect the database to show Big:90000 Trig:0, starting from the values Big:50000 Trig:40000.
Instead, the program outputs Big:90000 Trig:40000.
You return unconditionally:
c := uBuck
err = c.Put([]byte(to.username), encoded2)
return err
fmt.Println("Success!")
err = c.Put([]byte(from.username), encoded)
return err
fmt.Println("Success!")
You are also not checking the errors returned by these calls:
json.Unmarshal(uBuck.Get([]byte(from.username)), &fbalance)
json.Unmarshal(uBuck.Get([]byte(to.username)), &tbalance)
t.transfer(b, 40000)
And so on.
Debug your code statement by statement.
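For reference, a sketch of the Update closure with both Puts executed before returning (names are taken from the question; the error message is illustrative). Because everything runs inside one bolt transaction, a failed Put aborts the whole transfer instead of leaving the ledger half-updated:
return db.Update(func(tx *bolt.Tx) error {
    uBuck := tx.Bucket([]byte("Users"))
    if err := json.Unmarshal(uBuck.Get([]byte(from.username)), &fbalance); err != nil {
        return err
    }
    if err := json.Unmarshal(uBuck.Get([]byte(to.username)), &tbalance); err != nil {
        return err
    }
    if amount > fbalance {
        return fmt.Errorf("not enough in balance: %d", amount)
    }
    encoded, err := json.Marshal(fbalance - amount)
    if err != nil {
        return err
    }
    encoded2, err := json.Marshal(tbalance + amount)
    if err != nil {
        return err
    }
    // Write BOTH balances; only return after the second Put.
    if err := uBuck.Put([]byte(to.username), encoded2); err != nil {
        return err
    }
    return uBuck.Put([]byte(from.username), encoded)
})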

Correctly remove second json.Marshal in Go

I have, for whatever reason, while trying to build a simple REST API in Go with MySQL storage, added a second json.Marshal which is double-encoding and producing results with escaped quotes and such. I could strip the quotes, but I shouldn't have two json.Marshal calls happening in the first place.
The problem is twofold: 1) which one is proper to remove (I'm leaning toward the first, because "result" should be the larger array), and 2) how do I keep the code functioning after the removal? I can't simply remove the first one without encountering all sorts of errors. Here are the relevant portions of the code:
type Volume struct {
    Id          int
    Name        string
    Description string
}
... skipping ahead ....
var result = make([]string, 1000)
switch request.Method {
case "GET":
    name := request.URL.Query().Get("name")
    stmt, err := db.Prepare("select id, name, description from idm_assets.VOLUMES where name = ?")
    if err != nil {
        fmt.Print(err)
    }
    rows, err := stmt.Query(name)
    if err != nil {
        fmt.Print(err)
    }
    i := 0
    for rows.Next() {
        var name string
        var id int
        var description string
        err = rows.Scan(&id, &name, &description)
        if err != nil {
            fmt.Println("Error scanning: " + err.Error())
            return
        }
        volume := &Volume{Id: id, Name: name, Description: description}
Here is the first json.Marshal ...
        b, err := json.Marshal(volume)
        fmt.Println(b)
        if err != nil {
            fmt.Println(err)
            return
        }
        result[i] = fmt.Sprintf("%s", string(b))
        i++
    }
    result = result[:i]
...skipping other cases for PUT, DELETE, etc., to the second json.Marshal ...
default:
}

json, err := json.Marshal(result)
if err != nil {
    fmt.Println(err)
    return
}
fmt.Fprintf(response, "'%v'\n", string(json))
Turn result into a slice of *Volume:
result := []*Volume{}
and then append new Volume records:
result = append(result, &Volume{Id: id, Name: name, Description: description})
and in the end use Marshal(result) to get the JSON result.
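Assembled, the GET loop from the question would then look roughly like this (a sketch; only the final Marshal survives, and the output variable is renamed to out to avoid shadowing the json package, which the original json variable did):
result := []*Volume{}
for rows.Next() {
    var id int
    var name, description string
    if err := rows.Scan(&id, &name, &description); err != nil {
        fmt.Println("Error scanning: " + err.Error())
        return
    }
    // Append the struct itself; no per-row Marshal needed.
    result = append(result, &Volume{Id: id, Name: name, Description: description})
}
// One Marshal produces a proper JSON array of objects,
// without the escaped quotes that double encoding caused.
out, err := json.Marshal(result)
if err != nil {
    fmt.Println(err)
    return
}
fmt.Fprintf(response, "%s\n", out)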

How can I implement my own interface for OpenID that uses a MySQL Database instead of In memory storage

So I'm trying to use the OpenID package for Go, located here: https://github.com/yohcop/openid-go
In the _example it says that it uses in-memory storage for the nonce/discovery-cache information, that it will not free the memory, and that I should implement my own version backed by some sort of database.
My database of choice is MySQL. I have tried to implement what I thought was correct (but it is not: it gives me no compile errors, but it crashes at runtime).
My DiscoveryCache.go is as such:
package openid

import (
    "database/sql"
    "log"
    //"time"

    _ "github.com/go-sql-driver/mysql"
    "github.com/yohcop/openid-go"
)

type SimpleDiscoveredInfo struct {
    opEndpoint, opLocalID, claimedID string
}

func (s *SimpleDiscoveredInfo) OpEndpoint() string { return s.opEndpoint }
func (s *SimpleDiscoveredInfo) OpLocalID() string  { return s.opLocalID }
func (s *SimpleDiscoveredInfo) ClaimedID() string  { return s.claimedID }

type SimpleDiscoveryCache struct{}

func (s SimpleDiscoveryCache) Put(id string, info openid.DiscoveredInfo) {
    /*
        db, err := sql.Query("mysql", "db:connectinfo")
        errCheck(err)
        rows, err := db.Query("SELECT opendpoint, oplocalid, claimedid FROM discovery_cache")
        errCheck(err)
        was unsure what to do here because I'm not sure how to
        return the info properly
    */
    log.Println(info)
}

func (s SimpleDiscoveryCache) Get(id string) openid.DiscoveredInfo {
    db, err := sql.Query("mysql", "db:connectinfo")
    errCheck(err)
    var sdi = new(SimpleDiscoveredInfo)
    err = db.QueryRow("SELECT opendpoint, oplocalid, claimedid FROM discovery_cache WHERE id=?", id).Scan(&sdi)
    errCheck(err)
    return sdi
}
And my Noncestore.go
package openid

import (
    "database/sql"
    "errors"
    "flag"
    "fmt"
    "time"

    _ "github.com/go-sql-driver/mysql"
)

var maxNonceAge = flag.Duration("openid-max-nonce-age",
    60*time.Second,
    "Maximum accepted age for openid nonces. The bigger, the more"+
        "memory is needed to store used nonces.")

type SimpleNonceStore struct{}

func (s *SimpleNonceStore) Accept(endpoint, nonce string) error {
    db, err := sql.Open("mysql", "dbconnectinfo")
    errCheck(err)
    if len(nonce) < 20 || len(nonce) > 256 {
        return errors.New("Invalid nonce")
    }
    ts, err := time.Parse(time.RFC3339, nonce[0:20])
    errCheck(err)
    rows, err := db.Query("SELECT * FROM noncestore")
    defer rows.Close()
    now := time.Now()
    diff := now.Sub(ts)
    if diff > *maxNonceAge {
        return fmt.Errorf("Nonce too old: %ds", diff.Seconds())
    }
    d := nonce[20:]
    for rows.Next() {
        var timeDB, nonce string
        err := rows.Scan(&nonce, &timeDB)
        errCheck(err)
        dbTime, err := time.Parse(time.RFC3339, timeDB)
        errCheck(err)
        if dbTime == ts && nonce == d {
            return errors.New("Nonce is already used")
        }
        if now.Sub(dbTime) < *maxNonceAge {
            _, err := db.Query("INSERT INTO noncestore SET nonce=?, time=?", &nonce, dbTime)
            errCheck(err)
        }
    }
    return nil
}

func errCheck(err error) {
    if err != nil {
        panic("We had an error!" + err.Error())
    }
}
Then I try to use them in my main file as:
import _ "github.com/mysqlOpenID"

var nonceStore = &openid.SimpleNonceStore{}
var discoveryCache = &openid.SimpleDiscoveryCache{}
I get no compile errors, but it crashes at runtime.
I'm sure you'll look at my code and go "what the hell" (I'm fairly new and have only a week or so of experience with Go, so please feel free to correct anything).
Obviously I have done something wrong. I basically looked at NonceStore.go and DiscoveryCache.go on the GitHub repo for openid-go and replicated them, but replaced the map with database insert and select functions.
If anybody can point me in the right direction on how to implement this properly, that would be much appreciated, thanks! If you need any more information, please ask.
Ok. First off, I don't believe you that the code compiles.
Let's look at some mistakes, shall we?
db, err := sql.Open("mysql", "dbconnectinfo")
This line creates a database handle (it doesn't actually connect yet). It should only be called once, preferably inside an init() function. For example,
var db *sql.DB

func init() {
    var err error
    // Now the db variable above is set to the left value (db)
    // of sql.Open, and the "var err error" above to the right value (err).
    db, err = sql.Open("mysql", "root@tcp(127.0.0.1:3306)")
    if err != nil {
        panic(err)
    }
}
Bang. Now db is ready to talk to your MySQL database (the driver connects lazily; call db.Ping() to verify the connection).
Now what?
Well this (from Get) is gross:
db, err := sql.Query("mysql", "db:connectinfo")
errCheck(err)
var sdi = new(SimpleDiscoveredInfo)
err = db.QueryRow("SELECT opendpoint, oplocalid, claimedid FROM discovery_cache WHERE id=?", id).Scan(&sdi)
errCheck(err)
Instead, it should be this:
// No need for a pointer...
var sdi SimpleDiscoveredInfo

// ...because we take the addresses of sdi's fields right here (inside Scan).
// Scanning into &sdi was a useless (and potentially problematic) layer of
// indirection: database/sql scans into individual fields, not whole structs.
// Notice how I dropped the other "db, err := sql.Query" part? We don't
// need it because we've already declared "db", as you saw in the first
// part of my answer.
err := db.QueryRow("SELECT ...").Scan(&sdi.opEndpoint, &sdi.opLocalID, &sdi.claimedID)
if err != nil {
    panic(err)
}

// Return the address of sdi, which means we're returning a pointer
// to wherever sdi is inside the heap.
return &sdi
Up next is this:
/*
    db, err := sql.Query("mysql", "db:connectinfo")
    errCheck(err)
    rows, err := db.Query("SELECT opendpoint, oplocalid, claimedid FROM discovery_cache")
    errCheck(err)
    was unsure what to do here because I'm not sure how to
    return the info properly
*/
If you've been paying attention, we can drop the first sql.Query line.
Great, now we just have:
rows, err := db.Query("SELECT ...")
So, why don't you do what you did inside the Accept method and parse the rows using for rows.Next()...?
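And to round it off, one way the Put side could look, sketched under the assumption that discovery_cache has id as its primary key and uses the column names from the question (INSERT ... ON DUPLICATE KEY UPDATE is MySQL's upsert; the getters come from openid.DiscoveredInfo):
func (s SimpleDiscoveryCache) Put(id string, info openid.DiscoveredInfo) {
    // Upsert keyed on id: repeated Puts for the same id refresh the row.
    _, err := db.Exec(
        "INSERT INTO discovery_cache (id, opendpoint, oplocalid, claimedid) VALUES (?, ?, ?, ?)"+
            " ON DUPLICATE KEY UPDATE opendpoint=VALUES(opendpoint), oplocalid=VALUES(oplocalid), claimedid=VALUES(claimedid)",
        id, info.OpEndpoint(), info.OpLocalID(), info.ClaimedID())
    if err != nil {
        log.Println("discovery cache put:", err)
    }
}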