Passing a variable to a Go MySQL query

First off, let me say I'm a beginner with Go (I started a few days ago) and am in the process of learning how to apply the language practically. My goal is to build a REST web API that queries a database and returns data to the user.
I've been able to successfully create a simple API using martini (https://github.com/go-martini/martini) and connect to a MySQL database using https://github.com/go-sql-driver/mysql. My problem at the moment is how to pass a variable parameter from the API request into my query. Here is my current code:
package main

import (
    "github.com/go-martini/martini"
    _ "github.com/go-sql-driver/mysql"
    "database/sql"
    "fmt"
)

func main() {
    db, err := sql.Open("mysql",
        "root:root@tcp(localhost:8889)/test")
    m := martini.Classic()
    var str string
    m.Get("/hello/:user", func(params martini.Params) string {
        var userId = params["user"]
        err = db.QueryRow(
            "select name from users where id = userId").Scan(&str)
        if err != nil && err != sql.ErrNoRows {
            fmt.Println(err)
        }
        return "Hello " + str
    })
    m.Run()
    defer db.Close()
}
As you can see, my goal is to take the input variable (user) and store it in a variable called userId. Then I'd like to query the database against that variable to fetch the name. Once that all works, I'd like to respond back with the name of the user.
Can anyone help? It would be much appreciated as I continue my journey to learn Go!

I haven't used it, but looking at the docs, is this what you are after?
db.QueryRow("SELECT name FROM users WHERE id=?", userId)
I assume it replaces the ? with userId in a SQL-safe way.
http://golang.org/pkg/database/sql/#DB.Query
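Applied to the handler in the question, a minimal sketch might look like this (same table and column names as in the question; the placeholder lets the driver escape the value instead of concatenating it into the SQL):

m.Get("/hello/:user", func(params martini.Params) string {
    userId := params["user"]
    var name string
    // Pass userId as a query argument; the driver substitutes it for the ? safely.
    err := db.QueryRow("SELECT name FROM users WHERE id = ?", userId).Scan(&name)
    if err != nil && err != sql.ErrNoRows {
        fmt.Println(err)
    }
    return "Hello " + name
})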

You can try it this way. With the go-sql-driver/mysql driver the placeholder is ?:
var y string // Variable to store the result of the query.
err := db.QueryRow("SELECT name FROM users WHERE id = ?", userId).Scan(&y)
if err != nil && err != sql.ErrNoRows {
    fmt.Println(err)
}
Documentation reference: https://pkg.go.dev/database/sql#pkg-variables

Related

How to create a function in Go on Azure DevOps to list and create work items

Has anyone had luck creating and listing work items in Azure DevOps with Go?
I've tried going through the docs but without any luck. I need some kind of guidance on how to get started with just some simple listing of work items in an area and creating a work item.
So far I can list items, and that is just that.
I need a function to create work items, or copy current work items and list them in a specific area. My issue is that my Go skills aren't so good yet, so I find it difficult to understand the docs on the Azure site.
package main

import (
    "context"
    "log"
    "net/http"

    "github.com/google/uuid"
    "github.com/microsoft/azure-devops-go-api/azuredevops"
    "github.com/microsoft/azure-devops-go-api/azuredevops/core"
)

type GetBacklogArgs struct {
    // (required) Project ID or project name
    Project *string
    // (required) Team ID or team name
    Team *string
    // (required) The id of the backlog level
    Id *string
}

type ClientImpl struct {
    Client azuredevops.Client
}

func main() {
    organizationUrl := "<org url>"   // todo: replace value with your organization url
    personalAccessToken := "<token>" // todo: replace value with your PAT

    // Create a connection to your organization
    connection := azuredevops.NewPatConnection(organizationUrl, personalAccessToken)

    ctx := context.Background()

    // Create a client to interact with the Core area
    coreClient, err := core.NewClient(ctx, connection)
    if err != nil {
        log.Fatal(err)
    }

    // Get first page of the list of team projects for your organization
    responseValue, err := coreClient.GetProjects(ctx, core.GetProjectsArgs{})
    if err != nil {
        log.Fatal(err)
    }

    index := 0
    for responseValue != nil {
        // Log the page of team project names
        for _, teamProjectReference := range (*responseValue).Value {
            log.Printf("Name[%v] = %v", index, *teamProjectReference.Name)
            index++
        }

        // if continuationToken has a value, then there is at least one more page of projects to get
        if responseValue.ContinuationToken != "" {
            // Get next page of team projects
            projectArgs := core.GetProjectsArgs{
                ContinuationToken: &responseValue.ContinuationToken,
            }
            responseValue, err = coreClient.GetProjects(ctx, projectArgs)
            if err != nil {
                log.Fatal(err)
            }
        } else {
            responseValue = nil
        }
    }
}
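For the creating part, the same module has a Work Item Tracking client. The following is only a rough, untested sketch of how that client is typically used: the sub-package names (workitemtracking, webapi), the project name "MyProject", the work item type "Task", and the area path are assumptions on my part, not taken from the question, so check them against the package docs.

package main

import (
    "context"
    "log"

    "github.com/microsoft/azure-devops-go-api/azuredevops"
    "github.com/microsoft/azure-devops-go-api/azuredevops/webapi"
    "github.com/microsoft/azure-devops-go-api/azuredevops/workitemtracking"
)

func strPtr(s string) *string { return &s }

func main() {
    connection := azuredevops.NewPatConnection("<org url>", "<token>")
    ctx := context.Background()

    // Client for the Work Item Tracking area
    witClient, err := workitemtracking.NewClient(ctx, connection)
    if err != nil {
        log.Fatal(err)
    }

    // Work items are created from a JSON patch document that sets the fields
    project := "MyProject"          // assumed project name
    workItemType := "Task"          // assumed work item type
    areaPath := "MyProject\\MyArea" // assumed area path
    document := []webapi.JsonPatchOperation{
        {Op: &webapi.OperationValues.Add, Path: strPtr("/fields/System.Title"), Value: "Created from Go"},
        {Op: &webapi.OperationValues.Add, Path: strPtr("/fields/System.AreaPath"), Value: areaPath},
    }

    workItem, err := witClient.CreateWorkItem(ctx, workitemtracking.CreateWorkItemArgs{
        Project:  &project,
        Type:     &workItemType,
        Document: &document,
    })
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("created work item %d", *workItem.Id)
}

Listing the work items in an area would go through the same client (for example via a WIQL query), but the exact call depends on the package version, so the workitemtracking docs are the place to confirm it.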

How to filter elements of a [][]string slice in Golang?

First of all, I'm new here and I'm trying to learn Go. I would like to read my CSV file (which has 3 values: type, maker, model), create a new one, and after a filter operation write the new (filtered) data to the created CSV file. Here is my code so you can understand me more clearly.
package main

import (
    "encoding/csv"
    "fmt"
    "os"
)

func main() {
    // opening my csv file which is vehicles.csv
    recordFile, err := os.Open("vehicles.csv")
    if err != nil {
        fmt.Println("An error encountered ::", err)
    }

    // reading it
    reader := csv.NewReader(recordFile)
    vehicles, _ := reader.ReadAll()

    // creating a new csv file
    newRecordFile, err := os.Create("newCsvFile.csv")
    if err != nil {
        fmt.Println("An error encountered ::", err)
    }

    // writing vehicles.csv into the new csv
    writer := csv.NewWriter(newRecordFile)
    err = writer.WriteAll(vehicles)
    if err != nil {
        fmt.Println("An error encountered ::", err)
    }
}
After I build it, it works this way: it reads all the data and writes it to the newly created CSV file. But the problem is that I want to filter duplicates out of the data I read (vehicles). I am creating another function (outside of the main function) to filter duplicates, but I can't do it because the type of vehicles is [][]string; I searched the internet about filtering duplicates, but all I found was for int or string types. What I want to do is create a function and call it before the WriteAll operation, so WriteAll can write the correct (duplicate-filtered) data into the new CSV file. Help me please!!
I appreciate any answer.
Happy coding!
This depends on how you define "uniqueness", but in general there are a few parts to this problem.
What is unique?
1. All fields must be equal
2. Only some fields must be equal
3. Normalize some or all fields before comparing
You have a few approaches for applying your uniqueness, including:
1. You can use a map, keyed by the "pieces" of uniqueness; requires O(N) state
2. You can sort the records and compare with the prior record as you iterate; requires O(1) state but is more complicated (a sketch of this variant follows the code below)
You have two approaches for filtering and outputting:
1. You can build a new slice based on the old one using a loop and write all at once; this requires O(N) space
2. You can write the records out to the file as you go if you don't need to sort; this requires O(1) space
I think a reasonably simple and performant approach would be to pick (1) from the first, (1) from the second, and (2) from the third, which together would look like:
package main

import (
    "encoding/csv"
    "errors"
    "io"
    "log"
    "os"
)

func main() {
    input, err := os.Open("vehicles.csv")
    if err != nil {
        log.Fatalf("opening input file: %s", err)
    }

    output, err := os.Create("vehicles_filtered.csv")
    if err != nil {
        log.Fatalf("creating output file: %s", err)
    }
    defer func() {
        // Ensure the file is closed at the end of the program
        if err := output.Close(); err != nil {
            log.Fatalf("finalizing output file: %s", err)
        }
    }()

    reader := csv.NewReader(input)
    writer := csv.NewWriter(output)

    seen := make(map[[3]string]bool)
    for {
        // Read in one record
        record, err := reader.Read()
        if errors.Is(err, io.EOF) {
            break
        }
        if err != nil {
            log.Fatalf("reading record: %s", err)
        }
        if len(record) != 3 {
            log.Printf("bad record %q", record)
            continue
        }

        // Check if the record has been seen before, skipping if so
        key := [3]string{record[0], record[1], record[2]}
        if seen[key] {
            continue
        }
        seen[key] = true

        // Write the record
        if err := writer.Write(record); err != nil {
            log.Fatalf("writing record %d: %s", len(seen), err)
        }
    }

    // Flush the csv.Writer's buffer so the last records actually reach the file
    writer.Flush()
    if err := writer.Error(); err != nil {
        log.Fatalf("flushing output: %s", err)
    }
}
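For completeness, here is a rough sketch of the second approach from the list above (sort, then compare with the prior record). It assumes the whole file fits in memory and that all three fields together define uniqueness; the file names are the same ones used above:

package main

import (
    "encoding/csv"
    "log"
    "os"
    "sort"
)

func main() {
    input, err := os.Open("vehicles.csv")
    if err != nil {
        log.Fatalf("opening input file: %s", err)
    }
    defer input.Close()

    records, err := csv.NewReader(input).ReadAll()
    if err != nil {
        log.Fatalf("reading records: %s", err)
    }

    // Sort by type, then maker, then model so duplicates end up adjacent
    sort.Slice(records, func(i, j int) bool {
        a, b := records[i], records[j]
        if a[0] != b[0] {
            return a[0] < b[0]
        }
        if a[1] != b[1] {
            return a[1] < b[1]
        }
        return a[2] < b[2]
    })

    // Keep a record only if it differs from the previous (sorted) record
    var deduped [][]string
    for i, rec := range records {
        if i > 0 && rec[0] == records[i-1][0] && rec[1] == records[i-1][1] && rec[2] == records[i-1][2] {
            continue
        }
        deduped = append(deduped, rec)
    }

    output, err := os.Create("vehicles_filtered.csv")
    if err != nil {
        log.Fatalf("creating output file: %s", err)
    }
    defer output.Close()

    // WriteAll flushes for us and returns any write error
    if err := csv.NewWriter(output).WriteAll(deduped); err != nil {
        log.Fatalf("writing records: %s", err)
    }
}

Note that this version loses the original row order and, unlike the map version, assumes every record has exactly three fields.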

insert query not working in golang

In my Go program, the MySQL insert query is not inserting the values, even though the syntax of the query is correct. It returns my error statement, i.e. internal server error.
package main

import (
    "database/sql"
    "net/http"
    "log"
    "golang.org/x/crypto/bcrypt"
    "encoding/json"
    _ "github.com/go-sql-driver/mysql"
    "fmt"
)

const hashCost = 8

var db *sql.DB

func initDB() {
    var err error
    // Connect to the db
    // you might have to change the connection string to add your database credentials
    db, err = sql.Open("mysql",
        "root:nfn@tcp(127.0.0.1:3306)/mydb")
    if err != nil {
        panic(err)
    }
}

// Create a struct that models the structure of a user, both in the request body, and in the DB
type Credentials struct {
    Password string `json:"password" db:"password"`
    Username string `json:"username" db:"username"`
}

func Signup(w http.ResponseWriter, r *http.Request) {
    // Parse and decode the request body into a new `Credentials` instance
    creds := &Credentials{}
    err := json.NewDecoder(r.Body).Decode(creds)
    if err != nil {
        // If there is something wrong with the request body, return a 400 status
        w.WriteHeader(http.StatusBadRequest)
        return
    }

    // Salt and hash the password using the bcrypt algorithm
    // The second argument is the cost of hashing, which we arbitrarily set as 8 (this value can be more or less, depending on the computing power you wish to utilize)
    hashedPassword, err := bcrypt.GenerateFromPassword([]byte(creds.Password), 8)

    // Next, insert the username, along with the hashed password into the database
    // SQL query not working here. How to solve this error?
    if _, err = db.Exec("INSERT INTO users(username, password) VALUES (?, ?)", creds.Username, string(hashedPassword)); err != nil {
        // If there is any issue with inserting into the database, return a 500 error
        w.WriteHeader(http.StatusInternalServerError)
        return
    }
    // We reach this point if the credentials were correctly stored in the database, and the default status of 200 is sent back
}
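Since the handler only reports a generic 500, a good first step is to log the error that db.Exec actually returns so the real MySQL error becomes visible. A small sketch of that change, using the log import that is already there:

if _, err = db.Exec("INSERT INTO users(username, password) VALUES (?, ?)",
    creds.Username, string(hashedPassword)); err != nil {
    // Log the underlying error: unknown table, access denied, a nil *sql.DB
    // because initDB was never called, and so on.
    log.Printf("insert failed: %v", err)
    w.WriteHeader(http.StatusInternalServerError)
    return
}

Also note that sql.Open only validates the DSN and does not actually connect; calling db.Ping() inside initDB (and making sure initDB runs before the handler is served) will surface connection problems immediately instead of at the first query.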

How can I implement my own interface for OpenID that uses a MySQL database instead of in-memory storage

So I'm trying to use the OpenID package for Go, located here: https://github.com/yohcop/openid-go
In the _example it says that it uses in-memory storage for the nonce/discovery-cache information, that it will not free the memory, and that I should implement my own version of them using some sort of database.
My database of choice is MySQL. I have tried to implement what I thought was correct (it is not; it does not give me any compile errors, but it crashes at runtime).
My DiscoveryCache.go is as such:
package openid

import (
    "database/sql"
    "log"
    //"time"

    _ "github.com/go-sql-driver/mysql"
    "github.com/yohcop/openid-go"
)

type SimpleDiscoveredInfo struct {
    opEndpoint, opLocalID, claimedID string
}

func (s *SimpleDiscoveredInfo) OpEndpoint() string { return s.opEndpoint }
func (s *SimpleDiscoveredInfo) OpLocalID() string  { return s.opLocalID }
func (s *SimpleDiscoveredInfo) ClaimedID() string  { return s.claimedID }

type SimpleDiscoveryCache struct{}

func (s SimpleDiscoveryCache) Put(id string, info openid.DiscoveredInfo) {
    /*
        db, err := sql.Query("mysql", "db:connectinfo")
        errCheck(err)
        rows, err := db.Query("SELECT opendpoint, oplocalid, claimedid FROM discovery_cache")
        errCheck(err)
        was unsure what to do here because I'm not sure how to
        return the info properly
    */
    log.Println(info)
}

func (s SimpleDiscoveryCache) Get(id string) openid.DiscoveredInfo {
    db, err := sql.Query("mysql", "db:connectinfo")
    errCheck(err)
    var sdi = new(SimpleDiscoveredInfo)
    err = db.QueryRow("SELECT opendpoint, oplocalid, claimedid FROM discovery_cache WHERE id=?", id).Scan(&sdi)
    errCheck(err)
    return sdi
}
And my Noncestore.go
package openid

import (
    "database/sql"
    "errors"
    "flag"
    "fmt"
    "time"

    _ "github.com/go-sql-driver/mysql"
)

var maxNonceAge = flag.Duration("openid-max-nonce-age",
    60*time.Second,
    "Maximum accepted age for openid nonces. The bigger, the more"+
        "memory is needed to store used nonces.")

type SimpleNonceStore struct{}

func (s *SimpleNonceStore) Accept(endpoint, nonce string) error {
    db, err := sql.Open("mysql", "dbconnectinfo")
    errCheck(err)

    if len(nonce) < 20 || len(nonce) > 256 {
        return errors.New("Invalid nonce")
    }

    ts, err := time.Parse(time.RFC3339, nonce[0:20])
    errCheck(err)

    rows, err := db.Query("SELECT * FROM noncestore")
    defer rows.Close()

    now := time.Now()
    diff := now.Sub(ts)
    if diff > *maxNonceAge {
        return fmt.Errorf("Nonce too old: %ds", diff.Seconds())
    }

    d := nonce[20:]

    for rows.Next() {
        var timeDB, nonce string
        err := rows.Scan(&nonce, &timeDB)
        errCheck(err)
        dbTime, err := time.Parse(time.RFC3339, timeDB)
        errCheck(err)
        if dbTime == ts && nonce == d {
            return errors.New("Nonce is already used")
        }
        if now.Sub(dbTime) < *maxNonceAge {
            _, err := db.Query("INSERT INTO noncestore SET nonce=?, time=?", &nonce, dbTime)
            errCheck(err)
        }
    }
    return nil
}

func errCheck(err error) {
    if err != nil {
        panic("We had an error!" + err.Error())
    }
}
Then I try to use them in my main file as:
import _ "github.com/mysqlOpenID"
var nonceStore = &openid.SimpleNonceStore{}
var discoveryCache = &openid.SimpleDiscoveryCache{}
I get no compile errors, but it crashes.
I'm sure you'll look at my code and go "what the hell" (I'm fairly new and only have a week or so of experience with Go, so please feel free to correct anything).
Obviously I have done something wrong. I basically looked at the NonceStore.go and DiscoveryCache.go on the GitHub repo for openid-go and replicated them, but replaced the map with database insert and select functions.
If anybody can point me in the right direction on how to implement this properly, that would be much appreciated, thanks! If you need any more information, please ask.
Ok. First off, I don't believe you that the code compiles.
Let's look at some mistakes, shall we?
db, err := sql.Open("mysql", "dbconnectinfo")
This line opens a database connection. It should only be used once, preferably inside an init() function. For example,
var db *sql.DB

func init() {
    var err error
    // Now the db variable above is automagically set to the left value (db)
    // of sql.Open and the "var err error" above is the right value (err)
    db, err = sql.Open("mysql", "root@tcp(127.0.0.1:3306)")
    if err != nil {
        panic(err)
    }
}
Bang. Now you're connected to your MySQL database.
Now what?
Well this (from Get) is gross:
db, err := sql.Query("mysql", "db:connectinfo")
errCheck(err)
var sdi = new(SimpleDiscoveredInfo)
err = db.QueryRow("SELECT opendpoint, oplocalid, claimedid FROM discovery_cache WHERE id=?", id).Scan(&sdi)
errCheck(err)
Instead, it should be this:
// No need for a pointer...
var sdi SimpleDiscoveredInfo
// Because we scan straight into sdi's fields right here (inside Scan),
// the extra new() was a useless (and potentially problematic) layer of indirection.
// Notice how I dropped the other "db, err := sql.Query" part? We don't
// need it because we've already declared "db" as you saw in the first
// part of my answer.
err := db.QueryRow("SELECT ...").Scan(&sdi.opEndpoint, &sdi.opLocalID, &sdi.claimedID)
if err != nil {
    panic(err)
}
// Return the address of sdi, which means we're returning a pointer
// to wherever sdi is inside the heap.
return &sdi
Up next is this:
/*
db, err := sql.Query("mysql", "db:connectinfo")
errCheck(err)
rows, err := db.Query("SELECT opendpoint, oplocalid, claimedid FROM discovery_cache")
errCheck(err)
was unsure what to do here because I'm not sure how to
return the info properly
*/
If you've been paying attention, we can drop the first sql.Query line.
Great, now we just have:
rows, err := db.Query("SELECT ...")
So, why don't you do what you did inside the Accept method and parse the rows using for rows.Next()... ?
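To round this off, here is a rough sketch of what Put and Get could look like once the shared db handle from the init() example exists. This is my own sketch, not the original answer's code: it assumes the discovery_cache table and column names from the question, uses REPLACE INTO to upsert, and assumes that returning nil from Get is an acceptable way to signal a cache miss to the library:

func (s SimpleDiscoveryCache) Put(id string, info openid.DiscoveredInfo) {
    // Store (or refresh) one row per id using the shared db handle.
    _, err := db.Exec(
        "REPLACE INTO discovery_cache (id, opendpoint, oplocalid, claimedid) VALUES (?, ?, ?, ?)",
        id, info.OpEndpoint(), info.OpLocalID(), info.ClaimedID())
    if err != nil {
        log.Println("discovery cache put:", err)
    }
}

func (s SimpleDiscoveryCache) Get(id string) openid.DiscoveredInfo {
    var sdi SimpleDiscoveredInfo
    err := db.QueryRow(
        "SELECT opendpoint, oplocalid, claimedid FROM discovery_cache WHERE id = ?",
        id).Scan(&sdi.opEndpoint, &sdi.opLocalID, &sdi.claimedID)
    if err != nil {
        // Covers sql.ErrNoRows as well: nothing cached for this id.
        return nil
    }
    return &sdi
}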

Golang slices of struct or newbie trouble building REST

I need your help.
I wanted to build a simple API and got stuck with a problem.
I've chosen gin and database/sql with the Postgres driver.
package main

import (
    "database/sql"
    "fmt"
    "github.com/gin-gonic/gin"
    _ "github.com/lib/pq"
)

func main() {
    router := gin.Default()
    router.GET("/search/:text", SearchWord)
    router.Run(":8080")
}
I need to make a query to the DB and produce JSON from the result of this request.
func checkErr(err error) {
    if err != nil {
        panic(err)
    }
}

type Message struct {
    ticket_id int    `json:"ticket_id"`
    event     string `json:"event"`
}

func SearchWord(c *gin.Context) {
    word := c.Params.ByName("text")

    db, err := sql.Open("postgres", "host=8.8.8.8 user= password= dbname=sample")
    defer db.Close()
    checkErr(err)

    rows, err := db.Query("SELECT ticket_id,event FROM ....$1", word)
    checkErr(err)

    for rows.Next() {
        var ticket_id int
        var event string
        err = rows.Scan(&ticket_id, &event)
        checkErr(err)
        fmt.Printf("%d | %s \n\n", ticket_id, event)
    }
}
This code works nicely, but now I need to produce JSON.
I need to make a struct for a row:
type Message struct {
    ticket_id int    `json:"ticket_id"`
    event     string `json:"event"`
}
and then I need to create a slice, append to it on every rows.Next() loop, and then answer the browser with JSON...
c.JSON(200, messages)
But how to do that... I don't know :(
disclaimer: I am brand new to go
Since you Scanned your column data into your variables, you should be able to initialize a structure with their values:
m := &Message{ticket_id: ticket_id, event: event}
You could initialize a slice with
s := make([]*Message, 0)
And then append each of your message structs after instantiation:
s = append(s, m)
Because I'm not too familiar with Go, there are a couple of things I'm not sure about:
1. After copying data from the query to your vars using rows.Scan, does initializing the Message struct copy the current iteration's values as expected?
2. If there is a way to get the total number of rows from your query, it might be slightly more performant to initialize a static-length array instead of a slice.
I think @inf's deleted answer about marshalling your Message to JSON down the line might need to be addressed, and Message's fields might need to be capitalized.
copied from @inf:
The names of the members of your struct need to be capitalized so that they get exported and can be accessed.
type Message struct {
    Ticket_id int    `json:"ticket_id"`
    Event     string `json:"event"`
}
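Putting those pieces together, a minimal sketch of the loop (using the capitalized fields above; the query stays shortened the same way as in the question) might look like:

var messages []Message
for rows.Next() {
    var m Message
    // Scan copies this row's values into m, so appending m stores a
    // snapshot of the current iteration, which answers point 1 above.
    checkErr(rows.Scan(&m.Ticket_id, &m.Event))
    messages = append(messages, m)
}
checkErr(rows.Err())
c.JSON(200, messages)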
I'm going to cheat a little here and fix a few things along the way:
First: open your database connection pool once at program start-up (and not on every request).
Second: we'll use sqlx to make it easier to marshal our database rows into our struct.
package main

import (
    "log"
    "net/http"

    "github.com/gin-gonic/gin"
    "github.com/jmoiron/sqlx"
    _ "github.com/lib/pq"
)

var db *sqlx.DB

func main() {
    var err error
    // sqlx.Connect also checks that the connection works.
    // sql.Open only "establishes" a pool, but doesn't ping the DB.
    db, err = sqlx.Connect("postgres", "postgres:///...")
    if err != nil {
        log.Fatal(err)
    }

    router := gin.Default()
    router.GET("/search/:text", SearchWord)
    router.Run(":8080")
}

// in_another_file.go
type Message struct {
    TicketID int    `json:"ticket_id" db:"ticket_id"`
    Event    string `json:"event" db:"event"`
}

func SearchWord(c *gin.Context) {
    word := c.Params.ByName("text")

    // We create a slice of structs to marshal our rows into
    var messages []*Message

    // Our DB connection pool is safe to use concurrently from here
    err := db.Select(&messages, "SELECT ticket_id,event FROM ....$1", word)
    if err != nil {
        http.Error(c.Writer, err.Error(), 500)
        return
    }

    // Write it out using gin-gonic's JSON writer.
    c.JSON(200, messages)
}
I hope that's clear. sqlx also takes care of calling rows.Close() for you, which will otherwise leave connections hanging.