I am trying to build an API, with database/sql and the MySQL driver, that will read data based on URL parameters.
Something like this:
myapi.com/users?columns=id,first_name,last_name,country&sort=desc&sortColumn=last_name&limit=10&offset=20
I know how to get all columns, or just specific columns, when they are defined in a struct. But I want to know whether it is possible to take the columns from the URL and, instead of a predefined struct, save them to a map and then just scan those columns.
I have working code that gets data from the above endpoint, but only if the number of columns is the same as in the struct. If I remove country, for example, I get an error that Scan expects 4 params but 3 are given.
I don't need specific code, just some directions, since I am learning Go and my background is PHP, where this is easier to do.
Update
Thanks to the answers I have a partly working solution.
Here is the code:
cols := []string{"id", "first_name", "last_name"}
vals := make([]interface{}, len(cols))
w := map[string]interface{}{"id": 105}
var whereVal []interface{}
var whereCol []string
for k, v := range w {
	whereVal = append(whereVal, v)
	whereCol = append(whereCol, fmt.Sprintf("%s = ?", k))
}
for i := range cols {
	vals[i] = new(interface{})
}
err := db.QueryRow("SELECT "+strings.Join(cols, ",")+" FROM users WHERE "+strings.Join(whereCol, " AND "), whereVal...).Scan(vals...)
if err != nil {
	fmt.Println(err)
}
b, _ := json.Marshal(vals)
fmt.Println(string(b))
This should run the query SELECT id, first_name, last_name FROM users WHERE id = 105;
But how do I get the data out as a proper JSON object? Right now it prints the strings base64-encoded, like this:
[105,"Sm9obm55","QnJhdm8="]
From what I know (I'm also not very experienced in Go), if you don't assign a real type to a value, Scan returns []byte, and []byte is marshalled as a base64-encoded string.
So you have to assign a type to your columns, and if you want proper JSON, assign keys to the values.
In your example it can be done something like this:
cols := []string{"id", "first_name", "last_name"}
vals := make([]interface{}, len(cols))
result := make(map[string]interface{}, len(cols))
for i, key := range cols {
	switch key {
	case "id", "status":
		vals[i] = new(int)
	default:
		vals[i] = new(string)
	}
	result[key] = vals[i]
}
// run the same QueryRow(...).Scan(vals...) call as above, then:
b, _ := json.Marshal(result)
fmt.Println(string(b))
So, instead of looping over cols and creating a new interface{} for each column, we now create key/value pairs and assign a type based on the column name.
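Assuming the same example row as in the question (id 105, Johnny Bravo), the marshalled result is now a proper JSON object:
{"first_name":"Johnny","id":105,"last_name":"Bravo"}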
Also, if you have nullable columns in the table (and you probably do), you'll get an error, because nil can't be scanned into a string. For that I suggest the package gopkg.in/guregu/null.v3; then assign a type like null.String. That way you'll get back null as a value.
For example:
for i, key := range cols {
	switch key {
	case "id", "status":
		vals[i] = new(int)
	case "updated_at", "created_at":
		vals[i] = new(null.Time)
	default:
		vals[i] = new(null.String)
	}
	result[key] = vals[i]
}
Here is an option I found to return a dynamic result set. You need a slice of interface{}, but each element has to be set to a new(interface{}) so that Scan gets a pointer it can write into:
//...
types, _ := rows.ColumnTypes()
for rows.Next() {
	row := make([]interface{}, len(types))
	for i := range types {
		row[i] = new(interface{})
	}
	if err := rows.Scan(row...); err != nil {
		log.Fatal(err)
	}
}
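To marshal each row as a JSON object instead of a positional array, you can key the scanned values by column name. A minimal sketch to place inside the rows.Next() loop above (note that with the MySQL driver, text columns may still arrive as []byte, which encoding/json renders as base64, unless you convert them as shown in the earlier answer):
record := make(map[string]interface{}, len(types))
for i, t := range types {
	// row[i] is a *interface{}; dereference it to get the driver value
	record[t.Name()] = *(row[i].(*interface{}))
}
b, _ := json.Marshal(record)
fmt.Println(string(b))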
You must first fetch the number of result columns, and the number of scan targets must not exceed that size.
If you mean the query fields, you need to build the query string dynamically, and the number of scan targets must match the number of selected columns.
I would create the query statement with the dynamic fields. Note that placeholders only protect the values; column names cannot be bound as parameters, so validate user-supplied field names against a whitelist to avoid SQL injection:
rows, err := db.Query("SELECT {{YOUR_FIELDS}} FROM table_tbl")
Create value carriers, one per column:
cols, _ := rows.Columns()
vals := make([]interface{}, len(cols))
Use sql.RawBytes as the field type if you don't need type checking or can't know the types in advance; otherwise use the actual type of each field.
for i := range cols {
	vals[i] = new(sql.RawBytes)
	// check the column name; if it is id, and you know it is an integer:
	// vals[i] = new(int)
}
Iterate over the rows and scan:
for rows.Next() {
	if err = rows.Scan(vals...); err != nil {
		log.Fatal(err)
	}
}
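Inside that loop you can then convert each raw value; a sketch (string(*rb) copies the bytes, which matters because sql.RawBytes is only valid until the next call to Next):
for i, col := range cols {
	if rb, ok := vals[i].(*sql.RawBytes); ok {
		fmt.Printf("%s = %s\n", col, string(*rb))
	}
}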
When I try to parse the data into a struct and then append it to a slice, I get nothing. But if I run the query in MySQL Workbench, I get some values.
query, err := db.Query("SELECT 'description','is_done' FROM tasks WHERE 'user_id' = ?;", userId)
if err != nil {
	return nil, err
}
defer query.Close()
var tasks []TodoUserDTO
var currentTask TodoUserDTO
for query.Next() {
	err = query.Scan(&currentTask.Description, &currentTask.IsDone)
	if err != nil {
		panic(err)
	}
	tasks = append(tasks, currentTask)
}
The TodoUserDTO struct looks like this:
type TodoUserDTO struct {
	Description string `json:"desc"`
	IsDone      bool   `json:"done"`
}
Based on the code, it looks like you're using the wrong quoting for the column names in the SELECT statement of your query. In MySQL, single quotes produce string literals, so SELECT 'description','is_done' returns those constant strings for every row, and WHERE 'user_id' = ? compares the literal string user_id against the parameter. The SELECT statement should use the actual column names of the tasks table (quoted with backticks if quoting is needed at all), rather than string literals of the column names.
Try changing the SELECT statement to this:
"SELECT description, is_done FROM tasks WHERE user_id = ?"
Let's say I have a web app that shows a list of posts. The post struct is:
type Post struct {
	Id    int64 `sql:",primary"`
	Title string
	Body  string
}
It retrieves the posts with:
var posts []*Post
rows, err := db.QueryContext(ctx, "select * from posts;")
if err != nil {
	return nil, oops.Wrapf(err, "could not get posts")
}
defer rows.Close()
for rows.Next() {
	p := &Post{}
	err := rows.Scan(
		&p.Id,
		&p.Title,
		&p.Body,
	)
	if err != nil {
		return nil, oops.Wrapf(err, "could not scan row")
	}
	posts = append(posts, p)
}
return posts, nil
All works fine. Now, I want to alter the table schema by adding a column:
ALTER TABLE posts ADD author varchar(62);
Suddenly, the requests to get posts result in:
sql: expected 4 destination arguments in Scan, not 3
which makes sense since the table now has 4 columns instead of the 3 stipulated by the retrieval logic.
I can then update the struct to be:
type Post struct {
	Id     int64 `sql:",primary"`
	Title  string
	Body   string
	Author string
}
and the retrieval logic to be:
for rows.Next() {
	p := &Post{}
	err := rows.Scan(
		&p.Id,
		&p.Title,
		&p.Body,
		&p.Author,
	)
	if err != nil {
		return nil, oops.Wrapf(err, "could not scan row")
	}
	posts = append(posts, p)
}
which solves this. However, this implies there is always a period of downtime between the migration and the logic update + deploy. How do I avoid that downtime?
I have tried swapping the order of the above changes but this does not work, with that same request resulting in:
sql: expected 3 destination arguments in Scan, not 4
(which makes sense, since the table only has 3 columns at that point as opposed to 4);
and other requests resulting in:
Error 1054: Unknown column 'author' in 'field list'
(which makes sense, because at that point the posts table does not have an author column just yet)
You should be able to achieve the desired behavior by adapting the SQL query to return exactly the fields you want to populate:
SELECT Id, Title, Body FROM posts;
This way, even after you add the author column, the query results still contain only 3 values.
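Applied to the retrieval code above, that would look like this (assuming the column names match the struct fields, with the same error handling as before):
rows, err := db.QueryContext(ctx, "SELECT Id, Title, Body FROM posts;")
if err != nil {
	return nil, oops.Wrapf(err, "could not get posts")
}
Deploy this change first; once no running version still issues SELECT *, the ALTER TABLE can be applied without breaking Scan.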
I am trying to filter a .csv file down to two columns that will be specified by the user. My current code can only filter the .csv file to one column, and when I write the result to a .csv file, the values come out as a row instead of a column. Any ideas on how to filter the two columns in Go?
In addition, is there any way I can write the data as a column instead of a row?
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"os"
)

// columnNum is the user-chosen column index (assumed here so the example runs).
const columnNum = 0

func checkError(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	file, err := os.Open("sample.csv")
	checkError(err)
	reader := csv.NewReader(file)
	_, err = reader.Read() // Skips header
	checkError(err)
	results := make([]string, 0)
	for {
		row, err := reader.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		//fmt.Println(row[columnNum])
		results = append(results, row[columnNum])
	}
	fmt.Print(results)
	// File creation
	f, err := os.Create("results.csv")
	checkError(err)
	defer f.Close()
	w := csv.NewWriter(f)
	err = w.Write(results)
	checkError(err)
	w.Flush()
}
...is there any way I can write the data as a column instead of a row?
You're only writing a single record. You call w.Write once, which writes one row to the CSV. If you write once per consumed row, you'll get multiple rows. Notice that you're calling Read many times and Write once.
Any ideas on how to filter the two columns and write the results in a single column on Go?
To get two columns, you just need to access both; I see one access per row right now (row[columnNum]), so you'll just need a second one. Combined with my previous point, I think the main problem is that you're missing that CSVs are two-dimensional, but you're only storing a one-dimensional slice.
I'm not sure what you mean by "write to a single column" - maybe you want to double the length of the CSV, or maybe you want to somehow merge two columns?
In either case, I'd suggest restructuring your code to avoid building up the intermediate results array, and instead write directly after reading. This will be more performant and more directly maps from the old format to the new one.
file, err := os.Open("sample.csv")
checkError(err)
reader := csv.NewReader(file)
_, err = reader.Read() // Skips header
checkError(err)

// Create the file and writer immediately
f, err := os.Create("results.csv")
checkError(err)
defer f.Close()
w := csv.NewWriter(f)

for {
	row, err := reader.Read()
	if err == io.EOF {
		break
	}
	if err != nil {
		log.Fatal(err)
	}
	// here - each row is written to the new CSV writer
	err = w.Write([]string{row[columnNum]})
	// an example of writing two columns to each row
	// err = w.Write([]string{row[columnNum1], row[columnNum2]})
	// an example of merging two columns into one
	// err = w.Write([]string{fmt.Sprintf("%s - %s", row[columnNum1], row[columnNum2])})
	checkError(err)
}
w.Flush()
checkError(w.Error()) // Flush reports its errors via Error
I work with Go and a MySQL database. Assume I have a slice of strings like this: []string{"b", "c", "a"}, and I want the final data to look like this:
[]Student{
	Student{ID: "b", Name: "Ben"},
	Student{ID: "c", Name: "Carl"},
	Student{ID: "a", Name: "Alexander"},
}
When I build the MySQL query, is using ORDER BY FIELD(id,'b','c','a') an efficient way? If I don't use it, I end up with code like this:
keys := []string{"b", "c", "a"}
...
students := make([]Student, 0)
for rows.Next() {
	s := Student{}
	err := rows.Scan(&s.ID, &s.Name)
	if err != nil {
		log.Fatal(err)
	}
	students = append(students, s)
}
mStudents := make(map[string]Student, 0)
for _, v := range students {
	mStudents[v.ID] = v
}
finalData := make([]Student, 0)
for _, v := range keys {
	if _, ok := mStudents[v]; ok {
		finalData = append(finalData, mStudents[v])
	}
}
But I think that's a very inefficient way. So, is there another way?
Thank you.
Using MySQL's ORDER BY FIELD(id,'b','c','a') is efficient, and there's nothing wrong with it if you don't mind extending the query and keeping this ordering logic in the query.
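For example, a sketch (the students table and its column names are assumed here, since the question doesn't show its schema):
rows, err := db.Query(
	"SELECT id, name FROM students WHERE id IN (?, ?, ?) ORDER BY FIELD(id, ?, ?, ?)",
	"b", "c", "a", "b", "c", "a",
)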
If you want to do it in Go: Go's standard library provides a sort.Slice() function to sort any slice. You pass it a less() function that reports whether the element at index i must sort before the element at index j.
You want an order designated by another, sorted keys slice. So basically, to tell if one student is "less" than another, you need to compare the indices of their IDs in keys.
To avoid having to linear-search the keys slice each time, first build a map from key to index:
m := map[string]int{}
for i, k := range keys {
	m[k] = i
}
And so the index that is the base of the "less" logic is a simple map lookup:
sort.Slice(students, func(i, j int) bool {
	return m[students[i].ID] < m[students[j].ID]
})
Try this on the Go Playground.
I am trying to create a bulk insert. I use gorm (github.com/jinzhu/gorm):
import (
	"fmt"
	dB "github.com/edwinlab/api/repositories"
)

func Update() error {
	tx := dB.GetWriteDB().Begin()
	sqlStr := "INSERT INTO city(code, name) VALUES (?, ?),(?, ?)"
	vals := []interface{}{}
	vals = append(vals, "XX1", "Jakarta")
	vals = append(vals, "XX2", "Bandung")
	tx.Exec(sqlStr, vals)
	tx.Commit()
	return nil
}
But I got an error:
Error 1136: Column count doesn't match value count at row 1
because it generates the wrong query:
INSERT INTO city(code, name) VALUES ('XX1','Jakarta','XX2','Bandung', %!v(MISSING)),(%!v(MISSING), %!v(MISSING))
If I use manual query it works:
tx.Exec(sqlStr, "XX1", "Jakarta", "XX2", "Bandung")
It will generate:
INSERT INTO city(code, name) VALUES ('XX1', 'Jakarta'),('XX2', 'Bandung')
The problem is how to pass the []interface{} slice so that it expands to "XX1", "Jakarta", ...
Thanks for the help.
If you want to pass the elements of a slice to a function with a variadic parameter, you have to use ... to tell the compiler to pass all elements individually rather than passing the slice value as a single argument, so simply do:
tx.Exec(sqlStr, vals...)
This is detailed in the spec: Passing arguments to ... parameters.
Tx.Exec() has the signature of:
func (tx *Tx) Exec(query string, args ...interface{}) (Result, error)
So you have to pass vals.... Also don't forget to check the returned error, e.g.:
res, err := tx.Exec(sqlStr, vals...)
if err != nil {
	// handle error
}
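Note that with gorm (as in the question), Exec returns a *gorm.DB rather than (Result, error), so the error lives in its Error field. To build the statement for any number of rows, here is a minimal sketch (bulkInsertCities is a hypothetical helper, not part of gorm; it uses strings.Join):
func bulkInsertCities(tx *gorm.DB, cities [][2]string) error {
	placeholders := make([]string, 0, len(cities))
	vals := make([]interface{}, 0, len(cities)*2)
	for _, c := range cities {
		// one (?, ?) group and two bound values per row
		placeholders = append(placeholders, "(?, ?)")
		vals = append(vals, c[0], c[1])
	}
	sqlStr := "INSERT INTO city(code, name) VALUES " + strings.Join(placeholders, ",")
	return tx.Exec(sqlStr, vals...).Error
}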