I'm trying to write tab-separated values to a file using the tabwriter package in Go.
var records map[string][]string

file, err := os.OpenFile(some_file, os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
    log.Println(err)
}
w := new(tabwriter.Writer)
w.Init(file, 0, 4, 0, '\t', 0)
for _, v := range records {
    fmt.Fprintln(w, v[0], "\t", v[1], "\t", v[2], "\t", v[3])
    w.Flush()
}
The problem I'm facing is that the records written to the file have two additional spaces prepended to them. I added the debug flag and this is what I get in the file:
fname1 | mname1 | lname1 | age1
fname2 | mname2 | lname2 | age2
I'm unable to see where I'm going wrong. Any help is appreciated.
As SirDarius suggested, encoding/csv is indeed the right choice. All you have to do is set Comma to a horizontal tab instead of the default value, which, unsurprisingly, is a comma.
package tabulatorseparatedvalues
import (
"encoding/csv"
"io"
)
func NewWriter(w io.Writer) (writer *csv.Writer) {
writer = csv.NewWriter(w)
writer.Comma = '\t'
return
}
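To make this concrete, here is a minimal, self-contained sketch that applies the same idea directly; the record values and output file name are made up for illustration:

package main

import (
    "encoding/csv"
    "log"
    "os"
)

func main() {
    // Sample data standing in for the question's records map.
    records := [][]string{
        {"fname1", "mname1", "lname1", "age1"},
        {"fname2", "mname2", "lname2", "age2"},
    }

    file, err := os.Create("out.tsv")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    w := csv.NewWriter(file)
    w.Comma = '\t' // write tab-separated rather than comma-separated values

    // WriteAll writes every record and flushes the writer.
    if err := w.WriteAll(records); err != nil {
        log.Fatal(err)
    }
}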
Related
I have the following query, which returns results when run directly in MySQL. The same query returns 0 values when run from a Go program.
package main
import (
"github.com/rs/zerolog/log"
_ "github.com/go-sql-driver/mysql"
"github.com/jmoiron/sqlx"
)
var DB *sqlx.DB
func main() {
DB, err := sqlx.Connect("mysql", "root:password@(localhost:3306)/jsl2")
if err != nil {
log.Error().Err(err)
}
sqlstring := `SELECT
salesdetails.taxper, sum(salesdetails.gvalue),
sum(salesdetails.taxamt)
FROM salesdetails
Inner Join sales ON sales.saleskey = salesdetails.saleskey
where
sales.bdate >= '2021-12-01'
and sales.bdate <= '2021-12-31'
and sales.achead IN (401975)
group by salesdetails.taxper
order by salesdetails.taxper`
rows, err := DB.Query(sqlstring)
for rows.Next() {
var taxper int
var taxableValue float64
var taxAmount float64
err = rows.Scan(&taxper, &taxableValue, &taxAmount)
log.Print(taxper, taxableValue, taxAmount)
}
err = rows.Err()
if err != nil {
log.Error().Err(err)
}
}
In the SQL browser, the query returns 4 rows, which is correct:
0 1278.00 0.00
5 89875.65 4493.78
12 3680.00 441.60
18 94868.73 17076.37
The program also returns 4 rows, but every value is 0. On the console it prints:
{"level":"debug","time":"2022-01-13T17:07:39+05:30","message":"0 0 0"}
{"level":"debug","time":"2022-01-13T17:07:39+05:30","message":"0 0 0"}
{"level":"debug","time":"2022-01-13T17:07:39+05:30","message":"0 0 0"}
{"level":"debug","time":"2022-01-13T17:07:39+05:30","message":"0 0 0"}
How do I set the data type for the aggregate functions?
I changed the data type of taxper to float and it worked.
I found this after checking the err returned by rows.Scan(), as suggested by @mkopriva.
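For illustration, a minimal sketch of the corrected loop under the same column layout; the key changes are scanning taxper into a float64 and actually checking the error returned by rows.Scan:

for rows.Next() {
    var taxper float64 // was int; the decimal value returned by MySQL cannot be scanned into an int
    var taxableValue float64
    var taxAmount float64
    if err := rows.Scan(&taxper, &taxableValue, &taxAmount); err != nil {
        log.Error().Err(err).Msg("scan failed") // surfacing this error is what revealed the problem
        continue
    }
    log.Print(taxper, taxableValue, taxAmount)
}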
I found a strange inconsistency when selecting rows from MySQL 8.0.19 after another session modified the result set (e.g. by editing some rows in MySQL Workbench).
(For reference, this function: https://golang.org/pkg/database/sql/#DB.Query)
In other words, db.Query(SQL) returns the old state of my result set (before editing and committing).
MySQL rows before editing:
105 admin
106 user1
107 user2
109 user3
MySQL rows after editing:
105 admin
106 user11
107 user22
109 user33
But in Go, db.Query(SQL) still keeps returning:
105 admin
106 user1
107 user2
109 user3
Does db.Query(SQL) require a commit to stay consistent with the current database state? After I added db.Begin() and db.Commit(), it started to work consistently. I haven't tried other databases; this doesn't look like a driver issue or a variable-copy issue. It is a bit odd coming from JDBC. Autocommit is disabled.
The code:
func FindAll(d *sql.DB) ([]*usermodel.User, error) {
const SQL = `SELECT * FROM users t ORDER BY 1`
//tx, _ := d.Begin()
rows, err := d.Query(SQL)
if err != nil {
return nil, err
}
defer func() {
err := rows.Close()
if err != nil {
log.Fatalln(err)
}
}()
l := make([]*usermodel.User, 0)
for rows.Next() {
t := usermodel.NewUser()
if err = rows.Scan(&t.UserId, &t.Username, &t.FullName, &t.PasswordHash, &t.Email, &t.ExpireDate,
&t.LastAuthDate, &t.StateId, &t.CreatedAt, &t.UpdatedAt); err != nil {
return nil, err
}
l = append(l, t)
}
if err = rows.Err(); err != nil {
return nil, err
}
//_ = tx.Commit()
return l, nil
}
This is purely about MySQL MVCC (see https://dev.mysql.com/doc/refman/8.0/en/innodb-multi-versioning.html and https://dev.mysql.com/doc/refman/8.0/en/innodb-transaction-isolation-levels.html), not about Go or the DB driver.
In short, if you start a transaction, read some data, and another transaction then changes that data and commits, you may or may not see the change, depending on the transaction isolation level set on the MySQL server.
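If each read should see the latest committed data, one option is to run the query inside a transaction with an explicit isolation level. A minimal sketch using database/sql, where the choice of READ COMMITTED is only an assumption to illustrate the knob:

ctx := context.Background()

// Begin a transaction whose statements see rows committed by other
// transactions before each statement, instead of one fixed snapshot.
tx, err := d.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelReadCommitted})
if err != nil {
    return nil, err
}
defer tx.Rollback() // no-op once Commit has succeeded

rows, err := tx.QueryContext(ctx, `SELECT * FROM users t ORDER BY 1`)
if err != nil {
    return nil, err
}
// ... iterate and Scan exactly as in FindAll, then rows.Close() ...

if err := tx.Commit(); err != nil {
    return nil, err
}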
I referenced irbanana's answer about supporting the Spatial data type for PostGIS. I'm using MySQL and am trying to implement Value() for the custom data type EWKBGeomPoint.
My Gorm model:
import (
"github.com/twpayne/go-geom"
"github.com/twpayne/go-geom/encoding/ewkb"
)
type EWKBGeomPoint geom.Point
type Tag struct {
    Name   string        `json:"name"`
    SiteID uint          `json:"siteID"` // foreign key
    Loc    EWKBGeomPoint `json:"loc"`
}
From what I know, MySQL supports insertion like this:
INSERT INTO `tag` (`name`,`loc`) VALUES ('tag name',ST_GeomFromText('POINT(10.000000 20.000000)'))
or
INSERT INTO `tag` (`name`,`loc`) VALUES ('tag name', ST_GeomFromWKB(X'0101000000000000000000F03F000000000000F03F'))
If I write my own Value() to satisfy the driver.Valuer interface from database/sql:
func (g EWKBGeomPoint) Value() (driver.Value, error) {
log.Println("EWKBGeomPoint value called")
b := geom.Point(g)
bp := &b
floatArr := bp.Coords()
return fmt.Sprintf("ST_GeomFromText('POINT(%f %f)')", floatArr[0], floatArr[1]), nil
}
Gorm wraps the entire value, including ST_GeomFromText(), in single quotes, so it won't work:
INSERT INTO `tag` (`name`,`loc`) VALUES ('tag name','ST_GeomFromText('POINT(10.000000 20.000000)')');
How do I make it work?
EDIT 1:
I traced into the Gorm code; eventually it gets to callback_create.go's createCallback function. Inside, it checks whether primaryField == nil, and since that is true it ends up calling scope.SQLDB().Exec, beyond which I failed to trace further.
scope.SQL is the string INSERT INTO `tag` (`name`,`loc`) VALUES (?,?) and scope.SQLVars prints [tag name {{1 2 [10 20] 0}}]. It looks like the interpolation happens inside this call.
Is this calling into database/sql code?
EDIT 2:
Found a similar Stack Overflow question here, but I do not understand the solution.
Here's another approach: use binary encoding.
According to this doc, MySQL stores geometry values using 4 bytes to indicate the SRID (Spatial Reference ID) followed by the WKB (Well Known Binary) representation of the value.
So a type can use WKB encoding and add and remove the four-byte prefix in its Value() and Scan() functions. The go-geom library found in other answers has a WKB encoding package, github.com/twpayne/go-geom/encoding/wkb.
For example:
type MyPoint struct {
Point wkb.Point
}
func (m *MyPoint) Value() (driver.Value, error) {
value, err := m.Point.Value()
if err != nil {
return nil, err
}
buf, ok := value.([]byte)
if !ok {
return nil, fmt.Errorf("did not convert value: expected []byte, but was %T", value)
}
mysqlEncoding := make([]byte, 4)
binary.LittleEndian.PutUint32(mysqlEncoding, 4326)
mysqlEncoding = append(mysqlEncoding, buf...)
return mysqlEncoding, err
}
func (m *MyPoint) Scan(src interface{}) error {
if src == nil {
return nil
}
mysqlEncoding, ok := src.([]byte)
if !ok {
return fmt.Errorf("did not scan: expected []byte but was %T", src)
}
var srid uint32 = binary.LittleEndian.Uint32(mysqlEncoding[0:4])
err := m.Point.Scan(mysqlEncoding[4:])
m.Point.SetSRID(int(srid))
return err
}
Defining a Tag using the MyPoint type:
type Tag struct {
Name string `gorm:"type:varchar(50);primary_key"`
Loc *MyPoint `gorm:"column:loc"`
}
func (t Tag) String() string {
return fmt.Sprintf("%s # Point(%f, %f)", t.Name, t.Loc.Point.Coords().X(), t.Loc.Point.Coords().Y())
}
Creating a tag using the type:
tag := &Tag{
Name: "London",
Loc: &MyPoint{
wkb.Point{
geom.NewPoint(geom.XY).MustSetCoords([]float64{0.1275, 51.50722}).SetSRID(4326),
},
},
}
err = db.Create(&tag).Error
if err != nil {
log.Fatalf("create: %v", err)
}
MySQL results:
mysql> describe tag;
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| name | varchar(50) | NO | PRI | NULL | |
| loc | geometry | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
mysql> select name, st_astext(loc) from tag;
+--------+------------------------+
| name | st_astext(loc) |
+--------+------------------------+
| London | POINT(0.1275 51.50722) |
+--------+------------------------+
(ArcGIS says 4326 is the most common spatial reference for storing and referencing data across the entire world. It serves as the default for both the PostGIS spatial database and the GeoJSON standard. It is also used by default in most web mapping libraries.)
Update: this approach didn't work.
Hooks may let you set the column to a gorm.Expr before Gorm's SQL generation.
For example, something like this before insert:
func (t *Tag) BeforeCreate(scope *gorm.Scope) error {
x, y := .... // tag.Loc coordinates
text := fmt.Sprintf("POINT(%f %f)", x, y)
expr := gorm.Expr("ST_GeomFromText(?)", text)
scope.SetColumn("loc", expr)
return nil
}
I'm taking my first crack at using Go to query a MySQL database, but I get the following error when I run go run main.go.
2017/10/22 21:06:58 sql: Scan error on column index 4: unsupported Scan, storing driver.Value type into type *string
exit status 1
Here's my main.go:
package main
import (
"log"
"database/sql"
)
import _ "github.com/go-sql-driver/mysql"
var db *sql.DB
var err error
// main function to boot up everything
func main() {
var dbField,dbType,dbNull,dbKey,dbDefault,dbExtra string
// Create an sql.DB and check for errors
db, err = sql.Open("mysql", "username:password@/mydatabase")
if err != nil {
panic(err.Error())
}
rows, err := db.Query("DESCRIBE t_user")
if err != nil {
log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
err := rows.Scan(&dbField,&dbType,&dbNull,&dbKey,&dbDefault,&dbExtra)
if err != nil {
log.Fatal(err)
}
log.Println(dbField,dbType,dbNull,dbKey,dbDefault,dbExtra)
}
err = rows.Err()
if err != nil {
log.Fatal(err)
}
// sql.DB should be long lived "defer" closes it once this function ends
defer db.Close()
}
When I run DESCRIBE t_user from the MySQL terminal, I get these results:
+------------------------+---------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------------------+---------------------+------+-----+---------+----------------+
| user_id | int(11) unsigned | NO | PRI | NULL | auto_increment |
| first_name | varchar(50) | NO | | NULL | |
| middle_name | varchar(50) | NO | | NULL | |
| last_name | varchar(50) | NO | | NULL | |
| email | varchar(50) | NO | UNI | NULL | |
+------------------------+---------------------+------+-----+---------+----------------+
I can't seem to figure out what's causing the issue. Is it because the Key, Default, and Extra columns sometimes return empty strings or NULL values? I tried something like var dbNull nil, but the compiler says nil is not a type. If the issue is that dbNull needs to accept nullable values, how does Go address this situation? If that isn't the problem, I'd appreciate any other insight.
Index 4 is the Default column, which is nullable. You are giving it the address of a string to fill, but it needs to store NULL. The solution would be to use something like sql.NullString instead of a plain string.
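For illustration, a minimal sketch of the scan using sql.NullString for the nullable column, with the other columns left as plain strings:

var dbField, dbType, dbNull, dbKey, dbExtra string
var dbDefault sql.NullString // Default can be NULL, so scan it into a NullString

for rows.Next() {
    if err := rows.Scan(&dbField, &dbType, &dbNull, &dbKey, &dbDefault, &dbExtra); err != nil {
        log.Fatal(err)
    }
    // dbDefault.Valid reports whether the column was non-NULL;
    // dbDefault.String holds the value when Valid is true.
    log.Println(dbField, dbType, dbNull, dbKey, dbDefault.String, dbExtra)
}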
As Gavin mentioned, you need to use a type that handles null values, such as sql.NullString.
As an alternative, you can also define a new type that implements the sql.Scanner interface and handles NULLs.
type dbNullString string

func (v *dbNullString) Scan(value interface{}) error {
    if value == nil {
        *v = ""
        return nil
    }
    // Drivers may present text columns to Scan as either string or []byte.
    switch s := value.(type) {
    case string:
        *v = dbNullString(s)
        return nil
    case []byte:
        *v = dbNullString(s)
        return nil
    }
    return errors.New("failed to scan dbNullString")
}
Also note that you need to implement the driver.Valuer interface to insert values using this type.
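For example, a minimal sketch of a Valuer for this type; treating the empty string as NULL is an assumption you may want to change:

func (v dbNullString) Value() (driver.Value, error) {
    if v == "" {
        return nil, nil // store NULL when the string is empty
    }
    return string(v), nil
}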
I'm using Postgres' now() as the default for my created timestamp, which generates this:
id | user_id | title | slug | content | created
----+---------+-------+------+---------+----------------------------
1 | 1 | Foo | foo | bar | 2014-12-16 19:41:31.428883
2 | 1 | Bar | bar | whiz | 2014-12-17 02:03:31.566419
I tried to use json.Marshal and json.Unmarshal and ended up getting this error:
parsing time ""2014-12-16 19:41:31.428883"" as ""2006-01-02T15:04:05Z07:00"": cannot parse " 19:41:31.428883"" as "T"
So I decided to try and create a custom time, but can't seem to get anything working.
Post.go
package models
type Post struct {
Id int `json:"id"`
UserId int `json:"user_id"`
Title string `json:"title"`
Slug string `json:"slug"`
Content string `json:"content"`
Created Tick `json:"created"`
User User `json:"user"`
}
Tick.go
package models
import (
"fmt"
"time"
)
type Tick struct {
time.Time
}
var format = "2006-01-02T15:04:05.999999-07:00"
func (t *Tick) MarshalJSON() ([]byte, error) {
return []byte(t.Time.Format(format)), nil
}
func (t *Tick) UnmarshalJSON(b []byte) (err error) {
b = b[1 : len(b)-1]
t.Time, err = time.Parse(format, string(b))
return
}
Any help would be much appreciated; running what I've written here gives me this:
json: error calling MarshalJSON for type models.Tick: invalid character '0' after top-level value
JSON requires strings to be quoted (and in JSON a date is a string); however, your MarshalJSON function returns an unquoted string.
I've slightly amended your code and it works fine now:
package models

import (
    "time"
)

type Tick struct {
    time.Time
}

var format = "2006-01-02T15:04:05.999999-07:00"

func (t *Tick) MarshalJSON() ([]byte, error) {
    // using `append` to avoid string concatenation
    b := make([]byte, 0, len(format)+2)
    b = append(b, '"')
    b = append(b, t.Time.Format(format)...)
    b = append(b, '"')
    return b, nil
}

func (t *Tick) UnmarshalJSON(b []byte) (err error) {
    b = b[1 : len(b)-1]
    t.Time, err = time.Parse(format, string(b))
    return
}
It seems like you're using the wrong format. Postgres uses RFC 3339, which is already defined in the time package.
This should work:
time.Parse(time.RFC3339, string(b))
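For illustration, a minimal sketch of how that would slot into the UnmarshalJSON method above, assuming the incoming JSON value really is an RFC 3339 string:

func (t *Tick) UnmarshalJSON(b []byte) (err error) {
    b = b[1 : len(b)-1] // strip the surrounding JSON quotes
    t.Time, err = time.Parse(time.RFC3339, string(b))
    return
}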