I referenced irbanana's answer about supporting Spatial data type for PostGIS. I'm using MySQL and am trying to implement Value() for the custom data type EWKBGeomPoint.
My Gorm model:
import (
	"github.com/twpayne/go-geom"
	"github.com/twpayne/go-geom/encoding/ewkb"
)

type EWKBGeomPoint geom.Point

type Tag struct {
	Name   string        `json:"name"`
	SiteID uint          `json:"siteID"` // foreign key
	Loc    EWKBGeomPoint `json:"loc"`
}
From what I know, MySQL supports insertion like this:
INSERT INTO `tag` (`name`,`loc`) VALUES ('tag name',ST_GeomFromText('POINT(10.000000 20.000000)'))
or
INSERT INTO `tag` (`name`,`loc`) VALUES ('tag name', ST_GeomFromWKB(X'0101000000000000000000F03F000000000000F03F'))
If I write my own Value() to satisfy database/sql's driver.Valuer interface:
func (g EWKBGeomPoint) Value() (driver.Value, error) {
	log.Println("EWKBGeomPoint value called")
	b := geom.Point(g)
	bp := &b
	floatArr := bp.Coords()
	return fmt.Sprintf("ST_GeomFromText('POINT(%f %f)')", floatArr[0], floatArr[1]), nil
}
Gorm wraps the entire value, including the ST_GeomFromText() call, in single quotes, so it won't work:
INSERT INTO `tag` (`name`,`loc`) VALUES ('tag name','ST_GeomFromText('POINT(10.000000 20.000000)')');
How do I make it work?
EDIT 1:
I traced into the Gorm code; eventually it gets to callback_create.go's createCallback function. Inside, it checks if primaryField == nil, and since that is true, it calls scope.SQLDB().Exec, beyond which I failed to trace further.
scope.SQL is the string INSERT INTO `tag` (`name`,`loc`) VALUES (?,?) and scope.SQLVars prints [tag name {{1 2 [10 20] 0}}]. It looks like interpolation happens inside this call.
Is this calling into database/sql code?
EDIT 2:
Found a similar Stack Overflow question here, but I do not understand the solution.
Here's another approach; use binary encoding.
According to this doc, MySQL stores geometry values using 4 bytes to indicate the SRID (Spatial Reference ID) followed by the WKB (Well Known Binary) representation of the value.
So a type can use WKB encoding and add and remove the four byte prefix in Value() and Scan() functions. The go-geom library found in other answers has a WKB encoding package, github.com/twpayne/go-geom/encoding/wkb.
For example:
type MyPoint struct {
	Point wkb.Point
}

func (m *MyPoint) Value() (driver.Value, error) {
	value, err := m.Point.Value()
	if err != nil {
		return nil, err
	}
	buf, ok := value.([]byte)
	if !ok {
		return nil, fmt.Errorf("did not convert value: expected []byte, but was %T", value)
	}
	mysqlEncoding := make([]byte, 4)
	binary.LittleEndian.PutUint32(mysqlEncoding, 4326) // 4-byte SRID prefix
	mysqlEncoding = append(mysqlEncoding, buf...)
	return mysqlEncoding, nil
}
func (m *MyPoint) Scan(src interface{}) error {
	if src == nil {
		return nil
	}
	mysqlEncoding, ok := src.([]byte)
	if !ok {
		return fmt.Errorf("did not scan: expected []byte but was %T", src)
	}
	if len(mysqlEncoding) < 4 {
		return fmt.Errorf("did not scan: expected at least 4 bytes, got %d", len(mysqlEncoding))
	}
	srid := binary.LittleEndian.Uint32(mysqlEncoding[0:4])
	err := m.Point.Scan(mysqlEncoding[4:])
	m.Point.SetSRID(int(srid))
	return err
}
Defining a Tag using the MyPoint type:
type Tag struct {
	Name string   `gorm:"type:varchar(50);primary_key"`
	Loc  *MyPoint `gorm:"column:loc"`
}

func (t Tag) String() string {
	return fmt.Sprintf("%s # Point(%f, %f)", t.Name, t.Loc.Point.Coords().X(), t.Loc.Point.Coords().Y())
}
Creating a tag using the type:
tag := &Tag{
	Name: "London",
	Loc: &MyPoint{
		wkb.Point{
			geom.NewPoint(geom.XY).MustSetCoords([]float64{0.1275, 51.50722}).SetSRID(4326),
		},
	},
}
err = db.Create(&tag).Error
if err != nil {
	log.Fatalf("create: %v", err)
}
MySQL results:
mysql> describe tag;
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| name | varchar(50) | NO | PRI | NULL | |
| loc | geometry | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
mysql> select name, st_astext(loc) from tag;
+--------+------------------------+
| name | st_astext(loc) |
+--------+------------------------+
| London | POINT(0.1275 51.50722) |
+--------+------------------------+
(ArcGIS says 4326 is the most common spatial reference for storing and referencing data across the entire world. It serves as the default for both the PostGIS spatial database and the GeoJSON standard. It is also used by default in most web mapping libraries.)
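The prefix handling from Value() and Scan() can be exercised without a database. This sketch (with placeholder WKB bytes) round-trips the 4-byte little-endian SRID header:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// addSRIDPrefix prepends MySQL's 4-byte little-endian SRID header to a WKB payload.
func addSRIDPrefix(srid uint32, wkb []byte) []byte {
	buf := make([]byte, 4, 4+len(wkb))
	binary.LittleEndian.PutUint32(buf, srid)
	return append(buf, wkb...)
}

// splitSRIDPrefix reverses addSRIDPrefix, returning the SRID and the raw WKB.
func splitSRIDPrefix(b []byte) (uint32, []byte, error) {
	if len(b) < 4 {
		return 0, nil, fmt.Errorf("value too short for SRID prefix: %d bytes", len(b))
	}
	return binary.LittleEndian.Uint32(b[:4]), b[4:], nil
}

func main() {
	wkb := []byte{0x01, 0x01, 0x00, 0x00, 0x00} // placeholder WKB bytes
	enc := addSRIDPrefix(4326, wkb)
	srid, payload, err := splitSRIDPrefix(enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(srid, len(payload)) // 4326 5
}
```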
Update: this approach didn't work.
Hooks may let you set the column to a gorm.Expr before Gorm's SQL generation.
For example, something like this before insert:
func (t *Tag) BeforeCreate(scope *gorm.Scope) error {
	x, y := .... // tag.Loc coordinates
	text := fmt.Sprintf("POINT(%f %f)", x, y)
	expr := gorm.Expr("ST_GeomFromText(?)", text)
	scope.SetColumn("loc", expr)
	return nil
}
Related
I am trying to update some values using gorm library but bytes and ints with 0 value are not updated
var treatment model.TreatmentDB
err = json.Unmarshal(b, &treatment)
if err != nil {
	http.Error(w, err.Error(), 500)
	return
}
fmt.Println(&treatment)
db := db.DB.Table("treatment").Where("id = ?", nID).Updates(&treatment)
This prints {0 3 1 0 0 0 2018-01-01 4001-01-01}, and the 0s are the byte values (tinyint(1) in the database; switching them to int does not help either) that are not updated. The rest of the values update fine.
If I update them without Gorm like this, it works perfectly, zero values included:
var query = fmt.Sprintf("UPDATE `pharmacy_sh`.`treatment` SET `id_med` = '%d', `morning` = '%d', `afternoon` = '%d', `evening` = '%d', `start_treatment` = '%s', `end_treatment` = '%s' WHERE (`id` = '%s')", treatment.IDMed, treatment.Morning, treatment.Afternoon, treatment.Evening, treatment.StartTreatment, treatment.EndTreatment, nID)
update, err := dbConnector.Exec(query)
and this is my model obj
type TreatmentDB struct {
	gorm.Model
	ID             int    `json:"id"`
	IDMed          int    `json:"id_med"`
	IDUser         int    `json:"id_user"`
	Morning        byte   `json:"morning"`
	Afternoon      byte   `json:"afternoon"`
	Evening        byte   `json:"evening"`
	StartTreatment string `json:"start_treatment"`
	EndTreatment   string `json:"end_treatment"`
}
Thanks for any help!!
I found a tricky way to solve this problem: change your struct field types into pointers.
change
type Temp struct {
	String string
	Bool   bool
}
to
type Temp struct {
	String *string
	Bool   *bool
}
If you want to save zero values, you can instead use Select to specify which columns to update:
db.Model(&user).Select("Name", "Age").Updates(User{Name: "new_name", Age: 0})
reference: https://gorm.io/docs/update.html#Update-Selected-Fields
I get this error and have tried everything I could find on the internet and Stack Overflow to solve it. I am trying to run a query after connecting to a MySQL db using the sqlx package and scan through the results. The solutions shared for similar questions did not work for me.
type Trip struct {
	ID    int       `db:"id"`
	Type  int       `db:"type"`
	DID   int       `db:"did"`
	DUID  int       `db:"duid"`
	VID   int       `db:"vid"`
	Sts   string    `db:"sts"`
	AM    int       `db:"am"`
	Sdate null.Time `db:"sdate"`
}
func GetTripByID(db sqlx.Queryer, id int) (*Trip, error) {
	row := db.QueryRowx("select ID,Type,DID,DUID,VID,Sts,AM,Sdate from mytbl where ID=?", id)
	var t Trip
	err := row.StructScan(&t)
	if err != nil {
		fmt.Println("Error during struct scan")
		return nil, err
	}
	return &t, nil
}
The exact error that I get is
panic: sql: Scan error on column index 6, name "sdate": null:
cannot scan type []uint8 into null.Time: [50 48 49 56 45 49 50 45 48
55 32 48 50 58 48 56 58 53 49]
Syntax-wise the query works perfectly fine, and I get results when I run it in SQL Workbench. I have also tried parseTime=true as suggested by one of the links.
Try using the special types for null values in package database/sql.
For example, when a text or varchar column can be null in the db, use sql.NullString for the variable type.
As suggested above, I did null handling for the column "Sdate"
// NullTime wraps mysql.NullTime
type NullTime mysql.NullTime

// Scan implements the Scanner interface for NullTime
func (nt *NullTime) Scan(value interface{}) error {
	var t mysql.NullTime
	if err := t.Scan(value); err != nil {
		return err
	}
	// if nil then make Valid false
	if reflect.TypeOf(value) == nil {
		*nt = NullTime{t.Time, false}
	} else {
		*nt = NullTime{t.Time, true}
	}
	return nil
}
and changes in the struct
type Trip struct {
	ID    int      `db:"id"`
	Type  int      `db:"type"`
	DID   int      `db:"did"`
	DUID  int      `db:"duid"`
	VID   int      `db:"vid"`
	Sts   string   `db:"sts"`
	AM    int      `db:"am"`
	Sdate NullTime `db:"sdate"`
}
so the solution is not just defining the struct for handling null but also implementing the scanner interface.
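The []uint8 in the original error is just the datetime's text bytes (they decode to "2018-12-07 02:08:51"), which is what the MySQL driver returns when parseTime is not enabled. A self-contained sketch of a scanner that parses those bytes directly, assuming MySQL's default "2006-01-02 15:04:05" text layout:

```go
package main

import (
	"fmt"
	"time"
)

// NullTime accepts the raw []byte a MySQL driver hands back when parseTime
// is not enabled, parsing MySQL's default DATETIME text layout.
type NullTime struct {
	Time  time.Time
	Valid bool
}

func (nt *NullTime) Scan(value interface{}) error {
	if value == nil {
		*nt = NullTime{} // NULL column: leave Valid false
		return nil
	}
	b, ok := value.([]byte)
	if !ok {
		return fmt.Errorf("unsupported Scan type %T", value)
	}
	t, err := time.Parse("2006-01-02 15:04:05", string(b))
	if err != nil {
		return err
	}
	*nt = NullTime{Time: t, Valid: true}
	return nil
}

func main() {
	var nt NullTime
	// These are the bytes from the original panic message.
	if err := nt.Scan([]byte("2018-12-07 02:08:51")); err != nil {
		panic(err)
	}
	fmt.Println(nt.Valid, nt.Time.Format(time.RFC3339)) // true 2018-12-07T02:08:51Z
}
```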
I'm taking my first crack at using golang to query a MySQL database but I get the following error when I run the command go run main.go.
2017/10/22 21:06:58 sql: Scan error on column index 4: unsupported
Scan, storing driver.Value type into type *string exit status 1
Here's my main.go
main.go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

var db *sql.DB
var err error

// main function to boot up everything
func main() {
	var dbField, dbType, dbNull, dbKey, dbDefault, dbExtra string
	// Create an sql.DB and check for errors
	db, err = sql.Open("mysql", "username:password@/mydatabase")
	if err != nil {
		panic(err.Error())
	}
	// sql.DB should be long lived; "defer" closes it once this function ends
	defer db.Close()
	rows, err := db.Query("DESCRIBE t_user")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		err := rows.Scan(&dbField, &dbType, &dbNull, &dbKey, &dbDefault, &dbExtra)
		if err != nil {
			log.Fatal(err)
		}
		log.Println(dbField, dbType, dbNull, dbKey, dbDefault, dbExtra)
	}
	err = rows.Err()
	if err != nil {
		log.Fatal(err)
	}
}
When I run the DESCRIBE t_user from mysql terminal, I get these results:
+------------------------+---------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------------------+---------------------+------+-----+---------+----------------+
| user_id | int(11) unsigned | NO | PRI | NULL | auto_increment |
| first_name | varchar(50) | NO | | NULL | |
| middle_name | varchar(50) | NO | | NULL | |
| last_name | varchar(50) | NO | | NULL | |
| email | varchar(50) | NO | UNI | NULL | |
+------------------------+---------------------+------+-----+---------+----------------+
I can't seem to figure out what's causing the issue. Is it because the key, default, and extra columns sometimes return empty strings or null values? I tried something like var dbNull nil, but the compiler says nil is not a type. If the issue is that dbNull needs to accept nullable values, how does Go address this situation? If that isn't the problem, I'd appreciate any other insight.
Index 4 is the Default column, which is nullable. You are giving it the address to place a string, but it needs to place NULL. The solution would be to use something like sql.NullString instead of just a string.
As Gavin mentioned, you need to use a type that handles null values, such as sql.NullString.
As an alternative, you can define a new type that implements the sql.Scanner interface and handles nulls.
type dbNullString string

func (v *dbNullString) Scan(value interface{}) error {
	if value == nil {
		*v = ""
		return nil
	}
	if str, ok := value.(string); ok {
		*v = dbNullString(str)
		return nil
	}
	// many drivers (including go-sql-driver/mysql) return text columns as []byte
	if b, ok := value.([]byte); ok {
		*v = dbNullString(b)
		return nil
	}
	return errors.New("failed to scan dbNullString")
}
Also note that you need to implement the driver.Valuer interface to insert values using this type.
I'm trying to write tab separated values in a file using the tabwriter package in Go.
var records map[string][]string

file, err := os.OpenFile(some_file, os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
	log.Println(err)
}
w := new(tabwriter.Writer)
w.Init(file, 0, 4, 0, '\t', 0)
for _, v := range records {
	fmt.Fprintln(w, v[0], "\t", v[1], "\t", v[2], "\t", v[3])
	w.Flush()
}
The problem I'm facing is that the records written to the file have two additional spaces prepended to them. I added the debug flag and this is what I get in the file:
fname1 | mname1 | lname1 | age1
fname2 | mname2 | lname2 | age2
I'm unable to see where I'm going wrong. Any help is appreciated.
As SirDarius suggested, encoding/csv is indeed the right choice. All you have to do is set Comma to a horizontal tab instead of the default value, which, unsurprisingly, is a comma.
package tabulatorseparatedvalues

import (
	"encoding/csv"
	"io"
)

func NewWriter(w io.Writer) (writer *csv.Writer) {
	writer = csv.NewWriter(w)
	writer.Comma = '\t'
	return
}
I'm using postgres' now() as the default for my created timestamp, which generates this:
id | user_id | title | slug | content | created
----+---------+-------+------+---------+----------------------------
1 | 1 | Foo | foo | bar | 2014-12-16 19:41:31.428883
2 | 1 | Bar | bar | whiz | 2014-12-17 02:03:31.566419
I tried to use json.Marshal and json.Unmarshal and ended up getting this error:
parsing time ""2014-12-16 19:41:31.428883"" as ""2006-01-02T15:04:05Z07:00"": cannot parse " 19:41:31.428883"" as "T"
So I decided to try and create a custom time, but can't seem to get anything working.
Post.go
package models

type Post struct {
	Id      int    `json:"id"`
	UserId  int    `json:"user_id"`
	Title   string `json:"title"`
	Slug    string `json:"slug"`
	Content string `json:"content"`
	Created Tick   `json:"created"`
	User    User   `json:"user"`
}
Tick.go
package models

import (
	"time"
)

type Tick struct {
	time.Time
}

var format = "2006-01-02T15:04:05.999999-07:00"

func (t *Tick) MarshalJSON() ([]byte, error) {
	return []byte(t.Time.Format(format)), nil
}

func (t *Tick) UnmarshalJSON(b []byte) (err error) {
	b = b[1 : len(b)-1]
	t.Time, err = time.Parse(format, string(b))
	return
}
Any help would be much appreciated, running what I've wrote here gives me this:
json: error calling MarshalJSON for type models.Tick: invalid character '0' after top-level value
JSON requires strings to be quoted (and in JSON a date is a string); however, your MarshalJSON function returns an unquoted string.
I've slightly amended your code and it works fine now:
package models

import (
	"time"
)

type Tick struct {
	time.Time
}

var format = "2006-01-02T15:04:05.999999-07:00"

func (t *Tick) MarshalJSON() ([]byte, error) {
	// using `append` to avoid string concatenation
	b := make([]byte, 0, len(format)+2)
	b = append(b, '"')
	b = append(b, t.Time.Format(format)...)
	b = append(b, '"')
	return b, nil
}

func (t *Tick) UnmarshalJSON(b []byte) (err error) {
	b = b[1 : len(b)-1]
	t.Time, err = time.Parse(format, string(b))
	return
}
It seems like you're using the wrong format. Postgres uses RFC 3339, which is already defined in the time package.
This should work:
time.Parse(time.RFC3339, string(b))